Hacker News

simonw · today at 4:59 AM · 13 replies

I don't think building it the long way is necessarily a more effective way to learn.

You could spend 4 hours (that you don't have) building that feature. Or... you could have the coding agent build it in the background for you in 15 minutes, then spend 30 minutes reading through what it did, tweaking it yourself and peppering it with questions about how it all works.

My hunch is that the 30 minutes of focused learning spent with a custom-built version that solves your exact problem is as effective as (or even more effective than) four hours spent mostly struggling to get something up and running and going down various rabbit holes of unrelated problem-solving.

Especially if realistically you were never going to carve out those four hours anyway.


Replies

aabhay · today at 5:06 AM

This feels like exactly the wrong way to think about it IMO. For me, “knowledge” is not the explicit recitation of the correct solution; it’s all the implicit working knowledge I gain from trying different things, having initial assumptions fail, seeing what was off, dealing with deployment headaches, etc. As I work, I carefully pay attention to the outputs of all tools and try to mentally document what paths I didn’t take. That makes dealing with bugs and issues later on a lot easier, but it also expands my awareness of the domain, checks my hubris when I think I know something, and makes it possible to reason about the system when doing things later on.

Of course, this kind of interactive deep engagement with a topic is fast becoming obsolete. But the essence of “knowing”, to me, is about doing and experiencing things, updating my Bayesian priors dialectically (to put it fancily).

throwaway613745 · today at 4:00 PM

Just speaking from personal experience but the struggle is what creates the learning.

I learned refactoring patterns from Fowler's book. But when I tried to actually use them I still struggled. I didn't fully understand how the patterns worked until I actually tried (and failed) to use them a few times.

You don't really internalize things until you understand what doesn't work just as much as what does. You don't learn nearly as much from success as you do from failure. I would say the ratio of truly internalized knowledge is much higher for failure.

The notion that you can get a bot to just vomit out a vector database and then you can just "read the code" and you'll understand how a vector database works is just ludicrous.

girvo · today at 1:59 PM

> Or... you could have the coding agent build it in the background for you in 15 minutes, then spend 30 minutes reading through what it did, tweaking it yourself and peppering it with questions about how it all works

I can only speak for myself, but the only way I've been able to learn things rapidly in this industry is by writing things myself: even rote re-typing of books or SO answers was enough to trigger this for me.

Just querying models and reading output doesn't seem to work for me, but that's maybe down to my particular learning style.

politelemon · today at 11:15 AM

That's assuming everyone learns the same way, which isn't true. Watching a streamer beat a Dark Souls boss won't automatically make you competent at the game. Reading through gobs of code generated for you, without knowing why various things were needed, won't help either. A middle approach could be to get the LLM to guide you through the steps.

barrkel · today at 10:37 AM

I don't know. I built a vector similarity system for my hobby project the "hard" way, which was mostly getting Python set up with all the dependencies (seriously, Python dependency resolution is a non-trivial problem), picking a model with the right tradeoffs, installing pgvector, picking an index optimized for my distance metric, calculating and storing vectors for all my data, and integrating routes and UI that dispatched ANN search (ORDER BY / LIMIT) against my indexed column; a rough sketch of that kind of setup is below. I also did some clustering, and learned something of how awkward it is in practice to pick a representative vector for a cluster - and in fact you may want several.

I now know what the model does (at a black box level) and how all the parts fit together. And I have plans to build classifiers on top of the vectors I built for further processing.

The experience of fighting Python dependencies gives me more appreciation for uv over venv and will leave me less stuck whenever the LLM fails to help resolve the situation.
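For anyone curious what that wiring looks like, here is a minimal sketch of that kind of setup, not the commenter's actual code: embeddings from a sentence-transformers model, a pgvector column, and ANN search dispatched as ORDER BY / LIMIT against the indexed column. The database name, table and column names, the model choice (all-MiniLM-L6-v2), and the HNSW/cosine index are all illustrative assumptions.

# Sketch only: assumes PostgreSQL with the pgvector extension available,
# plus the psycopg2 and sentence-transformers packages installed.
import psycopg2
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dimensional embeddings

conn = psycopg2.connect("dbname=hobby")  # hypothetical database
cur = conn.cursor()

# One-time schema setup: enable pgvector and index the embedding column with
# an operator class that matches the distance metric (cosine here).
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute("""
    CREATE TABLE IF NOT EXISTS items (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(384)
    )
""")
cur.execute(
    "CREATE INDEX IF NOT EXISTS items_embedding_idx "
    "ON items USING hnsw (embedding vector_cosine_ops)"  # hnsw needs pgvector >= 0.5; ivfflat also works
)
conn.commit()

def to_pgvector(vec) -> str:
    # pgvector accepts a '[x1,x2,...]' literal when cast to ::vector
    return "[" + ",".join(str(x) for x in vec) + "]"

def add_item(text: str) -> None:
    # Embed the text and store it alongside the row
    emb = model.encode(text)
    cur.execute(
        "INSERT INTO items (body, embedding) VALUES (%s, %s::vector)",
        (text, to_pgvector(emb)),
    )
    conn.commit()

def search(query: str, k: int = 5):
    # ANN search: <=> is cosine distance; ORDER BY + LIMIT lets the
    # HNSW index drive the nearest-neighbour lookup.
    emb = model.encode(query)
    cur.execute(
        "SELECT id, body FROM items ORDER BY embedding <=> %s::vector LIMIT %s",
        (to_pgvector(emb), k),
    )
    return cur.fetchall()

Usage would be something like add_item("some document text") followed by search("a query"), with the index choice and distance operator swapped out to match whatever metric the chosen model was trained for.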

ktzar · today at 8:27 AM

It's the same hunch we all have when we think we're going to learn something by watching tutorials. We learn by struggling.

sorokod · today at 5:03 PM

You can spend 30 minutes watching someone learn how to ski and you will learn something. You will not be able to ski by yourself, though.

weitendorf · today at 5:39 AM

Generally I agree with your takes and find them very reasonable but in this case I think your deep experience might be coloring your views a bit.

LLMs can hurt less experienced engineers by keeping them from building an intuition for why things work a certain way, or why an alternative won't work (or conversely, why an unconventional approach might not only be possible, but very useful and valuable!).

I think problem solving is optimization in the face of constraints. In my experience with LLMs, the more you're able to articulate and understand your constraints, and the more prescriptively you guide the LLM towards something it's capable of doing, the more effective it is and the more maintainable its output is for you. So it really helps to know when to break the rules or to create/do something unconventional.

Another way to put it is that LLMs have commodified conventional software, so learning when to break or challenge convention is going to be where most of the valuable work is going forward. And I think it's hard to actually do that unless you get into the weeds and try things even when you don't understand why they won't work. Sometimes they do.

gambiting · today at 9:07 AM

>>My hunch is that the 30 minutes of focused learning spent with a custom-built version that solves your exact problem is as effective

My hunch is the exact opposite of this. You will learn close to nothing by reading it for 30 minutes.

csomar · today at 1:02 PM

The struggle is how you learn. I think that’s pretty much established scientifically by now?

risyachka · today at 12:46 PM

Reading without actually doing does not really result in learning, only a very marginal amount.

Try reading tutorials on a new programming language for 30 minutes, then open a new text file and write a basic loop with a print statement.

It won’t even compile, which shows you haven’t really learned anything; you’ve just read an interesting story. Sure, you pick up a few bits here and there, but you still don’t know how to do even the most basic thing.

Applejinx · today at 12:37 PM

This really makes for a good natural experiment: carry on :)

I have a hard time imagining how much you'd have to literally bribe me to get me to try doing it the way you describe. I'm too interested in implementation details of things and looking for innovations—in fact I make my living doing that, like some cyberpunk gremlin just delighting in messing with stuff in unexpected ways. I don't understand why you're not, but maybe it's not for me to understand.

Carry on. We'll check back and see how it worked for ya :)

enraged_camel · today at 12:20 PM

Agree completely. The other aspect for me is that LLMs make me unafraid to take on initiatives in areas I know nothing about and/or am uninterested in pursuing because the effort outweighs the reward. As a result I end up doing more and learning more.