Hacker News

aabhay yesterday at 5:06 AM

This feels like exactly the wrong way to think about it, IMO. For me, “knowledge” is not the explicit recitation of the correct solution; it’s all the implicit working knowledge I gain from trying different things, having initial assumptions fail, seeing what was off, dealing with deployment headaches, etc. As I work, I pay careful attention to the outputs of all my tools and try to mentally document the paths I didn’t take. That makes dealing with bugs and issues later on a lot easier, but it also expands my awareness of the domain, checks my hubris about thinking I know something, and makes it possible to reason about the system later on.

Of course, this kind of interactive deep engagement with a topic is fast becoming obsolete. But to me the essence of “knowing” is doing and experiencing things, updating my Bayesian priors dialectically (to put it fancily).


Replies

simonw yesterday at 5:30 AM

I agree that the only reliable way to learn is to put knowledge into practice.

I don't think that's incompatible with getting help from LLMs. I find that LLMs let me try so much more stuff, at a much faster rate, that my learning pace has accelerated in a material way.

mmasu yesterday at 6:53 AM

I remember a very nice quote from an Amazon exec: “there is no compression algorithm for experience.” The LLM may well do the wrong thing, and you still won’t know what you don’t know. But then, iterating with LLMs is a different kind of experience, and in the future people will likely do that more than just grind through missing-semicolon failures like the ones Simon describes. It’s a different paradigm, really.

johnfn yesterday at 6:05 AM

But how much of that time is truly spent learning relevant knowledge, and how much of it is just (now) useless minutiae? Take vector search as an example. Pre-GPT, I would spend like an hour chasing down a typo, like specifying 1023 instead of 1024 or something. This sort of problem is now trivially solved in minutes by an LLM that fully understands the API surface area. So what exactly do I lose by not spending that hour chasing it down? It has nothing to do with learning vector search better, and an LLM can do it better and faster than I can.
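To make the kind of typo concrete, here's a minimal numpy sketch (the index layout and names are illustrative, not from any particular vector database) of the off-by-one embedding dimension being described:

    import numpy as np

    EMBEDDING_DIM = 1023  # the typo: the embedding model actually returns 1024-dim vectors

    # hypothetical in-memory index, allocated with the wrong constant
    index = np.empty((0, EMBEDDING_DIM))

    def add_to_index(index, embedding):
        # raises ValueError: dimension 1023 of the index doesn't match the 1024-dim vector
        return np.vstack([index, embedding])

    vec = np.random.rand(1, 1024)  # what the model really emits
    index = add_to_index(index, vec)

The shape error surfaces far from the constant that caused it, which is why chasing it down by hand used to eat an hour.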

PessimalDecimal yesterday at 3:12 PM

Forgetting LLMs and coding agents for a second, what OP describes is like watching a YouTube video on how to make a small repair around the house. You can watch that and "know" what needs to be done afterwards. But it is a very different thing to do it yourself.

Ultimately it comes down to whether gaining the know-how through experience is worth it or not.

jstummbillig yesterday at 10:00 AM

I think this is exactly right, both in principle and in practice. The question is what domain knowledge you should improve to maximize outcomes: Will understanding the machine code be the thing that most likely translates to better outcomes? Will building the vector search the hard way be? Or will it be focusing on the thing that you do with the vector search?

At some point things will get hard, as long as the world itself is hard. You don't need to concern yourself with any technical layer for that to be true. The less we have to concern ourselves with technicalities, the further that point shifts toward the thing we actually care about.

grim_io yesterday at 7:47 AM

Trial and error is not how apprenticeship works, for example.

As an apprentice, you get instructions that are correct and precise enough, and you learn from the master's point of perfection downwards.

Maybe we have reached a point where we can be the machine's apprentices in some ways.

bulbar yesterday at 5:14 PM

Take a look at Bloom's taxonomy. It describes exactly what you're talking about.