Hacker News

fl7305 today at 7:29 AM

> I think the older AI users are even held back because they might be doing things that are not neccessary any more

Being the same age as Linus Torvalds, I'd say it can be the opposite.

We are so used to "leaky abstractions" that we've simply accepted this as another imperfect new tech stack.

Unlike less experienced developers, we know that you have to learn a bit about the underlying layers to use the high level abstraction layer effectively.

What is going on under the hood? What was the sequence of events which caused my inputs to give these outputs / error messages?

Once you learn enough about how the underlying layers work, you'll get far fewer errors because you'll subconsciously avoid them. Meanwhile, people with an "I only work at the high level" mindset keep trying to feed the high-level layer different inputs more or less at random.

For LLMs, it's certainly a challenge.

The basic low level LLM architecture is very simple. You can write a naive LLM core inference engine in a few hundred lines of code.

But that is like writing a logic gate simulator and feeding it a huge CPU gate list + many GBs of kernel+rootfs disk images. It doesn't tell you how the thing actually behaves.
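To make "a few hundred lines" concrete, here is a toy sketch of the core: single-head causal self-attention plus a greedy decoding loop over random weights. Everything here is illustrative (random weights, one layer, no normalization or multi-head machinery), not any real model's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Single-head causal self-attention over a (seq, dim) matrix.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores[mask] = -1e9            # causal mask: no peeking at future tokens
    return softmax(scores) @ v

def generate(tokens, embed, Wq, Wk, Wv, Wout, n_new):
    # Greedy decoding: append the argmax next token at each step.
    for _ in range(n_new):
        x = embed[tokens]                    # (seq, dim) token embeddings
        h = attention(x, Wq, Wk, Wv)         # (seq, dim) contextualized states
        logits = h[-1] @ Wout                # next-token logits from last position
        tokens = tokens + [int(np.argmax(logits))]
    return tokens
```

With random weights the output is meaningless, which is exactly the point: the mechanism fits on a page, but the behavior lives entirely in the weights.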

So you move up the layers. Often you can't get hard data on how they really work. Instead you rely on empirical and anecdotal data.

But you still form a mental image of what the rough layers are and how you can expect them to behave given different inputs.

For LLMs, a critical piece is the context window. It has to be understood and managed to get good results. Make sure it's fed with the right amount of the right data, and you get much better results.
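As a sketch of what "managing the context window" can mean in practice: keep the system prompt, drop the oldest turns until the conversation fits a budget. The function name is made up for illustration, and word count stands in for a real tokenizer:

```python
def fit_context(messages, budget):
    """Trim oldest non-system messages until the rough token count fits.

    `messages` is a list of {"role": ..., "content": ...} dicts. Token
    counts are approximated as word counts here; a real system would use
    the model's actual tokenizer.
    """
    def cost(msgs):
        return sum(len(m["content"].split()) for m in msgs)

    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and cost(system + rest) > budget:
        rest.pop(0)   # drop the oldest turn first
    return system + rest
```

Real tools do fancier things (summarizing dropped turns, pinning key files), but the principle is the same: decide deliberately what the model gets to see.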

> Nowadays I just paste a test, build, or linter error message into the chat and the clanker knows immediately what to do

That's exactly the right thing to do given the right circumstances.

But if you're doing a big refactoring across a huge code base, you won't get the same good results. You'll need to understand the context window and how your tools/framework feeds it with data for your subagents.


Replies

OJFord today at 10:11 AM

I think GP meant 'longer time users of AI', not 'older aged users of AI'.

Their point being that it's not really an advantage to have learnt the tricks and workarounds from a year or two ago, because the tools are so much better now that those tricks are unnecessary or have been replaced by different ones.