
cgriswald · last Thursday at 5:33 PM

There are opportunity costs to consider along with relevance. Suppose you are staying at my place. Are you going to read my espresso machine's manual cover to cover, or are you going to ask me to show you how to use it, or to make you an espresso?

In any case, LLMs are not magical forgetfulness machines.

You can use a calculator to avoid learning arithmetic but using a calculator doesn’t necessitate failing to learn arithmetic.

You can ask a question of a professor or fellow student, but failing to read the textbook to answer that question doesn’t necessitate failing to develop a mental model or incorporate the answer into an existing one.

You can ask an LLM a question and blindly use its answer but using an LLM doesn’t necessitate failing to learn.


Replies

Retric · last Thursday at 6:11 PM

There’s plenty to learn from using LLMs, including how to interact with an LLM.

However, even outside of using an LLM, the temptation is always to keep the blinders on, do a deep dive for a very specific bug, and repeat as needed. It’s the local minimum of effort, and very slowly you do improve as those deep dives occasionally come up again, but what keeps it from being a global minimum is that these systems aren’t suddenly going away. It’s not a friend’s espresso machine; it’s now sitting in your metaphorical kitchen.

As soon as you’ve dealt with, say, a CSS bug, the odds of seeing another in the future are dramatically higher. So rather than settling for diminishing returns, spending a few hours learning the basics of any system or protocol you encounter is a useful strategy. If you spend 1% of your time on a strategy that makes you 2% more efficient, that’s a net win, as the rough arithmetic below shows.
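A back-of-the-envelope check of that claim (a sketch, assuming the 2% efficiency gain applies to the remaining 99% of your total working time, here normalized to T):

```latex
% Cost of the strategy: 1% of your time.
\text{cost} = 0.01\,T
% Savings: 2% gain applied to the other 99% of your time.
\text{savings} \approx 0.02 \times 0.99\,T = 0.0198\,T
% Net effect: roughly +1% of T, so the strategy pays for itself.
\text{net} = 0.0198\,T - 0.01\,T = 0.0098\,T > 0
```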
