Hacker News

alganet · 06/24/2025 · 5 replies · view on HN

A good reality check is: if a stranger asks you about a specific part of your toy project, would you be able to explain it?

If you can't, it means there's something there you don't understand, and you lost an opportunity to learn. Of course, this is also true for reusing libraries and stuff.

Within a job, what matters is the _team learning_, and it's roughly the same idea. At least one person in the team should be able to explain anything the team does. Trusting it to an AI puts the whole team in a precarious situation, even if it is something as simple as a CSS layout.


Replies

hn_throwaway_99 · 06/24/2025

I would argue that, at least for me, gen AI helps me learn things much faster.

E.g., like I said, previously I would have been turned off from doing some projects because I wouldn't have wanted to spend a ton of time dicking around in ways that are just inefficient. Now I have gen AI create a starting point, and it's immediately clear to me, "Ah, this is how I would use grid layout to get what I want". I can tweak things and see exactly how everything works.

Lerc · 06/24/2025

>A good reality check is: if a stranger asks you about a specific part of your toy project, would you be able to explain it?

I like that as a benchmark. I think it works with LLMs too. I have had the best results with AI code generation when I pass that check. When using an LLM, the chatbot is the stranger.

In fact, much of the frustration I have with AIs seems to come from models tuned for the situation where the user can't explain the problem specifically, so the model has to make broad assumptions. That may be the best result for the lowest common denominator, but it's frustrating when it provides a generic solution to a specific problem.

I can see specialised models diverging to focus on different levels of assumed knowledge and expectations. Perhaps they could have different modes of thinking, maybe as a more tone-based MoE. Is anyone doing a mixture of finetunes?

mrheosuper · 06/25/2025

Unless you own/write the whole software stack (not just the web stack), I doubt anyone can fully explain their project.

For example, if your project has to write a file and someone asks you how the kernel caches file writes, that would not be an easy question.
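
To make that concrete, here's a rough illustrative sketch of a "simple" file write in C: write() normally just copies the bytes into the kernel page cache and returns, and the actual flush to disk happens later, or when you call fsync().

    /* Illustrative sketch: a "simple" file write on a POSIX system.
       write() usually just copies the data into the kernel page cache;
       the kernel writes it back to disk later, or fsync() forces it. */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *msg = "hello\n";
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;

        /* Returns once the bytes are in the page cache,
           not when they are physically on disk. */
        write(fd, msg, strlen(msg));

        /* Ask the kernel to flush this file's cached pages to disk. */
        fsync(fd);

        close(fd);
        return 0;
    }

Explaining when those cached pages actually hit the disk (writeback timing, fsync semantics, what happens on a crash) is exactly the kind of question that gets hard fast.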

8note · 06/25/2025

My current opinion is that if I can get an AI to write details into a file that another team member's AI can read and then know what's up, that's close to the same as the team member knowing it, as long as they're either reading the file the AI read, or working through their AI and providing that file (or files) as context.

fulafel · 06/26/2025

> If you can't, it means there's something there you don't understand, and you lost an opportunity to learn. Of course, this is also true for reusing libraries and stuff.

Yeah, with libraries that's the whole aim: composability and reusability mean you don't need to keep everything in your head. Same here.

A "lost opportunity to learn" can be preferable to not making the thing or making it much slower.