Hacker News

mlyle · 01/21/2025 · 1 reply

Even if progress stops:

1. Current reasoning models can do a -lot- more than skeptics give them credit for. Typical human performance, even among people who do something for a living, is not always that high.

2. In areas where AI has mediocre performance, it may not appear that way to a novice. It often looks more like expert-level performance, which robs novices of the desire to practice the associated skills.

Lest you think I contradict myself: I can get good output for many tasks from GPT-4 because I know what to ask for and I know what good output looks like. But someone who thinks the first poorly prompted dreck is great will never develop the critical skills to do this.


Replies

svachalek · 01/21/2025

This is a good point. Forums are full of junior developers bemoaning that LLMs are inhumanly good at writing code -- not that they will be, but that they already are. I've yet to see even the best produce something that makes me worry I might lose my job today; they're still quite mediocre without a lot of handholding. But to someone who's still learning and thinks writing a loop is a challenge, they already seem magical and unstoppable.