Hacker News

pitched · last Wednesday at 1:25 PM

The new tools have sets of problems they are very good at, sets they are very bad at, and they are generally mediocre at everything else. Learning those lessons isn't easy, takes time, and will produce bugs. If you aren't making those mistakes now with everyone else, you'll be making them later when you do decide to start catching up, and they will be more noticeable then.


Replies

SoftTalker · last Wednesday at 5:42 PM

Disagree. For the tools to become really useful (and fulfill the expectations of the people funding them), they will need to produce good results without demanding years of experience understanding their foibles and shortcomings.

ThrowawayR2 · last Wednesday at 9:11 PM

The AI hucksters promise us that these tools are getting exponentially better (lol), so the catch-up should be exponentially reduced.

_DeadFred_ · last Wednesday at 5:29 PM

And all of those things (good at, bad at, the lessons learned on current models' current implementations) can change arbitrarily with model changes, nudges, guardrails, etc. Not sure that outsourcing your skillset on the current foundation of sand is long-term smart, even if it's great for a couple of months.

It may be those who have to un-learn the previous iteration's interactions, once something stable arrives, who end up at a disadvantage?
