Hacker News

daxfohl · yesterday at 10:02 PM

I think it all boils down to: which is the higher risk, using AI too much, or using AI too little?

Right now I see the former as being hugely risky. Hallucinated bugs, being coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true at both the personal level and the business level. (It's astounding that CEOs haven't made that connection yet.)

With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?

Additionally, I personally find my best ideas often happen when I'm knee-deep in some codebase, hitting some weird edge case that doesn't fit, something that would probably never come up if I were just reviewing an already-completed PR.


Replies

mprast · yesterday at 11:18 PM

It's very interesting to me how many people presume that if you don't learn how to vibecode now you'll never ever be able to catch up. If the models are constantly getting better, won't these tools be easier to use a year from now? Will model improvements not obviate all the byzantine prompting strategies we have to use today?

wavemode · yesterday at 11:33 PM

> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.

The way to adopt new science, new technology, new anything really, has always been to validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable: it's the amount you've validated will actually work for your use case, in your industry, for your product and business.

The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.

softwaredoug · yesterday at 10:57 PM

Even within AI coding, how people use this varies wildly, from people trying to one-shot apps to people who are barely above tab completers.

When people talk about this stuff they usually mean very different techniques. And last month's way of doing it goes away in favor of a new technique.

I think the best you can do right now is try lots of different new ways of working and keep an open mind.

mgraczyk · yesterday at 11:12 PM

Even if you believe that many are too far on one side now, you have to account for the fact that AI will get better rapidly. If you're not using it now, you may end up lacking preparation when it becomes more valuable.

_se · yesterday at 10:25 PM

Very reasonable take. The fact that this is being downvoted really shows how poor HN's collective critical thinking has become. Silicon Valley is cannibalizing itself and it's pretty funny to watch from the outside with a clear head.

runarberg · yesterday at 10:58 PM

This is basically Pascal’s wager. However, unlike the original Pascal’s wager, yours actually sounds sound.

Another similar wager I remember is: "What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing?"

zozbot234 · yesterday at 11:54 PM

> I think it all boils down to, which is higher risk, using AI too much, or using AI too little?

It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest to fix if the AI goes off track (it's just writing some notes in plain English), so there is a slot-machine-like intermittent reinforcement to it ("will it get everything right in one shot?"), but it's quite benign compared with trying to audit and fix slop code.