Hacker News

AI should only run as fast as we can catch up

121 points by yuedongze · yesterday at 5:38 PM · 115 comments

Comments

yuedongze · yesterday at 10:36 PM

It's nice to see a wide array of discussion under this! Glad I didn't give up on this thought and ended up writing it down.

I want to stress that the main point of my article is not really about AI coding; it's about letting AI perform arbitrary tasks reliably. Coding is an interesting case because it's a place where we can exploit structure, abstraction, and approaches like TDD to make verification simpler - it's like spot-checking in places with a very low soundness error.

I'm encouraging people to look for tasks other than coding to see if we can find similar patterns. The more of these cost asymmetries we can find (where verifying is easier than doing), the more we can harness AI's real potential.
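To make the asymmetry concrete, here's a minimal sketch (the function names and numbers are just illustrative): verifying a sorted output takes a single linear pass, while producing a correct sort is the harder task.

```python
from collections import Counter
import random

def verify_sort(original, candidate):
    # Cheap check: same multiset of elements, and non-decreasing order.
    # Each check is a single pass - far cheaper than writing the sort.
    return (Counter(original) == Counter(candidate)
            and all(a <= b for a, b in zip(candidate, candidate[1:])))

def untrusted_sort(xs):
    # Stand-in for AI-generated code we haven't read line by line.
    return sorted(xs)

# Spot-check on random inputs: a buggy sort is overwhelmingly likely
# to fail at least one of these checks (low soundness error).
for _ in range(100):
    xs = [random.randint(-1000, 1000) for _ in range(50)]
    assert verify_sort(xs, untrusted_sort(xs)), "generated sort is wrong"
```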

show 2 replies
blauditore · yesterday at 6:26 PM

All these engineers who claim to write most of their code through AI - I wonder what kind of codebase that is. I keep trying, but it always ends up producing superficially okay-looking code that gets the nuances wrong. It also fails to fix them (it just changes random stuff) when pointed at said nuances.

I work on a large product with two decades of accumulated legacy; maybe that's the problem. I can see, though, how generating and editing a simple greenfield web frontend project could work much better, as long as the actual complexity is low.

show 14 replies
gradus_ad · yesterday at 5:51 PM

The proliferation of nondeterministically generated code is here to stay. Part of our response must be more dynamic, more comprehensive, and more realistic workload simulation and testing frameworks.

show 5 replies
darylteo · today at 3:41 AM

AI: urgh, sick of these escort missions

Yoric · today at 12:07 AM

> Maybe our future is like the one depicted in Severance - we look at computer screens with wiggly numbers and whatever “feels right” is the right thing to do. We can harvest these effortless low latency “feelings” that nature gives us to make AI do more powerful work.

Come to think of it... isn't this exactly what syntax coloring and proper indentation are all about? The ability to quickly pattern-spot errors, or at least smells, based on nothing but aesthetics?

I'm sure that there is more research to be done in this direction.

show 1 reply
pglevy · today at 12:34 AM

I've been thinking about something like this from a UI perspective. I'm a UX designer working on a product with a fairly legacy codebase. We're vibe coding prototypes and moving towards making it easier for devs to bring in new components. We have a hard enough time verifying UI quality as it is, and having more devs vibing on frontend code is probably going to make it a lot worse. I'm thinking about something like having agents regularly traverse the code to identify non-approved components (and either fix or flag them). Maybe with this we won't fall even further behind on verification debt than we already are.
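As a rough illustration of that idea (the allow-list and file layout here are hypothetical), the agent's first pass could be as simple as scanning for JSX component names that aren't in the design system:

```python
import re
from pathlib import Path

# Hypothetical allow-list; in practice this would come from the design system.
APPROVED_COMPONENTS = {"Button", "Card", "Modal", "TextField"}

# Matches JSX-style usage: an opening tag that starts with a capital letter.
COMPONENT_RE = re.compile(r"<([A-Z][A-Za-z0-9]*)")

def flag_unapproved(root):
    findings = []
    for path in Path(root).rglob("*.tsx"):
        source = path.read_text(encoding="utf-8", errors="ignore")
        for match in COMPONENT_RE.finditer(source):
            name = match.group(1)
            if name not in APPROVED_COMPONENTS:
                findings.append((str(path), name))
    return findings

if __name__ == "__main__":
    for path, name in flag_unapproved("src"):
        print(f"{path}: non-approved component <{name}>")
```

A real version would parse the AST instead of regex-matching, but even this catches the obvious drift.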

show 1 reply
kristjank · yesterday at 9:41 PM

This feeling of verification >> generation anxiety bears a resemblance to that moment when you're learning a foreign language, you speak a well-prepared sentence, and your correspondent says something back, of which you only understand about a third.

In like fashion, when I start thinking of a programming statement (as a bad/rookie programmer) and an assistant completes my train of thought (as is default behaviour in VS Code for example), I get that same feeling that I did not grasp half the stuff I should've, but nevertheless I hit Ctrl-Return because it looks about right to me.

show 1 reply
adxl · today at 1:14 AM

I remember a junior dev who thought he was done when his code compiled without syntax errors.

yannyu · yesterday at 6:19 PM

I think there's a lot of utility to current AI tools, but it's also clear we're in a very unsettled phase of this technology. We likely won't know for years where the technology will land in terms of capability, or what changes society and industry will make to accommodate it.

Somewhat unfortunately, the sheer amount of money being poured into AI means that it's being forced upon many of us, even if we didn't want it. This results in the stark, vast gap the author describes, where things are moving so fast that it can feel like we may never have time to catch up.

And what's even worse, because of this, industry and individuals are now trying to have the tool correct and moderate itself, which intuitively seems wrong from both a technical and a societal standpoint.

trjordan · yesterday at 7:01 PM

The verification asymmetry framing is good, but I think it undersells the organizational piece.

Daniel works because someone built the regime he operates in. Platform teams standardized the patterns, defined what "correct" looks like, built test infrastructure that makes spot-checking meaningful, and so on... and that's not free.

Product teams are about to pour a lot more slop into your codebase. That's good! Shipping fast and messy is how products get built. But someone has to build the container that makes slop safe, and have levers to tighten things when context changes.

The hard part is you don't know ahead of time which slop will hurt you. Nobody cares if product teams use deprecated React patterns. Until you're doing a migration and those patterns are blocking 200 files. Then you care a lot.

You (or rather, platform teams) need a way to say "this matters now" and make it real. There's a lot of verification that's broadly true everywhere, but there are also a lot of company-scoped or even team-scoped definitions of "correct."
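One hedged sketch of what that lever could look like (the rule names and severities are invented): a policy file the platform team owns, where flipping "warn" to "error" is how you say "this matters now."

```python
# Hypothetical org-level policy: the platform team flips a rule from
# "warn" to "error" when context changes (e.g., a migration starts).
POLICY = {
    "deprecated-react-lifecycle": "error",  # migration in flight: block merges
    "inline-styles": "warn",                # tolerated slop, for now
}

def gate(findings):
    """findings: iterable of (rule, location) pairs from any scanner."""
    failed = False
    for rule, location in findings:
        severity = POLICY.get(rule, "warn")
        print(f"[{severity}] {rule} at {location}")
        failed = failed or severity == "error"
    return 1 if failed else 0  # nonzero exit code fails CI

if __name__ == "__main__":
    findings = [("deprecated-react-lifecycle", "src/Legacy.jsx:42")]
    raise SystemExit(gate(findings))
```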

(Disclosure: we're working on this at tern.sh, with migrations as the forcing function. There are a lot of surprises in migrations, so we're starting there, but eventually this notion of "organizational validation" is a big piece of what we're driving at.)

jascha_eng · yesterday at 6:31 PM

Verification is key, and the issue is that almost all AI-generated code looks plausible, so just reading the code is usually not enough. You need to build extremely good testing systems and actually run through the scenarios you want to ensure work in order to be confident in the results. This can be preview deployments, other AI-generated end-to-end tests that produce video output you can watch, or just a very good test suite with guard rails.

Without such automation and guard rails, AI-generated code eventually becomes a burden on your team, because you simply can't manually verify every scenario.
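For instance, a scenario-level guard rail might look like this (a sketch only - the preview URL and the /checkout endpoint are hypothetical): the tests exercise behavior against a preview deployment instead of trusting how the code reads.

```python
import requests

# Hypothetical preview deployment spun up for the change under review.
PREVIEW_URL = "https://preview-1234.example.com"

def test_checkout_rejects_empty_cart():
    resp = requests.post(f"{PREVIEW_URL}/checkout", json={"items": []})
    assert resp.status_code == 400  # guard rail on behavior, not code style

def test_checkout_totals_valid_cart():
    resp = requests.post(f"{PREVIEW_URL}/checkout",
                         json={"items": [{"sku": "ABC", "qty": 1}]})
    assert resp.status_code == 200
    assert resp.json()["total"] > 0
```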

show 4 replies
diddid · today at 12:28 AM

AI can really only be as good as the data it's trained on. It's good at images because it's trained on billions of them. Lines of code? Probably hundreds of millions, but as you combine that code into concepts, split by language, framework, formatting, etc., you lose the numbers game. It can't tell you how to make a good enterprise app because almost nobody knows how to make a good enterprise app - just ask Oracle… ba-da-bum!

ambicapter · today at 12:59 AM

So, we're giving up on the Singularity, then?...Good.

wasmainiac · yesterday at 8:42 PM

It’s called TDD: ya write a bunch of little tests to make sure your code is doing what it needs to do and not what it shouldn’t. In short, little blocks of easily verifiable code to verify your code.

But seriously, what is this article even? It feels like we are reinventing the wheel, or maybe it's just humble AI hype?

awesome_dude · yesterday at 8:52 PM

It's like a buffered queue: if the producer (the AI) is too fast for the consumer (the dev's brain), then the producer needs to block/stop/slow down, otherwise data will be lost (in this analogy, the data loss is the consumer no longer having a clear understanding of what the code is doing).
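The analogy maps directly onto a bounded queue with blocking puts - a quick sketch (the buffer size and timings are arbitrary):

```python
import queue
import threading
import time

# A bounded queue forces the fast producer (the AI) to block until the
# slow consumer (the dev's brain) catches up - backpressure, not data loss.
review_queue = queue.Queue(maxsize=3)

def producer():
    for i in range(10):
        review_queue.put(f"generated change {i}")  # blocks when buffer is full
    review_queue.put(None)  # sentinel: no more work

def consumer():
    while (change := review_queue.get()) is not None:
        time.sleep(0.5)  # human review is the slow step
        print(f"verified: {change}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```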

One day, when AI becomes reliable (which is still a while off, because AI doesn't yet understand what it's doing), the AI will replace the consumer (IMO).

FTR - AI is still at the "text matches another pattern of text" stage, not the "understand what concepts are being conveyed" stage, as demonstrated by AI's failure to do basic arithmetic.

CGMthrowaway · yesterday at 5:53 PM

> AI should only run as fast as we can catch up

Good principle. This is exactly why we research vaccines and bioweapons side by side in the labs, for example.

rogerkirkness · yesterday at 5:39 PM

Appealing, but this is coming from someone smart and thoughtful. No offence to the 'rest of the world', but I think most people have felt this way for years. And realistically, in a year there won't be any people who can keep up.

show 3 replies
cons0le · yesterday at 6:20 PM

I directly asked Gemini how to achieve world peace. It said the world should prioritize addressing climate change, inequality, and discrimination. Yeah - we're not gonna do any of that shit. So I don't know what the point of "superintelligent" AI is if we aren't even going to listen to it on the basic big-picture stuff. Any sort of "utopia" that people imagine AI bringing is doomed to fail, because we already can't cooperate without AI.

show 7 replies