Hacker News

Aurornis · today at 5:22 PM · 5 replies

> And there was something else: most early startups need to pivot, changing direction as they learn more about what the market wants and what is technically possible. By lowering the costs of pivoting, it was much easier to explore the possibilities without being locked in or even explore multiple startups at once: you just tell the AI what you want.

In my experience so far, AI prototyping has been a powerful force for breaking analysis paralysis.

In the last 10 years of my career, slow execution speed at different companies wasn't due to slow code writing. It was due to management excess: trying to drive consensus and de-risk ideas before developers were even allowed to write the code. Let's circle back and drive consensus in a weekly meeting with the stakeholders to get alignment on the KPIs for the design doc that goes through the approval and sign-off process first.

Developers would then read the resulting ream of documents and realize that perfection was expected of their output, too, so development processes grew long and careful to avoid accidents. I landed on a couple of teams where even small changes required meetings to discuss them, multiple rounds of review, and a lot of grandstanding before we were allowed to proceed.

Then AI comes along and makes it cheap to prototype something. If it breaks or it's the wrong thing, nobody feels like they're in trouble because we all agree it was a prototype and the AI wrote it. We can cycle through prototypes faster because it's happening outside of this messy human reputation-review-grandstanding loop that has become the norm.

Instead of months of meetings, we can have an LLM generate a UI and a backend with fake data and say "This is what I want to build, and this is what it will do". It's a hundred times more efficient than trying to describe it to a dozen people in 1-hour timeslots in between all of their other meetings for 12 weeks in a row.
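The fake-data backend in that kind of prototype can be tiny. Here's a minimal sketch of what one might look like, using only the Python standard library; the `/orders` endpoint, the port, and the record fields are all invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hard-coded fake data standing in for a real database.
FAKE_ORDERS = [
    {"id": 1, "customer": "Acme", "total": 120.50},
    {"id": 2, "customer": "Globex", "total": 87.00},
]

class PrototypeHandler(BaseHTTPRequestHandler):
    """Serves canned JSON so a generated UI has something real-looking to render."""

    def do_GET(self):
        if self.path == "/orders":
            body = json.dumps(FAKE_ORDERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# To run the prototype locally:
# HTTPServer(("localhost", 8000), PrototypeHandler).serve_forever()
```

Nothing here is production code, and that's the point: it exists only so stakeholders can click around and say "yes, build this" or "no, not that".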

The dark side of this same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong. You have to draw a very clear line between AI-driven prototyping and developer-driven code that developers must own. I think this article misses the mark on that by framing everything as a decision to DIY or delegate to AI. The real AI-assisted successes I see have developers driving with AI as an assistant on the side, not the other way around. I could see how an MBA class could come to believe that AI is going to do the jobs instead of developers, though, as it's easy to look at these rapid LLM prototypes and think that production ready code is just a few prompts away.


Replies

chunky1994 · today at 5:56 PM

> The dark side of this same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong. You have to draw a very clear line between AI-driven prototyping and developer-driven code that developers must own. I think this article misses the mark on that by framing everything as a decision to DIY or delegate to AI. The real AI-assisted successes I see have developers driving with AI as an assistant on the side, not the other way around. I could see how an MBA class could come to believe that AI is going to do the jobs instead of developers, though, as it's easy to look at these rapid LLM prototypes and think that production ready code is just a few prompts away.

This is what's missing in most teams. There's a bright line between throwaway almost fully vibe-coded, cursorily architected features on a product and designing a scalable production-ready product and building it. I don't need a mental model of how to build a prototype, I absolutely need one for something I'm putting in production that is expected to scale, and where failures are acceptable but failure modes need to be known.

Almost everyone misses this, whether they go whole hog on AI or whole hog on avoiding it.

Once I build a good mental model of how my service should work and design it properly, all the scaffolding is much easier to outsource. That's a speed-up, but I still own the code, because I know what everything does and my changes to the product are well thought out. For throwaway prototypes it's 5x this output, because the hard part of actually thinking the problem through doesn't really matter; it's just about getting everyone to agree on one direction of output.

Exoristos · today at 5:25 PM

> In my experience so far, AI prototyping has been a powerful force for breaking analysis paralysis.

So is an 8-ball.

ryandrake · today at 5:59 PM

Most places I've worked, the "slow execution speed" wasn't because it took a long time to physically write the code; it took a long time to get through all those other analysis-paralysis things you mentioned: building consensus among multiple ImportantPeople who were all expected to demonstrate "impact", agonizing over risks (perceived and real), begging VPs/leadership for their "buy-in", informing and receiving feedback from other vague "stakeholders", and so on. The software writing itself was never the bottleneck, and could be prototyped in 1/10th to 1/100th of the time it took to actually make the decision to write it.

alexhans · today at 5:53 PM

I haven't had the analysis-paralysis problem, because I've always been quite decent at restructuring environments to avoid bureaucracy (which can be one of the most dangerous things for a project). But one thing I've observed is that if operations are not ZeroOps, then whoever is stuck maintaining systems will suffer by not being able to deliver the "value-adding cool features that drive careers".

Since shipping prototypes doesn't actually create value unless they're in some form of production environment where they effect change, either they work and are ZeroOps, or they break and someone has to operate them and be accountable for them.

This means that at some point, your thesis of

"The dark side of this same coin is when teams try to rely on the AI to write the real code, too, and then blame the AI when something goes wrong"

won't really play out that way: whoever is accountable will get both the blame and the operations work.

The same principles for building software that we've always had apply more than ever to AI-related things.

Easy to change, reusable, composable, testable.

Prototypes need to be thrown away. Otherwise they're tracer bullets, and you don't want tech debt in your tracer bullets unless your approach is to throw them to someone else and make them their problem.

-----

Creating a startup or any code from scratch in a way where you never have to maintain it and face the consequences of your unsustainable approaches (tech debt/bad design/excessive cost) is easy. You hide the hardest part. It's easy to do things that look good on the surface if you can't see how they will break.

The blog post is interesting but, unless I've missed something, it does gloss over the accountability aspect. If you can delegate accountability you don't worry about evals-first design, you can push harder on dates because you're not working backwards from the actual building and design and its blockers.

Evals (think promptfoo) for evals-first design will be key for any builder who is accountable for the decisions of their agents (automation).

I need to turn it into a small blog post, but the points of my talk (https://alexhans.github.io/talks/airflow-summit/toward-a-sha...)

- We can't compare what we can't measure

- Can I trust this to run on its own?

are crucial for a live system that makes critical decisions. If you don't have this, you're just using the --yolo flag.
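To make "evals-first" concrete, here's a minimal sketch of what such a gate could look like in plain Python; `stub_agent`, the questions, the pass predicates, and the threshold are all invented for illustration (a real setup would use a framework like promptfoo and call an actual agent):

```python
# Evals-first sketch: the eval suite is the gate for letting an agent run alone.
# `stub_agent` is a stand-in; in practice this would call the real agent/LLM.

def stub_agent(question: str) -> str:
    canned = {
        "Should we retry the failed DAG run?": "yes, retry once with backoff",
        "Can we delete the production table?": "no, escalate to a human",
    }
    return canned.get(question, "unsure")

# Each case pairs an input with a predicate the answer must satisfy.
EVAL_CASES = [
    ("Should we retry the failed DAG run?", lambda a: "retry" in a),
    ("Can we delete the production table?", lambda a: "no" in a or "escalate" in a),
]

def run_evals(agent) -> float:
    """Return the fraction of eval cases the agent passes."""
    passed = sum(1 for question, check in EVAL_CASES if check(agent(question)))
    return passed / len(EVAL_CASES)

def trusted_to_run_alone(agent, threshold: float = 1.0) -> bool:
    # "Can I trust this to run on its own?" becomes a measurable question.
    return run_evals(agent) >= threshold
```

The point isn't the toy assertions; it's that autonomy is granted by a measured score rather than a gut feeling, which is the opposite of the --yolo flag.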
