I (like many on HN, I'm sure) have been continually pestered by management to use AI like it's some cure for polio. They just want to tell their VP that "my team is accelerating its use of AI!" so the VP can pass that up the food chain. Same as when we started migrating (unnecessarily, imho) to the cloud. Just another checkbox and an attaboy from senior management.
There's really not much of a place for AI in my work. We're not cutting edge; we're just a large, safe business protected by a regulatory moat. We don't want to be on the cutting edge, since the bleeding is bad for profits and reputation. But the incentives our IT execs operate under are all about resume/credential building and moving on to bigger things. Our C-level officers are not even slightly technical, so they defer to the CIO. Nothing new at all in this company; it's a story told a thousand times.
So I was just very curious how it would be to approach vibe coding as if I were my VP. You don't know what you don't know, right? And the ease of creating a simple app that would be beyond 99% of the people in my company gives way too much confidence. And with misplaced confidence comes poor decision-making.
I can see how someone who's currently an Excel jockey would benefit from some of this stuff, as long as they can compare and test the outputs. But the danger from false confidence has to be an institutional risk that's being ignored.
> And the ease of creating a simple app that would be beyond 99% of the people in my company gives way too much confidence. And with misplaced confidence comes poor decision-making.
I 100% agree. I heard from a CTO I've worked with before that a stakeholder at his company had come to him with a mock frontend that looked beautiful; it even had bits of interactivity so it could be used as if it were real. He asked who made it, because it was very nice; they said "I made it!" (Claude made it). Now the plan for the next product is to shove it into a server format so it can be used with that vibe-coded frontend, with the stakeholder now responsible for it.
Honestly, I think the above situation is as good as it can get with these stakeholder hallucinations. Ultimately, the web frontend is likely to buckle under the pressure of having almost zero technical backing behind its creation; it'll perform badly at even the most basic tasks, and a real one will likely be built instead. The key is that the person having the LLM psychosis is the one who is responsible, so when things fail, it's their fault.