> All "access control" logic lived in the JavaScript on the client side, meaning the data was literally one command away from anyone who looked
This is a typical example of someone using coding agents without being a developer: AI applied without understanding what it's doing can be a huge risk.
AI used for professional purposes (not experiments) should NOT be used haphazardly.
It also opens up a serious liability issue: the developer perceives themselves as exempt from responsibility, and that creates enormous risk for the business.
The problem isn't AI; the problem is the lack of an intelligent person anywhere in this whole situation. Long before AI, I saw a medical company build a service where the frontend told the backend which SQL queries to execute.
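To make the anti-pattern concrete, here's a minimal sketch (hypothetical table and handler names, using Python and sqlite3 for illustration) of why a backend that executes client-supplied SQL has effectively no access control, and what the safer shape looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT, ssn TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice', '123-45-6789')")

# Anti-pattern: the backend runs whatever SQL the frontend sends.
def handle_request_unsafe(sql_from_client: str):
    return conn.execute(sql_from_client).fetchall()

# A well-behaved frontend asks for one field...
handle_request_unsafe("SELECT name FROM patients WHERE id = 1")
# ...but any client can just as easily ask for everything:
leaked = handle_request_unsafe("SELECT ssn FROM patients")

# Safer shape: the backend owns the query text; the client supplies only
# data, bound as a parameter, and authorization is checked server-side.
def handle_request_safe(patient_id: int, authorized: bool):
    if not authorized:
        raise PermissionError("access denied")
    return conn.execute(
        "SELECT name FROM patients WHERE id = ?", (patient_id,)
    ).fetchall()
```

The point is the same as with client-side "access control": whatever the UI politely asks for is irrelevant, because the wire protocol lets anyone ask for anything.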
Also, it's the wrong tool for this kind of work.
Claude, opencode, etc. are brute-force coding harnesses that literally use bash tools plus a whole bunch of vague prompting (skills, AGENT.md, MCP and all that stuff) to nudge them probabilistically toward desirable behavior.
Without engineering specialized harnesses that control workflows and validate output, this issue won't go away.
We're in the wild west phase of LLM usage, where problems emerge that shouldn't exist in the first place and are being solved at entirely the wrong layer (outside the harness) or with entirely the wrong tools (prompts).
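To illustrate what "validate output inside the harness" could mean, here's a minimal sketch (all names hypothetical; the validation rules are illustrative, not a real product's checks) of a deterministic gate that model output must pass before the harness accepts it, as opposed to hoping a prompt nudges the model into compliance:

```python
import ast

def validate_patch(code: str) -> list[str]:
    """Deterministic checks applied to model output before acceptance."""
    problems = []
    # 1. Structural check: the output must at least parse as Python.
    try:
        tree = ast.parse(code)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    # 2. Policy check: forbid obviously dangerous calls (illustrative rule).
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in {"eval", "exec"}):
            problems.append(f"forbidden call: {node.func.id}")
    return problems

def harness_step(generate, max_attempts: int = 3) -> str:
    """Re-generate until the output passes validation, within a bound."""
    for _ in range(max_attempts):
        candidate = generate()
        if not validate_patch(candidate):
            return candidate
    raise RuntimeError("model never produced valid output")
```

The key design point: the checks run on the model's output after generation, in ordinary code the harness author controls, so "desirable behavior" is enforced rather than probabilistically encouraged.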