Hacker News

fao_ · today at 6:19 AM

If software engineering wants to progress past being an "art" and be considered an engineering discipline, then it should adopt methods and practices from engineering. First and foremost among those is root-cause analysis of faults, paired with redundancy to prevent recurrence. e.g. the FAA requires two pilots for airliners, and each system is built redundantly, so that if an engineer misses a bolt or rivet the plane won't crash. Intersections are designed so that there is a forcing function[0] on the behaviour of motorists to prevent faults. Or, to take your tool analogy, nail guns are designed so they must be pressed against a surface with a decent amount of pressure before they will fire.

All of these systems are designed around the core idea that a human acting irrationally or improperly is not at fault, and, furthermore, that a human can have a bad day and still avoid a mistake. They all steer people around a possible fault. Hell, the very division of the road into lanes is itself a forcing function to avoid traffic collisions!

So, where is the forcing function in large language models? What part of a large language model prevents gross misuse by laymen?

I can think of a few examples. OpenAI had to add guard rails to stop people from poisoning themselves with botulism and boron, etc. But the problem is that an LLM is probabilistic, so there's really no guarantee those guard rails will hold. I seem to remember a paper from a few months back, posted here, showing that AI guardrails cannot be proven to work consistently. In that light, LLMs cannot be considered "safe" or "reliable" enough for use. Eddie Burback has a very, very good video showing an absolute worst-case result of this[1], posted here last year. And off the top of my head, Angela Collier has a really, really good video demonstrating that there's an absolute plethora of people who have succumbed, in large ways or small, to the bullshit AI can spew[2].

I feel like if most developers were actually serious about being an engineering discipline, as we claim, then we wouldn't all have jumped on the LLM bandwagon until it had been properly tested and reached a certain level of reliability. Instead, a sizable chunk of people say they've stopped coding by hand entirely, and aren't even reviewing the code! i.e. They've thrown out a forcing function that existed to prevent erroneous PRs being committed! And for some bizarre reason, after about two decades of people talking about type safety and how we need formal verification to reduce error, everyone seems to be throwing "reduction of error" out the window!
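That type-safety point is itself a forcing function when applied to code: design the API so the unsafe path is unrepresentable, rather than merely discouraged. A minimal, hypothetical Python sketch (all names here are invented for illustration, not from any real codebase):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ReviewedPatch:
    """Proof-of-review token: the only way to obtain one is via review()."""
    diff: str
    reviewer: str


def review(diff: str, reviewer: str) -> ReviewedPatch:
    """Gate: a patch only becomes a ReviewedPatch if someone signs off."""
    if not reviewer:
        raise ValueError("a patch needs a named reviewer")
    return ReviewedPatch(diff=diff, reviewer=reviewer)


def merge(patch: ReviewedPatch) -> str:
    # The type is the forcing function: merge() accepts no raw,
    # unreviewed string, so "skipping review" is a type error
    # rather than a policy violation someone has to catch.
    return f"merged {len(patch.diff)} bytes, signed off by {patch.reviewer}"


print(merge(review("fix: off-by-one in pager", "alice")))
```

A static checker (mypy here, or the compiler in a typed language) rejects `merge("raw diff")` outright, which mirrors the nail-gun interlock: the tool won't fire unless the precondition is actually satisfied.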

[0]: https://en.wikipedia.org/wiki/Behavior-shaping_constraint (if you're curious about the term)

[1]: https://www.youtube.com/watch?v=VRjgNgJms3Q

[2]: https://www.youtube.com/watch?v=7pqF90rstZQ


Replies

anon7000 · today at 7:02 AM

> I feel like if most developers were actually serious about being an engineering discipline, like we claim, then we wouldn't have all jumped on the LLM bandwagon until they'd been properly tested and had a certain level of reliability

Development can’t be a “serious” engineering discipline because the economics of tech companies don’t allow for it. But this has a lot less to do with developers, and significantly more to do with the severe pressure company executives are putting on everyone to use AI, no matter what.

But let’s be honest: many companies have adopted practices like root cause analysis and blameless postmortems to improve infrastructure reliability and reduce incidents. Making systems resilient to human mistakes, making it impossible for a typo to blow up a database, etc. are considered best practices at most places I’ve worked. On the product side, I think it’s absolutely normal to make it hard for a user to take an action that would seriously mess up their account.
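The “a typo can’t blow up a database” practice is often implemented as exactly the kind of forcing function the parent describes: the destructive call doesn’t accept a bare name, it requires a confirmation the operator must re-type (the same pattern GitHub uses for repository deletion). A hypothetical Python sketch, with all names invented for illustration:

```python
class DropConfirmation:
    """Can only be constructed by re-typing the exact database name."""

    def __init__(self, db_name: str, typed_name: str):
        if typed_name != db_name:
            # A typo fails here, loudly, before anything is touched.
            raise ValueError(f"expected {db_name!r}, got {typed_name!r}")
        self.db_name = db_name


def drop_database(confirm: DropConfirmation) -> str:
    # No variant takes a plain string, so a fat-fingered
    # drop_database("prod") can't even be expressed.
    return f"dropped {confirm.db_name}"


print(drop_database(DropConfirmation("prod_users", "prod_users")))
```

The design choice is the same as the nail gun’s contact tip: the dangerous action requires a deliberate, hard-to-mistype precondition instead of trusting the operator to be careful.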

The core problem arises when your product idea (say, social media) has vast negative externalities which the company isn’t forced to deal with economically. In other engineering disciplines, by contrast, many things are genuinely safety-related and can get you sued. Pretty much anything a structural engineer or electrical engineer works on could seriously hurt or kill someone if a bad enough mistake were made.

That just doesn’t apply to most software. There is a lot of “life & death” software, but it’s niche. The reality is that 90% of what the tech industry works on is not capable of physically harming humans, and it’s not really possible to sue over the potential negative consequences of… a dev tooling startup? It’s a very, very different industry from the ones those other engineering disciplines work in.

But software engineering has actually been extremely successful at minimizing risk from software defects. The worst software-level mistake I’m likely to make would… crash my own program. It probably wouldn’t even crash the operating system, since the program is isolated. That lack of trust in what other people might do is codified everywhere in software: on an iPhone, I’m downloading apps written by tens of thousands of other engineers, at essentially no risk to myself at all.