Given that some 80% of developers are now using AI in their regular work, blob-util is almost certainly the kind of thing that most developers would just happily have an LLM generate for them. Sure, you could use blob-util, but then you’d be taking on an extra dependency, with unknown performance, maintenance, and supply-chain risks.
Letting an LLM write utility code is a sword that cuts both ways. You often end up with throwaway code that is unproven and requires maintenance, and there's no guarantee that the blobutil or toString or whatever the AI generates won't fail on some edge cases (see the sketch below for a concrete example). That's why, e.g., Java has Apache Commons, which is perceived as an industry standard nowadays.

This mostly sounds like a good thing to me from a utilitarian standpoint. Getting all your utility classes from somewhere like npm and creating dependencies on 20 different people and organizations who may or may not maintain their software has been a security nightmare, with many highly public examples. If an LLM writes a utility class for me, then my supply chain is smaller, meaning less surface area to attack, plus I probably benefit from some form of security through obscurity, for whatever non-trivial amount that's worth. The "downside" is that I don't have some rando doing probably unpaid labor out there updating a piece of my app for me...
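To make the edge-case point concrete, here's a minimal sketch (hypothetical TypeScript, not taken from blob-util) of a blob-to-base64 helper in the style an LLM often produces. It looks logical and passes a quick test with a small blob, but blows up on larger inputs:

    // Hypothetical LLM-style helper, for illustration only.
    async function blobToBase64Naive(blob: Blob): Promise<string> {
      const bytes = new Uint8Array(await blob.arrayBuffer());
      // BUG: spreading every byte as a separate function argument
      // overflows the engine's argument limit, so this throws a
      // RangeError for blobs of a few hundred KB and up.
      return btoa(String.fromCharCode(...bytes));
    }

    // The non-obvious fix is to convert in chunks:
    async function blobToBase64(blob: Blob): Promise<string> {
      const bytes = new Uint8Array(await blob.arrayBuffer());
      const CHUNK = 0x8000; // stay well under the argument limit
      let binary = "";
      for (let i = 0; i < bytes.length; i += CHUNK) {
        binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
      }
      return btoa(binary);
    }

A well-used library has had this class of bug reported and fixed years ago; freshly generated code starts that clock from zero.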
Actually, this sounds like exactly what you'd expect from someone who wrote a really small utility and is surprised that a lot of people use it.
But the benefit he provided is significantly greater than he realizes or acknowledges.
It's not a new thing either: many years ago there was already a debate about whether you should trust utility code copied from SO or use an npm library. In fact, I'm 99% confident that the slew of single-function npm libraries became a thing because of that mindset.
Exactly. When you consider that blob-util is a utility library that has been in use for quite a while, by many people in many different contexts, hasn't seen many changes, and just "works", IMHO the risk of weird bugs is a lot larger with LLM-generated code. Code generated by LLMs often has the problem that it seems logical, but then contains weird bugs that aren't immediately obvious.
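As a hedged illustration of that failure mode (hypothetical TypeScript, not from any real library): a string-to-base64 helper that an LLM plausibly emits, which works for every ASCII test case and only fails when the input contains a character outside Latin-1:

    // Hypothetical "seems logical" helper, for illustration only.
    function encodeStringNaive(s: string): string {
      // Passes every ASCII test you throw at it...
      // BUG: btoa() only accepts Latin-1 input, so this throws an
      // InvalidCharacterError the first time a user's string
      // contains e.g. an emoji or CJK text.
      return btoa(s);
    }

    // The correct version encodes to UTF-8 bytes first:
    function encodeString(s: string): string {
      const bytes = new TextEncoder().encode(s); // UTF-8
      let binary = "";
      for (const b of bytes) binary += String.fromCharCode(b);
      return btoa(binary);
    }

Nothing in the naive version hints at the bug in review; it only surfaces when the right input shows up, which is exactly the kind of thing a battle-tested library has already been patched for.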