> A sufficiently detailed spec is code
This is exactly the argument in Brooks' No Silver Bullet. I still believe that it holds. However, my observation is that many people don't really need that level of detail. When one prompts an AI to "write me a to-do list app", what they really mean is "write me a to-do list app that is better than what I have imagined so far", which does not really require a detailed spec.
> When one prompts an AI to "write me a to-do list app", what they really mean is "write me a to-do list app that is better than what I have imagined so far", which does not really require a detailed spec.
If someone were making a serious request for a to-do list app, they presumably want it to do something different from, or better than, the dozens of to-do list apps that are already out there. Which would require them to somehow explain what that something was, assuming it's even possible.
Not entirely.
For some problems, it is. Web front-end development, for example. If you specify what everything has to look like and what it does, that's close to code.
But there are classes of problems where the thing is easy to specify, but hard to do correctly, or fast, or reliably. Much low-level software is like that: databases, file systems, even operating system kernels. Networking up to the transport layer. Garbage collection. Eventually-consistent systems. Parallel computation getting the same answer as serial computation. Those problems yield, with difficulty, to machine-checked formalism.
In those areas, systems that pair AI code generation with machine-checked proofs, letting the AI iterate until its code passes, have potential.
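The "easy to specify, hard to do" gap can be sketched in a few lines. This is an illustrative example (the function names are mine, not from the thread): the complete behavioural spec for sorting fits in one short predicate, while a correct, let alone fast, implementation is the harder artifact, and the spec is what a checker would hold the implementation to.

```python
from collections import Counter

def meets_sort_spec(inp, out):
    """The entire spec: output is ordered and is a permutation of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    same_elements = Counter(inp) == Counter(out)  # multiset equality
    return ordered and same_elements

def insertion_sort(xs):
    """One candidate implementation. Getting this correct (and, for real
    workloads, fast) is where the actual engineering effort lives."""
    result = []
    for x in xs:
        i = len(result)
        while i > 0 and result[i - 1] > x:
            i -= 1
        result.insert(i, x)
    return result
```

A property-based tester or a proof assistant would check every implementation against `meets_sort_spec`; the spec stays three lines no matter how sophisticated the sort becomes.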
I wouldn't say this is the core argument of No Silver Bullet. I wrote a short review of Brooks' paper with respect to today's AI promises, for whoever is interested in more details:
Everyone has at least heard stories of clients who just want that button 5px to the right or to the left, and at the next meeting want it in the bottom corner, even though it makes no functional difference.
But most of the time it's not that they want it for objective technical reasons.
They want it because they want to see if they can push you. They do it "because they can". They do it because later they can renegotiate, or just nag and maybe pay less. Multiple reasons that are not technical.
But if you’re selling that to-do list app, then the rules are different, and that spec is required.
I guess it depends on whether or not we want to make money or otherwise compete against others.
In this case a chatbot is also unlikely to succeed in pleasing the user—and how could it?
Yes. This happens because the training data contains countless state-of-the-art "to-do" apps. This argument does not scale well to other types of software.