Hacker News

phyzix5761 · today at 12:59 PM

I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.

When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what the data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.


Replies

satvikpendem · today at 2:43 PM

In what I've seen, tickets are much richer in detail now because PMs are using AI (connected to the codebase itself, like Claude Code or Codex) to fill out a template covering what the problem is and why (i.e. X field exists in the backend but not the frontend), how and where to get any data (query the backend), and what acceptance criteria are needed (the frontend should expose the field, and "submit" should push the field's data to the backend, where it should show up in the database). This is something they would not have done before, I guess due to laziness and thinking the devs can figure it out. Devs can then copy-paste this Jira ticket content into the LLM agent of their choice (or even use the Atlassian MCP to have the LLM read it automatically).

This has significantly helped devs and made sure that requirements are very clear.

Honestly, with that first step the PMs are already halfway to implementing the feature, so I wonder if in the future they'll just do everything themselves and a few devs will be around as SDETs rather than full-blown implementers.

shalmanese · today at 1:17 PM

> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottle neck in software.

This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullet [1] essay, under the sections "Expert Systems" and "Automatic Programming".

In it, he lays out the core features of vibe coding and exactly the experience we are having with it now: initial success in a few carefully chosen domains, then a reasonable but not groundbreaking increase in productivity as it expands outside those domains.

[1] https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...

stelonix · today at 2:57 PM

You're completely right, and I thought this would be obvious. I never prompted anything remotely close to "make a Facebook clone". Instead, I write an explanation of how it should work. To give you an example:

  I need a python script that
  
  1) reads /etc/hosts
  2) find values of specific configured hosts (read from a .conf file), eg server1, localhost, etc
  3) it'll assign a name to those configs eg if the .conf has
  
  [Env1]
  192.168.0.1 production-read
  192.168.0.2 production-write
  192.168.0.27 amqp
  
  [Env2]
  192.168.0.101 production-read
  192.168.0.201 production-write
  192.168.1.127 amqp
  
  Basically format:
  
  [CONFIG_NAME]
  <ip> <hostname>
  
  Like a usual hosts file
  
  4) And each of those will be stored in memory
  5) if in /etc/hosts it matches one of those, it sets the "current env" as the config name
  
  6) It'll create an icon on the top-right of the Ubuntu 22 default GNOME shell
  7) that icon could be the text of the current config name, or if nothing matches, a "custom" text would show
  8) When the user clicks the "tray"/appindicator (or whatever GNOME is calling them), it'll list the config names in a simple GTK/GNOME menu
  9) When the user clicks one config, we create a backup of /etc/hosts in ~/.config/backups/ named hosts-%UNIX_TIMESTAMP%
  10) we then apply it to the hosts file (find only the lines with the hostnames to change and modify only those)
And that one-shotted a simple GNOME app-indicator env switcher. I had to fix a few lines here and there, but it mostly just worked. If you give a proper spec to the LLM, it'll do it right. You can even fake a DSL to describe what you want and it'll figure it out.
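To illustrate why a spec like that one-shots, the non-GUI core (parsing the .conf sections, detecting the current env, and the backup-then-apply step) fits in a few small functions. Here's a rough Python sketch of just that part; the paths, the function names, and the conf-text interface are my own guesses, and the GNOME AppIndicator UI is omitted:

```python
import shutil
import time
from pathlib import Path


def load_envs(conf_text):
    """Parse [SECTION] blocks of "<ip> <hostname>" lines into {env: {hostname: ip}}."""
    envs, section = {}, None
    for raw in conf_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
            envs[section] = {}
        elif section:
            ip, hostname = line.split(None, 1)
            envs[section][hostname.strip()] = ip
    return envs


def current_env(envs, hosts_text):
    """Return the env whose every (ip, hostname) pair appears in hosts_text, else "custom"."""
    present = {tuple(l.split()[:2]) for l in hosts_text.splitlines() if l.split()}
    for name, mapping in envs.items():
        if all((ip, host) in present for host, ip in mapping.items()):
            return name
    return "custom"


def switch_env(envs, name, hosts_path=Path("/etc/hosts"),
               backup_dir=Path.home() / ".config" / "backups"):
    """Back up the hosts file, then rewrite only the lines whose hostname `name` manages."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(hosts_path, backup_dir / f"hosts-{int(time.time())}")
    managed = envs[name]  # {hostname: ip}
    out = []
    for line in hosts_path.read_text().splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1] in managed:
            out.append(f"{managed[parts[1]]} {parts[1]}")  # swap in the env's ip
        else:
            out.append(line)  # leave unmanaged lines untouched
    hosts_path.write_text("\n".join(out) + "\n")
```

The point is that every function above maps directly onto a numbered step in the prompt, which is exactly the kind of decomposition a vague "make me a hosts switcher" request never provides.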
niek_pas · today at 2:18 PM

What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification. "What do you mean by 'get data'?", etc.

An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.

whstl · today at 3:14 PM

This was already a reality for years.

In several companies I have seen product managers join teams and fail to have even minor requirements ready for months during the PM's "onboarding". And then code being ready but taking months to release because DevOps is busy or QA can't find time.

The pace of release of software has been disconnected from the coding part for the longest time, and we have been quiet about it.

mike_hock · today at 3:37 PM

> But now we're realizing we need to be more exact with our requirements and define things better.

That's why we write programs in programming languages and not English. Because they are much more efficient at giving precise instructions than natural language.

rubyfan · today at 1:18 PM

We now have product owners trying to farm their work out to an LLM. The process didn't work before because the person writing the requirements put out vague or bad requirements, either because they didn't understand the business intent or because they were careless.

LLMs just take the same vague or poor requirements and make them look believable until you dig into them.

faangguyindia · today at 2:23 PM

Product people love LLMs because an LLM doesn't ask,

"What does X mean? How will it work?"

while a programmer will ask about all the cases.

cryo32 · today at 1:33 PM

It's worse: vague requirements produce only vague interpretations of the problem, and even good requirements still leave you with vague interpretations at your fingertips. The promise is that this won't be a problem in the future, which is obviously not materialising.

"Make a Facebook clone" is the vague human promise to the end user. The reality is that it leads to so many assumptions, insurmountable because of the vague interpretation, that you end up changing your requirements to claim success.

Thus everything turns into a mediocre compromise. There is no exceptional outcome, which is what makes a marketable product. There are just corpses everywhere.

You need something better to both define requirements and implement them than this technology.

steveBK123 · today at 1:13 PM

> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where its stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.

This is a big divide in HN LLM discussions. I'm from the same no-specs work background, so the idea that the humans who feed that kind of input into dev teams are suddenly going to get anything out of an LLM by typing in the same thing directly is laughable. In most orgs in my career there was no product person, and we just talked directly to end users.

For that kind of org, it will accelerate some parts of the SWE's job at different multipliers, but all the non-dev work to get there (discussions, discovery, iteration, rework, etc.) remains.

If the input to your work is a 20-page specification document accompanying multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.

paulddraper · today at 3:30 PM

So the agent needs a “plan” mode where it works with the user and asks questions to define the ask.

Foobar8568 · today at 2:02 PM

We've arrived at that state today with Codex and Claude Code. I really don't know what people are doing wrong.