Hacker News

dummydummy1234 · last Saturday at 9:50 PM

I guess a counter is that we don't need to understand how they work to produce useful output.

They are a black-box magic 8-ball that more likely than not gives you the right answer. Maybe people can explain the black box and make the magic 8-ball more accurate.

But at the end of the day, a sufficiently complex system will always be, at some level, an unreliable black-box magic 8-ball.

So the question then is how you build a reliable system from unreliable components, because LLMs used directly are unreliable.

The answer to this is agents, i.e. feedback loops between multiple LLM calls, each of which is unreliable in isolation but which in aggregate approach reliability.
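A minimal sketch of that aggregation argument (all names here are hypothetical; the "LLM call" is simulated with a weighted coin flip, and the whole pattern assumes you have a cheap, reliable verifier for each answer):

```python
import random

def unreliable_call(p_correct=0.7):
    """Stand-in for a single LLM call that is right only p_correct of the time.
    (Simulated -- a real agent would call a model API and verify the output.)"""
    return random.random() < p_correct

def agent_loop(max_attempts=5, p_correct=0.7):
    """Retry until the verifier accepts an output or attempts run out.
    With independent attempts, P(all fail) = (1 - p)**k, so failure
    probability shrinks geometrically with the number of attempts."""
    for attempt in range(1, max_attempts + 1):
        if unreliable_call(p_correct):
            return attempt  # verifier accepted this attempt
    return None  # every attempt failed

random.seed(0)
trials = 10_000
successes = sum(agent_loop() is not None for _ in range(trials))
# For p=0.7, k=5: expected success rate is 1 - 0.3**5, about 0.9976.
print(f"observed success rate: {successes / trials:.4f}")
```

The catch, of course, is the assumption baked into `agent_loop`: the loop only converges if checking an answer is more reliable than producing one.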

At the end of the day, the bet on agents is a bet that the model companies will not produce a model that is magically 100% correct on the first try.


Replies

drillsteps5 · last Saturday at 11:01 PM

THAT. This is what I don't get. Instead of fixing a complex system, let's build an even more complex system on top of it, knowing that it might not always work.

When you have a complex system that does not always work correctly, you disassemble it into simpler and simpler components until you find the one, or maybe several, that are not working as designed; you fix whatever you found wrong with them, put the complex system back together, test it to make sure your fix worked, and you're done. That's how I debug complex cloud-based/microservices-infected software systems, and that's how software/hardware systems in aircraft, rockets, and whatever else are tested. That's such a fundamental principle to me.

If an LLM is a black box by definition and there's no way to make it consistently work correctly, what is it good for?
