Hacker News

Trufa · yesterday at 7:35 PM

I wonder how many inherently unsolvable problems have been fixed before.


Replies

jesse_dot_id · yesterday at 8:05 PM

This problem is inherently unsolvable because LLMs are prone to hallucinations and prompt injection attacks. I think you're insinuating that these things can be fixed, but to my knowledge both problems are practically unsolvable. If that turns out to be false, then once they are solved, fully autonomous AI agents may become feasible. But because these problems are unsolved right now, anyone who grants autonomous agents access to anything of value in their digital life is making a grave miscalculation. There is no short-term benefit that justifies their use when the destruction of your digital life — of whatever you're granting these things access to — is an inevitability that anyone with critical thinking skills can see coming.

j16sdiz · yesterday at 7:40 PM

Humans make errors too, but we hold them liable for many of the mistakes they make.

Can we make the agent liable? Or the company behind the model?

jrflowers · yesterday at 7:46 PM

There are a ton if you count "don't use the thing that causes the problem" as a solution.