Hacker News

aledevv · today at 9:56 AM

> All of these features are about breaking the coupling between a human sitting at a terminal or chat window and interacting turn-by-turn with the agent.

This means:

- less and less "man-in-the-loop"

- less and less interaction between LLMs and humans

- more and more automation

- more and more decision-making autonomy for agents

- more and more risk (i.e., LLMs' responsibility)

- less and less human responsibility

Problem:

Tasks that require continuous iteration and shared decision-making with humans face two options:

- either they stall until human input

- or they decide autonomously at our risk

Unfortunately, automation comes at a cost: RISK.


Replies

dist-epoch · today at 10:16 AM

AI-driven cars have better risk profiles than humans.

Why do you think the same won't also be true for AI steerers/managers/CEOs?

In a year or two, having a human in the loop, with all of their biases and inconsistencies, will be considered risky and irresponsible.
