Hacker News

repeekad · today at 1:54 PM · 2 replies

Because if you didn’t already know that, like an immature, deprived, and desperate kid, being able to easily find out is really, really bad…

Plenty of lazy AI apps just throw messages into history despite the known risks of context rot and lack of compaction for long chat threads. Should a company not be held liable when something goes wrong due to lazy engineering around known concerns?
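For illustration, the "just throw messages into history" pattern versus a minimal compaction step might look like this. This is a hedged sketch, not any particular app's code; the `compact` helper and its keep-last-N policy are assumptions:

```python
# Naive pattern: every turn is appended forever, so stale and possibly
# contradictory context keeps steering the model ("context rot").
history = [{"role": "system", "content": "You are a helpful assistant."}]

def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    return history

# Minimal compaction (one assumed policy among many): keep the system
# prompt plus only the most recent N messages instead of the full
# transcript. Real apps might instead summarize the dropped turns.
def compact(history, keep_last=6):
    system = [m for m in history if m["role"] == "system"]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep_last:]

# Simulate 20 exchanges without any pruning.
for i in range(20):
    add_turn(history, "user", f"question {i}")
    add_turn(history, "assistant", f"answer {i}")

trimmed = compact(history)
print(len(history), len(trimmed))  # 41 messages uncompacted vs. 7 after compaction
```

The point of the sketch is only that the naive version grows without bound while the compacted version stays at a fixed size; which turns are safe to drop or summarize is exactly the engineering judgment being argued about here.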


Replies

morpheuskafka · today at 2:13 PM

> to lazy engineering around known concerns?

That implies that it is already illegal to provide this information. But is it? If a human did so with intent to further a crime, it would be conspiracy. But if you were discussing it without such intent (e.g. red-teaming, or creating scenarios with someone working in chemistry or law enforcement), it isn't. An AI has no intent when it answers questions, so it is not clear how it could count as conspiracy. Calling it "lazy engineering" implies that there was a duty to prevent that info from being released in the first place.

thegrimmest · today at 2:04 PM

No, because that would require some sort of regulatory standard for what does and does not constitute "lazy engineering". Creating this standard in turn creates regulatory/compliance overhead for every software engineering organization. This slows everything right down and destroys the startup ethos. "Move fast and break things" is a thing for a reason. The whole point of the free market is to avoid this kind of burdensome regulation at all costs.

If customers want to buy "lazily-engineered" products, from where do you derive the authority to tell them they can't?
