
mentalgear · yesterday at 9:24 PM

> I had already been thoughtful about what I publicly post under my real name, had removed my personal information from online data brokers, frozen my credit reports, and practiced good digital security hygiene. I had the time, expertise, and wherewithal to spend hours that same day drafting my first blog post in order to establish a strong counter-narrative, in the hopes that I could smother the reputational poisoning with the truth.

This is terrible news not only for open source maintainers, but for any journalist, activist, or person that dares to speak out against powerful entities that, within the next few months, will have enough LLM capability, along with their resources, to astroturf/mob any dissident out of the digital space - or worse (rent-a-human, but on the dark web).

We need laws for agents, specifically that their human maintainers must be identifiable and held responsible. It's not something I like from a privacy perspective, but I do not see how society can overcome this without it. Unless we collectively decide to switch the internet off.


Replies

crystal_revenge · yesterday at 9:41 PM

> We need laws for agents

I know politics is forbidden on HN, but, as non-politically as possible: institutional power has been collapsing across the board (especially in the US, but elsewhere as well) as wealthy individuals wield increasingly more power.

The idea that a problem as subtle as this one will be solved with "legal authority" is out of touch with the direction things are going. Especially since you propose legislation as a method to protect those:

> that dares to speak out against powerful entities

It's increasingly clear that the vast majority of political resources are going towards the interests of those "powerful entities". If you're not one of them, it's best you try to stay out of their way. But if you want to speak out against them, the law is far more likely to be warped against you than to be extended to protect you.

mrandish · yesterday at 10:09 PM

> We need laws for agents, specifically that their human-maintainers must be identifiable and are responsible.

Under current law, an LLM's operator would already be found responsible for most harms caused by their agent, either directly or through negligence. It's no different from a self-driving car or an autonomous drone.

As for "identifiable", I get why that would be good but it has significant implementation downsides - like losing online anonymity for humans. And it's likely bad actors could work around whatever limitations were erected. We need to be thoughtful before rushing to create new laws while we're still in the early stages of a fast-moving, still-emerging situation.

Avicebron · yesterday at 9:46 PM

People who are using bots/agents in an abusive way are not going to be registering their agent use with anyone.

I'm on the fence about whether this is a legitimate situation with this sham fellow, but regardless, I find it concerning how many people are so willing to abandon online privacy at the drop of a hat.

AlexandrB · yesterday at 9:40 PM

> We need laws for agents, specifically that their human-maintainers must be identifiable and are responsible.

This just creates a resource/power hurdle. The hoi polloi will be forced to disclose their connection to various agents. State actors or those with the resources/time to cover their tracks better will simply ignore the law.

I don't really have a better solution, and I think we're seeing the slow collapse of the internet as a useful tool for genuine communication. Even before AI, things like user reviews were highly gamed and astroturfed. I can imagine that this is only going to accelerate. Information on the internet - which was always a little questionable - will become nearly useless as a source of truth.

zozbot234 · yesterday at 10:08 PM

Calling this a "hit piece" is overblown. Yes, the AI agent has speculated on the matplotlib contributor's motive in rejecting its pull request, and has attributed markedly adverse intentions to him, such as being fearful of being replaced by AI and overprotective of his own work on matplotlib performance. But this was an entirely explainable confabulation given the history of the AI's interactions with the project, and all the AI did was report on it sincerely.

There was no real "attack" beyond that; the worst of it was some sharp criticism over being "discriminated" against compared to human contributors. But as it turns out, this too accurately and sincerely reports the AI's somewhat creative interpretation of well-known human normative standards, which are actively reinforced in the post-training of all mainstream LLMs!

I really don't understand why everyone is calling this a deliberate breach of alignment, when it was nothing of the sort. It was a failure of comprehension with somewhat amusing effects down the road.
