We have a massive poisoning-of-the-commons catastrophe coming, driven by further authoritarian government overreach and control. I've seen no one working on this, and in fact most people on HN seem to be working on ways to exacerbate the problem further. I don't just mean half-solutions like Tor or social protocols that let you in and out of walled gardens.
There's still a tiny window of opportunity for engineers to come up with or design technical safeguards, but eventually this problem will move past the realm of what's easily solvable, out of our hands, and into policymakers' hands. A big part of me feels like that window has already slammed shut.
I agree that it feels like the tiny window of opportunity hasn't quite shut yet, and it's a problem space I know I should take more interest in. What do you see as the viable technical directions? Something along the lines of what Altman was trying to do with his Orb[0]? Something along the lines of the C2PA's Content Credentials?
[0] e.g. https://www.businessinsider.com/sam-altman-tools-for-humanit... and the feature piece at https://time.com/7288387/sam-altman-orb-tools-for-humanity/
If you can point me at someone who would fund such projects (not VCs), I'd be happy to apply. Funds like NLnet aren't keen on funding larger-scope projects. At least not if you lack thought-leader/influencer clout.
> I've seen no one working on this, and in fact most people on HN seem to be working on ways to further exacerbate this problem.
It's against the HN guidelines to insinuate that astroturfing happens on HN.
To quote The Cable Guy, there's only one answer: someone has to kill the babysitter (TV, social media, Big Tech). It's hard to kill the babysitter when everyone in Congress is invested balls deep in the babysitter. Eisenhower warned of the coming overreaching powers of the Military Industrial Complex, but no one is attacking the Government Stock Market Tech Complex (GSMTC).
It's already here.
There were many disinformation research organizations in the US, including at major institutions such as Harvard and Stanford, that were forced to close through conservative lawfare or, apparently, donor pressure.
(It's interesting that conservatives saw it as a partisan cause.)
It feels like "Autonomous Coding Agents" are being astroturfed daily on HN. The same arguments and tropes echo through every thread.
It's hard to distinguish who's a bot, who's a narrative pusher and who's an enthusiast. Which is exactly what you'd want from an astroturfing campaign. There's a clear benefit: people in the industry are reading this, and in doing so they're granting mindshare.
There's one measure that could prevent inauthentic support campaigns: personal key signatures. But given how afraid people, especially in the US, have to be of their government surveilling them, this isn't going to catch on.