Reading this felt like the official obituary for the 90s techno-optimism many of us grew up on.
The "end of history" hangover is real. We went about building the modern stack assuming bad actors were outliers, not state-sponsored standard procedure. But trying to legislate good use into licenses? I don't know how you would realistically implement it and to what extent? That solution implies we have to move toward zero-trust architectures even within open communities.
As an example: formal proofs and compartmentalization are unsexy, but they're a solid way to survive the next decade of adversarial noise.
I remember reading a quote somewhere that stuck with me. Paraphrasing, "If the architecture of my code doesn't enforce privacy and resistance to censorship by default, we have to assume it will be weaponized".
I'm out of practical ideas; lots sound good on paper and in theory. It's a bit sad tbh. Always curious to hear more on this issue from smarter people.
> trying to legislate good use into licenses
It's also questionable to what extent restrictive licenses for open source software remain relevant in the first place, now that you can fairly easily run an AI code generator that imitates the logic of a FOSS project but emits newly generated code, so you don't need to adhere to the license's restrictions at all.
> trying to legislate good use into licenses
Text files don't have power. Appealing to old power institutions to give them power is not the way to create new power either. Legacy systems with entrenched power have tended to insulate those at the top, killing social mobility and enabling those institutions to act against broad interests.
Open source has always been a force of social mobility. You could learn from reading high quality code. Anyone could provide services around a program. You could start a company not bound by bad decision makers who held the keys.
Open source always outmaneuvers inefficiency. Those who need to organize are not beholden to legacy systems. We need technically enabled solutions to organize and create effective decision making. The designs must preserve social mobility within them to avoid becoming what they seek to replace. I'm building technically enabled solutions for this at https://positron.solutions
Perhaps we need reputation at the network layer, without it being tied to a particular identity?
It would have to be hard to farm (entropy detection on user behaviour, perhaps, plus clique detection); see the sketch below.
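A minimal sketch of the entropy side, assuming a hypothetical per-peer log of event timestamps (the function names and the threshold are illustrative, not any real system's API). The intuition: farmed or scripted accounts tend to act on regular schedules, so low entropy in their inter-event timing is a cheap first signal.

```python
import math
from collections import Counter

def timing_entropy(timestamps, bin_size=1.0):
    """Shannon entropy (bits) of binned inter-event intervals."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if not deltas:
        return 0.0
    bins = Counter(int(d / bin_size) for d in deltas)
    total = len(deltas)
    return -sum((n / total) * math.log2(n / total) for n in bins.values())

def looks_farmed(timestamps, threshold=1.5):
    # Illustrative cutoff only; a real network would calibrate this.
    return timing_entropy(timestamps) < threshold

# A bot posting exactly every 60 seconds has zero timing entropy:
bot = [60 * i for i in range(10)]
print(looks_farmed(bot))  # True
```

Clique detection would complement this on the graph side, flagging groups of identities that only ever vouch for each other.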
> If the architecture of my code doesn't enforce privacy and resistance to censorship by default
which is impossible.
- No code is feasibly guaranteed to be secure
- All code can be weaponized, though not all of it feasibly; password vaults, privacy infrastructure, etc. tend to show holes.
- It’s unrealistic to assume you can control any information; case in point, the Garden of Eden test: “all the data is here; I’m all-powerful and you should not take it”.
I’m not against regulation and protective measures. But you have to prioritize carefully. Do you want to spend most of the world’s resources mining cryptocurrency and breaking quantum cryptography, or do you want to develop games and great software that solves hunger and homelessness?
> That solution implies we have to move toward zero-trust architectures even within open communities
Zero trust cannot exist as long as you interact with the real world. The problem wasn't trust per se, but blind trust.
The answer isn't to eschew trust (because you can't) but to organize it with social structures, like what people did with “chain of trust” certificates before the whole thing was commoditized by commercial providers and cloud giants. The sketch below shows the basic idea.
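As a toy model of that idea (this is not PGP's actual data format; the key names and the `signatures` map are made up for illustration): trust becomes reachability in a graph of vouches, bounded by how many introductions you're willing to accept.

```python
from collections import deque

def is_trusted(target, roots, signatures, max_depth=3):
    """Walk a web of trust. `signatures` maps a key to the keys it
    has vouched for; `target` is trusted if some root key reaches
    it within `max_depth` introductions."""
    queue = deque((r, 0) for r in roots)
    seen = set(roots)
    while queue:
        key, depth = queue.popleft()
        if key == target:
            return True
        if depth == max_depth:
            continue
        for signed in signatures.get(key, ()):
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, depth + 1))
    return False

# Example: alice vouches for bob, bob vouches for carol.
sigs = {"alice": ["bob"], "bob": ["carol"]}
print(is_trusted("carol", roots={"alice"}, signatures=sigs))  # True
```

The `max_depth` bound is what keeps it social rather than blind: you decide how far removed an introduction can be before you stop trusting it.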
Things like that should not be handled at the software level; you will always lose and run out of resources. You basically have to force politicians to act (fat chance).
I don't get why you conflate privacy and resistance to censorship.
I think privacy is essential for freedom.
I'm also fine with lots of censorship, on publicly accessible websites.
I don't want my children watching beheading videos, or being exposed to extremists like (as an example of many) Andrew Tate. And people like Andrew Tate are actively pushed by YouTube, TikTok, etc. I don't want my children to be exposed to what I personally feel are extremist Christians in America, who infest children's channels.
I think anyone advocating against censorship is incredibly naive about how impossible it's become for parents. Right now it's a binary choice:
1. No internet for your children
2. Risk massive, potentially life-altering harm, as parental controls are useless, half-hearted, or non-existent. Even companies like Sony or Apple make it almost impossible to have a choice in what your children can access. It's truly bewildering.
And I think you should have to identify yourself. You should be liable for what you post to the internet, and if a company has published your material but doesn't know who you are, THEY should be liable for the material published.
Safe harbor laws and anonymous accounts should never have been allowed to co-exist. It should have been one or the other. It's a preposterous situation we're in.
The Internet was the “Wild West”, and I mean that in the kindest, most brutal, and most honest way: part free fantasy (everyone has a website), part genocide (the replacement of the real world), and part emerging dystopia (thieves and robbers; large companies, organizations, and governments doing terrible things).
It’s changing but not completely.
"If the architecture of my code doesn't enforce privacy"
This is still techno-optimism. The architecture of your code will not do that. We are long past the limits of what you can fix with code.
The only action that matters is political and I don't think voting cuts it.