While scary, information like this has been pretty accessible for 20-30 years now.
In the wild west days of the early internet, there were whole forums devoted to "stuff the government doesn't want you to know" (Temple Of The Screaming Electron, anyone?).
I suppose the friction is the scariest part: every year the IQ required to end the world drops by a point, but motivated and mildly intelligent people have been able to get this info for a long time now. Execution, though, has still required experts.
Well, the real issue is that it knocks down the knowledge barrier: giving you step-by-step guides and reiterating which parts will kill you is the important part.
Understanding the process and staying alive while producing neurochemicals are the biggest challenges here.
A depressed person with no prior knowledge could plausibly figure out a way to make these chemicals without killing themselves, and that's the problem.
Many of these forums still exist. Let's not enumerate them, as they are one of the treasures of the internet.
> been pretty accessible for 20-30 years now.
There was this book 20 years ago: "Secret of Methamphetamine Manufacturing" by Uncle Fester
https://www.amazon.de/-/en/Uncle-Fester-ebook/dp/B00305GTWU
(Actually, 8th edition :-D)
Consider two dictionaries, one in which the entries are alphabetized as usual and one in which they're randomized. Both support random access: you can turn to any page, and read any entry. Therefore both are "accessible". Only one actually supports useful, quick word lookup.
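The dictionary analogy maps directly onto search cost: both lists hold identical entries, so both are "accessible", but only the sorted one supports fast lookup. A minimal sketch in Python (illustrative only; the word list and function names are my own, not from the thread):

```python
import bisect
import random

# Same "entries" in both dictionaries; only the ordering differs.
sorted_entries = ["apple", "banana", "cherry", "date", "elderberry", "fig", "grape"]
shuffled_entries = sorted_entries[:]
random.shuffle(shuffled_entries)

def lookup_sorted(entries, word):
    """Binary search: O(log n) steps, like flipping through an alphabetized dictionary."""
    i = bisect.bisect_left(entries, word)
    return i < len(entries) and entries[i] == word

def lookup_unsorted(entries, word):
    """Linear scan: O(n) steps -- every entry is readable, but finding one is slow."""
    return any(e == word for e in entries)
```

Both functions return the same answers; the difference is purely how much work stands between you and the answer, which is the "friction" the thread is arguing about.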
I categorize this kind of stuff as a "crisis of accessibility". AI is not alone in this territory; it happens all over the place. Basically, it's a problem that has existed for ages, but the barrier to entry was high enough that we didn't care.
Think 3D printing: it's not all that hard to make a zip gun or similar homemade firearm, but it's still harder than selecting an STL and hitting print.
You could always find info about how to make a bomb or whatnot, but you had to, like, find and open a book or read a PDF. Now an LLM will spoon-feed it to you step by step, lowering the barrier.
"Crisis of accessibility" is simultaneously legitimate concern but also in my mind an example of "security by obscurity". that relying on situational friction to protect you from malfeasance is a failure to properly address the core issue.
https://en.wikibooks.org/wiki/Professionalism/Anarchist_Cook...
We work in the dark
we do what we can
we give what we have.
Our doubt is our passion, and our passion is our task.
The rest is the madness of art.
My username is a reference to the successor to Totse. Totse was the first board I spent a lot of time on.
Accessible is one thing; _easily_ accessible is another.
> Execution though has still steadily required experts.
Where experts = the government.
Information and competency are not the same thing: I know how to build a nuke, I can't actually build one.
AI is, and always has been, automation. For narrow AI, automation of narrow tasks. For LLMs, automation of anything that can be done as text.
It has always been difficult to agree on the competence of the automation, given that ML is itself fully automated exploitation of Goodhart's Law, but ML has always been about automation.
On the plus side, if the METR graphs on LLM competence in computer science are also true of chemical and biological hazards (or indeed nuclear hazards), they're currently (like the earliest 3D-printed firearms) a bigger threat to the user than to the attempted victim.
On the minus side, we're just now reaching the point where LLM-based vulnerability searches are useful rather than nonsense (hence Anthropic's Glasswing), and even a few years back some researchers found 40,000 toxic molecules by flipping a min(harm) objective to max(harm). So for people who know what they're doing and have a little experience, the possibilities for novel harm are rapidly rising: https://pmc.ncbi.nlm.nih.gov/articles/PMC9544280/