> I am not a chemist so I can't back it up, but if an AI can solve mathematics it's not unreasonable to say that they can also solve creating new neurotoxins en masse.
Right now it kinda is.
LLMs can do interesting things in mathematics while also making weird and unnecessary mistakes; tool use can improve that. Other kinds of AI besides LLMs have done better at this, and have for a while now. But consider that LLMs in software development (so, not Claude Mythos) are still at best junior developers, and then apply that same level to non-software roles.
This past February I tried to use Codex to build a physics simulation. It identified open-source libraries to use, but instead of using them it wrote its own implementation "as a fallback in case you can't install the FOSS libraries". The simulation code it wrote itself showed non-physical behaviour. Would I have noticed that if I hadn't already been interested in the thing I was asking it to simulate? I doubt it.
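To illustrate the kind of non-physical behaviour I mean (this is a minimal sketch of a classic failure mode, not the actual code Codex produced): a naive explicit-Euler integrator for a frictionless harmonic oscillator silently creates energy out of nothing, and the plot still *looks* like an oscillation unless you know to check conservation.

```python
# Hypothetical illustration, not Codex's output: explicit Euler on a
# frictionless harmonic oscillator. Total energy should be exactly
# conserved; with this integrator it grows by (1 + dt^2) every step.
def simulate(steps=1000, dt=0.01, k=1.0, m=1.0):
    x, v = 1.0, 0.0              # start at rest, displaced by 1
    for _ in range(steps):
        a = -k / m * x           # acceleration from the old position
        x += v * dt              # update position with the old velocity
        v += a * dt              # update velocity with the old acceleration
    return 0.5 * m * v**2 + 0.5 * k * x**2   # total energy

# Initial energy is 0.5 (all potential). After 1000 steps it has
# drifted up by ~10%, even though nothing in the model adds energy.
print(simulate())
```

If you don't already know that energy conservation is the thing to check, output like this passes a casual eyeball test, which was exactly my problem with trusting the generated code.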
Well, the worst outcome is that you make something deadly, which is what you were trying to create anyway. Do that for a year and you could plausibly end up with a very deadly substance that has no known treatment.