Actually… I think this is solved by AI answers. I don't look up commands on random websites anymore; instead I ask an LLM for that kind of stuff. At the very least, check your commands with an LLM.
Yesterday I was debugging why, on Windows, my Wi-Fi would randomly disconnect every couple of hours (whereas it worked fine on Linux). Claude decided it was a driver issue, proceeded to download a driver update off a completely random website, and told me to execute it.
My point is, this is not solved by AI answers.
Don't the LLMs get their information from these same random websites? They don't know what is good and what is malware. Most of the time when I get an AI answer with a command in it, there's a reference to a random Reddit post or something similar.
LLMs will allow Mal to sneak backdoors into the dataset. Most of the popular LLMs use some kind of blacklisting instead of a smaller, specific/specialised dataset. The latter seems more akin to whitelisting.
FTFA: “This is almost identical to the previous attack via ChatGPT.”
What we used to have, 15 years ago, was a really well-functioning Google. You could be lazy with your queries and still find what you wanted in the first two or three hits. Sometimes it was eerily accurate at figuring out what you were actually searching for. Modern Google just isn't there, even with AI answers, which are supposed to be infinitely better at natural language processing.