Hacker News

baxtr, yesterday at 10:52 PM (5 replies)

Actually… I think this can be solved by AI answers. I don’t look up commands on random websites; instead I ask an LLM for that kind of stuff. At the very least, check your commands with an LLM.


Replies

goalieca, yesterday at 11:23 PM

What we used to have, 15 years ago, was a really well-functioning Google. You could be lazy with your queries and still find what you wanted in the first two or three hits. Sometimes it was eerily accurate at figuring out what you were actually searching for. Modern Google is just not there, even with AI answers, which are supposed to be infinitely better at natural language processing.

OsrsNeedsf2P, yesterday at 11:56 PM

Yesterday I was debugging why, on Windows, my WiFi would randomly disconnect every couple of hours (whereas it worked fine on Linux). Claude decided it was a driver issue, proceeded to download a driver update from a completely random website, and told me to execute it.

My point is, this is not solved by AI answers.

al_borland, yesterday at 11:55 PM

Don’t the LLMs get their information from these same random websites? They don’t know what is good and what is malware. Most of the time when I get an AI answer with a command in it, there is a reference to a random Reddit post, or something similar.

Fnoord, today at 1:23 AM

LLMs will allow Mal to sneak backdoors into the dataset. Most of the popular LLMs use some kind of blacklisting instead of a smaller, specific/specialised dataset. The latter seems more akin to whitelisting.

JumpCrisscross, today at 2:21 AM

FTFA: “This is almost identical to the previous attack via ChatGPT.”