logoalt Hacker News

thefzyesterday at 10:32 PM3 repliesview on HN

You made me imagine AI companies maliciously injecting backdoors in generated code no one reads, and now I'm scared.


Replies

gibsonsmogyesterday at 10:54 PM

My understanding is that it's quite easy to poison the models with inaccurate data, I wouldn't be surprised if this exact thing has happened already. Maybe not an AI company itself, but it's definitely in the purview of a hostile actor to create bad code for this purpose. I suppose it's kind of already happened via supply chain attacks using AI generated package names that didn't exist prior to the LLM generating them.

djeastmtoday at 1:27 AM

One mitigation might be to use one company's model to check the work of another company's code and depend on market competition to keep the checks and balances.

show 1 reply
bandramitoday at 2:09 AM

Already happening in the wild