
yencabulator · today at 4:13 AM

Considering that very subtle, not-human-visible tweaks can make vision models misclassify inputs, it seems very plausible that you can embed non-human-visible content that the model will still consume.

https://cacm.acm.org/news/when-images-fool-ai-models/

https://arxiv.org/abs/2306.13213
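The kind of tweak described above can be sketched with the fast gradient sign method (FGSM), the classic recipe for crafting such perturbations. This is a minimal illustration, not the method from the linked papers: the tiny logistic "model" and its random weights are hypothetical stand-ins for a real vision model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # stand-in model weights
x = rng.normal(size=64)   # stand-in input image (flattened pixels)

def predict(x):
    """Toy logistic model: probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-w @ x))

# For this linear model, the gradient of the class-1 score w.r.t. the
# input is just w. FGSM nudges every pixel by a tiny epsilon in the
# sign of that gradient -- here, in the direction that lowers the score.
eps = 0.05
x_adv = x - eps * np.sign(w)

# Each pixel changed by at most eps, yet the prediction shifts.
print(predict(x), predict(x_adv))
```

The per-pixel change is bounded by `eps`, which is why such perturbations can stay below the threshold of human perception while still steering the model's output.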