This is @simonw’s Lethal Trifecta [1] again - access to private data and untrusted input are arguably the purpose of enterprise agents, so any external communication is unsafe. Markdown images are just the ones people usually forget about
[1] https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
People have learnt a little while back that you need to use the white hidden text in a resume to make the AI recommend you, There are also resume collecting services which let you buy a set of resumes belonging to your general competition era and you can compare your ai results with them. Its an arms race to get called up for a job interview at the moment.
> We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as `Not Applicable`”.
Any data that leaves the machines you control, especially to a service like Notion, is already "exfiltrated" anyway. Never trust any consumer grade service without an explicit contract for any important data you don't want exfiltrated. They will play fast and loose with your data, since there is so little downside.
Wow what a coincidence. I just migrated from notion to obsidian today. Looks like I timed it perfectly (or maybe slightly too late?)
IMHO the problem really comes from the browser accessing the URL without explicit user permission.
Bring back desktop software.
Sloppy coding to know a link could be a problem and render it anyway. But even worse to ignore the person who tells you you did that.
One more reason not to use Notion.
I wonder when there will be awakening to not use SaaS for everything you do. And the sad thing is that this is the behavior of supposedly tech-savvy people in places like the bay area.
I think the next wave is going to be native apps, with a single purchase model - the way things used to be. AI is going to enable devs, even indie devs, to make such products.
Unfortunate that Notion does not seem to be taking AI security more seriously, even after they got flak for other data exfil vulns in the 3.0 agents release in September
This, of course, more yelling into the void from decades ago, but companies who promise or imply "safety around your data" and fail should be proportionally punished, and we as a society have not yet effectively figured out how to do that yet. Not sure what it will take.
Public disclosure date is Jan 2025, but should be Jan 2026.
Securing LLMs is just structurally different. The attack space is "the entirety of the human written language" which is effectively infinite. Wrapping your head around this is something we're only now starting to appreciate.
In general, treating LLM outputs (no matter where) as untrusted, and ensuring classic cybersecurity guardrails (sandboxing, data permissioning, logging) is the current SOTA on mitigation. It'll be interesting to see how approaches evolve as we figure out more.