I'm sure their crawler can handle a zip bomb. Plus it might interpret that as "this site doesn't have a robots.txt" and start scraping, which is exactly what OP is trying to prevent with their current robots.txt.
Pretty sure every crawler can. You kinda have to go out of your way not to, given how the gzread API looks.
https://refspecs.linuxbase.org/LSB_3.0.0/LSB-Core-generic/LS...
OP could allow only the path to the zip bomb for this user agent in robots.txt.
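That would look something like this in robots.txt (the user-agent name and bomb path here are placeholders, not OP's actual values):

```
User-agent: ExampleBot
Allow: /bomb.gz
Disallow: /
```

Per the robots.txt convention, the more specific Allow rule wins over the blanket Disallow, so a compliant crawler sees only the bomb as fetchable.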