I'm a little frustrated with articles like this that take a scattershot approach to their critique, conflating genuine failures with problems that even FAANGs struggle with.
In particular, I don't love it when an article attacks a best practice as a cheap gotcha:
"and this time it was super easy! After some basic reversing of the Tapo Android app, I found out that TP-Link have their entire firmware repository in an open S3 bucket. No authentication required. So, you can list and download every version of every firmware they’ve ever released for any device they ever produced"
That is a good thing - don't encourage security through obscurity! The impact of an article like this is as likely to get management to prescribe a ham-handed mandate to lock down firmware as it is to get them to properly upgrade their security practices.
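For anyone wondering what "open" means mechanically here: the bucket allows anonymous listing and downloads. A minimal sketch with boto3 (the bucket name and key are hypothetical, the post doesn't say which tooling was used):

    # List and download from a public S3 bucket with no credentials.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Unsigned config = no AWS credentials attached to requests.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    bucket = "example-firmware-bucket"  # hypothetical name

    # Page through every object in the bucket.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["Size"])

    # Download any firmware image by key (hypothetical key).
    s3.download_file(bucket, "tapo/firmware-v1.0.bin", "firmware-v1.0.bin")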
> I found out that TP-Link have their entire firmware repository in an open S3 bucket.
Nobody tell them about Linux!
I think maybe you’re reading this wrong. Reverse-engineering blog posts like this are just a fun and instructive way of telling the story of how someone did a thing. Having written and read a bunch of these in the past myself, I found this one to be a great read!
Edit: just want to add, the “how I got the firmware” part of this is also the least interesting part of this particular story.
I didn't notice a negative tone at all when he talked about the firmware being publicly hosted. You did?
Yep, I think it should always be that way; firmware should always be available.
I didn't really interpret that as a particular criticism.
I think this kind of critique often leans too hard on “security through obscurity” as a cheap punchline, without acknowledging that real systems are layered, pragmatic, and operated by humans with varying skill levels. An open firmware repository, by itself, is not a failure. In many cases it is the opposite: transparency that allows scrutiny, reproducibility, and faster remediation. The real risk is not that attackers can see firmware, but that defenders assume secrecy is doing work that proper controls should be doing anyway.
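To make "proper controls" concrete: integrity verification is the kind of thing that should carry the load whether or not the repository is public. A minimal sketch, assuming the vendor publishes a SHA-256 digest out of band (the filename and digest here are hypothetical):

    # Verify a downloaded firmware image against a vendor-published digest.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        # Stream in chunks so large images don't need to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "<digest from the vendor's release notes>"  # hypothetical
    actual = sha256_of("tapo-firmware-v1.2.3.bin")         # hypothetical file
    print("OK" if actual == expected else "MISMATCH")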
What worries me more is security through herd mentality, where everyone copies the same patterns, tooling, and assumptions. When one breaks, they all break. Some obscurity, used deliberately, can raise the bar against casual incompetence and lazy attacks, which, frankly, account for far more incidents than sophisticated adversaries. We should absolutely design systems that are easy to operate safely, but there is a difference between “simple to use” and “safe to run critical infrastructure.” Not every button should be green, and not every role should be interchangeable. If an approach only works when no one understands it, that is bad security. But if it fails because operators cannot grasp basic layered defenses, that is a staffing and governance problem, not a philosophy one.
This blog post is pretty readable, but it's still obviously written with the help of an LLM. A common tell is that LLMs lack nuance and write everything with the same enthusiasm, so in a blog post they'll imply things are novel or good/bad when they're actually neutral.
Not a bad blog post because of this, but you need to read carefully. I've noticed most of the articles on the HN front page are written with AI assistance.