Hacker News

175K+ publicly-exposed Ollama AI instances discovered

33 points by heresie-dabord today at 12:12 AM | 22 comments

Comments

gerdesj today at 2:26 AM

I'm not sure the "journos" from Techradar are too familiar with how networks ... work.

IPv4 requires an inbound NAT these days to work at all globally, unless you actually have a machine with a globally routable IP. There will probably be a default deny firewall rule too. I do remember the days before NAT ...

IPv6 doesn't require NAT (but prefix translation is available and so is ULA) but again a default deny is likely in force.

You do actually have to try quite hard to expose something to the internets. I know this because I do a lot of it.

The entire article is just a load of buzzwords and basically bollocks. Yes, it is possible to expose a system on the internet, but it is unlikely that you do it by accident. If I were Sead, I'd go easy on the AI-generated cobblers and get a real job.

vivzkestrel today at 4:07 AM

- You'll be surprised how many OLLAMA API KEYS [you can find here](https://github.com/search?q=%22OLLAMA_API_KEY%22&type=code&p...). It's 2026 and this technique still works. I wonder if GitHub supports regex search.

meltyness today at 1:10 AM

This is a weakness of docker, a bit, I think.

I was rigging this up myself and, conscious of the fact that basic Docker is "all or nothing" for container port forwarding (it's built for presenting network services), I had to dig around with iptables so it'd behave like binding on localhost.

The use case: https://github.com/meltyness/tax-pal

The Ollama container is fairly easy to deploy, and supports GPU inference through the NVIDIA Container Toolkit. I'd imagine many of these are Docker containers.

Edit: I stand corrected; apparently `-p` of `docker run` can have a binding interface stipulated.

Edit 2: https://docs.docker.com/engine/containers/run/#exposed-ports, which is not in some docs.

Edit 3: but it's in the man page, of course.
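For reference, a minimal sketch of the `-p` host-interface form mentioned above; the image name and port are the usual Ollama defaults, so adjust for your own setup:

```shell
# Publish the container port on loopback only by stipulating a host
# interface in -p, instead of the implicit 0.0.0.0 (all interfaces).
docker run -d -p 127.0.0.1:11434:11434 ollama/ollama
```

With this, the service is reachable from the host at 127.0.0.1:11434 but not from other machines.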

adw today at 1:42 AM

The tool-calling thing here is overblown.

When you do "tool calling" with an LLM, all you're doing is having the LLM generate output in a particular format you can parse out of the response; it's then your code's responsibility to run the tools (locally) and stick the results back into the conversation.

So that _specific_ part isn't RCE. It's still bad for the nine million other obvious reasons though.
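The point above can be sketched in a few lines; this is a hypothetical shape (the JSON format and `run_tool` helper are illustrative, not any specific API) showing that the "tool call" is just structured text your own code parses and acts on:

```python
import json

def run_tool(name, args):
    # Your code decides what tools exist and executes them locally.
    tools = {"add": lambda a, b: a + b}
    return tools[name](**args)

# Pretend this string came back from the model.
model_output = '{"tool": "add", "arguments": {"a": 2, "b": 3}}'

call = json.loads(model_output)
result = run_tool(call["tool"], call["arguments"])

# Your code (not the model server) appends the result back
# into the conversation as a new message.
message = {"role": "tool", "content": str(result)}
print(message["content"])
```

The model server only ever emits text; anything dangerous happens in the client process that chooses to execute it.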

FloatArtifact today at 12:21 AM

This is a combination of problems: a poor default (listening on all interfaces?) plus the fact that IPv6 addresses can be publicly accessible. It's somewhat dependent on how this is configured upstream by default, but it's a gotcha compared to IPv4.

nxobject today at 2:04 AM

Pay for Shodan, folks!

cyberax today at 1:36 AM

Fun fact! On macOS you can expose privileged ports (<1024) using a regular user account.

But ONLY if you don't bind the listening port to a specific interface. So if you try to create a listening port on localhost (e.g. 127.0.0.1:443) under a non-root account, you get a permission error.

Edit: the thing is, you CAN expose "0.0.0.0:443" without root privileges!
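You can probe this behavior yourself; a small sketch (the outcome is platform- and privilege-dependent: the asymmetry described above is reported for macOS 10.14+, while on Linux both binds of a port below 1024 normally need root or CAP_NET_BIND_SERVICE):

```python
import socket

def can_bind(host, port):
    # Try to bind a TCP socket; report whether the OS allowed it.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()

print("127.0.0.1:443 ->", can_bind("127.0.0.1", 443))
print("0.0.0.0:443   ->", can_bind("0.0.0.0", 443))
```

On a machine matching the comment's description, the first line would print False and the second True for an unprivileged user.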

rvz today at 12:27 AM

Nevermind. [0] Nothing to see here.

[0] https://news.ycombinator.com/item?id=45116322

dfajgljsldkjag today at 12:57 AM

I see this happen all the time when people just want their new toys to work right away. They copy and paste commands from the internet to open up the connection, but they forget to put a lock on the door. It is dangerous that so many people run these programs without understanding the basics of how networks work.
