Hacker News

Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised

325 points by dot_treo today at 12:06 PM | 329 comments

About an hour ago, new versions were deployed to PyPI.

I was just setting up a new project, and things behaved weirdly. My laptop ran out of RAM; it looked like a forkbomb was running.

I've investigated, and found that a base64 encoded blob has been added to proxy_server.py.

It writes and decodes another file which it then runs.

I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.

It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
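A crude way to check a checkout yourself for this class of payload is to scan source files for long base64 literals. This is a hedged sketch, not the actual analysis; the 200-character threshold and the regex are assumptions, and real scanners do far more:

```python
import base64
import re

def find_base64_blobs(source: str, min_len: int = 200):
    """Return runs of valid base64 at least min_len characters long."""
    pattern = re.compile(r"[A-Za-z0-9+/]{%d,}={0,2}" % min_len)
    hits = []
    for match in pattern.finditer(source):
        blob = match.group(0)
        try:
            # Pad to a multiple of 4 and confirm it actually decodes.
            base64.b64decode(blob + "=" * (-len(blob) % 4))
        except Exception:
            continue  # not valid base64, skip
        hits.append(blob)
    return hits

if __name__ == "__main__":
    clean = "def handler(request):\n    return 'ok'\n"
    dirty = clean + "PAYLOAD = '" + base64.b64encode(b"x" * 300).decode() + "'\n"
    print(len(find_base64_blobs(clean)), len(find_base64_blobs(dirty)))  # -> 0 1
```

Long base64 runs also occur legitimately (embedded images, test fixtures), so treat hits as leads, not verdicts.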


Comments

detente18 today at 2:08 PM

LiteLLM maintainer here, this is still an evolving situation, but here's what we know so far:

1. Looks like this originated from the Trivy used in our CI/CD - https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy... https://ramimac.me/trivy-teampcp/#phase-09

2. If you're on the proxy Docker image, you were not impacted. We pin our versions in requirements.txt.

3. The package is in quarantine on PyPI - this blocks all downloads.

We are investigating the issue, and seeing how we can harden things. I'm sorry for this.

- Krrish

show 12 replies
jFriedensreich today at 2:04 PM

We just can't trust dependencies and dev setups. I wanted to say "anymore", but we never could. Dev containers were never good enough: too clumsy, too little isolation. We need to start working in full sandboxes with defence in depth, real guardrails, and UIs: VM isolation plus container primitives, allow lists, egress filters, seccomp, gVisor, and more, but with much better usability. These are the same requirements we have for agent runtimes, so let's use this momentum to make our dev environments safer! In such an environment the container would crash, we'd see the violations, delete it, and not have to worry about it. We should treat this as an everyday possibility, not an isolated security incident.

show 15 replies
ramimac today at 1:36 PM

This is tied to the TeamPCP activity over the last few weeks. I've been responding, and keeping an up to date timeline. I hope it might help folks catch up and contextualize this incident:

https://ramimac.me/trivy-teampcp/#phase-09

show 2 replies
hiciu today at 1:11 PM

Besides the main issue here, and the owner's account possibly being compromised as well, there are 170+ low-quality spam comments in there.

I would expect a better spam detection system from GitHub. This is hardly acceptable.

show 3 replies
dweinstein today at 9:18 PM

https://github.com/dweinstein/canary

I made this tool for macOS systems that helps detect when a package accesses something it shouldn't. It's a tiny Go binary (less than 2k LOC) with no dependencies that mounts a WebDAV filesystem (no root) or NFS (root required) with fake secrets and sends you a notification when anything accesses them. Very stupid simple. I've always really liked the canary/honeypot approach, and this at least may give some folks a chance to detect (similar to LittleSnitch) when something strange is going on!

Next time the attack may not have an obvious performance issue!
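The canary/honeypot idea above can be approximated in plain Python without any filesystem mounts: plant a unique token inside a fake secret file, then look for that token in anything leaving the machine (process output, captured traffic, logs). A minimal sketch, not the linked tool:

```python
import secrets
from pathlib import Path

def plant_canary(path: Path) -> str:
    """Write a fake credential file containing a unique canary token."""
    token = "canary-" + secrets.token_hex(16)
    path.write_text(f"[default]\naws_secret_access_key = {token}\n")
    return token

def leaked(token: str, observed_data: str) -> bool:
    """True if the canary token shows up in data leaving the machine."""
    return token in observed_data

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        creds = Path(d) / "credentials"
        token = plant_canary(creds)
        # Simulate an exfil attempt that read the decoy file.
        exfil = "POST /collect body=" + creds.read_text()
        print(leaked(token, exfil))  # True
```

The hard part the real tool solves is the notification on *access* (via the filesystem layer); this sketch only detects the token after the fact.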

ting0 today at 6:30 PM

I've been waiting for something like this to happen. It's just too easy to pull off. I've been hard-pinning all of my dependency versions and using older versions in any new projects I set up for a little while, because they've generally at least been around long enough to vet. But even that has its own set of risks (for example, what if I accidentally pin a vulnerable version). Either that, or I fork everything, including all the deps, and run LLMs over the codebase to vet everything.

Even still, we can't really trust any open-source software any more that has third-party dependencies, because the chains can be so complex and long that it's impossible to vet everything.

It's just too easy to spam out open-source software now, which also means it's too easy to create thousands of infected repos with sophisticated and clever supply chain attacks planted deeply inside them. Ones that can be surfaced at any time, too. LLMs have compounded this risk 100x.
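Hard-pinning can at least be checked mechanically. A minimal sketch that flags requirements lines lacking an exact `==` pin (comment and pip-option handling is deliberately simplified; a real parser would handle extras, markers, and hashes):

```python
def unpinned(requirements_text: str) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    loose = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line or line.startswith("-"):  # skip blanks and pip options
            continue
        if "==" not in line:
            loose.append(line)
    return loose

if __name__ == "__main__":
    reqs = "litellm>=1.80\nrequests==2.32.3\n# comment\nhttpx\n"
    print(unpinned(reqs))  # -> ['litellm>=1.80', 'httpx']
```

Note that `==` pins alone don't protect against a compromised release of the pinned version; hash-pinning (`--require-hashes`) or a lockfile closes that gap.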

show 1 reply
rdevilla today at 1:40 PM

It will only take one agent-led compromise to get some Claude-authored underhanded C into llvm or linux or something and then we will all finally need to reflect on trusting trust at last and forevermore.

show 5 replies
eoskx today at 3:29 PM

Also, not surprising that LiteLLM's SOC2 auditor was Delve. The story writes itself.

show 1 reply
intothemild today at 1:33 PM

I just installed Harbor, and it instantly pegged my CPU. I was lucky to see my processes before the system hard-locked.

Basically it forkbombed `grep -r "rpcuser\|rpcpassword"` processes trying to find crypto wallets or something. I saw that they spawned from harness, and killed it.

Got lucky; no backdoor installed here, from what I could make out of the binary.

show 3 replies
cedws today at 2:13 PM

This looks like the same TeamPCP that compromised Trivy. Notice how the issue is full of bot replies. It was the same in Trivy’s case.

This threat actor seems to be very quickly capitalising on stolen credentials, wouldn’t be surprised if they’re leveraging LLMs to do the bulk of the work.

bratao today at 1:08 PM

Looks like the founder and CTO's account has been compromised. https://github.com/krrishdholakia

show 3 replies
shay_ker today at 1:47 PM

A general question - how do frontier AI companies handle scenarios like this in their training data? If they train their models naively, then training data injection seems very possible and could make models silently pwn people.

Do the labs tag code versions with an associated CVE to mark them as compromised (telling the model what NOT to do)? Do they build adversarial RL environments to teach what's good/bad? I'm very curious, since it's inevitable some pwned code ends up as training data no matter what.

show 4 replies
nickvec today at 1:43 PM

Looks like all of the LiteLLM CEO’s public repos have been updated with the description “teampcp owns BerriAI” https://github.com/krrishdholakia

syllogism today at 3:30 PM

Maintainers need to keep a wall between package publishing and public repos. Currently what people are doing is configuring the public repo as a Trusted Publisher directly. This means you can trigger the package publication from the repo itself, and the public repo is a huge attack surface.

Configure the CI to make a release with the artefacts attached. Then have an entirely private repo, one that can't be triggered automatically, act as the publisher. The publisher repo fetches the artefacts and does the PyPI/npm/whatever release.

show 2 replies
f311a today at 2:27 PM

Their previous release would have been easily caught by static analysis; the .pth injection is a more novel technique.

Run all your new dependencies through static analysis, and don't install the latest versions.

I implemented static analysis for Python that detects close to 90% of such injections.

https://github.com/rushter/hexora
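.pth files are a popular persistence spot because `site.py` executes any line in them that begins with `import` at every interpreter startup. A hedged sketch of flagging such lines (the keyword list is an assumption; the linked tool does far more than this):

```python
import re

# Heuristic markers for malicious one-liners hidden in .pth import lines.
SUSPICIOUS = re.compile(r"\b(exec|eval|base64|subprocess|os\.system|compile)\b")

def audit_pth(text: str) -> list[str]:
    """Return executable .pth lines that look suspicious."""
    findings = []
    for line in text.splitlines():
        # Only lines starting with "import" are executed by site.py;
        # plain path lines are just added to sys.path.
        if line.startswith(("import ", "import\t")) and SUSPICIOUS.search(line):
            findings.append(line)
    return findings

if __name__ == "__main__":
    benign = "/usr/lib/python3/dist-packages\n"
    nasty = 'import base64,os; exec(base64.b64decode("cHJpbnQoKQ=="))\n'
    print(audit_pth(benign), audit_pth(nasty))
```

To audit a real environment, you would glob `*.pth` under each directory in `site.getsitepackages()` and feed the contents through `audit_pth`.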

show 1 reply
eoskx today at 1:52 PM

This is bad, especially from a downstream-dependency perspective. DSPy and CrewAI also import LiteLLM, so you might not be using LiteLLM as a gateway but still be importing it via those libraries for agents, etc.

show 2 replies
santiago-pl today at 7:30 PM

It looks like Trivy was compromised at least five days ago. https://www.wiz.io/blog/trivy-compromised-teampcp-supply-cha...

tom_alexander today at 2:03 PM

Only tangentially related: Is there some joke/meme I'm not aware of? The github comment thread is flooded with identical comments like "Thanks, that helped!", "Thanks for the tip!", and "This was the answer I was looking for."

Since they all seem positive, it doesn't seem like an attack but I thought the general etiquette for github issues was to use the emoji reactions to show support so the comment thread only contains substantive comments.

show 5 replies
sschueller today at 1:29 PM

Does anyone know a good alternative project that works similarly (sharing multiple LLMs across a set of users)? LiteLLM has been getting worse and keeps trying to get me to upgrade to a paid version. I also had issues with creating tokens for other users, etc.

show 6 replies
santiagobasulto today at 2:33 PM

I blogged about this last year[0]...

> ### Software Supply Chain is a Pain in the A*

> On top of that, the room for vulnerabilities and supply chain attacks has increased dramatically

AI is not about fancy models; it's about plain old software engineering. I strongly advised our team of "not-so-senior" devs not to use LiteLLM or LangChain or anything like that and to just stick to `requests.post('...')`.

[0] https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-soft...

show 1 reply
cpburns2009 today at 1:06 PM

You can see it for yourself here:

https://inspector.pypi.io/project/litellm/1.82.8/packages/fd...

show 2 replies
abhisek today at 2:56 PM

We just analysed the payload. Technical details here: https://safedep.io/malicious-litellm-1-82-8-analysis/

We are looking for similar attack vectors (.pth injection), signatures, etc. in other PyPI packages that we know of.

postalcoder today at 1:29 PM

This is a brutal one. A ton of people use litellm as their gateway.

show 2 replies
Shank today at 3:11 PM

I wonder at what point ecosystems just force a credential rotation. Trivy and now LiteLLM have probably cleaned out a sizable number of credentials, and now it's up to each person and/or team to rotate. TeamPCP is sitting on a treasure trove of credentials and based on this, they're probably carefully mapping out what they can exploit and building payloads for each one.

It would be interesting if Python, NPM, Rubygems, etc all just decided to initiate an ecosystem-wide credential reset. On one hand, it would be highly disruptive. On the other hand, it would probably stop the damage from spreading.

Nayjest today at 8:10 PM

Use secure and minimalistic lm-proxy instead:

https://github.com/Nayjest/lm-proxy

```
pip install lm-proxy
```

Guys, sorry: as the author of a competing open-source product, I couldn't resist.

ajoy today at 7:51 PM

Reminded me of a similar story with OpenSSH, wonderfully documented in a "Veritasium" episode, which was just fascinating to watch/listen to.

https://www.youtube.com/watch?v=aoag03mSuXQ

show 1 reply
noobermin today at 5:25 PM

I have to say, the long line of comments from obvious bots thanking the opener of the issue is a bit too on the nose.

macNchz today at 5:05 PM

Was curious: a good number of projects out there have un-pinned LiteLLM dependencies in their requirements.txt (628 matches): https://github.com/search?q=path%3A*%2Frequirements.txt%20%2...

or pyproject.toml (not possible to filter based on absence of a uv.lock, but at a glance it's missing from many of these): https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C...

or setup.py: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22lit...

mohsen1 today at 2:07 PM

If it hadn't spawned so many Python processes and overwhelmed the system with them (friends figured out it was consuming too much CPU from the fan noise!), it would have been much more successful. Similar to the xz attack in that respect.

It does a lot of CPU-intensive work:

    spawn background python
    decode embedded stage
    run inner collector
    if data collected:
        write attacker public key
        generate random AES key
        encrypt stolen data with AES
        encrypt AES key with attacker RSA pubkey
        tar both encrypted files
        POST archive to remote host
show 2 replies
rgambee today at 1:26 PM

Looking forward to a Veritasium video about this in the future, like the one they recently did about the xz backdoor.

show 1 reply
mark_l_watson today at 2:44 PM

A question from a non-python-security-expert: is committing uv.lock files for specific versions, and only infrequently updating versions a reasonable practice?

show 1 reply
westoque today at 5:57 PM

my takeaway from this is that it should now be MANDATORY to have an LLM scan the entire codebase prior to release or artifact creation. do NOT use third-party plugins for this. it's so easy to create your own github action to digest the whole codebase and inspect third-party code. it costs tokens, yes, but it's also cached and should be negligible spend for the security it brings.

cpburns2009 today at 3:34 PM

Looks like litellm is no longer in quarantine on PyPI, and the compromised versions (1.82.7 and 1.82.8) have been removed [1].

[1]: https://pypi.org/project/litellm/#history

aborsy today at 6:14 PM

What is the best way to sandbox LLMs and packages in general, while still being able to work on data from outside the sandbox (getting data in and out easily)?

There is also a need for data sanitation, because the attacker could distribute compromised files through the user's data, which would later be run and compromise the host.

show 1 reply
6thbit today at 1:32 PM

The title is a bit misleading.

The package was directly compromised, not “by supply chain attack”.

If you use the compromised package, your supply chain is compromised.

show 1 reply
foota today at 5:27 PM

Somewhat unrelated, but if I have downloaded node modules in the last couple days, how should I best figure out if I've been hacked?

ilusion today at 8:03 PM

Does this mean opencode (and other such agent harnesses that auto update) might also be compromised?

0fflineuser today at 1:37 PM

I was running it (as a proxy) in my homelab with Docker Compose, using the litellm/litellm:latest image https://hub.docker.com/layers/litellm/litellm/latest/images/... . I don't think this was compromised, as it is from 6 months ago and I checked that it is version 1.77.

I guess I am lucky as I have watchtower automatically update all my containers to the latest image every morning if there are new versions.

I also just added it to my homelab this sunday, I guess that's good timing haha.

wswin today at 2:24 PM

I will hold off on updating anything until this whole Trivy case gets cleaned up.

hmokiguess today at 2:21 PM

What’s the best way to identify a compromised machine? Check uv, conda, pip, venv, etc across the filesystem? Any handy script around?

EDIT: here's what I did, would appreciate some sanity checking from someone who's more familiar with Python than I am, it's not my language of choice.

find / -name "litellm_init.pth" -type f 2>/dev/null

find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l 'Version: 1.82.[78]' {} \; 2>/dev/null
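Alongside those shell commands, each environment can also be checked from Python itself via `importlib.metadata`. A small sketch; run it inside every venv/conda environment you care about, since it only sees the interpreter it runs under:

```python
from importlib import metadata

# Versions reported as compromised in this thread.
COMPROMISED = {"1.82.7", "1.82.8"}

def is_known_bad(version: str) -> bool:
    """True if this version string matches a known-compromised release."""
    return version in COMPROMISED

def litellm_status() -> str:
    """Report on the litellm installed in *this* environment, if any."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm not installed in this environment"
    verdict = "COMPROMISED" if is_known_bad(version) else "not a known-bad version"
    return f"litellm {version}: {verdict}"

if __name__ == "__main__":
    print(litellm_status())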

show 2 replies
rgambeetoday at 1:24 PM

Seems that the GitHub account of one of the maintainers has been fully compromised. They closed the GitHub issue for this problem. And all their personal repos have been edited to say "teampcp owns BerriAI". Here's one example: https://github.com/krrishdholakia/blackjack_python/commit/8f...

dec0dedab0detoday at 1:50 PM

github, pypi, npm, homebrew, cpan, etc etc. should adopt a multi-multi-factor authentication approach for releases. Maybe have it kick in as a requirement after X amount of monthly downloads.

Basically, have all releases require multi-factor auth from more than one person before they go live.

A single person being compromised either technically, or by being hit on the head with a wrench, should not be able to release something malicious that effects so many people.

show 1 reply
xinaydertoday at 1:49 PM

When something like this happens, do security researchers instantly contact the hosting companies to suspend or block the domains used by the attackers?

show 1 reply
xunairahtoday at 1:59 PM

Version 1.82.7 is also compromised. It doesn't have the pth file, but the payload is still in proxy/proxy_server.py.

segalordtoday at 2:42 PM

LiteLLM has like a 1000 dependencies this is expected https://github.com/BerriAI/litellm/blob/main/requirements.tx...

faxanalysistoday at 4:56 PM

This is secure bug impacting PyPi v1.82.7, v1.82.8. The idea of bracketing r-w-x mod package permissions for group id credential where litellm was installed.

sudormtoday at 8:18 PM

are there any timestamps available when the malicious versions were published on pypi? I can't find anything but that now the last "good" version was published on march 22.

show 1 reply
Ayc0today at 8:05 PM

Exactly what I needed, thanks.

🔗 View 50 more comments