About an hour ago, new versions were deployed to PyPI.
I was just setting up a new project, and things behaved weirdly: my laptop ran out of RAM, and it looked like a fork bomb was running.
I investigated and found that a base64-encoded blob has been added to proxy_server.py.
It decodes and writes another file, which it then runs.
I'm in the process of reporting this upstream, but wanted to give everyone here a heads-up.
It is also reported in this issue: https://github.com/BerriAI/litellm/issues/24512
Good reminder to pin dependency versions and verify checksums. SHA256 verification should be standard for any tool that makes network calls.
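One way to make the checksum half of that advice concrete: verify an artifact against a known-good digest before using it. A minimal sketch (the digest would normally come from a lockfile or release notes; here it is computed from a demo file so the example is self-contained):

```shell
# Create a stand-in for a downloaded artifact and record its digest
printf 'demo artifact' > pkg.whl
KNOWN_GOOD=$(sha256sum pkg.whl | awk '{print $1}')

# Later, before installing/running, re-verify the file against the digest.
# sha256sum -c expects "HASH  FILENAME" lines and prints "pkg.whl: OK" on match.
echo "$KNOWN_GOOD  pkg.whl" | sha256sum -c -
```

For pip specifically, hash-checking mode (`pip install --require-hashes -r requirements.txt`, with `--hash=sha256:...` entries in the requirements file) automates the same check at install time.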
That's a bad supply-chain attack, many folks use litellm as main gateway
A safeguard worth exploring for some: the automatic import can be suppressed using the Python interpreter's -S option.
This also disables the `site` import, so it isn't viable generically for everyone without testing.
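To see what -S actually changes, a quick check of whether the `site` module gets auto-imported at startup:

```shell
# Default startup: Python runs `import site` automatically
python3 -c 'import sys; print("site" in sys.modules)'     # True

# With -S, the automatic site import is skipped
python3 -S -c 'import sys; print("site" in sys.modules)'  # False
```

The catch, as noted: with -S, site-packages is not added to sys.path either, so third-party imports fail unless you add the paths manually.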
Checkout LLM Gateway: https://llmgateway.io
Migration guide: https://llmgateway.io/migration/litellm
Stuff like this is happening too often recently. It seems like the more fast-paced areas of development would benefit from a paradigm shift.
teampcp taking credit?
https://github.com/krrishdholakia/blockchain/commit/556f2db3...
- # blockchain
- Implements a skeleton framework of how to mine using blockchain, including the consensus algorithms.
+ teampcp owns BerriAI

Write it yourself, fuzz/test it yourself, and build it yourself, or be forever subject to this exact issue.
This was taught in the 90s. Sad to see that lesson fading away.
I recommend scanning all of your projects with osv-scanner in non-blocking mode
# add any dependency file patterns
osv-scanner -r .
As your projects mature, add osv-scanner as a blocking step to fail your installs before the code gets installed/executed.

Just want to state that this can literally happen to anyone within this messy package ecosystem. The maintainer seems to be doing his best.
If you have tips, I'm sure they are welcome. Snarky remarks are useless; don't be a sourpuss. If you know better, help the remediation effort.
Exactly what I needed, thanks.
Are there any timestamps for when the malicious versions were published on PyPI? All I can find is that the last "good" version was published on March 22.
airflow, dagster, dspy, unsloth.ai, polar
Someone needs to go to prison for this.
LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery token was worth it.
What's up with the hundreds of bot replies on GitHub to this?
I've been developing an alternative to LiteLLM. JavaScript, no dependencies. https://github.com/johnhenry/ai.matey/
This is a security bug impacting the PyPI releases v1.82.7 and v1.82.8. One mitigation idea: lock down the r-w-x permissions on the package directory for the group ID under which litellm was installed.
LiteLLM is the second worst software project known to man. (First is LangChain. Third is OpenClaw.)
I'm sensing a pattern here, hmm.
We need real sandboxing. Out-of-process sandboxing, not in-process. The attacks are only going to get worse.
That's why I'm building https://github.com/kstenerud/yoloai
Our modern economy/software industry truly runs on eggshells nowadays: engineers' accounts are getting hacked to create supply-chain attacks, at the same time that threat actors are getting more advanced, partly with the help of LLMs.
First Trivy (which got compromised twice), now LiteLLM.
What's up with everyone in the issue thread posting thanks? Is this an irony trend, or is it teampcp flexing on the account takeover? This feels wild.
What do we have here? Unaudited software completely compromised with a fake SOC 2 and ISO 27001 certification.
An actual infosec audit would have rigorously enforced basic security best practices in preventing this supply chain attack.
Perhaps I'm missing something obvious - but what's up with the comments on the reported issue?
Hundreds of downvoted comments like "Worked like a charm, much appreciated.", "Thanks, that helped!", and "Great explanation, thanks for sharing."
How were they compromised? Phishing?
Pretty horrifying. I only use it as a lightweight wrapper and will most likely move away from it entirely. Not worth the risk.
LiteLLM's SOC2 auditor was Delve :))
I reviewed the LiteLLM source a while back. Without wanting to be mean, it was a mess. Steered well clear.
I work with security researchers, so we've been on this since about an hour ago. One pain I've really come to feel is the complexity of Python environments. They've always been a pain, but in an incident like this you need to find out whether an exact version of a package has ever been installed anywhere on your machine. All I can say is: good luck.
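A hedged sketch of one part of that hunt: every installed copy of a package leaves a `.dist-info` metadata directory in some site-packages, so you can search for it by name and version. The paths below are a self-contained demo; in real use you would point `find` at `/` or `$HOME`, and also check pip's wheel cache (`pip cache list litellm`).

```shell
# Simulate a stray virtualenv containing a specific litellm release
mkdir -p demo_env/lib/python3.11/site-packages/litellm-1.82.7.dist-info

# Any match means that exact version was installed in that environment
find demo_env -type d -name 'litellm-1.82.7.dist-info'
```

This still misses conda environments, Docker layers, and editable installs, which is exactly the "nooks and crannies" problem.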
The Python ecosystem provides too many nooks and crannies for malware to hide in.
Thank you for posting this, interesting.
I hope that everyone's course of action will be uninstalling this package permanently, and avoiding the installation of packages similar to this.
In order to reduce supply-chain risk, not only does a vendor (even if gratis and open source) need to be evaluated, but also the advantage it provides.
Exposing yourself to supply chain risk for an HTTP server dependency is natural. But exposing yourself for is-odd, or whatever this is, is not worth it.
Remember that you are programmers and you can just program, you don't need a framework, you are already using the API of an LLM provider, don't put a hat on a hat, don't get killed for nothing.
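The "just call the provider yourself" approach is often a few lines of shell. A sketch against an OpenAI-style chat endpoint (the endpoint, model name, and env var are illustrative; it is shown as a dry run via `echo` so it doesn't actually hit the network):

```shell
BODY='{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hello"}]}'

# Dry run: print the command instead of executing it.
# Drop the leading `echo` to make the real request.
echo curl -sS https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer \$OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$BODY"
```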
And even if you weren't using this specific dependency, check your deps; you might have shit like this in your requirements.txt and were merely saved by chance.
An additional note: the dev will probably post a post-mortem (what was learned, how it was fixed) and maybe downplay the thing. Ignore that; the only reasonable step after this is closing the repo, but there's no incentive to do that.
Edit: ignore this silliness, as it sidesteps the real problem. Leaving it here because we shouldn't remove our own stupidity.
It's pretty disappointing that safetensors has existed for multiple years now but people are still distributing pth files. Yes it requires more code to handle the loading and saving of models, but you'd think it would be worth it to avoid situations like this.
Tried running the compromised package inside Greywall; theoretically it should mitigate everything, but in practice it just forkbombs itself?
Am I the only one with the feeling that in the LLM era we now have a bigger amount of malicious software, say, parsers/fetchers of credentials/SSH/private keys? And that it's easier to produce them and include them in some third-party open-source software? Or is it just that our attention gets focused on such things?
Now I feel lucky that I switched to just using OpenRouter a year ago, because LiteLLM was incredibly flaky and kept causing outages.
What is happening in this issue thread? Why are there 100+ satisfied slop comments?
helpful
It's been quarantined on PyPI