I've looked at the "only 18,935 lines of code" python code and it made me want to poke my eyes out. Not sure what's the point of this extreme code-golfing.
>People get hired by contributing to the repo. It’s a very self directed job, with one meeting a week and a goal of making tinygrad better
I find this organizational structure compelling, probably about as close to 100% productive time in a week as you can get.
Love this guy and how committed he is
Very weird to market this as subscribing to "Elon process for software"
I remember when defcon ctf would play Geohot's PlayStation rap video every year on the wall.
The risk for Tinygrad is that PyTorch will create a new backend for Inductor, plug in their AMD codegen stuff, and voilà, PyTorch is still king. I mean, they could have easily just taken that route themselves instead of bothering with a new ML framework and AD engine. 99% of the work is just the AMD codegen part of the compiler.
Either way, super cool project and I wish them the best.
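For anyone wondering what the "AD engine" part actually entails: here's a minimal scalar reverse-mode autodiff sketch in plain Python. This is purely illustrative, not tinygrad's or PyTorch's actual implementation (both operate on tensors and compile graphs); the `Value` class and its methods are made up for this example.

```python
class Value:
    """A scalar that remembers how it was computed, for reverse-mode autodiff."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = lambda: None  # set by the op that produced this node

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward_fn = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward_fn = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward_fn()

x = Value(3.0)
y = Value(4.0)
z = x * y + x      # z = 15; dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
```

The real engineering effort, as the parent comment notes, isn't this part; it's the codegen that turns graphs like this into fast kernels for each GPU backend.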
> To fund the operation, we have a computer sales division that makes about $2M revenue a year.
What's the margin on that? Do 5 software engineers really subsist on the spread from moving $2M/yr in hardware?
Is it really "Complex"? Or did we just make it "Complicated"? - https://www.youtube.com/watch?v=ubaX1Smg6pY
What would tinygrad replace if they continue to proceed like this?
> tinygrad is following the Elon process for software. Make the requirements less dumb. The best part is no part.
That’s not Elon. See Russian TRIZ
How are ergonomics compared to pytorch, though? Adoption can be also driven by frictionless research (e.g. torch vs. tf comes to mind). Repo is missing proper docs aimed at early adopters imho
I really hope tinygrad succeeds with their mission of commoditizing the petaflop. We're nearing a future where you own nothing and rent everything, and they are one of few companies pushing back. This combined with the focus on making models efficient through better architecture/training, not just throwing more compute at it, seems like the right direction imo.
Is this the guy who talked a big game about all the things he was going to fix at Twitter, then utterly failed when confronted with a real world codebase and gave up having done nothing of use?
So this is all python? I bet Chris Lattner has approached them.
What happened to the tinybox red (v1)? It had way better specs than red v2.
If you want to "own" Nvidia, the much more realistic path than trying to compete with all the data centers already being built on Nvidia chips is obviously open source models. With open source models, inference matters far more to most people than training, and a maxed-out MacBook already does a good job of inference.
>"We also have a contract with AMD to get MI350X on MLPerf for Llama 405B training."
Anything to help AMD (and potentially other GPU/NPU/IPU etc. chip makers) catch up with NVidia/CUDA is potentially worth money, potentially worth a lot of money, potentially worth up to Billion$...
Why?
If we have
a) Market worth Billion$
and
b) A competitive race in that Market...
then
c) We have VALUE in anything (product, service, ?, ???) that helps any given participant capture more of that market than their competitors...
(AMD and the other lesser-known GPU/NPU/IPU chip vendors are currently lagging behind Nvidia's CUDA-driven AI market dominance. So anything that helps the others advance in this area should, generally speaking, benefit all technology users, and be potentially profitable, if the right deals can be struck, for those with the skills to do the assisting.)
Anyway, wishing you well in your endeavors, Tinygrad!
Dude is an absolute nut, I'll stick with PyTorch
https://geohot.github.io/blog/jekyll/update/2025/04/22/a-way...
Lots of words and weird analogies to say basically nothing.
What is the status of the project? What can it do? What has it achieved in 5 years?
But no, let's highlight how we follow the "Elon process".
As a side note, whenever someone incessantly focuses on lines of code as a metric (in either direction), I immediately start to take them less seriously.
Feel bad for geohot. Such a lovely guy, I hope he strikes it big soon.
I remember when he launched all this and was hoping that the AMD hardware (very capable) was just hamstrung by software. The idea that the chips are actually the easy part is something I never considered, though I "know" that "Nvidia succeeded because of their software". Haha, very clever.
I've got a Comma 3X and I'm thankful for the 4090 P2P work too (which is now here? https://github.com/tinygrad/open-gpu-kernel-modules) so I'm excited to see it work. Rooting for the guy. Hope it's true that the chip part is the easy work. Could be that both are "the hard work".