> In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. If you aren't capable of ordering through the website, I'm sorry but we won't be able to help.
Has this guy never worked on a B2B product before? Nobody is going to order a $10 million piece of infrastructure through your website's order form. And they are definitely going to want to negotiate something, even if it's just a warranty. And you'll do it because they're waving a $10 million check in your face.
The tone of this website is arrogant to the point of being almost hostile. The guy behind this seems to think that his name carries enough weight to dictate terms like this, among other things like requiring candidates to have already contributed to his product to even be considered for a job. I would be extremely surprised if anyone except him thinks he's that important.
There's no way the red v2 is doing anything with a 120B parameter model. I just finished building a dual A100 AI homelab (80GB VRAM combined, with NVLink). Similar stats otherwise. 120B only fits with very heavy quantization, enough to make the model schizophrenic in my experience. And there's no room left for the KV cache, so you'll OOM around 4k of context.
I'm running a 70B model now that's okay, but it's still fairly tight. And I've got 16GB more VRAM than the red v2.
I'm also confused why this is 12U. My whole rig is 4U.
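To put rough numbers on that (my estimates, assuming a dense model; runtimes add their own overhead, and GQA shrinks the KV cache considerably):

```python
# Weights vs. 80 GB of VRAM at different quantization levels.
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for params_b billion parameters."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

print(weights_gb(120, 16))  # fp16: 240 GB -- hopeless
print(weights_gb(120, 8))   # int8: 120 GB -- still doesn't fit
print(weights_gb(120, 4))   # 4-bit: 60 GB -- fits, leaving ~20 GB for everything else

# KV cache per token, using hypothetical dense-MHA dims for a 120B-class model
# (96 layers, 96 heads, head_dim 128, fp16); GQA would divide this by a lot.
layers, heads, head_dim, bytes_each = 96, 96, 128, 2
kv_per_token_mb = 2 * layers * heads * head_dim * bytes_each / 1e6  # K and V
print(round(kv_per_token_mb, 1))  # ~4.7 MB/token -> 4k of context alone is ~19 GB
```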
The green v2 has better GPUs. But for $65k, I'd expect a much better CPU and 256gb of RAM. It's not like a threadripper 7000 is going to break the bank.
I'm glad this exists but it's... honestly pretty perplexing
The exabox is interesting. I wonder who the customer is; after watching the Vera Rubin launch, I cannot imagine deciding I wanted to compete with NVIDIA for hyperscale business right now. Maybe it’s aiming at a value-conscious buyer? Maybe it’s a sensible buy for a (relatively) cash-strapped ML startup; actually I just checked prices, and it looks like Vera Rubin costs half for a similar amount of GPU RAM. I’m certain that the interconnect will not be as good as NV’s.
I have no idea who would buy this. Maybe if you think Vera Rubin is three years out? But NV ships, man, they are shipping.
$12,000 for the base model is insane. I have an Apple M3 Max with 128GB RAM that can run 120B parameter models using like 80 watts of electricity at about 15-20 tokens/sec. It's not amazing for 120B parameter models but it's also not 12 grand.
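That 15-20 tok/s figure is roughly what bandwidth math predicts, since decode is memory-bound. A sketch with nominal numbers (400 GB/s is the spec figure for the M3 Max; the dense vs. MoE split is my assumption about why the real-world number lands where it does):

```python
# Decode is memory-bandwidth bound, so an upper bound on speed is:
# tok/s ~= bandwidth / bytes of weights touched per token.
bandwidth = 400e9             # M3 Max nominal unified-memory bandwidth, bytes/s
bytes_per_param = 0.5         # ~4-bit quantization

dense_tok_s = bandwidth / (120e9 * bytes_per_param)  # every weight read each token
print(round(dense_tok_s, 1))  # ~6.7 tok/s ceiling for a dense 120B

# A MoE 120B (e.g. ~5B active params per token) has a far higher ceiling,
# which is how 15-20 tok/s real-world becomes plausible:
moe_tok_s = bandwidth / (5e9 * bytes_per_param)
print(round(moe_tok_s, 1))    # 160.0 tok/s theoretical ceiling
```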
The problem with all these "AI box" startups is that the product is too expensive for hobbyists, and companies that need to run workloads at scale can always build their own servers and racks and save on the markup (which is substantial). Unless someone can figure out how to get cheaper GPUs & RAM there is really no margin left to squeeze out.
Where is the 120B documented? This seems to be an editorialized title.
Edit: found a third party referencing the claim but it doesn't belong in the title here I think:
Meet the World’s Smallest ‘Supercomputer’ from Tiiny AI; A Machine Bold Enough to Run 120B AI Models Right in the Palm of Your Hand
https://wccftech.com/meet-the-worlds-smallest-supercomputer-...
Is this like the new equivalent of crypto mining? I remember the early days when they would sell hardware for farming crypto, now it’s AI?
Tinybox is cool but I think the market is maybe looking more for a turn-key explicit promise of some level of intelligence @ a certain Tok/s like "Kimi 2.5 at 50Tok/s".
IDK, I feel it’s quite overpriced, even with the current component prices.
I'm almost sure it's possible to custom-build a machine as powerful as their red v2 within a $9k budget. And have a lot of fun along the way.
The incremental price increases between products are funny.
$12,000, $65,000, $10,000,000.
I would love to see real-life tokens/sec values advertised for one or various specific open source models.
I'm currently shopping for offline hardware and it is very hard to estimate the performance I will get before dropping $12K, and would love to have a baseline that I can at least always get e.g. 40 tok/s running GPT-OSS-120B using Ollama on Ubuntu out of the box.
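For reference, Ollama already reports the numbers needed to compute this: the non-streaming /api/generate response includes eval_count (tokens generated) and eval_duration (nanoseconds). A minimal sketch (the example values are made up, not a benchmark):

```python
# Compute the decode rate from the fields Ollama returns. In practice you'd
# POST to http://localhost:11434/api/generate with {"model": ..., "stream": false}
# and read eval_count / eval_duration off the JSON response.

def decode_tok_s(eval_count: int, eval_duration_ns: int) -> float:
    """Generated tokens per second from Ollama's reported counters."""
    return eval_count / (eval_duration_ns / 1e9)

# Example with made-up values shaped like a real response:
print(round(decode_tok_s(256, 6_400_000_000), 1))  # 256 tokens in 6.4 s -> 40.0 tok/s
```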
Perhaps this company should think about acting as a landlord for their hardware. You buy (or lease) but they also offer colocation hosting. They could partner with crypto miners who are transitioning to AI factories to find the space and power to do this. I wonder if the machines require added cooling, though, in what would otherwise be a crypto mining center. CoreWeave made the transition and also do colocation. The switchover is real.
I think Tinygrad should think about recycling. Are they planning ahead in this regard? Is anyone? My thought is that if there were a central database of who owns what and where, then at least when the recycling tech becomes available, people will know where to source their specific trash (and even pay for it). Having a database like that in the first place could even fuel the industry.
I just backed their TINY on Kickstarter.
"... and likely the best performance/$".
"likely" doesn't inspire much confidence. Surely, they have those numbers, and if it was, they'd publicize the comparisons.
Skeptical of their engineering, with replies to questions like this: https://x.com/jgarzik/status/2031312666036146460?s=20
Regarding 2x faster than pytorch being a condition for tinygrad to come out of alpha:
Can they/someone else give more details as to what workloads pytorch is more than 2x slower than the hardware provides? Most of the papers use standard components and I assume pytorch is already pretty performant at implementing them at 50+% of extractable performance from typical GPUs.
If they mean more esoteric stuff that requires writing custom kernels to get good performance out of the chips, then that's a different issue.
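To make the question concrete, the fraction of peak that PyTorch already extracts is a quick roofline-style check. The timing and peak figures below are hypothetical placeholders, not measurements - but if standard workloads already sit above 50% of peak, a uniform 2x speedup can't come from those kernels:

```python
def matmul_utilization(n: int, seconds: float, peak_tflops: float) -> float:
    """Fraction of peak FLOP/s achieved by an n x n x n matmul taking `seconds`."""
    flops = 2 * n**3  # one multiply + one add per inner-product element
    return flops / seconds / 1e12 / peak_tflops

# Hypothetical: an 8192^3 fp16 matmul in 1.4 ms on a card with 990 peak TFLOP/s.
print(round(matmul_utilization(8192, 1.4e-3, 990), 2))  # ~0.79 of peak
```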
Cool that you have a dual power supply model. It says rack mountable or free standing. Does that mean two form factors? $65K is more than we can afford right now but we are definitely eventually in the market for something we can run in our own colo.
It's funny though... we're using deepseek now for features in our service and based on our customer-type we thought that they would be completely against sending their data to a third-party. We thought we'd have to do everything locally. But they seem ok with deepseek which is practically free. And the few customers that still worry about privacy may not justify such a high price point.
Sounds like a solid prebuilt with well-balanced components and a pretty case.
Not revolutionary in any way, but nice. Unless I'm missing something here?
Not sure why they stopped using 6 GPUs in their builds - with 4 GPUs, both the 9070 and the RTX 6000 come in 2-slot designs, so it's easy to build it yourself using a somewhat more expensive, but still fairly regular, motherboard.
With 6 GPUs you have to deal with risers, PCIe retimers, dual PSUs, and a custom case, so the value proposition there was much better IMO.
Tinygrad devices are interesting, I wish I had screen captures - but their prices have gone up and some specs like RAM have gone down.
A single box with those specs without having to build/configure (the red and green) - I could see being useful if you had $ and not time to build/configure/etc yourself.
I just don’t believe that this can run inference on a 120 billion parameter model at actually useful speeds.
Obviously any Turing machine can run any size of model, so the “120B” claim doesn’t mean much - what actually matters is speed, and I just don’t believe this can be fast enough on models that my $5,000 5090-based PC is both too slow for and lacks the VRAM for.
What’s the most effective ~$5k setup today? Interested in what people are actually running.
I thought the most interesting thing about tinygrad was that theoretically you could render a model all the way into hardware similar to Taalas (tinygrad might be where Taalas got the idea for all I know).
I could swear I filed a GitHub issue asking about the plans for that but I don't see it. Anyway I think he mentioned it when explaining tinygrad at one point and I have wondered why that hasn't got more attention.
As far as boxes, I wish that there were more MI355X available for normal hourly rental. Or any.
The AMD angle is interesting given the history - tinygrad has had to work around a lot of driver quirks to get ROCm into a usable state. At that price point, you're essentially betting on a software stack that NVIDIA has had years to stabilize. Would be curious to see real-world utilization numbers vs. a comparable NVIDIA setup.
Who is the intended customer for this product? I am genuinely curious.
Oh, this is geohot's product?
He's an interesting guy. Seems to be one who does things the way he thinks is right, regardless of corporate profits.
The exabox reads as if it's making a joke of something or someone. If it's real then it's really interesting!
Surprising to see this with AMD GPUs, considering how George famously threw up his hands at AMD not being worth working with.
10 mil today... 1k in 10 years. Are OpenAI and Anthropic overvalued?
Quite an expensive little bastard. I wonder how much sense it makes to invest in such a device when you can get $0.40/Mtok from Hyperbolic, for example.
$12,000 gets you 1Gb/s networking and vanilla Ubuntu 24.04. Napkin math on the hardware suggests margins are around 50%, which feels like a school fundraiser where everyone pays what is obviously way more than normal retail price for X because "it's for the children."
I'm not sure what tinygrad is but I assume the markup is because the customer is making a conscious choice to support the tinygrad project. But what's unusual is there is apparently no reason whatsoever to buy this hardware, even if you plan on using tinygrad exclusively for your project. At least with System76 hardware I get (in theory) first class support for Pop!_OS.
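For what it's worth, here's the shape of that napkin math - every component price below is a placeholder I made up, not a real BOM for the tinybox:

```python
# Rough gross-margin estimate. All component prices are hypothetical.
bom = {
    "4x GPU": 5_000,           # assumed street price, total
    "CPU + mobo + RAM": 700,
    "storage": 300,
    "PSU + case + cooling": 500,
}
cost = sum(bom.values())
price = 12_000
margin = (price - cost) / price
print(cost, round(margin, 2))  # 6500 0.46 -- before assembly, support, shipping
```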
Can someone explain the exabox? They say it "functions as a single GPU". Is there anything like that currently existing?
exabox -
720x RDNA5 AT0 XL, 25,920 GB VRAM, 23,040 GB System RAM
~ $10 Million
Who is the target market here?
I always wonder about these expensive products: does the company build them once it's ordered, or do they just make them beforehand?
Are we at the point where 2x 9070 XTs are a viable LLM platform? (I know this has 4, just wondering for myself).
I wonder if this is frontpage right now because of the other tiiny (the names are similar) video that went viral ... which, it turns out, wasn't an actual product by the tinygrad linked in this post[1]
Adding this to my list of ~beautifully~ designed things to buy when I win the lottery.
How does this thing cool down?
I thought there was a typo in the price
Give me token/s for favourite models.
Meanwhile M-series processors and Qwen are racing to do the same thing for a much more approachable price.
Great idea - can you publish the power consumption figures for this device?
Finally, a computer that should be able to run Monster Hunter Wilds with decent performance.
But let’s be real, 12k is kinda pushing it - what kind of people are gonna spend $65k or even $10M (lmao WTAF) on a boutique thing like this? I don't think these kinds of things go in datacenters (happy to be corrected) and they are way too expensive (and probably way too HOT) to just go in a home or even an office “closet”.
Who is this for?
I have 8x RTX 6000 Pro. Better to run the 300 W version of the cards. And it costs close to their 4x version. I get why they make it so big. So you can cool it at home. I prefer to just put in datacenter. Much cheaper power.
> Can I pay with something besides wire transfer? In order to keep prices low and quality high, we don't offer any customization to the box or ordering process. Wire transfer is the only accepted form of payment.
Sorry, what? Is this just a scam?
Is this real? Reads like a joke. They sell a $12K machine, a $60K machine, and a $10M machine???
There's some irony in the fact that this website reads as extremely NOT AI-generated, very human in the way it's designed and the tone of its writing.
Still, this is a great idea, and one I hope takes off. I think there's a good argument that the future of AI is in locally-trained models for everyone, rather than relying on a big company's own model.
One thought: The ability to conveniently get this onto a 240v circuit would be nice. Having to find two different 120v circuits to plug this into will be a pain for many folks.
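Agreed - the circuit math is the whole issue. A quick sketch (US residential assumptions; the 0.8 factor is the NEC continuous-load rule):

```python
# Max continuous load per circuit: volts * breaker amps * 0.8 (NEC continuous rule).
def max_continuous_watts(volts: float, breaker_amps: float) -> float:
    return volts * breaker_amps * 0.8

print(max_continuous_watts(120, 15))  # ~1440 W: one stock 120V circuit
print(max_continuous_watts(240, 30))  # ~5760 W: a single 240V/30A (dryer-style) circuit
```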