Am I crazy, or is Jensen's statement a copy-paste from ChatGPT?
(Could be both)
Anyone know how this compares to Apple’s M5 chips? Or is that comparison <takes off sunglasses> apples to oranges.
Does this mean their gaming GPUs are in less demand, and will therefore become cheaper/more available again?
It is an 88-core ARMv9 chip, for a somewhat more detailed spec.
Say what you want about NVIDIA (to me they are just doing what every company would do in their place), but they create engineering marvels.
So does this cut out Intel/x86 from all the massive new datacenter buildouts entirely? They've already lost Apple as a customer and are not competitive in the consumer space. I don't see how they can realistically grow at all with x86.
I'm assuming this is for tool calls and orchestration. I didn't know we needed more exploitable parallelism from the hardware; the bottlenecks were in software (you're not running 10,000 agents or downstream tool calls concurrently).
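To illustrate why the bottleneck feels like software rather than hardware, here's a minimal sketch (assuming a plain Python asyncio setup; all names and latencies are made up) where the agents' "work" is almost entirely waiting on I/O, so a single core happily juggles thousands of them:

    import asyncio
    import random

    # Hypothetical sketch: agent "work" is mostly waiting on network I/O
    # (model API calls, tool calls), so one core can juggle thousands of
    # them. The limit is orchestration and downstream services, not
    # hardware parallelism.

    async def agent(agent_id: int) -> str:
        await asyncio.sleep(random.uniform(0.1, 0.5))   # stand-in for model latency
        await asyncio.sleep(random.uniform(0.05, 0.2))  # stand-in for a tool call
        return f"agent-{agent_id} done"

    async def main() -> None:
        results = await asyncio.gather(*(agent(i) for i in range(10_000)))
        print(len(results), "agents finished")

    asyncio.run(main())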
Can someone explain what the Vera CPU does that a traditional CPU doesn't?
Given the price of these systems, the ridiculously expensive network cards aren't such a huge deal, but I can't help but wonder at the absurd bandwidth hanging off Vera: big brags about "7x more bandwidth than PCIe Gen 6", and then having to go through PCIe to the network to talk to anyone else. It might be 800GbE, but it's still so many hops, and PCIe is weighty.
I keep expecting to see fabric gains: something where the host chip has a better way to talk to other host chips.
It's hard to deny the advantages of central switching as something easy and effective to build, but reciprocally, the high-radix systems Google has been building are amazing. Microsoft's Maia 200 put a gobsmacking 2.8 Tbps of Ethernet on-chip, but it still feels like so little, like such a bare start. For reference, PCIe 6.0 x16 is a bit shy of 1 Tbps, so that's vaguely ~45 lanes' worth (back-of-envelope arithmetic in the sketch after the link below).
It will be interesting to see what other bandwidth-massive workloads evolve over time, or if this throughput era really ends up serving AI alone. Hoping CXL or someone else slims down the overhead and latency of attachment, soon-ish.
Maia 200: https://www.techpowerup.com/345639/microsoft-introduces-its-...
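For anyone who wants to sanity-check those numbers, the arithmetic, using rough public link rates (raw bits, ignoring encoding and protocol overhead, so treat everything as ballpark):

    # Rough public link rates in Gbps; raw bits, ignoring encoding and
    # protocol overhead, so all of this is ballpark.
    PCIE6_GBPS_PER_LANE = 64                         # PCIe 6.0: 64 GT/s per lane
    pcie6_x16 = 16 * PCIE6_GBPS_PER_LANE             # ~1,024 Gbps, "a bit shy of 1 Tbps"

    nvlink_claim = 7 * pcie6_x16                     # the "7x PCIe Gen 6" brag: ~7,168 Gbps
    eth_800g = 800                                   # one 800GbE port

    maia200_eth = 2_800                              # Maia 200's claimed on-chip Ethernet
    lanes_equiv = maia200_eth / PCIE6_GBPS_PER_LANE  # ~44 PCIe 6.0 lanes' worth

    print(f"PCIe 6.0 x16: {pcie6_x16} Gbps")
    print(f"'7x PCIe 6' claim: ~{nvlink_claim} Gbps vs one 800GbE port: {eth_800g} Gbps")
    print(f"Maia 200 Ethernet ~= {lanes_equiv:.0f} PCIe 6.0 lanes")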
What the heck is agentic inference and how is it supposed to be different from LLM inference? That's a rhetorical question. Screw marketing and screw hype.
> Purpose-Built for Agentic AI
From the "fridge purpose-built for storing only yellow tomatoes" and "car only built for people whose last name contains the letter W" series.
When will this insanity end? It is a completely normal, garden-variety ARM SoC; it'll run Linux, same as every other ARM SoC does. It is as related to "Agentic $whatever" as your toaster is.
Who wants general computing anyways?
China will beat this....
Seems like a triumph of hype over reality.
China can do breathless hype just as well as Nvidia.
Are we rapidly careening towards a world where _only_ AI “computing” is possible?
Wanted to do general purpose stuff? Too bad, we drove the price of everything up, and then started producing only chips designed to run "ai" workloads.
Oh you wanted a local machine? Too bad, we priced you out, but you can rent time with an ai!
Feels like another ratchet on the “war on general purpose computing” but from a rather different direction.
The philosophy of knowing exactly what's on your system translates directly to how you think about software you build. Local-first, no telemetry, minimal dependencies. FreeBSD instilled that mindset in a generation of developers that now pushes back hard against cloud-everything SaaS. Tauri over Electron is the same argument applied to desktop apps.