The article is missing this motivation paragraph, taken from the blog index:
> Graphics APIs and shader languages have significantly increased in complexity over the past decade. It’s time to start discussing how to strip down the abstractions to simplify development, improve performance, and prepare for future GPU workloads.
> GPU hardware started to shift towards a generic SIMD design. SIMD units were now executing all the different shader types: vertex, pixel, geometry, hull, domain and compute. Today the framework has 16 different shader entry points. This adds a lot of API surface and makes composition difficult. As a result GLSL and HLSL still don’t have a flourishing library ecosystem ... despite 20 years of existence
A lot of this post went over my head, but I've struggled enough with GLSL for this to be triggering. Learning is brutal because there's no middle ground between reinventing every shader every time and using an engine that abstracts shaders away from the render pipeline. A lot of open-source projects that use shaders are either allergic to documenting them or proud of how obtuse the code is. Shadertoy is about as good as it gets, and that's not a compliment.
The only way I learned anything about shaders was from someone who already knew them well. They learned what they knew by spending a solid 7-8 years of their teenage/young adult years doing nearly nothing but GPU programming. There's probably something in between that doesn't involve giving up and using node-based tools, but in a couple decades of trying and failing to grasp it I've never found it.
I see this as an expression of the same underlying complaint as Casey Muratori's 30 Million Line Problem: https://caseymuratori.com/blog_0031
Casey argues for ISAs for hardware, including GPUs, instead of heavy drivers. TFA argues for a graphics API surface that can be so lean precisely because it fundamentally boils down to a simple, small set of primitives (mapping memory, simple barriers, etc.) that are basically equivalent to a simple ISA.
If a stable ISA were a requirement, I believe we would have converged on these simpler capabilities ahead of time, as a matter of necessity. However, I am not a graphics programmer, so I just offer this as an intellectual provocation to drive conversation.
I have followed Sebastian Aaltonen's work for quite a while now, so maybe I am a bit biased, but this is a great article.
I also think that the way forward is to go back to software rendering, except this time around, as he points out, those algorithms and data structures are actually hardware accelerated.
Note that this is already an ongoing trend in the VFX industry; about 5 years ago OTOY ported OctaneRender to CUDA as its main rendering API.
Impressive post, so many details. I could only understand parts of it, but I think this article will probably be a reference for future graphics APIs.
I think it's fair to say that for most gamers, Vulkan/DX12 hasn't really been a net positive. The PSO (pipeline state object) stuttering problem affected many popular games, and while Vulkan has been trying to improve, WebGPU is tricky as it has its roots in the first versions of Vulkan.
Perhaps it was a bad idea to go all in on a low-level API that exposes so many details when the hardware underneath is evolving so fast. Maybe CUDA, as the post suggests in places, with its more generic compute support, is the right way after all.
I think this almost has to be the future if most compute development goes to AI in the next decade or so, beyond the fact that the proposed API is much cleaner. Vendors will stop caring about maintaining complex fixed-function hardware and drivers for increasingly complex graphics APIs when they can get 3x the return from AI without losing any potential sales, especially now that compute seems to be more supply-limited. Game engines can (and I assume already do) benefit from general-purpose compute anyway for things like physics, and even where it wouldn't matter for performance in itself, or would be slower, doing more on the GPU can be faster if your data is already on the GPU, which becomes more true the more work moves to the GPU. And as the author says, it would be great to have an open-source equivalent of CUDA's ecosystem that could be leveraged by games in a cross-platform way.
This reminds me of Makimoto’s Wave:
https://semiengineering.com/knowledge_centers/standards-laws...
There is a constant cycle between domain-specific designs with algorithms hardcoded in hardware and flexible programmable designs.
>The user writes the data to CPU mapped GPU memory first and then issues a copy command, which transforms the data to optimal compressed format.
Wouldn't this mean double GPU memory usage when uploading a potentially large image (even if only until the copy is finished)?
Vulkan lets the user copy from CPU (HOST_VISIBLE) memory to GPU (DEVICE_LOCAL) memory without an intermediate GPU buffer; AFAIK there is no double VRAM usage there, but I might be wrong on that.
Great article btw. I hope something comes out of this!
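For anyone wondering what that staging path looks like in practice, here's a rough Vulkan sketch of the flow the quoted passage describes (a hypothetical helper; it assumes a pre-created HOST_VISIBLE|HOST_COHERENT staging buffer and a DEVICE_LOCAL optimally tiled image, and omits layout transitions and synchronization):

```c
#include <string.h>
#include <vulkan/vulkan.h>

/* Hypothetical helper: all handles are assumed to be created elsewhere. */
static void upload_texture(VkDevice device, VkCommandBuffer cmd,
                           VkDeviceMemory staging_mem, VkBuffer staging_buf,
                           VkImage image, uint32_t width, uint32_t height,
                           const void *pixels, VkDeviceSize size)
{
    /* 1. Write the raw pixels into CPU-mapped memory. */
    void *mapped = NULL;
    vkMapMemory(device, staging_mem, 0, size, 0, &mapped);
    memcpy(mapped, pixels, size);
    vkUnmapMemory(device, staging_mem);

    /* 2. Record the copy; the GPU swizzles the linear data into the image's
          optimal (tiled/compressed) layout during the transfer. */
    VkBufferImageCopy region = {
        .bufferOffset = 0,
        .imageSubresource = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1 },
        .imageExtent = { width, height, 1 },
    };
    vkCmdCopyBufferToImage(cmd, staging_buf, image,
                           VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);

    /* The pixels live twice only until the copy has executed; the staging
       memory is then free to be recycled for the next upload, so in practice
       you size a small ring of staging memory rather than doubling VRAM. */
}
```

Buffers can indeed go straight into HOST_VISIBLE | DEVICE_LOCAL memory (ReBAR) with no second copy, but optimally tiled images still need the copy step to get into the GPU-preferred layout, which is presumably why the article describes uploads that way.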
I miss Mantle. It had its quirks, but you felt as if you were literally programming the hardware through a pretty straightforward API. The most fun I’ve had programming was for the Xbox 360.
If you enjoyed the history-of-GPUs section, there's a great book by Jon Peddie that goes into more detail, titled "The History of the GPU - Steps to Invention"; definitely worth a read.
This article already feels like it’s on the right track. DirectX 11 was perfectly fine, and DirectX 12 is great if you really want total control over the hardware, but I even remember an IHV saying that this level of control isn’t always a good thing.
When you look at the DirectX 12 documentation and best-practice guides, you’re constantly warned that certain techniques may perform well on one GPU but poorly on another, and vice versa. That alone shows how fragile this approach can be.
Which makes sense: GPU hardware keeps evolving and has become incredibly complex. Maybe graphics APIs should actually move further up the abstraction ladder again, to a point where you mainly upload models, textures, and a high-level description of what the scene and objects are supposed to do and how they relate to each other. The hardware (and its driver) could then decide what’s optimal and how to turn that into pixels on the screen.
Yes, game engines and (to some extent) RHIs already do this, but having such an approach as a standardized, optional graphics API would be interesting. It would allow GPU vendors to adapt their drivers closely to their hardware, because they arguably know best what their hardware can do and how to do it efficiently.
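Just to make that idea concrete, a standardized declarative layer might look roughly like this (purely hypothetical sketch; every type and function name here is made up and exists in no real API):

```c
#include <stddef.h>
#include <stdint.h>

/* Purely hypothetical, declarative ("retained mode") API: the driver owns
   memory layout, pipeline state, scheduling, and how the scene becomes pixels. */
typedef struct HlDevice   HlDevice;
typedef struct HlScene    HlScene;
typedef struct HlMesh     HlMesh;
typedef struct HlMaterial HlMaterial;
typedef struct HlObject   HlObject;

typedef struct { float position[3], rotation[4], scale[3]; } HlTransform;

/* Upload assets; the driver picks formats, tiling, and residency. */
HlMesh     *hlCreateMesh(HlDevice *dev, const void *vertices, size_t vertex_count,
                         const uint32_t *indices, size_t index_count);
HlMaterial *hlCreateMaterial(HlDevice *dev, const char *shading_model,
                             const char *const *texture_paths, size_t texture_count);

/* Describe the scene: objects, relations, and intent rather than draw calls. */
HlObject *hlSceneAddObject(HlScene *scene, HlMesh *mesh, HlMaterial *material,
                           const HlTransform *transform, HlObject *parent);
void      hlSceneSetCamera(HlScene *scene, const HlTransform *camera, float fov_deg);

/* One call per frame; the vendor decides how to turn this into pixels. */
void hlRenderScene(HlDevice *dev, HlScene *scene);
```

The obvious trade-off is that the vendor's heuristics replace the application's knowledge of its own workload, which is arguably why the industry drifted away from this model before; as an optional layer alongside a low-level API it might be more palatable.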
Very well written but I can't understand much of this article.
What would be one good primer to be able to comprehend all the design issues raised?
I wonder why M$ stopped putting out new DirectX versions? DirectX 12 Ultimate, 12.1, and 12.2 are largely the same as DirectX 12.
Or has the use of middleware like Unreal Engine largely made them irrelevant? Or should Epic put out a new graphics API proposal?
After reading this article, I feel like I've witnessed a historic moment.
And the GPU API cycle of life and death continues!
I was an only-half-joking champion of ditching vertex attrib bindings when we were drafting WebGPU and WGSL, because it's a really nice simplification, but it was felt that would be too much of a departure from existing APIs. (Spending too many of our "Innovation Tokens" on something that would cause dev friction in the beginning)
In WGSL we tried (for a while?) to build language features as "sugar" when we could. You don't have to guess what order or scope a `for` loop uses when we just spec how it desugars into a simpler, more explicit (but more verbose) core form/dialect of the language.
That said, this powerpoint-driven-development flex knocks this back a whole seriousness and earnestness tier and a half:
> My prototype API fits in one screen: 150 lines of code. The blog post is titled “No Graphics API”. That’s obviously an impossible goal today, but we got close enough. WebGPU has a smaller feature set and features a ~2700 line API (Emscripten C header).
Try to zoom out on the API and fit those *160* lines on one screen! My browser gives up at 30%, and I am still only seeing 127. This is just dishonesty, and we do not need more of this kind of puffery in the world.
And yeah, it's shorter because it is a toy PoC, even if one I enjoyed seeing someone else's take on it. Among other things, the author pretty dishonestly elides the number of lines the enums would take up. (A texture/data format enum on one line? That's one whole additional Pinocchio right there!)
I took WebGPU.webidl and did a quick pass through removing some of the biggest misses of this API (queries, timers, device loss, errors in general, shader introspection, feature detection) and some of the irrelevant parts (anything touching canvas, external textures), and immediately got it down to 241 declarations.
This kind of dishonest puffery holds back an otherwise interesting article.
Personally I'm staying with OpenGL (ES) 3 for eternity.
VAOs were the last feature I was missing before that.
Also, the other cores will do useful gameplay work, so one CPU core for the GPU is OK.
4 CPU cores is also enough for eternity. 1GB shared RAM/VRAM too.
Let's build something good on top of the hardware/OSes/APIs/languages we have now? 3588/linux/OpenGL/C+Java specifically!
Hardware has permanently peaked in many ways; only soft internal protocols can now evolve, and I write mine inside TCP/HTTP.
I don't understand this part:
> Meshlet has no clear 1:1 lane to vertex mapping, there’s no straightforward way to run a partial mesh shader wave for selected triangles. This is the main reason mobile GPU vendors haven’t been keen to adapt the desktop centric mesh shader API designed by Nvidia and AMD. Vertex shaders are still important for mobile.
I get that there's no mapping from vertex/triangle to tile until after the mesh shader runs. But even with vertex shaders there's also no mapping from vertex/triangle to tile until after the vertex shader runs. The binning of triangles to tiles has to happen after the vertex/mesh shader stage. So I don't understand why mesh shaders would be worse for mobile TBDR.
I guess this is suggesting that TBDR implementations split the vertex shader into two parts, one that runs before binning and only calculates positions, and one that runs after and computes everything else. I guess this could be done but it sounds crazy to me, probably duplicating most of the work. And if that's the case why isn't there an extension allowing applications to explicitly separate position and attribute calculations for better efficiency? (Maybe there is?)
Edit: I found docs on Intel's site about this. I think I understand now. https://www.intel.com/content/www/us/en/developer/articles/g...
Yes, you have to execute the vertex shader twice, which is extra work. But if your main constraint is memory bandwidth, not FLOPS, then I guess it can be better to throw away the entire output of the vertex shader except the position, rather than save all the output in memory and read it back later during rasterization. At rasterization time when the vertex shader is executed again, you only shade the triangles that actually went into your tile, and the vertex shader outputs stay in local cache and never hit main memory. And this doesn't work with mesh shaders because you can't pick a subset of the mesh's triangles to shade.
It does seem like there ought to be an extension to add separate position-only and attribute-only vertex shaders. But it wouldn't help the mesh shader situation.
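To make the two-pass idea concrete, here's a toy sketch of the flow as I understand it (plain C with made-up names, nothing like real driver code):

```c
#include <stddef.h>

/* Toy types for illustration only. */
typedef struct { float x, y, z, w; } Vec4;
typedef struct { Vec4 position; float uv[2]; float normal[3]; } Varyings;
typedef struct { unsigned v[3]; } Triangle;

typedef Vec4     (*PositionOnlyVS)(unsigned vertex_id); /* cheap position-only variant */
typedef Varyings (*FullVS)(unsigned vertex_id);         /* full vertex shader */

/* Pass 1: shade positions only and bin triangles into tiles. Everything except
   the positions is thrown away, so only a compact per-tile triangle list ever
   hits main memory. */
void bin_pass(const Triangle *tris, size_t tri_count, PositionOnlyVS shade_pos,
              void (*bin_triangle)(size_t tri_index, const Vec4 pos[3]))
{
    for (size_t i = 0; i < tri_count; ++i) {
        Vec4 pos[3];
        for (int k = 0; k < 3; ++k)
            pos[k] = shade_pos(tris[i].v[k]);
        bin_triangle(i, pos);
    }
}

/* Pass 2: per tile, re-run the full vertex shader only for the triangles that
   landed in this tile; the varyings stay in on-chip memory and feed the
   rasterizer directly. */
void tile_pass(const Triangle *tris, const size_t *binned, size_t binned_count,
               FullVS shade_full, void (*rasterize)(const Varyings v[3]))
{
    for (size_t i = 0; i < binned_count; ++i) {
        Varyings v[3];
        for (int k = 0; k < 3; ++k)
            v[k] = shade_full(tris[binned[i]].v[k]);
        rasterize(v);
    }
}
```

The duplicated position math in pass 2 is the extra ALU cost; the win is that the full varyings never round-trip through DRAM. And since a mesh shader emits a whole meshlet at once, there's no obvious way to re-run it for just the subset of triangles that landed in one tile, which seems to be the article's point.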
Great post, it brings back a lot of memories. Two additional factors that designers of these APIs consider are:
* GPU virtualization (e.g., the D3D residency APIs), to allow many applications to share GPU resources (e.g., HBM).
* Undefined behavior: how easy is it for applications to accidentally or intentionally take a dependency on undefined behavior? This can make it harder to translate this new API to an even newer API in the future.
NVIDIA's NVRHI has been my favorite abstraction layer over the complexity that modern APIs bring.
In particular, this fork: https://github.com/RobertBeckebans/nvrhi which adds some niceties and quality of life improvements.
Ironically, explaining that "we need a simpler API" takes a dense 69-page technical missive that would make the Khronos Vulkan tutorial blush.
I started my career writing software 3D renderers before switching to Direct3D in the late '90s. What I wonder is whether all of this is just going to get completely washed away and made totally redundant by the incoming flood of hallucinated game rendering?
Will it be possible to hallucinate the frame of a game at a similar speed to rendering it with a mesh and textures?
We're already seeing the hybrid version of this where you render a lower res mesh and hallucinate the upscaled, more detailed, more realistic looking skin over the top.
I wouldn't want to be in the game engine business right now :/
I'm kind of curious about something: most of my graphics experience has been OpenGL or WebGL (tiny bit of Vulkan) or big engines like Unreal or Unity. I've noticed over the years that uptake of DX12 always seemed marginal, though (a lot of things stayed on D3D11 for a really long time). Is Direct3D 12 super awful to work with or something? I know it requires more resource management than 11, but so does Vulkan, which doesn't seem to have the same issue.
Is this going to materialize into a "thing"?
This seems tangentially related?
At first glance, this looks very similar to the SDL3 GPU API and other RHI libraries that have been created.
This needs an index and introduction. It's also not super interesting to people in industry? Like yeah, it'd be nice if bindless textures were part of the API so you didn't need to create that global descriptor set. It'd be nice if you could just sample from pointers to textures, similar to how dereferencing buffer pointers works.
The article talks a lot about PSOs (pipeline state objects) but never defines the term.
I wonder if Valve might put out their own graphics API for SteamOS.
What level of performance improvement would this represent?
I mean sure, this should be nice and easy.
But then game/engine devs want to use a vertex shader that produces a UV coordinate and a normal together with a pixel shader that reads only the UV coordinate (or neither, for shadow mapping), and they don't want to pay the bandwidth cost of the unused vertex outputs (or the cost of calculating them).
Or they want to be able to randomly enable any other pipeline stage, like tessellation or geometry, and have the same shader just work without any performance overhead.
LLMs will eat this up
This is a fantastic article that demonstrates how many parts of Vulkan and DX12 are no longer needed.
I hope the IHVs have a look at it, because current DX12 seems semi-abandoned: it still doesn't support buffer pointers even though every GPU made in the last 10 (or more!) years can handle pointers just fine. Meanwhile Vulkan won't do a 2.0 release that cleans things up, so it carries a lot of baggage and, especially, tons of drivers that don't implement the extensions that really improve things.
If this API existed, you could emulate OpenGL on top of it faster than the current OpenGL-to-Vulkan layers, and something like SDL3 GPU would get a 3x/4x boost too.
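On the buffer-pointers point, Vulkan already exposes them via VK_KHR_buffer_device_address (core in 1.2). Host-side it boils down to this (minimal sketch; assumes the bufferDeviceAddress feature is enabled and the buffer was created with VK_BUFFER_USAGE_SHADER_DEVICE_ADDRESS_BIT):

```c
#include <vulkan/vulkan.h>

/* Query the raw 64-bit GPU address of a buffer. */
VkDeviceAddress get_buffer_pointer(VkDevice device, VkBuffer buffer)
{
    VkBufferDeviceAddressInfo info = {
        .sType  = VK_STRUCTURE_TYPE_BUFFER_DEVICE_ADDRESS_INFO,
        .buffer = buffer,
    };
    /* The returned address can be written into a push constant or another
       buffer and dereferenced in shaders via GL_EXT_buffer_reference
       (SPIR-V PhysicalStorageBuffer). */
    return vkGetBufferDeviceAddress(device, &info);
}
```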