I’m not the commenter you asked, but I have also built a cross-platform game framework with backends for SDL_GPU and WebGL. The answer to your question is pretty simple: AI did it for me.
I asked it to create a canvas-like API, noting that it should generate platform-independent code. The canvas API populates arrays of vertices, indices, and other data relevant to draw batches. My game is built on top of this platform-independent canvas code, and is itself platform-independent.
Then you have the platform code, which simply reads the memory of those arrays and does whatever it needs to do to draw them in its environment. I have barely looked at the platform code, but it seems to just work, and it is really performant. It’s around 1,000 lines of code for the web target. The key is to use shared memory as the bridge between the compiled WASM code and the platform code for draw calls. As I said, it’s mostly just arrays of vertices, texture ids, and indices.
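On the web side, that shared-memory bridge might look something like the sketch below. The byte layout (a small header followed by vertex floats and index uints) is entirely made up for the example; in a real build the buffer would be the WebAssembly.Memory of the compiled game rather than a plain ArrayBuffer:

```typescript
// Hedged sketch: the web platform layer creates typed-array views directly
// over shared WASM memory, so no per-frame copying of vertex data is needed.
// Assumed layout per draw call: [textureId, vertexCount, indexCount] as
// 32-bit uints, then vertexCount * 4 floats, then indexCount uint32 indices.
function readDrawCall(memory: ArrayBuffer, byteOffset: number) {
  const header = new Uint32Array(memory, byteOffset, 3);
  const [textureId, vertexCount, indexCount] = header;
  const vertices = new Float32Array(memory, byteOffset + 12, vertexCount * 4);
  const indices = new Uint32Array(
    memory,
    byteOffset + 12 + vertices.byteLength,
    indexCount,
  );
  // A real backend would now bind the texture for textureId and feed these
  // views to gl.bufferData(...) and gl.drawElements(...).
  return { textureId, vertices, indices };
}
```

Because typed-array views alias the underlying buffer, the platform code reads exactly the bytes the WASM side wrote, which is what makes this bridge so cheap.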
It took me some thinking to work out how to define textures in a platform-independent way, but it all ended up working well. I bounced some ideas off the AI to come up with a solution using just ids.
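One plausible shape for that id scheme is a small registry owned by the platform layer: the game side only ever sees integers, and each backend maps them to its own GPU objects. This is my own illustrative sketch, not the framework’s actual code:

```typescript
// Hypothetical texture registry: T would be a WebGLTexture on the web
// backend and an SDL_GPU texture handle on the native backend; the
// platform-independent game code only ever holds the returned integer ids.
class TextureRegistry<T> {
  private nextId = 1;
  private byId = new Map<number, T>();

  // Called by the platform layer after it has uploaded a texture to the GPU.
  register(platformTexture: T): number {
    const id = this.nextId++;
    this.byId.set(id, platformTexture);
    return id;
  }

  // Called once per draw batch to turn the id back into a platform texture.
  resolve(id: number): T {
    const tex = this.byId.get(id);
    if (tex === undefined) throw new Error(`unknown texture id ${id}`);
    return tex;
  }

  unregister(id: number): void {
    this.byId.delete(id);
  }
}
```

The nice property is that texture ids travel through the shared memory as plain numbers alongside the vertices and indices, so the batch format never has to mention a platform type.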
From there I just kept adding features: FMOD support, shaders, etc.
Edit: Oops, I misread your comment — it was referring specifically to getting MonoGame on the web. I thought I’d leave this here anyway, though, because it might help you. The key insight for me was that the canvas API (and MonoGame as well) is just batching up vertices and indices into draw calls before any platform-specific work happens. I realised this after investigating how the Spine animation software achieves so much cross-platform support (it just provides triangles with texture ids to platform code). You don’t need any concept of a platform to represent your entire game as triangles associated with texture ids in memory.