AFAIK Apple does not allow applications to render traditionally, nor does it give them access to the camera or other interesting effects.
You are instead given a DOM-like API (think idiosyncratic SVG, but for 3D) and you must build a facade from it to your engine's object model.
Apple has forced library developers into a situation even worse than Metal: a single, idiosyncratic, scene-graph-like API. None of the performance benefits of using the technology natively. None of the DX benefits of write-once, run-anywhere, since everything has to be aware of the spatial rendering limitations. It's like Negative React Native: they hand you a weird React that's non-native, and you must wrap it.
Truly, and I say this without hesitation because I will never want to work for Apple and they're going downhill: this PR has its head so far up its own butt.
Maybe this employee should have spent all this time convincing Apple to give developers access to the GPU.
Your comment is highly incorrect.
You can render traditionally all you want with Metal. You just don't get some features, like camera access or gaze, which does have its downsides, but it's a long way from what you're describing. I've already ported a Metal-based renderer to visionOS for companies, and you already have engines like Unreal supporting it too.
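For the record, the way this usually works on visionOS is through CompositorServices: you open an ImmersiveSpace backed by a CompositorLayer, which hands your own Metal code a LayerRenderer to drive. Here's a rough sketch from memory; `MyMetalRenderer` is a hypothetical stand-in for whatever engine you're porting, and the configuration details may differ slightly from the current API:

```swift
import SwiftUI
import CompositorServices
import Metal

// Hypothetical stand-in for an existing Metal engine being ported.
final class MyMetalRenderer {
    let layerRenderer: LayerRenderer
    let device: MTLDevice

    init(_ layerRenderer: LayerRenderer) {
        self.layerRenderer = layerRenderer
        self.device = MTLCreateSystemDefaultDevice()!
        // Build pipeline states, load assets, etc. here.
    }

    func startRenderLoop() {
        // In a real port: spin up a render thread, query frames and drawables
        // from layerRenderer each frame, and encode your usual Metal command
        // buffers against the drawable's textures.
    }
}

@main
struct ImmersiveMetalApp: App {
    var body: some Scene {
        // A fully immersive space drawn directly with Metal via
        // CompositorServices; no RealityKit scene graph involved.
        ImmersiveSpace(id: "metal-scene") {
            CompositorLayer { layerRenderer in
                let renderer = MyMetalRenderer(layerRenderer)
                renderer.startRenderLoop()
            }
        }
    }
}
```

The render loop itself is ordinary Metal; the visionOS-specific part is pulling frames and drawables from the LayerRenderer rather than from a CAMetalLayer.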
I'm not even sure what DOM you're talking about. SwiftUI? RealityKit? The former is for UI. The latter is an ECS-like rendering engine. But neither fits what you describe.
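To illustrate the "ECS-like" point, this is roughly what a custom component plus system looks like in RealityKit (a sketch from memory; `SpinComponent` and `SpinSystem` are made-up names):

```swift
import RealityKit
import simd

// Plain data attached to entities.
struct SpinComponent: Component {
    var radiansPerSecond: Float = 1.0
}

// Runs every frame over all entities that carry the component.
struct SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            // Rotate the entity around its Y axis at the configured rate.
            entity.transform.rotation *= simd_quatf(
                angle: spin.radiansPerSecond * Float(context.deltaTime),
                axis: SIMD3<Float>(0, 1, 0)
            )
        }
    }
}

// Register once at launch:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```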
Perhaps you should be familiar with development on a platform before being outraged by it.