The submitted title buries the lede. It should be:
“OBS Studio Gets A New Renderer: How OBS Adopted Metal”
This quite clearly shows the cost of Apple preferring to build a software ecosystem moat rather than adopting the Vulkan API that every other OS supports.
Vulkan support was introduced in OBS Studio 25.0 in March 2020, 5.5 years ago.
I'm no expert on the topic, so maybe I understood only 5% of what I read, but I wish we had more posts like this. Announcements without any technical details read like marketing pieces.
I’m more excited about the upcoming support for VST3, but this is still welcome news. It is far easier than getting hardware encoding working with Rockchip SoCs on Linux.
> Metal takes Direct3D's object-oriented approach one step further by combining it with the more "verbal" API design common in Objective-C and Swift in an attempt to provide a more intuitive and easier API for app developers to use (and not just game developers) and to further motivate those to integrate more 3D and general GPU functionality into their apps.
slightly off-topic perhaps, but i find it amazing that an os-level 3d graphics api can be built in such a dynamic language as objective-c; i think it really goes to show how much optimization was put into `objc_msgSend()`... it does a lot of heavy lifting in the whole os.

I hope modern GPU APIs are just a stepping stone to something simpler. OpenGL is loved and hated, and I have grown to love it after using the new stuff.
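For anyone who hasn't touched it, this is roughly what that "verbal", descriptor-driven style looks like from Swift (a minimal sketch of the stock Metal API, nothing OBS-specific):

```swift
import Metal

// Descriptor objects with readable property names instead of packed flags;
// the method names read almost like sentences ("make a command queue",
// "make a texture with this descriptor").
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue() else {
    fatalError("Metal is not available on this machine")
}

let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm,
    width: 1920,
    height: 1080,
    mipmapped: false)
textureDescriptor.usage = [.renderTarget, .shaderRead]

let texture = device.makeTexture(descriptor: textureDescriptor)
let commandBuffer = queue.makeCommandBuffer()
```

Since the Metal framework classes are Objective-C objects under the hood, those calls still go through the same dispatch machinery the parent comment is talking about.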
I wonder how this improves performance on older Intel Macs with Metal-compatible GPUs, or if it's really an M-series-only improvement.
Was considering building a streaming rig around a Mac Mini. I wonder if, with these performance enhancements, that will work for me.
Apple should dedicate some resources to making this successful. Metal could use more wins outside of Apple itself.
Sadly, it breaks my scene with a PIP camera with a mask...
Hope they'll fix the obvious bugs next, like CPU use going to 60% while doing nothing after restoring from hibernation.
If you are
- recording your screen but not streaming
- not customizing what goes into your recording

Then use something else. GPU Screen Recorder has lower overhead and produces much smoother recordings: https://git.dec05eba.com/gpu-screen-recorder/about/
Great article. The description of how they handle shaders is just bonkers to me.
Is that really what you’d have to go through to have a working system with plugin shaders from 3rd parties on multiple backends? Or is it mostly the result of time and trying to keep backwards compatibility with existing plugins?
Telling external devs “Write a copy in every shader language” would certainly be easier for the core team but that’s obviously undesirable.
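To make the trade-off concrete, here's a rough sketch of the alternative they apparently chose (hypothetical Swift, made-up names, not OBS's actual effect system): the host exposes one shader source language and translates it per backend at load time, so plugin authors never maintain per-language copies.

```swift
// Hypothetical sketch of the design space, not OBS's actual API: a plugin
// ships one shader source and the host translates it for whichever backend
// is active, so third-party authors never maintain per-backend copies.
enum ShaderBackend {
    case hlsl   // Direct3D 11
    case opengl // GLSL
    case metal  // Metal Shading Language
}

struct EffectSource {
    let source: String  // the single effect file shipped by the plugin author
}

protocol ShaderTranslator {
    // Turns the shared source into the backend's native shader language.
    func translate(_ effect: EffectSource, to backend: ShaderBackend) throws -> String
}

// The host picks the translator matching the active renderer at load time.
func compile(effect: EffectSource,
             for backend: ShaderBackend,
             using translator: ShaderTranslator) throws -> String {
    try translator.translate(effect, to: backend)
}
```

The other option, every plugin carrying a separate HLSL, GLSL, and MSL copy of each shader, pushes all of that complexity onto third-party authors, which is exactly the "obviously undesirable" part.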