In general, when game development comes up here I tend not to engage, as professional gamedev is so different from what other people tend to deal with that it's hard to even get on the same page, but seeing as this one deals very directly with my expertise, I'll chime in.
There are a few things off with this post that make it sound like it comes from someone a bit green when it comes to Unity development (no problem, we all start somewhere).
1. The stated approach of separating the simulation and presentation layers isn't all that uncommon; in fact, it was the primary way of achieving performance in the past (though you usually used C++, not C#).
2. Most games don't ship on the Mono backend, but instead on IL2CPP (it's hard to gauge how feasible that'd be from this post, as it lacks details).
3. In modern Unity, if you want to achieve performance, you'd be better off utilizing the Burst compiler and HPC#, especially with what appears to be happening in the sample here, as the job system will help tremendously.
4. Profiling the editor is always a fool's errand; it's so much slower than even a debug build, for obvious reasons.
Long story short, Unity devs are excited for the mentioned update, but it's for accessing modern language features, not particularly for any performance gains. Also, I've seen a lot of mention around GC through this comment section, and professional Unity projects tend to go out of their way to minimize these at runtime, or even sidestep entirely with unmanaged memory and DOTS.
Author here, thanks for your perspective. Here are some thoughts:
> approach of separating the simulation and presentation layers isn't all that uncommon
I agree that some level of separation is not that uncommon, but games usually depend on things from their respective engine, especially datatypes (e.g. Vector3) or math libraries. The reason I mention that our game is unique in this way is that its non-rendering code does not depend on any Unity types or DLLs. And I think that is quite uncommon, especially for a game made in Unity.
> Most games don't ship on the mono backend, but instead on il2cpp
I think this really depends. If we take absolute numbers, roughly 20% of Unity games on Steam use IL2CPP [1]. Of course many simple games won't be using it, so the sample is skewed if we want to measure "how many players play games with IL2CPP tech". But there are still many, and higher perf of managed code would certainly have an impact.
We don't use IL2CPP because we use many features that are not compatible with it. For example, DLC and mod loading at runtime via DLLs, reflection for custom serialization, things like [FieldOffset] for efficient struct packing and for GPU communication, etc.
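For illustration, here is a minimal sketch of what that kind of [FieldOffset] packing looks like. The `GpuParticle` struct and its layout are made up for this comment, not taken from the actual game:

```csharp
using System;
using System.Runtime.InteropServices;

// The explicit layout pins every field to a byte offset, so the managed
// struct can be copied verbatim to a GPU buffer expecting exactly this layout.
Console.WriteLine(Marshal.SizeOf<GpuParticle>()); // prints 16: no padding surprises

[StructLayout(LayoutKind.Explicit, Size = 16)]
struct GpuParticle
{
    [FieldOffset(0)]  public float X;
    [FieldOffset(4)]  public float Y;
    [FieldOffset(8)]  public float Z;
    [FieldOffset(12)] public uint  PackedColor; // e.g. RGBA8 packed into one uint
}
```

With `LayoutKind.Auto` (the default for structs used only from managed code), the runtime is free to reorder and pad fields, which is exactly what you can't afford when the GPU side expects a fixed layout.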
Also, having managed code makes the game "hackable". Some modders use IL injection to hook into places where our APIs don't allow. This is good and bad, but so far it has allowed modders to progress faster than we expected, so it's a net positive.
> In modern Unity, if you want to achieve performance, you'd be better off taking the approach of utilizing the burst compiler and HPC#
Yeah, and I really wish we would not need to do that. Burst and HPC# are messy and add a lot of unnecessary complexity and artificial limitations.
The thing is, if Mono and .NET were both equally "slow", then sure, let's do some HPC# tricks to get high performance, but that's not the case! Modern .NET is fast, but Unity devs cannot take advantage of it, which is frustrating.
By the way, the final trace with parallel workers was just C#'s worker threads and thread pool.
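To be concrete, by "worker threads and thread pool" I mean plain BCL parallelism, nothing Unity-specific. A trivial sketch (the array and timestep are made-up stand-ins, not our actual simulation code):

```csharp
using System;
using System.Threading.Tasks;

// Plain .NET thread-pool parallelism: Parallel.For partitions the index
// range across pool threads. No Unity job system or Burst involved.
float[] positions = new float[1_000_000];
const float dt = 0.016f; // hypothetical fixed timestep

Parallel.For(0, positions.Length, i =>
{
    positions[i] += dt; // per-entity simulation update
});

Console.WriteLine(positions[0]); // 0.016
```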
> Profiling the editor is always a fools errand
Maybe, but we (devs) spend 99% of our time in the editor. And perf gains from the editor usually translate to the Release build with very similar percentage gains (I know this is generally not true, but in my experience it is). We have done many significant optimizations before, and measurements from the editor were always a useful indicator.
What is not very useful is Unity's profiler, especially with "deep profile" enabled. It adds a constant cost per method, highly exaggerating the cost of small methods. So we have our own tracing system that does not do this.
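For the curious, here is a toy version of such a scoped tracer (the `TraceScope` name and everything else here is a simplified illustration, not our actual system). The point is that the measurement cost is only paid at explicitly placed scopes, not on every method call the way deep profiling does it:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

// Cost is only paid where a scope is explicitly opened, unlike
// deep-profile instrumentation, which hooks every method call.
using (new TraceScope("Simulation.Tick"))
{
    Thread.Sleep(5); // stand-in for the work being measured
}

readonly struct TraceScope : IDisposable
{
    public static double LastElapsedMs; // exposed for inspection in this toy version

    private readonly string _name;
    private readonly long _start;

    public TraceScope(string name)
    {
        _name = name;
        _start = Stopwatch.GetTimestamp();
    }

    public void Dispose()
    {
        long elapsed = Stopwatch.GetTimestamp() - _start;
        LastElapsedMs = elapsed * 1000.0 / Stopwatch.Frequency;
        Console.WriteLine($"{_name}: {LastElapsedMs:F3} ms");
    }
}
```

Being a struct, the scope itself does not allocate, so tracing doesn't distort the GC behavior it is trying to observe.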
> I've seen a lot of mention around GC through this comment section, and professional Unity projects tend to go out of their way to minimize these at runtime
Yes, minimizing allocations is key, but there are many cases where they are hard to avoid. Things like string processing for UI generate a lot of garbage every frame. And there are APIs that simply don't have allocation-free options. CoreCLR would let us further cut down on allocations and have better APIs available.
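As a concrete example of the allocation-free style we'd like more of: `TryFormat` writes digits into a caller-supplied span instead of allocating a new string each frame (the numbers here are made up for illustration):

```csharp
using System;

// Format an int into a stack buffer: zero heap allocations, unlike
// value.ToString() or string interpolation, which allocate a new string.
Span<char> buffer = stackalloc char[32];
int score = 1234; // hypothetical per-frame UI value

bool ok = score.TryFormat(buffer, out int written);
ReadOnlySpan<char> text = buffer[..written]; // view over the stack buffer

Console.WriteLine(text.ToString()); // "1234" (ToString here only for demo output)
```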
Just the fact that the current GC is non-moving means that memory consumption goes up over time due to fragmentation. We have had numerous reports of "memory leaks" where players see memory consumption climb after repeated load/quit-to-menu loops.
Even if we got fast CoreCLR C# code execution, these issues would prevail, so an improved GC would be next on the list.
[1] https://steamdb.info/stats/releases/?tech=SDK.UnityIL2CPP
I think you've unfortunately been suckered in by Unity marketing wholesale, and things stand to be cleared up a bit.
Unity's whole shtick is that they make something horrible, then improve upon it marginally. The ground reality is that these performance enhancement schemes still fall very much short of just doing the basic sensible thing - using CoreCLR for most code, and writing C++ for the truly perf critical parts.
IL2CPP is a horror-kludge that generates low-quality C++ code from .NET IL, relying on the optimizing C++ compiler to extract decent performance out of it.
You can check it out: https://unity.com/blog/engine-platform/il2cpp-internals-a-to...
The resulting code gives up every possible convenience of C# (compile speed, debuggability), while falling well short of even modern .NET on performance.
The Burst compiler/HPC# plays on every meme perpetuated by modern gamedev culture (structure-of-arrays, ECS), but performance-wise it generally still falls short of competently but naively written C++, or sometimes even .NET C#. (Though tbf, most naive CoreCLR C# code is like 70-80% the speed of hyper-optimized Burst.)
These technologies, needless to say, are entirely proprietary. They require you to architect your code entirely around their paradigms and use proprietary, non-free libraries that make it unusable outside Unity, among other nasty side effects.
This whole snakeoil salesmanship is enabled by these cooked Unity benchmarks that always compare performance to the (very slow) baseline Mono, not modern C# or C++ compilers.
These are well-established facts, benchmarked time and time again, but Unity marketing somehow still manages to spread the narrative of their special sauce compilers somehow being technically superior.
But it seems the truth has been catching up to them, and even they realized they have to embrace CoreCLR - which is coming soonTM in Unity. I think it's going to be a fun conversation when people realize that their regular Unity code using CoreCLR runs just as fast or faster than the kludgey stuff they spent 3 times as much time writing, that Unity has been pushing for more than a decade as the future of the engine.