I agree. I've known how it works for years, and I think the current setting is a cop-out.
In TFA it's set to a measly 2MiB, yet Postgres still tried to allocate 2TiB. Note that the PG default is double that, at 4MiB.
What the setting does is offload the responsibility for a "working" implementation onto you (or the DBA). If the 4MiB default were instead a hardcoded value, one could argue it's a bug and bikeshed forever over what a "good" value is. But since there is no safe or good value, the approach itself would have to be reevaluated.
The core issue is that there is no overall memory management strategy in Postgres, just the implementation.
Which is fine for an initial version: just add a few settings for all the constants in the code and boom, you have some knobs to turn. Unfortunately you can't set them correctly, because whatever value you pick, Postgres might still try to use an unbounded amount of memory.
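To make the "unbounded" point concrete: assuming the knob in question is work_mem (the 4MiB default mentioned above matches PostgreSQL's), the limit applies per sort/hash operation per backend, not per server, so the worst case scales with things the knob doesn't control. A rough sketch, with made-up illustrative numbers:

```python
# Back-of-the-envelope worst case for a per-operation memory limit
# like PostgreSQL's work_mem: each sort/hash node in each backend may
# use up to the limit, so the total isn't bounded by the knob alone.
# (Numbers below are illustrative, not measurements.)

MIB = 1024 * 1024

def worst_case_bytes(work_mem_bytes, nodes_per_query, backends):
    """Upper bound if every node in every backend hits the limit at once."""
    return work_mem_bytes * nodes_per_query * backends

# The 4MiB default, a plan with 8 sort/hash nodes, 200 connections:
total = worst_case_bytes(4 * MIB, nodes_per_query=8, backends=200)
print(total // MIB, "MiB")  # 6400 MiB, far beyond the knob's value
```

And that's only the multiplicative case; planner misestimates could historically blow past the limit outright (hash aggregates only learned to spill to disk in PostgreSQL 13), which is presumably how a 2MiB setting turns into a 2TiB allocation attempt.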
The documentation is very transparent about this, and just from reading it you can tell they know it's a bad design, or at least an unsolved design issue: it describes the implementation accurately, yet offers no actual guidance on what the value should be.
This is not a criticism of the docs btw, I love the technically accurate docs in Postgres. But it's not the only setting in Postgres which is basically just an exposed internal knob. Which I totally get as a software engineer.
However from a product point of view, internal knobs are rarely all that useful. At this point of maturity, Postgres should probably aim to do a bit better on this front.