There's a reason nobody does this: RAM is expensive. Disabling overcommit on your typical server workload will waste a great deal of it. TFA completely ignores this.
This is one of those classic money vs idealism things. In my experience, the money always wins this particular argument: nobody is going to buy more RAM for you so you can do this.
Even if you disable overcommit, I don't think you get physical pages assigned when you allocate: strict accounting charges the allocation against the commit limit, but the pages themselves are still faulted in lazily on first touch. So as long as your allocations don't trigger allocation failures, you should get the same behavior with respect to the disk cache using otherwise unused pages.
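A minimal Linux-specific sketch of this (the 1 GiB size is arbitrary): VmRSS barely moves after malloc() and only jumps once the pages are actually touched, regardless of the overcommit setting.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print this process's resident set size from /proc/self/status. */
static void print_rss(const char *label) {
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    if (!f) return;
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "VmRSS:", 6) == 0)
            printf("%s %s", label, line);
    fclose(f);
}

int main(void) {
    size_t sz = (size_t)1 << 30;   /* 1 GiB, an arbitrary test size */
    char *p = malloc(sz);          /* charged against the commit limit... */
    if (!p) { perror("malloc"); return 1; }
    print_rss("after malloc:");    /* ...but VmRSS barely moves */
    memset(p, 1, sz);              /* physical frames are assigned here */
    print_rss("after touch: ");
    free(p);
    return 0;
}
```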
The difference is where you fail: at allocation time, where there's a reasonable error interface (malloc() returns NULL and sets errno to ENOMEM), rather than at demand paging when writing to previously untouched pages, where there isn't a good one (you get the OOM killer, not a return value you can check).
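Roughly, the two failure modes look like this; a sketch, where the 64 GiB figure is just assumed to exceed the commit limit. With vm.overcommit_memory=2 the malloc() fails cleanly; with overcommit the same request may succeed and the process dies in the memset() instead, with no error path at all.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t sz = (size_t)64 << 30;  /* 64 GiB, assumed to exceed the commit limit */
    char *p = malloc(sz);
    if (p == NULL) {
        /* strict accounting: a clean, handleable failure with errno set */
        perror("malloc");
        return 1;
    }
    /* with overcommit, malloc() may succeed and the failure lands in
     * this first-touch loop instead, as an OOM kill with no error path */
    memset(p, 1, sz);
    free(p);
    return 0;
}
```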
Of course, there are plenty of software patterns that make huge allocations without any intent of touching most of the pages: sparse data structures, allocator arenas, address-space reservations. That's fine with overcommit, but it turns into allocation failures once you disable it.
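One way such patterns can adapt, if I'm reading the kernel's overcommit-accounting documentation right, is to reserve address space with PROT_NONE (private non-writable mappings aren't charged even in strict mode) and mprotect() regions writable as they're actually needed; sizes below are illustrative:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t reserve = (size_t)16 << 30;   /* 16 GiB of address space */
    /* PROT_NONE private mappings aren't charged against the commit
     * limit, so the reservation is free even in strict mode */
    void *base = mmap(NULL, reserve, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    size_t commit = (size_t)1 << 20;     /* commit 1 MiB of it */
    /* making pages writable is the point where they're charged, and
     * where a failure surfaces as a checkable error */
    if (mprotect(base, commit, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect");
        return 1;
    }
    ((char *)base)[0] = 1;               /* now safe to touch */
    munmap(base, reserve);
    return 0;
}
```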
Disabling overcommit does make fork() in a large process tricky, but I don't think the rant about Redis in the article is totally on target: fork-to-persist is a pretty good solution. Copy-on-write is a reasonable cost to pay while dumping the data to disk, and memory use returns to normal once the dump is done. Without overcommit, though, the fork doubles the memory commitment for the duration of the dump, and that's likely to cause problems if Redis is large relative to total memory; that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed it; it seems worth a warning as well.
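The doubling is visible at the fork() call itself; a sketch, where the 8 GiB stands in for Redis's dataset and the commit limit is assumed to be below twice that:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t sz = (size_t)8 << 30;   /* stand-in for a large Redis dataset */
    char *data = malloc(sz);
    if (!data) { perror("malloc"); return 1; }
    memset(data, 1, sz);           /* actually resident, not just committed */

    pid_t pid = fork();            /* commit charge roughly doubles here */
    if (pid < 0) {
        /* under vm.overcommit_memory=2 this is where ENOMEM shows up
         * once the dataset exceeds half the commit limit */
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* child: would walk the dataset and write the dump; pages stay
         * shared copy-on-write unless the parent modifies them */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    free(data);
    return 0;
}
```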
For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.
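There is a partial version of this today: madvise(MADV_DONTFORK) excludes a range from the child entirely, so it's neither copied nor charged at fork(). A sketch; the trade-off is that the child must never touch that range, so it only fits memory the dump path doesn't need:

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t sz = (size_t)1 << 30;   /* e.g. a scratch region the dump never reads */
    char *scratch = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (scratch == MAP_FAILED) { perror("mmap"); return 1; }

    /* exclude this range from fork(): children neither inherit it nor
     * pay a commit charge for it, but they must never touch it */
    if (madvise(scratch, sz, MADV_DONTFORK) != 0) {
        perror("madvise");
        return 1;
    }
    /* ... fork-to-persist here: the child's commitment shrinks by sz ... */
    munmap(scratch, sz);
    return 0;
}
```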