So I don't understand where the meme that super-resolution-based downsampling is blurry comes from. If that were the case, what would super-resolution antialiasing[1] even be? An image rendered at a higher resolution and then downsampled is usually sharper than an image rendered directly at the target resolution, because it preserves the high-frequency components of the signal better. There are multiple other downsampling-based anti-aliasing techniques, all of which boost the signal-to-noise ratio. Does this not work for UI as well? Most of a UI is vector graphics. Bitmap icons will need to be updated, but the rest of the UI (text) should stay sharp.
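To be clear about what I mean by super-resolution downsampling, here is a minimal sketch in plain NumPy. The integer factor and the simple box filter are my own assumptions for illustration; a real compositor would likely use a better filter and support fractional factors.

```python
import numpy as np

def downsample_box(img: np.ndarray, factor: int) -> np.ndarray:
    """Average factor x factor blocks of a supersampled image.

    img    : (H*factor, W*factor, C) array rendered at the higher resolution
    factor : integer supersampling factor (e.g. 2 for 2x2 supersampling)
    """
    h, w, c = img.shape
    assert h % factor == 0 and w % factor == 0
    # Group device pixels into factor x factor blocks and average each block
    # (a box filter). Each output pixel is the mean of factor**2 samples.
    blocks = img.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3))

# Usage (hypothetical renderer): draw the scene at 2x in each dimension,
# then filter down to the output resolution.
# hi_res = render(width * 2, height * 2)
# lo_res = downsample_box(hi_res, factor=2)
```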
I know people bring up 1-pixel lines (perfectly horizontal or vertical). They multiply by 1.25 or whatever and conclude: look, 0.25 of a pixel is a lie, therefore fractional scaling is fake (the sway documentation still makes this argument). That doesn't seem to hold in practice; it's a very niche mental exercise. At sufficiently high resolution, which is the kind of display we are talking about, do you even want 1-pixel lines? They are barely visible; I have that problem on Linux right now. And if the line is draggable, the click zone becomes too small as well. You probably want something with a fixed physical dimension, which will take multiple device pixels anyway, and at that point you want some anti-aliasing that you won't be able to see anyway. Moreover, single-pixel lines don't have to be exactly the color the program prescribed. Most of the perfectly horizontal and vertical lines on my screen are grey-ish anyway; some AA artifacts will shift their color slightly, but I don't think that has a material impact. If that's the case, then super-resolution should work pretty well.
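As a back-of-the-envelope illustration (my own numbers, not taken from any toolkit): a 1-logical-pixel line at 1.25x scale spans 1.25 device pixels, so with area-coverage anti-aliasing one device pixel is fully covered and one neighbour gets a 25% tint, which is exactly the "slight color change" I mean.

```python
def pixel_coverage(x0: float, width: float, n_pixels: int) -> list[float]:
    """Fraction of each device-pixel column covered by a vertical line
    spanning [x0, x0 + width) in device-pixel coordinates."""
    coverage = []
    for px in range(n_pixels):
        left, right = float(px), float(px + 1)
        overlap = max(0.0, min(right, x0 + width) - max(left, x0))
        coverage.append(overlap)  # column width is 1, so overlap == fraction
    return coverage

# 1 logical pixel at 1.25x scale, starting on a device-pixel boundary:
print(pixel_coverage(x0=3.0, width=1.25, n_pixels=8))
# -> [0.0, 0.0, 0.0, 1.0, 0.25, 0.0, 0.0, 0.0]
# The "extra" 0.25 just tints one neighbouring pixel at 25% of the line color;
# on a high-DPI panel that shift is hard to see.
```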
Then what you really want is something like the following:
1. Super-resolution scaling for most "desktop" applications.
2. Give some full-screen applications (games, video playback) the native resolution, and possibly give applications like video players the native resolution of their rectangle on screen. This avoids rendering at a higher resolution and then downsampling, which can introduce information loss for these applications.
3. Now do this on a per-application basis, instead of a per-session basis. No Linux DE implements this; KDE only implements per-session scaling, which is not flexible enough. Ideally you would pick the behavior for each application when you launch it (see the sketch below).
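To be concrete about the per-application part, here is a rough sketch in Python. The `Window` fields and the fullscreen/video checks are made up for illustration; this is not any existing compositor's API, just the decision I'd want it to make at launch.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ScalePolicy(Enum):
    SUPERSAMPLE = auto()  # render larger, filter down to the output
    NATIVE = auto()       # hand the app the real output (or sub-rect) resolution

@dataclass
class Window:
    app_id: str             # e.g. "firefox", "mpv", "some-game" (hypothetical)
    fullscreen: bool
    is_video_or_game: bool  # would come from app metadata or a user override

def choose_policy(win: Window) -> ScalePolicy:
    """Hypothetical per-application decision, made when the app launches (point 3)."""
    # Full-screen games and video players get the true device resolution, so no
    # render-then-downsample pass throws information away (point 2).
    if win.is_video_or_game and win.fullscreen:
        return ScalePolicy.NATIVE
    # Everything else (regular "desktop" apps) gets supersampled (point 1).
    return ScalePolicy.SUPERSAMPLE
```

A per-application override table in the compositor's config would be enough to drive `is_video_or_game`; the point is only that the choice is made per window, not per session.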
> So I don't understand where the meme that super-resolution-based downsampling is blurry comes from. If that were the case, what would super-resolution antialiasing even be?
It removes jaggies by using lots of little blurs (averaging)