
jchw · last Tuesday at 10:10 PM · 4 replies

> It's silly that people keep complaining about this. It's a very minor effect, and one that can be solved in principle only by moving to pure vector rendering for everything.

If you have DPI virtualization, a workable solution already exists: pick a reasonable scale factor for the underlying buffer, render at that, then resample for any outputs that don't match. This is what most Wayland compositors do. Exactly what you pick isn't too important: the output that overlaps the most with the window, the output with the highest scale factor, or some other criterion. It won't give you perfect pixels everywhere, but it's more than enough to clean up the visual artifacts.
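A minimal sketch of that selection policy, using the "largest overlap" criterion (all names here are illustrative, not any real compositor's API):

```python
# Sketch: pick one scale factor for a window's buffer based on which
# output it overlaps the most. Hypothetical names, not a real API.

def overlap_area(a, b):
    """Intersection area of two (x, y, w, h) rectangles."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    w = min(a[0] + a[2], b[0] + b[2]) - x
    h = min(a[1] + a[3], b[1] + b[3]) - y
    return max(0, w) * max(0, h)

def pick_buffer_scale(window_rect, outputs):
    """outputs: list of (rect, scale). Render at the scale of the
    output the window overlaps most; the compositor then resamples
    the buffer for every other output the window touches."""
    best = max(outputs, key=lambda o: overlap_area(window_rect, o[0]))
    return best[1]

# An 800x600 window straddling a 1x and a 2x monitor, mostly on the 2x one:
outputs = [((0, 0, 1920, 1080), 1), ((1920, 0, 3840, 2160), 2)]
print(pick_buffer_scale((1600, 100, 800, 600), outputs))  # → 2
```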

Another solution would be to present the surface only on whichever output it primarily overlaps. macOS does this and it's seemingly sufficient. Unfortunately, as far as I understand, this isn't trivial to do in X11 for the same reason DPI virtualization isn't trivial: whether you render it or not, the window is still in that region and will still receive input there.

> Generally speaking, a window will only ever span a single screen. It's convenient to be able to drag a window to a separate monitor, but having that kind of overlap as a permanent feature of one's workflow is just crazy.

The issue with the overlap isn't that people routinely need it; if they did, macOS and Windows would also need a more complete solution. In reality it's just a janky visual glitch that isn't very consequential for your actual workflow. Still, it really can make moving windows across outputs feel broken, especially since in practice different applications sometimes choose different behaviors. (e.g. will your toolkit resize the window so it keeps the same logical size? will that interfere with the drag operation?)
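The logical-size question can be made concrete: a toolkit that keeps the logical size constant has to give the window a different pixel buffer on each output (a hypothetical sketch, not any toolkit's actual API):

```python
# Sketch: one behavior a toolkit may pick when a window crosses onto an
# output with a different scale: keep the *logical* size constant and
# resize the pixel buffer. Purely illustrative names.

def buffer_size_for(logical_w, logical_h, scale):
    """Pixel buffer dimensions for a logical size at a given scale."""
    return (round(logical_w * scale), round(logical_h * scale))

# The same 640x480 logical window needs a different buffer on a 1x,
# 1.5x, and 2x output; a toolkit that doesn't resize on crossing will
# instead appear at a different physical size (or get resampled).
print(buffer_size_for(640, 480, 1))    # (640, 480)
print(buffer_size_for(640, 480, 1.5))  # (960, 720)
print(buffer_size_for(640, 480, 2))    # (1280, 960)
```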

So really, the main benefit of solving this particular edge case is just to make the UX of window management better.

While UX and visual-jank concerns rank below functionality, I still think they have non-zero (and sometimes considerable) importance. Laptop users expect to be able to dock and manage windows effectively regardless of whether their external monitors share the ideal scale factor of the laptop's internal panel; the behavior should be clean and effective, and legacy apps should ideally at least appear correct even if blurry. DPI virtualization solves that whole set of problems very cleanly. macOS does this right, Windows finally does this right, Wayland does this right; X11 still can't. (It's not physically impossible, but it would require quite a lot of work, since I believe it would mean modifying everything that handles coordinate spaces.)

> Shouldn't that kind of DPI virtualization be a concern for toolkits rather than the X server or protocol? As long as X is getting accurate DPI information from the hardware and reporting that to clients, what else is needed?

Accurate DPI information is insufficient, since users may want to scale differently anyway: due to preference, higher viewing distance, or disability. So that alone isn't enough.

That said, the other issue is that there already exist applications that don't do proper per-monitor scaling, and there is no single standard way to propagate per-monitor scaling preferences in X11. It's not even necessarily a solved problem among the latest versions of all of the toolkits, since at minimum it requires support from desktop environment settings daemons and the like.


Replies

kelnos · yesterday at 3:53 AM

> Accurate DPI information is insufficient as users may want to scale differently anyways, either due to preference, higher viewing distance, or disability.

Which is fine. There's already a standardized property in XSETTINGS to advertise the user's scaling preference on X11. For Wayland they decided to include this in the protocol, so it can be per-output and/or per-window (though the per-window fractional scaling stuff is an optional extension, sigh).
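For reference, the XSETTINGS convention being alluded to is that the Xft/DPI integer key carries the effective DPI multiplied by 1024, so a desktop-wide scale preference maps onto it roughly like this (a sketch; the helper name is mine):

```python
# Sketch of the XSETTINGS convention: the "Xft/DPI" integer key is the
# effective DPI times 1024. A settings daemon encoding a user's scale
# preference would compute something like this. Illustrative only.

BASE_DPI = 96  # the conventional X11 baseline

def xft_dpi_setting(user_scale):
    """Encode a user scale factor as an Xft/DPI XSETTINGS value."""
    return round(BASE_DPI * user_scale * 1024)

print(xft_dpi_setting(1.0))  # 98304
print(xft_dpi_setting(1.5))  # 147456
```

Note this is a single desktop-wide value; per-output scaling would need the xrandr/window-property scheme the comment proposes.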

There's no reason why we couldn't do something similar on X11, via xrandr output properties and X window properties. But it's more fun to abandon things and invent new ones than to fix the things you have, so here we are.

account42 · yesterday at 4:13 PM

> If you have DPI virtualization, a very sufficient solution already exists: pick a reasonable scale factor for the underlying buffer and use it, then resample for any outputs that don't match.

That's a shitty "solution" that doesn't even solve the issue: the result will still look bad on at least one monitor, and you're wasting energy pushing more pixels than needed on the other one.

BearOso · last Tuesday at 11:01 PM

I think having any kind of "scaling" preferences focuses too much on the technical aspect. It could be narrowed down to one setting like "zoom level" or just "size." This would mean that all UI elements change size exactly proportionately to one another. Ideally, rendering should happen at the exact resolution of the display, and scaling, as in resizing a bitmap using bilinear interpolation or whatever, doesn't need to be part of the pipeline except for outdated legacy programs.
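A sketch of what a single "zoom level" setting would mean in practice: every logical UI metric scales by the same factor, and rendering happens directly at the display's native resolution, with no bitmap resampling step (names are illustrative):

```python
# Sketch: one user-facing "zoom" factor applied proportionally to all
# logical UI metrics, rounded to whole device pixels at the display's
# native resolution. Hypothetical names, not any toolkit's API.

def apply_zoom(metrics, zoom):
    """Scale every logical metric by the same factor."""
    return {name: round(px * zoom) for name, px in metrics.items()}

widget = {"font_px": 14, "padding_px": 8, "icon_px": 24}
print(apply_zoom(widget, 1.25))
# {'font_px': 18, 'padding_px': 10, 'icon_px': 30}
```

The point is that the renderer draws at these final pixel sizes directly; no framebuffer gets resized after the fact.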

In the past, the problem with UI toolkits doing proportional sizing was that they used bitmaps for UI elements. Since newer versions of Qt and Gtk 4 render programmatically, they can do it the right way. Windows mostly does this too, even with win32, as long as you're using the newer themes. macOS is the only one that has assets prerendered at integer factors everywhere and needs to perform framebuffer scaling to change sizes. But Apple doesn't care, because they don't want you using third-party monitors anyway.

Edit: I'm not sure about Apple's new theme. Maybe this is their transition point away from fixed asset sizes.

archy_ · yesterday at 3:27 AM

> users may want to scale differently anyways

Users think they want a lot of things they don't really need. Do we really want to hand users that loaded gun so that they can choose incorrectly where to fire?
