What Haiku needs is font setting per screen/resolution for multimonitor support. This way you can support mismatched monitors with different factors.
Relative units like this are usually considered best practice, because of the exact reasons that you've listed.
“The horizontal base unit returned by GetDialogBaseUnits is equal to the average width, in pixels, of the characters in the system font; the vertical base unit is equal to the height, in pixels, of the font.
The system font is used only if the dialog box template fails to specify a font. Most dialog box templates specify a font; as a result, this function is not useful for most dialog boxes.
For a dialog box that does not use the system font, the base units are the average width and height, in pixels, of the characters in the dialog's font. You can use the GetTextMetrics and GetTextExtentPoint32 functions to calculate these values for a selected font. However, by using the MapDialogRect function, you can avoid errors that might result if your calculations differ from those performed by the system.”
Windows says the reference font DPI is 72, and reference sizes for buttons, lists, labels, etc. are specified at 96 DPI; then you're supposed to compute actual_dpi / ref_dpi and scale things according to that.
Then you declare per-monitor v2 DPI awareness in the application manifest, catch the WM_DPICHANGED message, recompute your scale factors, and presto: perfect DPI scaling across any monitor at any fractional scale.
https://building.enlyze.com/posts/targeting-25-years-of-wind...
Target from NT4 (up to Windows 11, I suppose) with LLVM and Visual Studio.
Do apps start drawing their lines wider if the default font size goes up? When? Is it consistent system wide?
further, the “pixel” unit widely used today is quite far removed from physical pixels on consumer devices and is more similar to an abstracted “point” unit - for example, apple alone supports devices at 1x, 2x, and 3x resolutions (or pixel densities)
Who’s the “you” in here? If it’s the end user, I don’t think it’s a better solution for the general population.
> Sloppy apps ignore this, but the devs are quickly notified.
Anything that increases friction for developers is bad. The API for HiDPI should be so seamless that the only thing a developer does is provide higher-resolution bitmap icons.
Imagine what web app developers need to do when their users switch from a regular display to a HiDPI display. They do nothing; the browser knows how to scale on its own. And that should be the bar we are aiming for.
Background:
https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/6190
Basically gtk-hint-font-metrics=1 was needed with Gtk4 on non-HiDPI displays to get crisp text. Thanks to the change from !6190 above, it is now applied automatically when appropriate, depending on which display is used. Mixed setups with multiple displays are common, and Gtk4 takes care of them. The whole topic caused a heated issue before, because it depends on your own vision, taste, and hardware.
Apple avoids trouble and work by always using HiDPI displays. Attach a Mac mini to a non-HiDPI display and you'll see that the font rendering is awkward.
You may personally find the output awkward, but typographers will disagree. They didn't always have high density displays. They did always have superior type rendering, with output more closely matching the underlying design. Hinting was used, but they didn't clobber the shapes to fit the pixel grid like Microsoft did.
I personally like the Apple rendering, but I realize that many people around me don't. In the end, it is subjective.
Ironically for "always expose relevant options through system settings" Apple, you can still access font smoothing via command line, e.g. "defaults -currentHost write -g AppleFontSmoothing -int 3". You can use 0-3 where 0 disables it (default) and 3 uses "strong" hinting, with 1 and 2 in between.
And then on the other hand you have finally a seemingly great solution, despite their sabotage. So, yeah gnome?
[1] https://gitlab.gnome.org/GNOME/gtk/-/merge_requests/6190
I enjoyed the side-by-side comparisons of the old vs new renderer, and especially the idea of homing in on particular letters ('T' and 'e') and extracting them, that really made the improvement clear.
Cool stuff, and many thanks to the developers who keep pushing the GTK stack forwards.
P.S. Since Pango 1.44 some letters at some positions became blurry, and some letters sit closer to the previous one than in the middle. Actually, the latter issue might be needed to prevent the first one, in theory. In practice, there might be other constraints which force the corruption.
> 4 b: in contact with
> "leaning against the wall"
OP's code is "in contact with" the public APIs provided by GTK. That is, OP uses the GTK library.
This is great work btw.
[0]: https://github.com/snowie2000/mactype/issues/932
[1]: https://github.com/microsoft/PowerToys/issues/25595
[2]: https://freetype.org/freetype2/docs/reference/ft2-lcd_render...
https://glenwing.github.io/docs/VESA-EEDID-DDDB-1.pdf (page 13)
[1] https://faultlore.com/blah/text-hates-you/#anti-aliasing-is-...
You'd want to render to that grid, then apply Bayering fused with color mapping; no need to transmit 3x the data over the cable. And with a hint of sufficient software support (I think the high-DPI stuff might (almost?) suffice already!), I'd actually prefer such a screen over a traditional LCD with vertical RGB bars (or BGR, as I'm currently suffering; sadly the thermal design of the screen won't allow me to safely rotate it into "wrong" landscape), provided both have the same number of subpixels.
Probably bonus for a 4th deep cyan pixel color in place of a duplicate green to get a wider gamut. Or something similar in spirit.
A device like this would allow hobby developers to bring the software/driver support up to "daily driver, no regrets" levels.
Similarly, I still wonder why no OLED maker seems to have shipped a camera-based, self-calibrating burn-in scan-and-compensate function. Every ~100 hours you'd move a cheap camera in front of the screen, following on-screen movement directions; the motion, panning, and defocus give the cheap sensor a mechanical opportunity to calibrate itself. The freshly calibrated brightness sensor then maps the OLED panel's burn-in. This matters because the exact burn-in is very hard to simulate, yet is precisely what's needed to pre-distort the output without leaving visible residuals from incomplete or imperfect cancellation between the pre-distortion and the burned-in pattern.
For one, subpixels aren't just lines in some order - they can have completely arbitrary geometries. A triangle, 3 vertical slats, a square split in four with a duplicate of one color, 4 different colors, subpixels that activate differently depending not just on chromaticity but also luminance (i.e., also differs with monitor brightness instead of just color), subpixels shared between other pixels (pentile) and so on.
And then there are screenshots and recordings that are completely messed up by subpixel antialiasing when the content is viewed on a different subpixel configuration, or not at 1:1 physical pixels (how dare they zoom slightly in on a screenshot!).
The only type of antialiasing that works well is greyscale/alpha antialias. Subpixel antialiasing is a horrible hack that never worked well, and it will only get worse from here. The issues with QD-OLED and other new layouts underline that.
The reason we lived with it is that it was a necessary hack back when screens really didn't have anywhere near enough resolution to show decently legible text at practical font sizes, on VGA or Super VGA resolutions.
While it doesn't solve the blurriness completely, removing the border with the inspector helps a lot.
I think the remaining blurriness comes from the images using greyscale hinting rather than subpixel hinting (the hinted pixels are not colored).
For convenience (the second is hinted):
+ https://blog.gtk.org/files/2024/03/Screenshot-from-2024-03-0...
+ https://blog.gtk.org/files/2024/03/hinting-125.png
You can really spot the difference.
Nobody does it properly on Linux, despite FreeType's recommendations. A shame.
https://freetype.org/freetype2/docs/hinting/text-rendering-g...
It's even worse for light text on dark backgrounds: the text becomes hard to read.
GTK is not alone
Chromium/Electron tries, but uses the wrong value (1.2 instead of 1.8), and doesn't do gamma correction on grayscale text:
https://chromium-review.googlesource.com/c/chromium/src/+/53...
Firefox, just like Chromium, uses Skia, so it has proper default values, but it too ignores them for grayscale text:
https://bugzilla.mozilla.org/show_bug.cgi?id=1882758
A trick that I use to make things a little bit better:
In your .profile:
export FREETYPE_PROPERTIES="cff:no-stem-darkening=0 autofitter:no-stem-darkening=0 type1:no-stem-darkening=0 t1cid:no-stem-darkening=0"

flatpak --socket=wayland run com.visualstudio.code --enable-features=UseOzonePlatform --ozone-platform=wayland
https://github.com/flathub/com.visualstudio.code/issues/398 :> Various font-related flags I found in solving for blurry fonts on wayland
Is there an environment variable to select Wayland instead of XWayland for electron apps like Slack and VScode where fractional scaling with wayland doesn't work out of the box?
Thanks for the tip, though.
I don't know what my life would look like without such decisions...
When I boot into Windows, the fonts, especially in some applications, look horrible and blurry because of my high-DPI monitor. Windows has like 10 settings you can try to tweak for high-DPI fonts, and man, none of them look good. I think my Linux boot on the same machine has much better font smoothness, and of course the MacBook is perfect.
Somehow most windows systems I see on people’s desks now look blurry as shit. It didn’t use to be this way.
I really don’t understand why high dpi monitors cause (rather than solve) this problem and I suspect windows has some legacy application considerations to trade off against but man - windows used to be the place you’d go to give your eyes a break after Linux and now it’s worse!
I realize I am ranting against windows here which is the most cliched thing ever but really come on it’s like right in your face!
I guess we all have different issues we care about, but I'm always surprised when I have to point out how awful Windows is with fonts and people just shrug and say they didn't notice. For me it's painfully obvious to the point of distraction.
I'm pretty sure they used to be bit mapped, or had excellent hinting. Now that high dpi is common, maybe they figured that wasn't needed anymore. And indeed, on my 24", 4k monitor at "200%", windows is pretty sharp if I start it that way. If I change it while running, it becomes a shit-show. But when running at 100% on a FHD 14" laptop, sharpness is clearly lacking.
Regarding the Linux situation, yes, it's subjectively better on that same laptop. But it depends a lot on the fonts used. Some are a blurry / rainbowy mess. However, on Linux, I run everything at 100% and just zoom as needed if the text is too small (say on the above 24" screen).
Since the 80s I've been wishing for a 300/600dpi resolution-independent screen. Sure, it's basically like wishing for a magic pony, but I was spoiled by getting a Vectrex[1] for a birthday in the 80s, and I really liked the concept. I know the Vectrex was a different type of rendering to the screens we use today, but I still find it fascinating.
It saddens me when I see people measuring things in pixels. It should all be measured relative to the font or perhaps the viewport size. The font size itself should just be how big the user wants the text which in turn will depend on the user's eyes and viewing distance etc. The size in pixels is irrelevant but is calculated using the monitor's PPI. Instead we get people setting font sizes in pixels then having to do silly tricks like scaling to turn that into the "real" size in pixels. Sigh...
So software "pixels" are relative units now, but you would have them get larger or smaller as the user moves closer to or further from the screen? (I think I'm not quite getting it, sorry.)
In any case, it makes much less difference (almost none practically speaking) on hi-dpi displays.
One of the reasons web designers have issues with text looking different between Windows and macOS is that Windows' font renderer tries harder to force things to align with the pixel grid, producing a sharper result but slightly distorting some font characteristics. Apple's renderer is more true to the font's design, and can produce a little fuzziness like you see here. It also makes many shapes look a little more bold (at least on standard-ish DPI displays). A couple of old posts on the subject: https://blog.codinghorror.com/whats-wrong-with-apples-font-r..., https://blog.codinghorror.com/font-rendering-respecting-the-.... Differences in sub-pixel rendering also matter, so where people have tweaked those options, or just have their screens' colour balance set a little differently (intentionally or due to design/manufacturing differences), you might see results that differ even further for some users, even on the same OS.
So it just requires 6x more memory, GPU power and HDMI/DP bandwidth and prevents usage of large monitors ...
But I don't think that's relevant here anyway, since the article refers to the auto-hinter which as far as I know was never patent-encumbered.
Of course, that's why subpixel rendering is all a bit moot on Apple devices. For a long time now they've just toggled the default font rendering to the equivalent of "none none" in this article, relying on the high-quality screens the devices ship with (or that most users will plug in) to make up for it.
It seems that they:
- Fail to properly position glyphs horizontally (they must obviously be aligned to pixels horizontally and not just vertically)
- Fail to use TrueType bytecode instead of the autohinter
- Fail to support subpixel antialiasing
These are standard features that have been there for 20 years and are critical and essential in any non-toy software that renders text.
How come GTK+ is so terrible?
EDIT: they do vertical-only rather than horizontal-only. Same problem; it needs to do both.
If you read the article carefully, it mentions it aligns vertically to the _device_ pixel grid.