Granted, it doesn't matter much because they're just looking for the closest color, but it might be nice to transform into a perceptually uniform space (Oklab is both nice and computationally cheap) before calculating the distance, rather than taking the Euclidean distance of the raw pixel values like they do here.
If you combine that with caching the best-matching emoji per color instead of recomputing everything on the fly, there wouldn't even be a performance penalty.
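A minimal sketch of what I mean, using the sRGB-to-Oklab matrices from Ottosson's post plus `functools.lru_cache` for the per-color memoization (the tiny `PALETTE` and the `closest_emoji` name are just made-up placeholders, not anything from the actual project):

```python
import math
from functools import lru_cache

def srgb_to_linear(c):
    """Undo the sRGB gamma curve; c is a channel in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_oklab(r, g, b):
    """Convert 8-bit sRGB to Oklab using Ottosson's published matrices."""
    r, g, b = (srgb_to_linear(c / 255.0) for c in (r, g, b))
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = (x ** (1 / 3) for x in (l, m, s))  # cube-root nonlinearity
    return (
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    )

# Hypothetical palette; the real project would have one entry per emoji.
PALETTE = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}
PALETTE_LAB = {name: srgb_to_oklab(*rgb) for name, rgb in PALETTE.items()}

@lru_cache(maxsize=None)
def closest_emoji(r, g, b):
    """Nearest palette entry by Euclidean distance in Oklab, cached per color."""
    lab = srgb_to_oklab(r, g, b)
    return min(
        PALETTE_LAB,
        key=lambda n: sum((x - y) ** 2 for x, y in zip(lab, PALETTE_LAB[n])),
    )
```

Distances between Oklab coordinates track perceived color difference far better than distances between raw sRGB values, and the cache means each distinct input color pays the conversion cost only once.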
The dude's blog has a few more posts about colour in software: https://bottosson.github.io/
The gamut clipping post is quite interesting - https://bottosson.github.io/posts/gamutclipping/
Oklab: A perceptual color space for image processing https://news.ycombinator.com/item?id=25525726
An interactive review of the Oklab perceptual color space https://news.ycombinator.com/item?id=25830327
Also, zooming in on the YouTube videos is limited, and pausing them overlays advertisements/suggestions on top.
Additional self-hosted high-res videos would be great for seeing the full beauty!
I'm not familiar with doomgeneric, but before it, Chocolate Doom was for a long time the common modern port, respecting the original code as much as possible while allowing it to run on modern OSes.
This isn't a nitpick, though; it's a very cool article.