So I would say it's not unheard of for an alphabet to be shaped by technology. We English speakers are none the worse for wear.
Also, it's from 2002, and now I'm curious how it ended.
Uzbekistan is greatly underappreciated for having done something really neat after the Soviet breakup --- designing an orthography with no "funny letters". They use context to distinguish the "back" and "front" versions of phonemes like i [1]. And in the two cases where that would be too confusing (o and g), they put an apostrophe after the letter --- i.e. oʻ instead of something like ö.
[1] Turkey, which speaks a closely related language, solved the "front i" vs "back i" problem by making one dotted and the other dotless --- those of us here probably know all of the toLowerCase-related bugs that caused; I think articles about this have been posted here a couple of months ago.
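The footnote's bug class can be shown without any locale machinery: Unicode's *default* case mappings assume a dotted lowercase i, so Turkish-family text doesn't round-trip through them. A small sketch (Python chosen by me, not the thread):

```python
# Default (non-Turkish-aware) Unicode case mapping, as used by str.upper()/lower():
# uppercasing dotless ı gives plain ASCII I ...
assert "ı".upper() == "I"        # U+0131 -> U+0049

# ... but lowercasing I gives dotted i, so the round trip loses the distinction.
assert "I".lower() == "i"        # not "ı", as Turkish/Azerbaijani would need

# Dotted capital İ is worse still: the default mapping expands it to two
# code points, "i" plus a combining dot above (U+0307).
assert "İ".lower() == "i\u0307"
assert len("İ".lower()) == 2
```

This is also why Java's locale-sensitive `String.toLowerCase()` famously breaks comparisons of protocol keywords and identifiers when a program happens to run under a Turkish default locale.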
But compared to Azerbaijan, I understand we were lucky - at least we didn't have to mess around with creating our own fonts :)
Good point on how tools put constraints on expression, BTW. I guess ASCII art is another example, and the ASCII smiley :-) is still with us.
Even today mixing text and freehand drawing or images is troublesome enough that we rarely bother. In a sense it is somewhat curious that 60 years of computer science has not been enough to replicate the convenience of a handwritten note...
Nowadays, adding that glyph to an existing font could be a very easy thing to do. It depends a lot on the font design you're after...
Fonts released under a free license would have helped.
A nice font editor: http://fontforge.sourceforge.net/
Had they chosen æ instead, they could have just used Latin-1 and the large number of existing fonts, keyboards, etc. for languages like Danish, which had already solved the problem.
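A quick check (my illustration, not the commenter's) of why æ would have been the pragmatic pick: æ has a Latin-1 code point, while the schwa Azerbaijan actually adopted (ə, U+0259) does not, so 1990s 8-bit fonts and encodings couldn't carry it:

```python
# æ (U+00E6) is part of ISO 8859-1 (Latin-1), so Danish-capable fonts,
# keyboards, and encodings already cover it.
assert "æ".encode("latin-1") == b"\xe6"

# The schwa Azerbaijan chose instead (ə, U+0259) has no Latin-1 code point.
try:
    "ə".encode("latin-1")
    schwa_in_latin1 = True
except UnicodeEncodeError:
    schwa_in_latin1 = False
assert not schwa_in_latin1
```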