I don't think it's aesthetics vs usability that's at the core here -- aesthetics and usability aren't somehow mutually exclusive. I think it's simply the lack of focus on the first principles outlined by Don Norman himself.
HCI used to be front and center in the collective minds of the Internet, but it slowly faded to the background. As an example, check out the dates on the articles referenced in the "Mystery Meat Navigation" Wikipedia article: https://en.wikipedia.org/wiki/Mystery_meat_navigation#Refere...
I think it's neat that our affordances are evolving (we don't need to have things looking exactly like physical buttons anymore for us to click on them). But at the same time, we should still apply ergonomic guidelines when designing interfaces, whether it's for the elderly, or not.
I used to think it was the designer-as-dictator that was the problem, but now I believe it's the self-anointed-expert manager who believes design is merely intuition and not a rigorous field of engineering.
And IMHO Apple's design went south post-Jobs, with iOS 7.
The main things I didn't like:
- the flat look meant buttons didn't look like buttons anymore. This meant you didn't know what was actionable on the screen
- hiding stuff offscreen. This hiding of complexity also hides common uncomplicated actions and requires extra swiping and fiddling for routine actions.
My special gripe is the removal of text labels below abstract, monochrome icons, the Markup toolbar being a good example of this {expletives deleted}. That, and how Finder colour labels turned into tiny little dots.
John Siracusa's Mac OS X reviews do a good job of documenting the downfall, at least for Mac OS X (before it became the barf that is macOS).
Here's a start
https://arstechnica.com/gadgets/2005/04/macosx-10-4/3/#mail-...
I say that because the reliability of the advanced hardware (power management, performance per watt) has been increasingly celebrated with the M1 and M2 machines and macOS on them.
Personally, I too get annoyed with the UI inflexibilities, and prefer Asahi so I can just control the visual aspects.
And this is not just Mac OS, it's Microsoft and an increasing amount of Windows-compatible software. I wouldn't mind the Ribbon half as much if I could just turn off the hieroglyphs, I mean the icons, and keep the text labels.
I'm not sure we can say this. There are tradeoffs when you can't make assumptions about your end user. Text that is comfortably large for someone with bad eyesight might not leave room for enough content for someone with better eyesight. Volumes for someone with poor hearing might be painful for someone with better hearing, etc. A company like Apple will always err towards the demo that isn't likely to be on a fixed income.
Maybe your argument has some merit to it, but based on where we are, I don't think it needs to be worried about too much.
Having huge text everywhere or loud volume is not the same as having a good way to easily change text size or adjust the volume.
Interfaces cannot appeal to everyone. We can do better to make magnification universal, but that's not what the article means.
It's also funny to read this article as they think that older people want scooters. Most of the people who are experiencing problems are having issues with strength and balance. Scooters are the last thing they want.
Intricate gestures are difficult to grok for some, and difficult to perform for others. Try using an iPhone and closing an app with shaky hands.
Currently the iPhone SE still has a physical button, but I'm worried what device I'll start recommending to older/less tech savvy people when that goes away.
iOS itself is a bit of a disaster zone too now. I see people constantly get stuck having activated the "press to edit your lock screen" by mistake, or getting confused by a constant stream of ads for iCloud, Apple Arcade etc.
It's sad because most of this poor UX is unnecessary. It feels like its origins are in Apple no longer caring, combined with running out of real ideas and getting distracted with things like widgets.
Things do need to change over time, I get it, I create things too. Sometimes new functionality evolves and has to go somewhere, sometimes you find a previous design was bad and there truly is an improved layout that will help most. Fine. Those sorts of states should converge quickly so I can memorize it and commit it to muscle memory vs having to actively look and think all the time.
Typical company these days: you're who? Ah yes, you're an existing customer. You're already paying us, sunk time into learning our product, and rearranged your work or life to be at least minimally dependent on us. We can safely ignore you - it's unlikely you'll leave near-term, so our focus is much better spent on acquiring new customers.
To be clear: I hate it, but this seems to be how most software products are being developed these days - all focus is on making them dumb and pretty enough to sell to first-time users, at the expense of already onboarded users.
That means that any tutorials will quickly get outdated and you can spend half your mental capacity just keeping up with this crap. The number of times I've googled how to do something in MS Office, clicked an article from half a year ago, and found that one of the options it mentions doesn't exist anymore is too damn high.
Things are nice on linux, especially the CLI world. You learn a little program once and use it for decades without thinking about it.
UX went from an altruistic field centered on making tasks easier to perform to tricking people into spending money, clicking away rights to their personal data, etc.
It's the logical path of end users being the product. This dynamic started with "free" services like Facebook, but now we see it even in expensive products like iPhones and Windows 11.
https://www.macrumors.com/2023/05/16/apple-previews-ios-17-a...
Funny you mention home buttons being more usable. I've got mild wrist pain (not RSI levels, fortunately), and I find the pressure required to press it disturbingly high, often resorting to using the AssistiveTouch button instead.
Not sure if it had special hardware or just well written drivers, but it always worked flawlessly even with a hung app in the foreground.
The other day, my mom was complaining that her phone was not ringing and it took me forever to figure it out. I had to go to Google to find a troubleshooting guide.
The problem is that there are multiple ways to prevent a phone call from ringing. You can switch the hardware button (silent mode), you can set focus mode on (or have it set automatically), and you can mute individual people in the address book. Or you can add people to a group so that they ring even if the phone is in focus mode (but not in silent mode). There are probably other ways I've forgotten.
Already we've introduced a bunch of concepts: silent mode, focus mode, muting individual people, exceptions to focus mode, etc. And the user has to figure out these concepts just from looking at the UI. But if you don't understand the entire conceptual model, you may not know why something is not working.
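To see how many switches are in play, here's a rough sketch of how those settings might combine (plain Python, with invented field names; this is not Apple's actual logic, just an illustration of the conceptual load):

    def should_ring(silent_switch_on, focus_mode_on, caller):
        # Hypothetical simplification of the "will this call ring?" decision.
        if caller.muted_in_contacts:
            return False                 # muted individually: never rings
        if silent_switch_on:
            return False                 # hardware silent switch wins, no exceptions
        if focus_mode_on and not caller.in_allowed_group:
            return False                 # focus mode blocks all but excepted groups
        return True                      # otherwise the phone rings

Four independent inputs, most of them invisible at a glance, and the user is effectively asked to debug that boolean expression in their head.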
This problem can't be solved with better affordances or more text labels, unfortunately. Maybe LLMs will eventually save us. Instead of the user having to figure out the capabilities and UI of the device, the device tries to figure out the intent of the user.
> What is a feature interaction?
> A feature interaction is some way in which a feature or features modify or influence another feature in defining overall system behavior. Feature interactions are especially common in telecommunications, because all features are modifying or enhancing the same basic service, which is real-time communication among people.
> Features are popular because they are easy to add and change. The dark side of features is feature interaction, which is implicit in feature composition and therefore difficult to understand. [emphasis added]
(Quoting the definition, since part of this thread is complaining about the overuse and lack of universality of jargon.)
Conceptual complexity aside, for this case I wonder if it's almost a cultural/social problem. Before ubiquitous cell phones, you could be "disconnected", and this was normal.
Now, it may seem weird to say, 'don't call unless it's an emergency'.
That last part - "a segment that you too will one day inhabit" - is one which should be shouted from the rooftops and ingrained into folks when they are in their early teens or twenties - before they get employed as designers of any kind.
I think that a big part of the reason why vi (later vim) and Emacs used to enjoy dual status as the canonical hackers' text editors is because their interfaces didn't change much, so skill with them would serve you a lifetime and could be passed to upcoming generations. I recently fired up Xenix in an emulator, and found that I was quite facile in using its copy of vi to manipulate text, because the skills I'd developed on Vim on modern Linux machines translated well all the way back to that ancient editor. Vim added a lot but the fundamentals are the same.
When the interface changes, just for the sake of changing, every two years or less, how can you feel like anything you learn will be relevant?
The company OXO makes kitchen gadgets originally designed for reduced mobility (e.g. older people) but now popular with everyone.
The ADA: having, for example, a ramp doesn't just help people in wheelchairs. If I have something difficult to carry (or am using a cart), or have a temporary injury that makes steps hard to navigate, I'm glad there's a ramp.
In many ways I consider the vast majority of designers and architects to be working away from their putative goals, instead pursuing egotism.
In his original coining of the term, Norman used "affordance" to mean a thing an object allowed to be done by some user, usually a human, sometimes another object. For instance, a chair affords sitting by a person. A door handle affords opening.
But in the design world "affordance" is now almost ubiquitously used to mean some visual hint added to a design element to indicate what can be done with it. For instance, in a UI, you might say that you added an "affordance" in the form of a drop shadow to show a button is clickable (probably a crappy example, me being a non-UI person).
In the later editions of Design of Everyday Things, Norman addresses this difference (perhaps we could say evolution) of his idea and term. If I remember correctly, he does not love this conflation of ideas, but has come to terms with it.
It's a bit more subtle than that. From my copy of the 2002 edition (p.9):
> Affordances provide strong clues to the operations of things. Plates are for pushing. Knobs are for turning. Slots are for inserting things into. Balls are for throwing or bouncing. When affordances are taken advantage of, the user knows what to do just by looking: no picture, label, or instruction is required.
Then on p.88
> Consider the hardware for an unlocked door. It need not have any moving parts: it can be a fixed knob, plate, handle, or groove. Not only will the proper hardware operate the door smoothly, but it will also indicate just how the door is to be operated: it will exhibit the proper affordances. Suppose the door opens by being pushed. The easiest way to indicate this is to have a plate at the spot where the pushing should be done. A plate, if large enough for the hand, clearly and unambiguously marks the proper action.
It's not just that handles afford opening, it's that they afford pulling.
I've had a quick flick through the pages on affordances, and can't see anything that stands out about the drift of the word "affordance", so that might be in a later edition than the 2002 one. (The original edition is from 1988.)
It's also worth pointing out that the original definition of affordances, by Gibson, is about animals and their relationship to their environment, which can be quite broad in its totality.
During UX work on projects, it's simple enough, and comes naturally to most people involved, to phrase it something like "we should add X so it's more obvious you can Y". I don't see the gain in breaking it down more outside of more academic discussions, and introducing unnecessary terminology creates a barrier to communication.
A signifier helps to indicate the presence of an affordance that might not be immediately apparent.
Adjacent to the handles, there was also, on each side of this door, a hand-scribbled sign reading “Push”.
Needless to say, I pulled.
The door did not open.
I pushed.
The door did not open.
I pulled again.
The door opened.
(Also, affordance isn't mentioned in the article anyway?)
Whereas a push bar both tells you the door will open and shows you how to do it, and it's all tied together without a sign.
Just an observation: My main takeaway from The Design of Everyday Things was that design should make it obvious what the thing is for, and how to use it. Affordance is the big keyword. I think these mobility tools succeed in that respect. Maybe his point here is that an ugly cane makes it look like it's a tool for dying slowly, but a more likely explanation is that it is what he's saying on the surface: that aesthetics matter too. I wonder whether this is a change of heart, or just a change of emphasis for this particular article.
I have also not observed a change that emphasizes form over function. If anything it's been the opposite, because today's product-driven world knows that websites which are easy to use make more money.
(There is this question of whether the design benefits the users, or if it is only to serve the company's bottom line, even at the expense of user happiness. These incentives lead to so-called dark patterns, but I don't call that behavior "bad" in the sense of execution, even though it is "bad" in the sense of morality.)
There's also been a ton of standardization in UI patterns, which limits just how awful a UI can really be. Those patterns had to be invented, and now they're relatively stable. We have good patterns now for how to make a product listing, or a detail page, or a checkout process, or an accessible form. And they are widely known. In many cases they've just been internalized by younger designers before they even start. There aren't many Kai's Power Tools style UIs out there anymore.
In general, I've observed web design getting better and better. It's easy to cherry pick counterexamples, but I would not go back to the design of the average early 00s website.
Probably because modern design took the "less is more" mantra as its root value and thus became lazy. Less is not more; it's just easier to make look good. It's like throwing away all your furniture, painting everything white, and calling that interior design.
Admittedly, I have only read specific chapters, but all were easy reading.
Tufte is a much harder read.
I don't believe that. I'd say the high end begins with Knuth's TAOCP, with the other end of the spectrum being something like HoTT https://homotopytypetheory.org/book/
https://support.apple.com/guide/assistive-access-iphone/welc...
There's very little natural discovery. And this is made worse when the gestures don't consistently work for you. This is common for older people who develop "zombie finger" where contact with a touchscreen only sometimes activates the capacitive screen, but if they knew they were otherwise doing the gesture "correctly" they might be okay.
Another fun one is that my Apple Watch just updated, and the swipe left/right gesture doesn't do anything anymore (it used to change watch faces). It took me longer than I care to admit to find out that I needed to Force Touch or long press in order to access a new menu where I can swipe left/right to change the face. Other gestures were also just straight up removed. There was no explanation and no tutorial.
These are just small examples of many. I do want power-user features. I just want them to be defined cohesively, so that once you've learned discovery for one set of gestures, you can easily access your "cheat sheet" for deploying them in new contexts, even for Apple applications you've never used before.
Everything from the small font sizes, inconsistently sized window/dialog close buttons, the animations and sound effects, to the terrible text contrast in dark mode makes it a really 2nd rate UX.
Windows has its own disasters, but Windows 11 -- IMO -- is the better OS from a UX perspective.
I’ve ranted here plenty of times about my late-80s aunt who will painstakingly document every tiny step to do something on her iPhone, with me patiently practicing with her, only to have something she’s gotten down change on her with the next update. It’s all very well to add new features, but for the love of older people (my mid-40s self increasingly included), do not change how common features work!
https://en.wikipedia.org/wiki/Pie_menu
>Pie menus are a self-revealing gestural interface: they display multiple options to a user and direct them to select one.
>Users operate the menu by observing the labels or icons present as options, moving the pointer in the desired direction, then clicking to make a selection. This action is called a "mark ahead" ("mouse ahead" in the case of a mouse, "wave ahead" in the case of a dataglove).
>Repetition of actions and memorization of the interface further simplify the user experience. Pie menus take advantage of the body's ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.[1]
However, Apple has never adopted pie menus: Steve Jobs thought they sucked, and Donald Norman has never been a big fan either; he totally missed the point when I tried to explain it to him at Ted Selker's NPUC workshop.
I had the honor of meeting Steve Jobs at EduCom on October 25 1988, when he released the NeXT machine. Sun had lent me a workstation to demonstrate NeWS software in their booth, right across from the NeXT booth, and Ben Shneideman brought him over for a demo of the stuff we'd developed at HCIL.
So I gave Jobs a whirlwind tour of pie menus, the NeWS window system, UniPress Emacs and HyperTIES for about half an hour. Jobs was jumping up and down, pointing at the screen, and yelling "That sucks! That sucks! Wow, that's neat! That sucks!"
When I explained to him how flexible NeWS was, he replied "I don't need flexibility -- I got my window system right the first time!" But I gave him a NeRD button, anyway (which I'd made for NeWS window system NeRDs, but he liked because it had a lowercase "e" like NeXT).
Years later I gave a talk about pie menus at Ted Selker's epic (and free!) NPUC (New Paradigms for Using Computers) workshop at IBM Almaden Research Lab in the early 90's, and after giving my talk, I watched Don Norman give his. He started at the left end of the room and moved across to the right, complaining about everything he saw, from the tray of the chalkboard to each cabinet to the microphone to the fluorescent lights to the wall socket.
At that point it just seemed like a contrarian schtick, reflexively taking trite cheap shots at everything. But I got in a zinger when he complained about how the pie menus in SimCity that I'd just shown were so horrible because they made it easy to build a city quickly without thinking about it.
He totally missed the point that pie menus were faster and more efficient than linear menus, and just bitched about how fast efficient menus in a game about city planning made it easy to quickly plan bad cities, as if that was what playing SimCity with pie menus taught you.
Without considering that millions of people waste millions of hours picking items from inefficient linear menus every day for all kinds of applications.
Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop:
https://www.youtube.com/watch?v=5GCPQxJttf0
He never has a positive word or comment about anything, no constructive suggestions, just negative complaints. This is the video he was responding to by complaining about pie menus making SimCity too easy to play:
X11 SimCity Demo with Pie Menus and Mouse Ahead Gestures:
https://www.youtube.com/watch?v=Jvi98wVUmQA
Talks and demonstrations by Don Hopkins and Donald Norman at IBM Almaden's "New Paradigms for Using Computers" workshop, organized and introduced by Ted Selker.
Norman: "And then when we saw SimCity, we saw how the pop-up menu that they were doing used pie menus, made it very easy to quickly select the various tools we needed to add to the streets and bulldoze out fires, and change the voting laws, etc. Somehow I thought this was a brilliant solution to the wrong problems. Yes it was much easier to now to plug in little segments of city or put wires in or bulldoze out the fires. But why were fires there in the first place? Along the way, we had a nuclear meltdown. He said "Oops! Nuclear meltdown!" and went merrily on his way."
Hopkins: "Linear menus caused the meltdown. But the round menus put the fires out."
Norman: "What caused the meltdown?"
Hopkins: "It was the linear menus."
Norman: "The linear menus?"
Hopkins: "The traditional pull down menus caused the meltdown."
Norman: "Don't you think a major cause of the meltdown was having a nuclear power plant in the middle of the city?"
(laughter)
Hopkins: "The good thing about the pie menus is that they make it really easy to build a city really fast without thinking about it."
(laughter)
Hopkins: "Don't laugh! I've been living in Northern Virginia!"
Norman: "Ok. Isn't the whole point of SimCity how you think? The whole point of SimCity is that you learn the various complexities of controlling a city."
(My joking but also serious point was that in SimCity "Meltdown" is on the linear "Disaster" menu, so linear menus cause meltdowns. But the pie menus have bulldozers and roads that you can use to recover from meltdowns.)
My talk was about pie menus, not about SimCity, which was only an example of pie menus.
But he disregarded the point of my talk and criticized the game design of SimCity instead.
Which I wholeheartedly agreed with (that pie menus make it really easy to build a city really fast without thinking about it, and you end up with something like Northern Virginia).
But that wasn't the fucking point of my talk, it was that pie menus can make any game or application more efficient, and that they're self revealing and easier to learn than invisible gestures.
I would have been much more interested to hear if he had any criticisms or comments on the actual pie menus, the audio feedback, the design of the menu layout and icons, the mouse ahead, popup menu display pre-emption, gestural interaction, muscle memory, and so many other things that he squandered the opportunity to discuss.
(That talk was a few years before I started working on The Sims and implemented pie menus in that game too. But maybe they would have been better if Don had taken the opportunity to criticize the pie menus in SimCity, instead of the game itself.)
And then we could have had the whole interesting discussion about what and how SimCity really does teach you, constructionist education, and so on. (See the HAR talk below.)
The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo:
https://www.youtube.com/watch?v=-exdu4ETscs
OLPC SimCity Demo with Pie Menus:
https://www.youtube.com/watch?v=EpKhh10K-j0
Micropolis Online (SimCity) Web Demo with Pie Menus:
https://www.youtube.com/watch?v=8snnqQSI0GE
Multi Player SimCityNet for X11 on Linux with Pie Menus:
https://www.youtube.com/watch?v=_fVl4dGwUrA
Unity3D Pie Menus:
https://www.youtube.com/watch?v=sMN1LQ7qx9g
MediaGraph Music Navigation with Pie Menus Prototype developed for Will Wright's Stupid Fun Club:
https://www.youtube.com/watch?v=2KfeHNIXYUc
ActiveX Pie Menus for IE 5:
https://www.youtube.com/watch?v=nnC8x9x3Xag
JavaScript Pie Menus for IE 5:
https://www.youtube.com/watch?v=R5k4gJK-aWw
The NeWS Toolkit Tabbed Window Manager with Pie Menus:
https://www.youtube.com/watch?v=tMcmQk-q0k4 (blocked due to music -- alt download:)
https://donhopkins.com/home/movies/TabWindowDemo.mov
Just the Pie Menus from All the Widgets:
https://www.youtube.com/watch?v=mOLS9I_tdKE
Pie Menus: A 30 Year Retrospective, By Don Hopkins, Ground Up Software, May 15, 2018:
https://donhopkins.medium.com/pie-menus-936fed383ff1
>Useful Technique: Mouse Ahead Display Preemption
>The first step in learning a pie menu, using it in “novice” mode, is rehearsal for using it in “expert” mode. So if you remember that you want to move the mouse down, you can press and move the mouse, then you wait, and it pops up only after you stop moving.
>Pie menus should support an important technique called “Mouse Ahead Display Preemption”. Pie menus either lead, follow, or get out of the way. When you don’t know them, they lead you. When you are familiar with them, they follow. And when you’re really familiar with them, they get out of the way, you don’t see them. Unless you stop. And in which case, it then pops up the whole tree.
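In code, the preemption part is just a deferred popup: the menu tracks the gesture from the moment the button goes down, but only draws itself once the cursor goes idle. A minimal sketch (Python; the delay value and method names are invented, not taken from any of the implementations referenced here):

    import time

    POPUP_DELAY = 0.3  # seconds of stillness before the menu appears (made-up value)

    class PieMenuPopup:
        """Tracks a pie menu gesture; a fast mark-ahead never sees the menu."""

        def __init__(self):
            self.visible = False
            self.last_move = time.monotonic()

        def on_mouse_move(self):
            # Movement resets the timer; the selection still tracks the cursor
            # even while the menu stays invisible.
            self.last_move = time.monotonic()

        def on_tick(self):
            # Called periodically by the event loop: reveal the menu only if
            # the user has paused, i.e. only when they need to see the options.
            if not self.visible and time.monotonic() - self.last_move >= POPUP_DELAY:
                self.visible = True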
The Design and Implementation of Pie Menus: They're Fast, Easy, and Self-Revealing. By Don Hopkins. Originally published in Dr. Dobb's Journal, Dec. 1991, cover article, user interface issue:
https://donhopkins.medium.com/the-design-and-implementation-...
>For the novice, pie menus are easy because they are a self-revealing gestural interface: They show what you can do and direct you how to do it. By clicking and popping up a pie menu, looking at the labels, moving the cursor in the desired direction, then clicking to make a selection, you learn the menu and practice the gesture to “mark ahead” (“mouse ahead” in the case of a mouse, “wave ahead” in the case of a dataglove). With a little practice, it becomes quite easy to mark ahead even through nested pie menus.
>For the expert, they’re efficient because — without even looking — you can move in any direction, and mark ahead so fast that the menu doesn’t even pop up. Only when used more slowly like a traditional menu, does a pie menu pop up on the screen, to reveal the available selections.
>Most importantly, novices soon become experts, because every time you select from a pie menu, you practice the motion to mark ahead, so you naturally learn to do it by feel! As Jaron Lanier of VPL Research has remarked, “The mind may forget, but the body remembers.” Pie menus take advantage of the body’s ability to remember muscle motion and direction, even when the mind has forgotten the corresponding symbolic labels.
Micropolis: Constructionist Educational Open Source SimCity:
https://donhopkins.medium.com/har-2009-lightning-talk-transc...
>We’ll go straight in, we’ll get rid of this. Oh, pie menus, right! If you click… (Dutch “Taartmenu” cursor pops up!) I’ve got to have a talk with my translator.
>You click, and you get a pie menu, which has items around the cursor in different directions. So if you click and go right, you get a road. And then you can do a little road. And if you click and go up and right, you get a bulldozer.
>And then there are submenus for zoning parks, and stuff like that. This gives you a nice quick gesture interface.
Gesture Space:
https://donhopkins.medium.com/gesture-space-842e3cdc7102
>Excerpt About Gesture Space
>I think it’s important to trigger pie menus on a mouse click (and control them by the instantaneous direction between clicks, but NOT the path taken, in order to allow re-selection and browsing), and to center them on the exact position of the mouse click. The user should have a crisp consistent mental model of how pie menus work (which is NOT the case for gesture recognition). Pie menus should completely cover all possible “gesture space” with well defined behavior (by basing the selection on the angle between clicks, and not the path taken). In contrast, gesture recognition does NOT cover all gesture space (because most gestures are syntax errors, and gestures should be far apart and distinct in gesture space to prevent errors), and they do not allow in-flight re-selection, and they are not “self revealing” like pie menus.
>Pie menus are more predictable, reliable, forgiving, simpler and easier to learn than gesture recognition, because it’s impossible to make a syntax error, always possible to recover from a mistaken direction before releasing the button, they “self reveal” their directions by popping up a window with labels, and they “train” you to mouse ahead by “rehearsal”.
[...]
>DonHopkins on March 19, 2018
>There have been various implementations of pie menus for Android [1] and iOS [2]. And of course there was the Momenta pen computer in 1991 [3], and I developed a Palm app called ConnectedTV [4] in 2001 with “Finger Pies” (cf Penny Lane ;). But Apple has lost their way when it comes to user interface design, and iOS isn’t open enough that a third party could add pie menus to the system the way they’ve done with Android. But you could still implement them in individual apps, just not system wide.
>Also see my comment above about the problem of non-transparent fingers.
>Swiping gestures are essentially like invisible pie menus, but actual pie menus have the advantage of being “Self Revealing” [5] because they have a way to prompt and show you what the possible gestures are, and give you feedback as you make the selection.
>They also provide the ability of "Reselection" [6], which means that as you're making a gesture, you can change it in-flight, and browse around to any of the items, in case you need to correct a mistake or change your mind, or just want to preview the effect or see the description of each item as you browse around the menu.
>Compared to typical gesture recognition systems, like Palm's Graffiti for example, you can think of the gesture space of all possible gestures between touching the screen, moving around through any possible path, then releasing: most gestures are invalid syntax errors, and they only recognize well-formed gestures.
>There is no way to correct or abort a gesture once you start making it (other than scribbling, but that might be recognized as another undesired gesture!). Ideally each gesture should be as far away as possible from all other gestures in gesture space, to minimize the possibility of errors, but in practice they tend to be clumped (so “2” and “Z” are easily confused, while many other possible gestures are unused and wasted).
>But with pie menus, only the direction between the touch and the release matter, not the path. All gestures are valid and distinct: there are no possible syntax errors, so none of gesture space is wasted. There’s a simple intuitive mapping of direction to selection that the user can understand (unlike the mysterious fuzzy black box of a handwriting recognizer), that gives you the ability to refine your selection by moving out further (to get more leverage), return to the center to cancel, move around to correct and change the selection.
>Pie menus also support “Rehearsal” [7] — the way a novice uses them is actually practice for the way an expert uses them, so they have a smooth learning curve. Contrast this with keyboard accelerators for linear menus: you pull down a linear menu with the mouse to learn the keyboard accelerators, but using the keyboard accelerators is a totally different action, so it’s not rehearsal.
>Pie menu users tend to learn them in three stages: 1) novice pops up an unfamiliar menu, looks at all the items, moves in the direction of the desired item, and selects it. 2) intermediate remembers the direction of the item they want, pops up the menu and moves in that direction without hesitating (mousing ahead but not selecting), looks at the screen to make sure the desired item is selected, then clicks to select the item. 3) expert knows which direction the item they want is in, and has confidence that they can reliably select it, so they just flick in the appropriate direction without even looking at the screen.
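Since the whole argument rests on selection being a function of direction rather than path, here's a minimal sketch of that mapping (Python; the item names, dead-zone radius, and top-first clockwise layout are invented for illustration, not taken from any of the implementations linked above):

    import math

    def pie_select(items, press_xy, release_xy, dead_zone=8):
        # Only the angle between press and release matters, never the path,
        # so every gesture is valid: no syntax errors, and you can re-select
        # in flight right up until you release.
        dx = release_xy[0] - press_xy[0]
        dy = release_xy[1] - press_xy[1]
        if math.hypot(dx, dy) < dead_zone:
            return None                                    # released near the center: cancel
        angle = math.degrees(math.atan2(dx, -dy)) % 360    # 0 = straight up, clockwise (screen y grows down)
        slice_size = 360 / len(items)
        return items[int(((angle + slice_size / 2) % 360) // slice_size)]

    # e.g. pie_select(["Road", "Bulldozer", "Residential", "Query"], (100, 100), (140, 100))
    # returns "Bulldozer", the item assigned to the rightward direction.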
"Tutorial" style introductions to the OS make sense.
I remember when Ubuntu first came out with Unity, and had really powerful window tiling features bound to variations of "Super" key + arrow key, as well as some other hotkeys.
The great thing about it was that you could hold down "Super" for 1 second, and a reference would show up explaining all the different keybinds.
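That hold-to-reveal pattern is easy to reproduce. A toy sketch of the idea (Python; the key name and delay are invented, and this is not Unity's actual implementation):

    import time

    HOLD_DELAY = 1.0  # how long the key must be held before the overlay shows (made-up value)

    class ShortcutOverlay:
        def __init__(self):
            self.pressed_at = None
            self.visible = False

        def on_key_down(self, key):
            if key == "Super" and self.pressed_at is None:
                self.pressed_at = time.monotonic()

        def on_key_up(self, key):
            if key == "Super":
                self.pressed_at = None
                self.visible = False     # releasing hides the cheat sheet again

        def on_tick(self):
            # A long hold reveals every keybinding; a quick tap used as a
            # shortcut never triggers the overlay.
            if self.pressed_at and time.monotonic() - self.pressed_at >= HOLD_DELAY:
                self.visible = True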
Is that the whole article? If not, maybe it should be.
I can't remember the last time I sat in a meeting with someone with the title "interface designer". Everyone in this realm today is a "UX something" and commonly it seems these people have never:
- heard the term HCI or known what it actually stands for.
- read and/or internalized the human interface guidelines for the platform(s) they're building for (there is a lot of overlap but still).
- thought in a way that puts ease of use/discoverability/context dependence front and center, over anything else. How to do something often seems arbitrary; there seem to be no HCI-based guide rails by which decisions are taken.
That said, there are exceptions of course, but they seem rarer by the year.
One issue is that we now have a generation of young people that just grok stuff because they grew up completely digital and with apps that already have arguably crappy interfaces.
I.e. they can and will work with even the worst interface or something that shuns all standards/guidelines of the platform/OS it runs under.
When you then have people from this generation getting jobs as "UX something", you have a self-perpetuating loop that inevitably leads to the increasing enshittification of user interfaces.
And no one is really to blame for it.
The canes didn't change. If anything they look nicer, and you have more options.
People are going to hate anything associated with being handicapped or elderly, no matter what the design is.
Ever read both of Don Norman's books? One clearly counters the other, and yet most design posers have never noticed!