That's at least partly because some vocal blind people don't want websites to be able to know that they're running a screen reader at all, for fear of discrimination. I believe that stance is misguided, and I'll illustrate why with a story. A few years ago, my best friend, who is blind, was trying to do something on PayPal and couldn't complete the task with his screen reader. I tried the same task, with the same screen reader, and had no problem, so I figured we had been caught in an A/B test or phased rollout. And it occurred to me that PayPal would never know he failed to complete the process because he was using a screen reader. If we allowed websites to know what screen reader a user is running, they could collect useful data that would help them improve. And frankly, the problem we actually have with accessibility is not willful discrimination, but indifference.
P.S. It was a weird feeling to hear the name of a product that I developed from the ground up in the "What about ..." section heading. Yeah, I'm talking about System Access, the most obscure (and perhaps poorly named) screen reader mentioned in the article. No offense taken though; I understand where the author is coming from.
Separate but equal isn't equal.
I agree with this. I will say that developers who are 100% unwilling to entertain changes to their "main" UI to accommodate accessibility are in the clear minority. But even the most well-meaning devs and designers will ask questions like, "Could we just do this for screen reader users ...?". It's a slippery slope from there, with technical debt, legacy implementations and second-class user flows all the way down.
The world of mobile web sites demonstrates that this swings both ways. Some sites have mobile-specific interfaces that greatly enhance the experience on a small touch screen compared to a large screen with keyboard and mouse. Take Wikipedia: the desktop interface is terrible on a phone, and the mobile interface is terrible on a desktop. Other sites are well known to have swung in the opposite direction, where whichever variant is not their primary target is significantly worse.
I think that in a similar way the ability to serve specific content optimized for screen readers would allow those who care to deliver a much better experience, but likewise those who don't might make something worse.
---
Of course the simple-web-site purist in me wants to say that every site should just stick to basic formatting and text-first designs that work well in any browser, but that idea is not only unrealistic thanks to marketing people, it simply wouldn't work for many modern sites and web applications.
I say the solution is the same as it's always been for when bad web sites do stupid things based on the user agent or anything else: lie to them. As long as the screen reader can turn off the identifier tag, the worst-case outcome seems to be the minor annoyance of having to blacklist that web site from receiving it.
--- edit: also a side thought, the things that make a screen reader work well also make it easier to separate content from ads, so there is an inherent commercial incentive for ad-supported content providers to only do the minimum legally required.
Detecting interaction with a visually hidden element, but no interaction with visually shown elements, can be registered as a screen reader. Obviously this can also happen with bots, but you would probably prefer to register a bot as a screen reader rather than try to filter bots out and inadvertently filter out screen readers (which often happens with bot-filtering strategies anyway).
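A minimal sketch of that heuristic. The event-log shape and the function name are hypothetical, purely for illustration; a real implementation would record which elements received focus or activation and whether each was visually rendered.

```javascript
// Sketch of the heuristic above: a session that interacts only with
// visually hidden elements (e.g. off-screen skip links) and never with
// visibly rendered ones is flagged as a likely screen reader session.
// The event format here is an assumption, not a real API.
function classifySession(events) {
  // events: array of { target: 'hidden' | 'visible' }
  const hitHidden = events.some(e => e.target === 'hidden');
  const hitVisible = events.some(e => e.target === 'visible');
  if (hitHidden && !hitVisible) return 'likely-screen-reader'; // or a bot
  if (hitHidden && hitVisible) return 'mixed';
  return 'unknown';
}
```

As the comment notes, a bot can trigger the same pattern; the point is that misclassifying a bot as a screen reader is the safer failure mode.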
This makes sense, it's probably not a good idea to give that information to the website. It would only make people using screen readers more vulnerable to scams.
Similar to how scam callers work: if a scam caller rings someone and an older person answers the phone, the fact that they can hear an older person means they have found an easy target, and they can use specific tactics to take advantage of them (techno-jargon, etc.). If they don't have that information, it's much harder for them to use specialised tactics to manipulate people.
Actually I think varying behaviour depending on reported user agent was a source of frustration to users from the beginning. Maybe it'll turn out fine this time or in this case, but those concerns sound valid to me.
I wouldn't worry about testing with those screen readers for now, as there still aren't that many people using them, but it's something worth looking out for in the future.
I wouldn't take this data too seriously. The WebAIM user survey is only available in English and is usually filled out by tech-savvy blind users who are part of the blind community and are told about it.
At this point, JAWS is mostly used in corporate environments in the United States, mostly due to the number of scripts already written for it, a business-friendly (non-GPL) license, and easy enforcement of restrictions imposed by IT, which are features that NVDA doesn't provide. Some countries give out JAWS for free to their blind residents, so the number of JAWS users there is probably going to be pretty significant too. However, in most parts of the world, NVDA is the screen reader most people use. As a person living in eastern Europe, with many friends from around the world (including the U.S.), I know exactly two people using JAWS as a daily driver.
When I visited a blind person to do testing for us at a previous workplace, I was astonished by what we found. It was very different from our own attempts. His speech rate and navigation were so fast that the parts we felt were sluggish took him just a second to get through. He had other issues, however.
I'm one of the co-owners.
Of course, it always depends on the overall funding situation of the app, but if funding exists, then I think AX testing should be paid like the highly qualified work it is.
You could be right, but keep in mind that this may not significantly lower the amount of time required for developers to understand and remediate problems. Users have wildly differing levels of technical experience, so you may end up with plenty of feedback that you then have to spend hours understanding, sorting, de-duplicating and following up on.
Not to mention the fact that, bluntly, users who aren't being paid as "experts" just may not be that willing to shit all over your product. I have encountered more than one case of a limited subset of screen reader users reporting a positive experience with a component which broke every rule in the book, and caused very real problems for users outside of that core group.
Also, keep in mind that something that technically works correctly with screen readers is just the beginning. User testing might reveal lots of issues you wouldn't think of yourself. And yes, I know that resources are usually limited and there is not much room for user testing, especially testing with screen reader users and other groups that have some kind of disability. I recently worked as the accessibility lead of a mobile COVID exposure notification app that had a very simple UI and a hard accessibility requirement. We had the luxury to do extensive user testing and even in this simple interface we found lots of small changes that improved the experience for screen reader users.
* The microcopy matters, a lot. We had a button stating "I've got a notification: read what you should do after getting a notification" (off the top of my head and freely translated from Dutch; we didn't have an English translation back then). This was part of a group of buttons on the main screen that all gave information. Some screen reader users got confused and thought that they actually had a notification. If you can't see the visual layout, it is not obvious that this is just a plain button and not a bold red text giving you a warning.

* In the same category: the app has a status text that says "The app is working fine" or "The app is not working fine". Visually, the error state is signified by an exclamation mark and styling that makes clear this is a serious issue. In the text, however, there is just one word, "not", to signify that something is seriously wrong. Following WCAG, the information conveyed by the exclamation mark icon was available in text, so no text alternative was required. We gave it a text alternative anyway, to ensure screen reader users were also clearly alerted that something is wrong. Same goes for the "all is ok" icon; we gave that one a text alternative as well, to reassure users that all is fine.
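A hypothetical sketch of the second point (not the app's actual markup): the icon carries the warning visually, so it gets a text alternative rather than being left decorative.

```html
<!-- Hypothetical markup, for illustration only. The alt text duplicates
     the meaning of the exclamation-mark icon so that screen reader users
     hear an explicit warning instead of relying on the single word "not". -->
<p role="status">
  <img src="warning-icon.svg" alt="Warning:">
  The app is not working fine
</p>
```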
https://www.accessibility-developer-guide.com/setup/screen-r...
Disclosure: I used to work on the Narrator team at Microsoft.
NVDA, mentioned prominently in the article as a free and open source screen reader, has a floating-window-style speech viewer. I sometimes use it when demonstrating a screen reader user's experience of a particular component when asked to share my audio, because slowing down the screen reader to a rate that everyone on the call will understand will also make the meeting much longer.
JAWS also has a speech history viewer, and there are keystrokes to dump VoiceOver speech as text and audio files.
https://www.nvaccess.org/category/careers/ https://news.ycombinator.com/item?id=25639590
The author may be interested in the ARIA-AT project[1], which aims to thoroughly test assistive technology support for WAI-ARIA and HTML constructs. It's still a relatively young effort, but the community group is open and always happy for participation.
Does anyone know if something like that is available for screen readers - at least for a free and open source one?
Nothing in this space is really mature yet, but there are some efforts to make it a reality. The ARIA-AT project[1], which aims to test assistive technology support for various WAI-ARIA and HTML constructs, is aiming to automate its testing across multiple screen readers [2]. NVDA, the free and open source screen reader mentioned in the article, also includes some integration-style tests[3].
[1] https://github.com/w3c/aria-at [2] https://github.com/w3c/aria-at/issues/349 [3] https://github.com/nvaccess/nvda/tree/master/tests/system
It probably comes as no surprise that EdgeHTML and Chromium have completely different accessibility implementations. Narrator always had the best support for EdgeHTML. I was a third-party screen reader developer when EdgeHTML first came out, and for us third-party developers, EdgeHTML was a drastic change from IE. For over a decade, we had provided access to IE by injecting code into the IE process (yes, Windows lets you do that) and accessing the IE DOM in-process using COM. We did something similar for Firefox and Chromium, but using the IAccessible2 API (also COM-based). To improve security, old Edge disallowed this kind of injection; it could only be accessed through the UI Automation API. Narrator was built for this; the rest of us had to adapt after the fact. And since we could only access UIA through inter-process communication, not in-process like we did with the IE DOM and IAccessible2, there were performance problems, even with Narrator. (Luckily, I got to help solve those problems during my time on the Windows accessibility team.)
With Chromium (in both Google Chrome and the new Edge), screen readers can still inject code in-process and use the legacy IAccessible2 API. And NVDA, JAWS, and System Access (which I developed before joining Microsoft) do that. These third-party screen readers access Chrome and new Edge in the same way, at least inside the web content area, so if you're testing with one of these screen readers, it probably doesn't matter which browser you use. The situation with Narrator and Chromium-based browsers is more interesting. Narrator uses the UI Automation API to access all applications. Chromium has a native UIA implementation, largely contributed by the Edge team, but while that implementation is enabled by default in the new Edge, it isn't yet in Chrome. So Narrator accesses Edge using UIA. But for Chrome, and other Chromium-based apps (e.g. Electron apps), Narrator uses a bridge from IAccessible2 to UIA that's built into the UIA core module. So in corner cases, there may be differences in how Narrator behaves in Chrome and Edge.
So, should developers test with Narrator and/or Edge? Well, I may be too biased to answer that. But I think it's likely that Narrator usage is on the rise. While I was on the Narrator team at Microsoft, we heard from time to time about praise that Narrator was getting in the blind community. (Naturally I can't take full credit for that; it was a team effort.) Moreover, since Narrator is the option built into Windows, there will come a point (if it hasn't come already) when it's good enough for many users and they have no reason to seek a third-party alternative. Also, there are some PCs where Narrator is the only fully functional screen reader, specifically those running Windows 10 S (the variant that doesn't allow traditional side-loaded Win32 apps). I'd guess that an increasing number of students and users of corporate PCs are saddled with that variant of Windows. And while I can't say anything about future versions of Windows, one can make an educated guess based on the broader trajectory of the industry.
As for whether it's worth testing with Edge as opposed to Chrome, I don't know. Fortunately, browser usage data is readily available.
Really interesting to hear some of the technical details from someone who worked on it - thank you!
They implement their own. But the browser also has responsibilities in this area: it constructs an accessibility tree from the DOM, which screen readers then consume.
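A generic illustration of that division of labor (my example, not from the comment above): two elements that look the same on screen can produce very different nodes in the accessibility tree the browser builds.

```html
<!-- The <button> is exposed in the accessibility tree as a focusable
     "button" node with the accessible name "Save". -->
<button>Save</button>

<!-- A styled <div> can render identically, but is exposed as a generic
     node; it needs explicit semantics before a screen reader can report
     and activate it like a real button. -->
<div class="btn" role="button" tabindex="0">Save</div>
```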