So yeah, I think it's an issue with generating a data set and not hitting a sufficient number of test cases. In this instance, Asians would be the edge case: a small training data set is likely to under-sample a group with lower representation in the population.
Let's separate the general case from the specific. Generally, we know that representation among the people who make things changes what they make. This is obvious and undeniable. For example, look at ASCII vs Unicode. The Chinese invented movable type roughly 400 years before Gutenberg, so it's not like the idea of printing non-roman characters was novel. In the age of telegraphy, Europeans developed encodings that included umlauts and accents; by 1851 they were merged into International Morse Code.
So why in 1963 was ASCII codified without any of that? And why did it remain the dominant standard for so long? Because it was mainly Americans in the rooms where the technology was being created.
Similarly, we know that standard color films were developed by white people to represent white people well: https://www.vox.com/2015/9/18/9348821/photography-race-bias
And we all know how this happens. It's the same reason a lot of open-source software is good for a developer audience, not an end-user one: making things means iterating on them until they're good enough for the people involved.
That's the general case, so let's return to the specific one. If you want to prove that ML systems doing racist stuff has nothing to do with who made them, then you can't just handwave it away. You have to show why that specific project was set up so carefully and so well that it would avoid the natural pitfalls of any technology project. And then explain why, despite that, it went on to do racist stuff anyway.
'Eleven Jinping': Indian TV fires anchor over blooper.[1]
That would be a valid reason, but I suspect a more culturally appropriate one: loss of reputation. We are sensitive to that.
My point was that this isn't something that only goes on in 'white' brains; it's more of a cultural issue. Most people in the West are incapable of pronouncing Asian names, and I don't see people making a big issue out of that.
Or do you think they had a team (on a completely different project, or perhaps at a different company) write a text-to-speech function that wasn't well suited for directions?
Streets have lots of numbers after all. People frequently have numbers in their name.
(Leave the software engineering to the software engineers)
I wager there is more text online about Louis XIV than about Malcolm X. Certainly there are many more books on that epic corner of French history than on one modern US leader. Then there are all the British kings. Point an AI at the internet and it would likely decide that roman numerals are most often pronounced as numbers rather than letters. Malcolm X would be a rare exception that might need to be hard-coded.
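To make that concrete, here is a minimal sketch of the kind of rule a TTS frontend might end up with if corpus frequency dominates: expand a trailing roman numeral to a number ("Louis XIV" → "Louis 14") unless the name is on a hard-coded exception list. The function names and the `EXCEPTIONS` set are purely illustrative assumptions, not anything Google actually ships.

```python
# Hypothetical sketch of a frequency-driven TTS rule: trailing roman
# numerals in names are read as numbers, with a hand-maintained
# exception list for names like "Malcolm X". Purely illustrative.

ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

# Names whose trailing "numeral" is actually a letter (assumed list).
EXCEPTIONS = {"Malcolm X"}

def roman_to_int(s):
    """Convert a roman numeral string to an integer."""
    total = 0
    for i, ch in enumerate(s):
        val = ROMAN[ch]
        # Subtractive notation: IV = 4, IX = 9, etc.
        if i + 1 < len(s) and ROMAN[s[i + 1]] > val:
            total -= val
        else:
            total += val
    return total

def expand_name(name):
    """Expand a trailing roman numeral unless the name is a known exception."""
    if name in EXCEPTIONS:
        return name
    head, _, tail = name.rpartition(" ")
    if head and tail and all(ch in ROMAN for ch in tail):
        return f"{head} {roman_to_int(tail)}"
    return name
```

The point of the sketch is that the frequency-based rule is right far more often than it is wrong, which is exactly why the exception list only grows when someone with influence notices a name it mangles.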
I personally have gotten bugs fixed at Google. How? Because I, a white man, spotted a bug, cared about it, and talked to white men of my acquaintance at Google who had enough power to get things done. How did I know them? From other tech companies created, run, and majority staffed by other white men.
Why am I in these networks at all? Well, my dad was a software developer and he introduced me early on. How did he get his start? His dad, an insurance company exec, brought him in to deal with this newfangled computer thing they had just gotten. That was in Milwaukee in the mid-1960s. I promise you that although Milwaukee had a significant black population, exactly zero of them were insurance company executives in the mid-1960s.
So what Allie Bland knew when she wrote her tweet was that she had no connection to Google through which she might get a glaringly obvious (to her) pronunciation issue fixed. And that, in her estimation, no black person did. I see no reason to think she was wrong.