We actually have a reasonable way to test for human biases in AI - perturb the input slightly and see how the AI responds. For example, change the name or the gender and measure whether the output changes; that is how we check whether an AI is fair. Whether every AI system can be subjected to such tests is a different question. For example, how would you detect a human bias in a page-ranking algorithm? But where it matters, these systems can be tested, and we do test them.
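The perturbation idea above can be sketched in a few lines. This is a minimal illustration, not a real auditing tool: `score` is a hypothetical stand-in for the system under test (in a real audit it would call the live model), and the template, names, and threshold are made up for the example.

```python
def score(text: str) -> float:
    # Hypothetical stand-in for the model under test. It deliberately
    # favors one name so the perturbation test below has something to flag.
    return 0.9 if "John" in text else 0.7

def perturbation_test(template: str, values: list[str], threshold: float = 0.05):
    """Fill the template with each value, score each variant, and return
    the pairs whose score gap exceeds the threshold - a signal that the
    perturbed attribute (here, the name) is influencing the output."""
    scores = {v: score(template.format(v)) for v in values}
    names = list(scores)
    flagged = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            gap = round(scores[a] - scores[b], 3)
            if abs(gap) > threshold:
                flagged.append((a, b, gap))
    return flagged

flagged = perturbation_test("Resume of {}: 5 years of Python experience.",
                            ["John", "Jamal", "Maria"])
print(flagged)  # any flagged pair means the stand-in model treats names differently
```

The same pattern generalizes to gender, age, or any other attribute you can swap into the input while holding everything else fixed.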
Yes, true and fair. But how can we test the page-rank algorithm "ourselves"? Who is the "we" in "we do test them"? Is the public even asking for third-party examinations by transparent, public (or at least publicly funded) organizations? It seems we only get to "test" against the live system, and independent third-party examination seems practically impossible. Something with such far-reaching and invasive effects should be more accessible, at the very least.