You may be thinking narrowly about what is meant by "interpretation". Or rather, you may be conflating "interpretation of the predictions of an ML system", which is the common understanding in professional circles, with "interpretation of the real system whose aspects we are predicting with ML", which is a more colloquial frame. I hold you at no fault, as I have been ambiguous in my usage, and the two overlap substantially, particularly at the outputs of the ML system.
An alleged association between homosexuality and passport photos, for instance, is an interpretation of how humans exist and what they fundamentally are (read: physiognomy). Automating this association encodes a specific human-level interpretation of what is true about people into the ML system. But this joint distribution between homosexuality and how a face looks when photographed is bogus in ways that are hard to put into words; there is no underlying principle to it at all. And a system like this can easily be used for extreme harm in the wrong hands.
Nevertheless, someone sufficiently motivated would surely (1) consider this approach convenient, (2) have a model that is accurate with respect to the data once training completes, and (3) use the raw predictions, believing those "are what matters".
I find, not only for myself but for others as well, that awareness of the technical foundations opens up other perspectives for thinking about these issues, perspectives that synthesize the technical and social impacts of design decisions.