In this case, the real bear has a blue ribbon and the "reconstructed" bear has a red ribbon. Is the ribbon in the fMRI data and the computer chose the wrong color, or did most of the images in the training set have ribbons and the computer just added one?
Imagine something like this being used in the future for something like https://en.wikipedia.org/wiki/Facial_composite . People may give too much importance to the details and arrest someone only because the computer imagined some detail, like the logo on the baseball cap.
Wow, we went from "tech not working" to "tech might kill someone" super fast here.
OP is right to be concerned. This kind of tech (magickal mind-reading AI?!) is going to be bought up by security agencies, who will not understand its limitations and will misuse it to accuse people of crimes they have nothing to do with.
There is ample precedent. For just one recent example, see the plans to use an "AI lie detector" based on discredited pseudoscience at EU borders:
https://theintercept.com/2019/07/26/europe-border-control-ai...
For example, please read this old article very carefully: "Police Are Using DNA to Generate 3D Images of Suspects They've Never Seen" https://www.vice.com/en/article/pkgma8/police-are-using-dna-... HN discussion https://news.ycombinator.com/item?id=33527901 (6 points | 3 months ago | 1 comment)
The picture is a high-resolution image that makes the system look accurate. They don't use the AI buzzword, but my guess is it's only a matter of time. Anyway, the important paragraph is
> Seeing the composite image with no context or knowledge of DNA phenotyping, can mislead people into believing that the suspect looks exactly like the DNA profile. “Many members of the public that see this generated image will be unaware that it's a digital approximation, that age, weight, hairstyle, and face shape may be very different, and that accuracy of skin/hair/eye color is approximate,” Schroeder said.
From what I understand, regular Stable Diffusion starts by generating random noise and then "hallucinating" modifications that remove a bit of that noise at each step. The more steps you let it run, the better the results.
So instead of starting with meaningless random noise, they're using the fMRI data as the starting point. But if you didn't have the text prompt, you wouldn't get the right image. If you were looking at a cat but told it you were looking at a house, you'd probably end up with a small house, similar to one in its training set, positioned roughly where the cat was located in the original image.
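To make that intuition concrete: here's a toy numerical sketch (not the paper's actual method, and nothing like real Stable Diffusion internals — the function, names, and numbers are all made up for illustration) of why the text prompt tends to dominate while the starting latent mostly biases early steps:

```python
import numpy as np

def toy_denoise(start_latent, prompt_target, steps=50, guidance=0.2):
    """Toy diffusion-style loop: each step nudges the latent a little
    toward the prompt's target. Purely illustrative -- shows how the
    starting point (random noise vs. an fMRI-derived latent) fades
    while the prompt conditioning accumulates over many steps."""
    x = start_latent.copy()
    for _ in range(steps):
        x = x + guidance * (prompt_target - x)  # prompt pulls the image
    return x

rng = np.random.default_rng(0)
prompt_house = np.ones(16)                     # stand-in for the prompt "a house"
latent_cat = rng.normal(0.0, 1.0, 16) + 0.5    # stand-in for an fMRI-derived start

out_long = toy_denoise(latent_cat, prompt_house, steps=50)   # converges to the prompt
out_short = toy_denoise(latent_cat, prompt_house, steps=3)   # still shaped by the start
```

After many steps the output lands essentially on the prompt target regardless of where it started, which is the "told it you were looking at a house" effect; with only a few steps the fMRI-derived start still shows through, which is roughly where the "positioned where the cat was" influence would come from.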
One open question in the field: how do you assess how well the AI's outputs align across different methods?