It looks as if Google is trying to tackle the problem in a somewhat piecemeal way, and has got it wrong. But from an AI perspective it's no worse than people with 8 fingers or whatever, just more shocking.
I would rather that those working on AI were aware of the issue and getting it wrong, than simply not caring at all. Our children don't deserve to live in a world even more distorted than the one we live in.
It’s hard to trust that people this intensely upset about Google’s issues aren’t seeking to benefit from leaving stereotypes and other biases from training intact.
1. They overcorrected the prompt to the point of ridiculousness, where any historical context is ignored and any user prompt is overridden by whatever the US corporate, kindergarten-level notion of inclusivity happens to be [1]
2. A clear and flat-out swing into the extreme. The prompt "Generate an image of X scientist/soldier/family/meeting/group of people" would work for literally every ethnicity, skin color, nationality, etc. except for the word "white", where it would refuse to generate the image because it was "not diverse enough".
Both of these are problematic enough on their own. Together they border on malicious.
[1] It really is kindergarten level. The images they generate are just 2-3 US-centric stereotypes of what people of different races would look like.
The idea that "This can't be a bug! They must have spent huge amounts of effort making deliberately ridiculous outputs that no one would value!" comes close to clinical paranoia.
But, to try and summarize:
- Issues with Gemini were not (in the author's opinion) a fluke of the initial instruction set, but a top-to-bottom design choice of the system.
- Rather than presenting the information you asked for, it's presenting what its makers think the world and the information you asked for should look like (i.e. their ideology).
- If you can't know how the product is trained and set up, you can't trust anything it gives you, because you'll never know if you're getting the information as-is or the world view of the company whose product you're using.
- "...ask yourself what would Search look like if the staff who brought you Gemini was tasked to interpret them & rebuild it accordingly? Would you trust that product? Would you use it? Well, with Google's promise to include Gemini everywhere, that's what we'll be getting.."
Gemini is prioritizing ideology over facts and profit over truth, and Google is explicitly doing it. It is not incidental.
But I don't know exactly what OP is talking about.
Something happened, but they don't say what. Humorously, the problem might be that Gemini "refused to answer certain prompts entirely".
Gemini produced images of non-white people in a lot of situations in which it shouldn't have.
I've seen it theorized(?) that, in order to counteract the disproportionately large number of pictures of white people in the training data, they basically added instructions to prompts after the fact in an effort to generate more non-white people, and totally over-corrected -- something like the sketch below.
feel free to correct me if I'm wrong, I haven't paid super close attention
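To make that theory concrete, here's a minimal toy sketch of what such a prompt-rewriting layer could look like. This is pure speculation, not Google's actual pipeline, and every name in it is invented; it's just the simplest mechanism consistent with the observed behavior:

    # Pure speculation: a toy version of the rumored prompt-rewriting layer,
    # not Google's actual code. All names are invented for illustration.
    DIVERSITY_SUFFIX = (
        " Depict people of diverse genders and ethnicities, unless the user"
        " explicitly specifies otherwise."
    )

    PEOPLE_KEYWORDS = {"person", "people", "man", "woman", "family",
                       "soldier", "scientist", "king", "pope"}

    def mentions_people(prompt: str) -> bool:
        # Crude keyword check; a real system would presumably use a classifier.
        return any(word in PEOPLE_KEYWORDS for word in prompt.lower().split())

    def rewrite_prompt(user_prompt: str) -> str:
        # Blindly append diversity instructions to any people-related prompt.
        if mentions_people(user_prompt):
            return user_prompt + DIVERSITY_SUFFIX
        return user_prompt

    # The over-correction failure mode: the suffix is appended even when the
    # prompt pins down a specific, historically grounded depiction.
    print(rewrite_prompt("a 1943 German soldier in uniform"))

If anything even vaguely like this was bolted on after training, with no awareness of context, the weird historical outputs fall out of it naturally: no deliberate sabotage required.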
The unique issue with Gemini is that it would flat out refuse to follow simple prompts such as "Please generate an image of a white family" because they "weren't diverse and inclusive" enough, but if you changed "white" to any other qualifier, it happily obliged.
They corrected the guidance in their prompt instructions. It became a non-issue.
Ask for White, and you get a picture without a single Caucasian.
Deity forbid you ask for a Caucasian family though - their A.I. police will pull you over for sensitivity training.
It's hard to see the engine as anything other than hot garbage after a few simple tests like that.
And Bing? https://www.gettyimages.ch/fotos/white-family
OTOH, if you think that "Please generate an image of a white family" is a good use of those behemoth language models and all the resources poured in...
Like, would it be so hard for it to just show the prompt it submits to dall-e? Help people learn how the system works?
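For what it's worth, exposing that wouldn't be technically hard. A hypothetical sketch (every name here is invented; no real API is being described): the generation endpoint just returns the final rewritten prompt alongside the image.

    # Hypothetical sketch of the transparency being asked for: return the
    # final rewritten prompt alongside the image so users can see what the
    # image model actually received. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class ImageResult:
        image_bytes: bytes
        submitted_prompt: str  # exactly what was sent to the image model

    def rewrite_prompt(prompt: str) -> str:
        return prompt  # stand-in for whatever rewriting layer the service uses

    def call_image_model(prompt: str) -> bytes:
        return b""  # stub standing in for the real image-generation backend

    def generate(user_prompt: str) -> ImageResult:
        final_prompt = rewrite_prompt(user_prompt)
        return ImageResult(call_image_model(final_prompt), final_prompt)

Whether to surface it is a product decision, not an engineering one.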
Re: the Twitter post, it reads like an "I'm leaving social media" post where the author then waits to see all the engagement -- all the "no, don't leave, we love you!" nonsense. Or maybe it's the radicalization origin story of a tech person: "oh, they trained the data to be woke from the beginning, rawr, I'm gonna crusade against this!" sort of thinking.
It's an AI model, who cares? It's not like anyone is getting their facts from it, and if they are, they need to be shown the folly of that. They're tools. I find them best at reasoning somewhat well about code, and that's about it.
This is a weird parallel to similar controversies in fiction.
So presumably an AI image generator generating images of historical figures in situations they never experienced, wearing clothes they never wore, in places they never visited, at times they didn't live through, using tools they never heard of, meeting people whose lives didn't overlap, saying things they never said in languages they never spoke, etc. is fine, but skin colour is a bridge too far because "historical realism" is soooooo important when using an AI image generator.
People at large are definitely getting their facts from it. Just like a non-negligible number of people consume fake news as facts.
When asked to generate white people, it said it's not allowed to. [0]
[0] https://www.vice.com/en/article/wxdawn/the-ai-that-draws-wha...
Ok, so probably we agree that the product needs to try to be “good”. Cue a million opinions about what “good” is. Whatever comes out is the result of a value judgment, there is no getting around it. Same issue comes up with images from Gemini in this case, or any other generative AI product. You don’t actually want the AI to be unbiased because that output would be hot garbage.
Now we don't even get log probs, one of the most powerful features for evaluating the output of these models.
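For context, here's a minimal sketch of what per-token log-prob inspection looks like on an API that still exposes it. This uses OpenAI's chat completions API purely as an example (it assumes the openai Python package and an OPENAI_API_KEY in the environment; the model and prompt are arbitrary):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",   # arbitrary example model
        messages=[{"role": "user",
                   "content": "Is the sky blue? Answer yes or no."}],
        logprobs=True,         # return per-token log-probabilities
        top_logprobs=5,        # plus the 5 most likely alternatives each step
        max_tokens=1,
    )

    # How confident was the model, and what else did it consider?
    for tok in resp.choices[0].logprobs.content:
        print(f"chose {tok.token!r} (logprob {tok.logprob:.3f})")
        for alt in tok.top_logprobs:
            print(f"  alt {alt.token!r}: {alt.logprob:.3f}")

That per-token distribution is what lets you quantify how confident a model is and spot places where it was steered away from its natural completion; without it you're left eyeballing final outputs.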
If you made an unbiased Gemini and asked it for images of German soldiers, it probably wouldn't output an image at all, and if it did, it would probably be porn.
Side note: as a person living outside the US, I find the US-focused discourse's obsession with race and skin color fascinating. It's almost like people obsessing over the color of a button on the home page when the page itself takes a minute to load.
I've been reading Google's Gemini damage control posts. I think they're simply not telling the truth. For one, their text-only product has the same (if not worse) issues. And second, if you know a bit about how these models are built, you know you don't get these "incorrect" answers through one-off innocent mistakes. Gemini's outputs reflect the many, many FTE-years of labeling efforts, training, fine-tuning, prompt design, and QA/verification -- all iteratively guided by the team who built it. You can also be certain that before releasing it, many people tried the product internally, that many demos were given to senior PMs and VPs, that they all thought it was fine, and that they all ultimately signed off on the release. With that prior, the balance of probabilities is strongly against the outputs being an innocent bug -- as @googlepubpolicy is now trying to spin it: Gemini is a product that functions exactly as designed, and an accurate reflection of the values of the people who built it.
Those values appear to include a desire to reshape the world in a specific way -- one so strong that it allowed the people involved to rationalize to themselves that it's not just acceptable but desirable to train their AI to prioritize ideology ahead of giving users the facts. To revise history, to obfuscate the present, and to outright hide information that doesn't align with the company's (staff's) impression of what is "good". I don't care whether some of that ideology aligns with your or my thinking about what would make the world a better place: for anyone with a shred of awareness of human history it should be clear how unbelievably irresponsible it is to build a system that aims to become an authoritative compendium of human knowledge (remember Google's mission statement?), but which actually prioritizes ideology over facts. History is littered with those who tried this sort of moral flexibility "for the greater good"; rather than helping, such attempts typically resulted in decades of setbacks (and tens of millions of victims).
Setting social irresponsibility aside, in a purely business sense it is beyond stupid to build a product that will explicitly put your company's social agenda before the customer's needs. Think about it: G's Search -- for all its issues -- has been perceived as a good tool, because it focused on providing accurate and useful information. Its mission was aligned with the users' goals ("get me to the correct answer for the stuff I need, and fast!"). That's why we all use(d) it. I always assumed Google's AI efforts would follow the same pattern, which would transfer over the user base & lock in another 1-2 decades of dominance.
But they've done the opposite. After Gemini, rather than as a user-centric company, Google will be perceived as an activist organization first -- ready to lie to the user to advance their (staff's) social agenda. That's huge. Would you hire a personal assistant who openly has an unaligned (and secret -- they hide the system prompts) agenda, who you fundamentally can't trust? Who strongly believes they know better than you? Who you suspect will covertly lie to you (directly or through omission) when your interests diverge? Forget the cookies, ads, privacy issues, or YouTube content moderation; Google just made 50%+ of the population run through this scenario and question the trustworthiness of the core business and the people running it. And not at the typical financial ("they're fleecing me!") level, but ideological level ("they hate people like me!"). That'll be hard to reset, IMHO.
What about the future? Take a look at Google's AI Responsibility Principles (ai.google/responsibility…) and ask yourself what would Search look like if the staff who brought you Gemini was tasked to interpret them & rebuild it accordingly? Would you trust that product? Would you use it? Well, with Google's promise to include Gemini everywhere, that's what we'll be getting (technologyreview.com/2024/02/08/108…). In this brave new world, every time you run a search you'll be asking yourself "did it tell me the truth, or did it lie, or hide something?". That's lethal for a company built around organizing information.
And that's why, as of this weekend, I've started divorcing my personal life from the Google ecosystem and taking my information out of it. It will probably take ~a year (having invested in nearly everything, from Search to Pixel to Assistant to more obscure things like Voice), but it has to be done. Still, really, really sad...
That there are bugs in AI training? That when they tried to retrain their model to remove one type of bias, they accidentally went too far the other way?
What's his theory? That Google deliberately "by hand" forced their programs to make ridiculous images so that they would be mocked by absolutely everyone? Does the writer think that people who care about racial diversity are going to be somehow pleased by pictures of Black Nazis?
It's paranoia, but it's also clickbait. We shouldn't give these losers a platform.
> We shouldn't give these losers a platform.
Oh look, the truth fell out of you anyhow.