- cross-posted to:
- sneerclub@awful.systems
Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis::Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.
No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don’t specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.
It’s really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students yet never diverse Nazis.
If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it’s rightfully theirs.
The issue seems to be that the underlying code tells the AI that if some data set has too many white people or men (Nazis, ancient Vikings, Popes, Rockwell paintings, etc.), then it should make them diverse in race and gender.
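If that’s roughly what’s happening, it would be a blunt prompt-rewriting step in front of the image model, not anything in the weights. Here’s a minimal Python sketch of that kind of rewriter; Google hasn’t published Gemini’s actual pipeline, so the keyword list, the injected phrase, and the `rewrite_prompt` helper are all made up for illustration:

```python
import re

# Hypothetical suffix injected into prompts about people. The real phrase,
# if one exists, is not public.
DIVERSITY_SUFFIX = ", depicting a diverse range of genders and ethnicities"

def rewrite_prompt(prompt: str) -> str:
    """Naively append a diversity instruction to any people-related prompt."""
    if re.search(r"\b(person|people|man|woman|soldier|student)\b", prompt, re.I):
        return prompt + DIVERSITY_SUFFIX
    return prompt

print(rewrite_prompt("a 1943 German soldier"))
# -> "a 1943 German soldier, depicting a diverse range of genders and ethnicities"
```

The failure mode falls straight out of a rule like this: it fires on “soldier” with zero awareness of historical context, so a prompt where diversity is anachronistic gets the same treatment as one where it isn’t.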
What do we want from these AIs? Facts, even if they might be offensive? Or facts as we wish they would be for a nicer world?