Sometimes this works out well, though. I like when I get, say, Black Sailor Moon:
Well, this isn't the first time AI has been racist.
I accidentally got DALL-E to be racist. I'm on a forum where we do endless AI Godzilla pictures (don't ask) and I did "Godzilla crosses the border illegally" hoping for some sort of spy thing. Instead, I got Godzilla in a sombrero next to a Trump-style border wall.
Yeah why not just expand the dataset it draws from to be less racially biased?
Ah, right, that would require effort.
Word, here's a prompt with DALL-E 3 that also did something similar.
Ethnically ambiguous guy duo!!!
Bromer Samson
Bromine Saturation
hand = white (yellow)
face = black
That explanation makes no fucking sense and makes them look like they know fuck all about AI training.
The output keywords have nothing to do with the training data. If the model in use has fuck all BME training data, it will struggle to draw a BME person regardless of what keywords are used.
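For what it's worth, the keyword thing is real but happens at request time, not training time. A rough sketch of the idea (all names here are hypothetical, this is not OpenAI's actual code): the service rewrites the user's prompt before the model ever sees it, so the injected text can even leak into the image as literal writing, which is what's happening in the screenshot above.

```python
import random

# Hypothetical sketch of request-time prompt augmentation.
# This rewriting is entirely separate from the model's training data:
# it changes the TEXT the model is given, not what the model learned from.

DIVERSITY_TERMS = ["ethnically ambiguous", "South Asian", "Black", "Hispanic"]

def augment_prompt(user_prompt: str) -> str:
    """Prepend a randomly chosen descriptor when the prompt mentions a
    person but no ethnicity (a crude heuristic, for illustration only)."""
    lowered = user_prompt.lower()
    mentions_person = any(w in lowered for w in ("man", "woman", "guy", "person"))
    already_specific = any(t.lower() in lowered for t in DIVERSITY_TERMS)
    if mentions_person and not already_specific:
        return f"{random.choice(DIVERSITY_TERMS)} {user_prompt}"
    return user_prompt
```

So "a guy holding a sign" might silently become "ethnically ambiguous a guy holding a sign", while a prompt with no people in it passes through untouched.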
And any AI person training their algorithms on AI-generated data is liable to get fired. That is a big no-no. Not only does it provide no new information beyond the original data, it also amplifies the mistakes the AI already makes.
any AI person training their algorithms on AI generated data is liable to get fired
though this isn’t pertinent to the post in question, training AI (and by AI I presume you mean neural networks, since there’s a fairly important distinction) on AI-generated data is absolutely a part of machine learning.
some of the most famous neural networks out there are trained on data they've generated themselves; AlphaGo Zero, for example, learned Go entirely from games it played against itself.
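To make the self-play point concrete, here's a toy sketch of the idea (this is NOT AlphaGo Zero's actual algorithm, which uses a neural network and tree search; this is a tabular toy on a trivial race-to-10 game, where players alternately add 1 or 2 and whoever hits 10 wins). The key property is the same: the only training data is games the policy generated itself.

```python
import random

def play_episode(policy, eps=0.1):
    """Self-play one game; return (state, move, outcome) triples, where
    outcome is +1 if the player who made the move ended up winning."""
    state, history = 0, []
    player = 0
    while state < 10:
        moves = [m for m in (1, 2) if state + m <= 10]
        if random.random() < eps or state not in policy:
            move = random.choice(moves)  # explore
        else:
            move = max(moves, key=lambda m: policy[state].get(m, 0))  # exploit
        history.append((player, state, move))
        state += move
        last_player = player
        player = 1 - player
    winner = last_player  # whoever reached 10 wins
    return [(s, m, 1 if p == winner else -1) for p, s, m in history]

def train(episodes=5000):
    """The training set is nothing but the policy's own games."""
    policy = {}
    for _ in range(episodes):
        for state, move, reward in play_episode(policy):
            policy.setdefault(state, {1: 0, 2: 0})
            policy[state][move] += reward  # reinforce moves that led to wins
    return policy
```

After a few thousand self-played games the learned table prefers the immediately winning move from state 8 (add 2, reach 10), which it discovered without ever seeing a single human game. The mode-collapse worry above is also visible here: whatever habits the policy picks up early get fed straight back into its own training data.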