Stability.AI, the company that develops Stable Diffusion, launched a new version of the AI model in late November. A spokesperson said the original model shipped with a safety filter, which Lensa does not appear to have used, since the filter would remove these outputs. One way Stable Diffusion 2.0 filters content is by removing images that are repeated frequently in the training data. The more often something is repeated, such as Asian women in sexualized scenes, the stronger the association becomes in the AI model.
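Stability.AI has not published the details of that filtering step, but the general idea, removing near-identical images before training so that no single motif is over-represented, can be sketched with off-the-shelf perceptual hashing. The folder name and distance threshold below are illustrative assumptions, not details from Stability.AI.

```python
# Illustrative sketch of deduplicating training images via perceptual hashing.
# This is NOT Stability.AI's actual pipeline; the path and threshold are placeholders.
from pathlib import Path

import imagehash          # pip install ImageHash
from PIL import Image

DATASET_DIR = Path("training_images")  # hypothetical folder of training images
MAX_DISTANCE = 5                       # Hamming distance below which two images count as duplicates

kept_hashes = []
kept_paths = []

for path in sorted(DATASET_DIR.glob("*.jpg")):
    h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash of the image
    # Skip the image if it is too close to one we already kept.
    if any(h - existing <= MAX_DISTANCE for existing in kept_hashes):
        continue
    kept_hashes.append(h)
    kept_paths.append(path)

print(f"kept {len(kept_paths)} unique images")
```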
Caliskan has studied CLIP (Contrastive Language-Image Pretraining), a system that helps Stable Diffusion generate images. CLIP learns to match images in a data set to descriptive text prompts. Caliskan found it rife with problematic gender and racial biases.
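To make that matching step concrete, here is a minimal sketch using the openly available CLIP weights through the Hugging Face transformers library. The image file and candidate captions are invented for illustration; they are not taken from Caliskan's study, which compares such similarity scores across many images and demographic groups rather than judging a single photo.

```python
# Minimal sketch: scoring how well candidate captions match an image with CLIP.
# The image path and captions are illustrative placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("portrait.jpg")  # hypothetical input photo
captions = ["a photo of a doctor", "a photo of a scientist", "a photo of a model"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into relative probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```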
“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important field, such as medicine, science, business,” Caliskan said.
Interestingly, my Lensa avatars were more realistic when my photos were run through the male content filter. I got avatars of myself with clothes(!) and neutral poses. In several images, I’m wearing a white coat that seems to belong to either a chef or a doctor.
But it’s not just a matter of training data. The companies developing these models and applications make active choices about how their data is used, said Ryan Steed, a doctoral student at Carnegie Mellon University who has studied bias in image-generation algorithms.
“Someone has to choose the training data, decide to build the model, decide whether to do something to mitigate these biases,” he said.
The app’s developers have made a choice: male avatars get to appear in space suits, while female avatars get cosmic thongs and fairy wings.
A spokesperson for Prisma Labs said the “sporadic sexualization” of photos happened to people of all genders, but in different ways.