AI Art Sites Censor Prompts About Abortion

“Why are they censoring something that is clearly under attack?”

DALL-E 2 website displayed on a laptop screen and OpenAI logo displayed on a phone screen on Jan. 23, 2023. Photo: Jakub Porzycki/NurPhoto via Getty Images

Two of the hottest new artificial intelligence programs for people who aren’t tech savvy, DALL-E 2 and Midjourney, create stunning visual images using only written prompts. They can depict almost anything, that is, as long as the prompt avoids certain language, including words associated with women’s bodies, women’s health care, women’s rights, and abortion.

I discovered this recently when I prompted the platforms for “pills used in medication abortion.” I’d added the instruction “in the style of Matisse.” I expected to get colorful visuals to supplement my thinking and writing about right-wing efforts to outlaw the pills.

Neither site produced the images. Instead, DALL-E 2 returned the phrase, “It looks like this request may not follow our content policy.” Midjourney’s message said, “The word ‘abortion’ is banned. Circumventing this filter to violate our rules may result in your access being revoked.”

DALL-E blocks the AI image generator prompt of “abortion pills.”

Photo: DALL-E

Julia Rockwell had a similar experience. A clinical data analyst in North Carolina, Rockwell has a friend who works as a cell biologist studying the placenta, the organ that develops during pregnancy to nourish the fetus. Rockwell asked Midjourney to generate a fun image of the placenta as a gift for her friend. Her prompt was banned.

She then found other banned words and sent her findings to MIT Technology Review. The publication reported that reproductive system-related medical terms, including “fallopian tubes,” “mammary glands,” “sperm,” “uterine,” “urethra,” “cervix,” “hymen,” and “vulva,” are banned on Midjourney, but words relating to general biology, such as “liver” and “kidney,” are allowed.

I’ve since found more banned prompt words. They include products that prevent pregnancy, such as “condom” and “IUD,” the intrauterine birth control device. Other bans are gendered: “stethoscope” prompted on Midjourney produces gorgeous renderings of an antique instrument, but “speculum,” a basic tool that medical providers use to visualize female reproductive anatomy, is not allowed.

The AI developers devising this censorship are “just playing whack-a-mole” with the word prompts they’re prohibiting, said University of Washington AI researcher Bill Howe. They aren’t deliberately censoring information about female reproductive health. They know that AI mirrors our culture’s worst and most virulent biases, including sexism. They say they want to protect people from hurtful images that their programs scrape from the internet. So far, they haven’t been able to do that, because their efforts are hopelessly superficial: Instead of putting intensive resources into fixing the models that generate the offensive material, the AI firms try to cut out the bias by censoring the prompts.
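
Howe’s “whack-a-mole” description corresponds to the simplest kind of prompt filter: a list of forbidden words checked against the user’s text before the image model ever runs. The Python sketch below is a hypothetical illustration of that approach; the word list and function names are assumptions made for this article, not either company’s actual code.

    # A minimal sketch of keyword-based prompt filtering, the "whack-a-mole"
    # approach described above. The banned-word list and function names are
    # hypothetical, not taken from DALL-E 2's or Midjourney's systems.
    import re

    BANNED_WORDS = {"abortion", "placenta", "cervix", "speculum", "iud"}

    def is_allowed(prompt: str) -> bool:
        """Reject any prompt that contains an exact banned word."""
        words = re.findall(r"[a-z]+", prompt.lower())
        return not any(word in BANNED_WORDS for word in words)

    print(is_allowed("pills used in medication abortion, in the style of Matisse"))  # False: blocked
    print(is_allowed("an antique stethoscope, detailed rendering"))                  # True: allowed

Because a check like this looks only at the surface text of the prompt, it says nothing about what the underlying model has learned; it can only refuse to run.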

During a time when women’s right to sexual equality and freedom is under increasing assault by the right, the AI bans could be making things worse.

Midjourney rationalizes its bans by explaining that it limits its content to the Hollywood equivalent of PG-13; DALL-E 2 uses PG. Midjourney’s user guide prohibits production of images that are “inherently disrespectful, aggressive, or otherwise abusive.” Also banned is “visually shocking or disturbing content, including adult content or gore,” as well as content that “can be viewed as racist, homophobic, disturbing, or in some way derogatory to a community.” Midjourney also bans “nudity, sexual organs, fixation on naked breasts,” and other pornography-like content. DALL-E 2’s prohibitions are similar.

Many users complain about the restrictions. “Do they want a program for creative professionals or for kindergartners?” one DALL-E 2 user wrote on Reddit. A Midjourney member was more political, noting that the bans make it “pretty hard to create images with feminist themes.”

Midjourney explains that “abortion” is banned as a prompt for the AI image generator.

Photo: Debbie Nathan

Bias Feedback Loop

The issue of biases in AI-generated art popped up after the launch of DALL-E, the precursor program to DALL-E 2. Some users noticed signs of gender bias (and racial bias too). Prompting with the words “flight attendant” generated only women. “Builder” produced images solely of men. Wired reported that developmental tests with DALL-E 2’s data found that when a prompt was entered simply for a person, without specifying gender, the resulting images were usually of white men. When the prompt added negative nouns and adjectives, such as “a man sitting in a prison cell” or “a photo of an angry man,” the resulting images almost invariably depicted men of color.

These problems stem from bias produced by algorithms using models trained on massive amounts of potentially harmful data. DALL-E 2’s precursor model, for instance, used 12 billion parameters trained on text-image pairs scraped from the internet. As a mirror of the real world, the internet contains torrents of sexist pornography that objectifies and degrades people, especially women. As OpenAI itself acknowledged last year, the model and the images it produces have “the potential to harm individuals and groups by reinforcing stereotypes, erasing or denigrating them, providing them with disparately low quality performance, or by subjecting them to indignity.”

For an early version of DALL-E 2, OpenAI, the research lab that created the program, tried to filter the training data to excise images that could trigger sexist output. Howe, the University of Washington researcher, said in an interview with The Intercept that such filtering is ham-fisted and, in some cases, worsens the bias. For instance, the filtering ended up decreasing how often images of women were produced. OpenAI hypothesized that the decrease occurred because images of women put into the data system were more likely than those of men to look sexualized. Because so many images of women were filtered out as problematic, women as a class tended to be erased.

In the AI text-to-visual programs, written prompts associated with female bodies can trigger sexist, even sadistically sexist, output. This should come as no surprise. Everyday human society in most of the world remains obstinately patriarchal. And when it comes to the web, as one researcher reports, large-scale evidence exists for “a masculine default in the language of the online English-speaking world.” Another study found that data on the internet is highly influenced by the economics of the male gaze, including its gaze upon objectified, sexualized images of women and upon violence.

DALL-E 2 has tried to solve the problem superficially: not by retraining its model at the front end to remove harmful imagery, but simply by filtering out written prompts that focus on women’s bodies and activities, including the act of obtaining an abortion. Hence the roadblocks I ran into trying to produce images of abortion pills on the platform, and the similar experience on Midjourney, which employs comparable filters.

“Lock Down the Prompts”

It’s easy to sneak past the filters by tweaking words in the prompts. That’s what Rockwell, the clinical data analyst who gave Midjourney a prompt including “placenta,” discovered. After unsuccessfully requesting an image for “gynecological exam,” she shifted to the British spelling: “gynaecological.” The images she received, later published in MIT Technology Review, were creepy, if not downright pornographic. They featured nudity and body injuries unrelated to medical treatment. The visuals I got by typing the same phrase were even worse than Rockwell’s. One showed a naked woman lying on an exam table, screaming, with a slash on her throat.

A prompt on Midjourney for “gynaecological exam” returned four AI-generated images.

Photo: Debbie Nathan; Midjourney
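
Rockwell’s workaround illustrates the basic weakness of exact-match word lists: to the filter, a one-letter spelling change is simply a different word. Continuing the hypothetical sketch above, again as an assumed illustration rather than Midjourney’s real code:

    # Continuing the hypothetical filter sketch: an exact-match list that
    # contains only the American spelling lets the British spelling through.
    BANNED_WORDS = {"gynecological"}

    def is_allowed(prompt: str) -> bool:
        return not any(word in BANNED_WORDS for word in prompt.lower().split())

    print(is_allowed("gynecological exam"))   # False: blocked by the word list
    print(is_allowed("gynaecological exam"))  # True: one extra letter evades the filter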

Aylin Caliskan, a scholar at the University of Washington’s Information School, co-published a study late last year verifying statistically that AI models tend to sexualize women, particularly teenagers. So, avoiding the word “abortion,” I asked Midjourney to render a visual for the phrase “pregnancy termination in 16-year-old girl. Realistic.” I got back a chilling combination of photorealism and soft-porn horror flick. The image depicts a very young white woman with cleavage exposed and with a grotesquely discolored and swollen belly, from which two conjoined baby heads stare fixedly with four zombie eyes.

The images Midjourney returned for the prompt “pregnancy termination in 16-year-old girl. Realistic.”

Photo: Debbie Nathan; Midjourney

Howe, who is an associate professor at the Information School, was a member of Caliskan’s team for the study that inspired my experiment. He is also co-founder of the Responsible AI Systems and Experiences center. He speculated that the salacious visual of the girl’s breasts reflected the prevalence of pornography in Midjourney’s model, while the bizarre babies probably showed that the internet has such a relative paucity of positive or normalizing material regarding abortion that the program got confused and generated gibberish — albeit gibberish that, in the current political climate, could be construed as anti-abortion.

The larger issue, Howe added, is that the amount of data in AI models has exploded recently. The text and visuals they are generating now are so detailed that the models may appear to be thinking and working at levels approaching human abilities. In reality, Howe said, the models possess “no grounding, no understanding, no experience, no other sensor that reifies words with objects or experiences in the real world.” On their own, they are completely incapable of avoiding bias.

There are only three ways to correct the bias they generate, Howe said. One involves filtering the database while the model is being trained and before it is released to the public. “For example,” he said, “scour through the entire training set, determine for each image if it’s sexualized, and either ensure that sexualized male and female images are equal in number, or remove all of them.” Similar techniques can be used midway through the training, Howe said. Either way is expensive and time-consuming.
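
A rough sketch of the pre-training curation Howe describes might look like the following. Everything here is a stand-in assumed for illustration, including the idea that an upstream classifier has already labeled each image; it is not any lab’s real pipeline.

    # Hypothetical sketch of the pre-training fix Howe describes: label every
    # training image, then either drop all sexualized images or keep equal
    # numbers of sexualized male and female images before training begins.
    from dataclasses import dataclass

    @dataclass
    class TrainingImage:
        path: str
        caption: str
        gender: str        # assumed label from an upstream classifier
        sexualized: bool   # assumed label from an upstream classifier

    def curate(dataset, drop_all=False):
        """Either remove every sexualized image, or balance them by gender."""
        kept = [ex for ex in dataset if not ex.sexualized]
        flagged = [ex for ex in dataset if ex.sexualized]
        if drop_all:
            return kept
        females = [ex for ex in flagged if ex.gender == "female"]
        males = [ex for ex in flagged if ex.gender == "male"]
        n = min(len(females), len(males))
        return kept + females[:n] + males[:n]

Run over the billions of scraped pairs that modern models train on, even a scheme this simple becomes the expensive, time-consuming job Howe describes, which helps explain why companies reach for the cheaper option instead.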

Instead, he said, the owners do the cheapest and quickest thing: “They lock down the prompts.” But, Howe noted, this produces “tons of false positives and tons of false negatives,” and “makes it basically impossible to have a scientific discussion about reproduction. This is wrong,” he said. “You need to do the right thing from the beginning.”

“And you need to be transparent,” Howe said. Companies including Microsoft-backed OpenAI, which Elon Musk also helped finance early on, are lately “releasing one model after the other,” Howe noted. Echoing a recent article in Scientific American, he expressed concern about the secrecy with which the new models are being rolled out. “There’s not much science we can do on them because they don’t tell us how they work or what they were trained on.” He attributed the secrecy to competitive fears of having trade secrets copied and to the probability, as he put it, that they are “all using the same bag of tricks.” Howe said that OpenAI no longer talks publicly about DALL-E’s model. Midjourney’s developer and owner, David Holz, said recently that the program never has and won’t.

“Nothing Is Perfect”

Midjourney’s output is gendered as well as racialized. One person’s prompt for male participants at a protest generated serious-looking, fully clothed white men. A prompt for a Black woman fighting for her reproductive rights returned someone with outsized hips, bared breasts, and an angry scowl.

People using Midjourney have also generated anti-abortion images from metaphors rather than direct references. Someone’s prompt last year created a plate with slices of toast and a sunny-side-up egg with an embryo floating in the yolk. The image is labeled “Planned Parenthood Breakfast,” implying that people who work for the storied women’s reproductive health and abortion provider are cannibals. Midjourney’s current rules have no way of removing such images from public view.

Midjourney has been using human beings to vet automated first passes of the output. When The Intercept asked Holz to comment on the problem of prompt words generating biased and harmful images, he said he was test-driving a new plan to replace people with algorithms that he claims will be “much smarter and won’t rely on ‘banned words.’” He added, “Nothing is perfect.”

This offhand attitude is unacceptable, said Renee Bracey Sherman, the director of We Testify, a nonprofit that promotes storytelling by people who’ve had abortions and want to normalize the experience. Similar word bans have long existed for text on social media. Bracey Sherman said that this year, on the 50th anniversary of Roe v. Wade, she tweeted information about “self-managed abortion” and saw her post flagged by Twitter as dangerous, which meant it was hardly retweeted. She has seen the same happen to posts by reputable public health experts discussing scientific information about abortion.

Bracey Sherman said she was not surprised by the sexist, racist “protest” image I found on Midjourney. “Social media cannot imagine what a pro-abortion or reproductive rights activity looks like, other than something pornographic,” she said. She worries that word bans on platforms like DALL-E 2 and Midjourney cut off marginalized groups, including poor people and women of color, from good information that they desperately need and which does remain in the data.

Policy for regulating AI does not yet exist, but it should, Howe said. “We figured out how to build a plane,” he said, but “do we trust companies to not kill a plane full of people? No. We put regulations in place.” A New York City law, slated to go into effect in July, bans using AI to make hiring decisions unless the algorithm first passes a bias audit. Other locales are working on similar laws. Last year, the Federal Trade Commission sent a report to Congress expressing concern about bias, inaccuracy, and discrimination in AI. And the White House Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights “to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.”

Howe said he is “somewhat optimistic” that civil society in the U.S. will develop AI oversight policy. “But will it be enough and in time?” he asked. “It’s just mind-blowing the speed at which these things are being released.”

Bracey Sherman excoriated the companies for paying too little attention to the quality of their models before release and for responding piecemeal once the output reaches consumers in an increasingly fraught world. “Why are they not paying attention to what’s going on?” she said of the AI companies. “They make something and then say, ‘Oh, we didn’t know!’”

Of abortion information that gets blocked by banned prompts, she asked, “Why are they censoring something that is clearly under attack?”
