Research Says Facebook’s Ad Algorithm Perpetuates Gender Bias

A University of Southern California study provides still more evidence that the company's ad targeting illegally discriminates.

A woman uses her phone under a Facebook logo in 2015. Photo: Niall Carson/Press Association via AP Images

New research from a team at the University of Southern California provides further evidence that Facebook’s advertising system is discriminatory, showing that the algorithm used to target ads reproduced real-world gender disparities when displaying job listings, even among equally qualified candidates.

In fields from software engineering to sales to food delivery, the team ran pairs of ads promoting real job openings at comparable companies requiring roughly the same skills: one for a company whose existing workforce was disproportionately male and one for a company whose workforce was disproportionately female. Facebook showed the ads for the male-dominated companies to more men and the ads for the female-dominated companies to more women, even though the job qualifications were the same. The paper concludes that Facebook could very well be violating federal anti-discrimination laws.

“We confirm that Facebook’s ad delivery can result in skew of job ad delivery by gender beyond what can be legally justified by possible differences in qualifications,” the team wrote.

The work builds on prior research that left Facebook reeling. A groundbreaking 2019 study from one member of the team provided strong evidence that Facebook’s ad algorithm isn’t just capable of bias, but is biased to its core. Responding to that study, and in the wake of widespread criticism over tools that could be used to run blatantly discriminatory ad campaigns, Facebook told The Intercept at the time, “We stand against discrimination in any form. We’ve made important changes to our ad targeting tools and know that this is only a first step. We’ve been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic — and we’re exploring more changes.”

Based on this new research, it doesn’t appear that the company got very far beyond whatever that “first step” was. The paper — authored by USC computer science assistant professor Aleksandra Korolova, professor John Heidemann, and doctoral student Basileal Imana — revisits the question tackled in 2019: If advertisers don’t use any of Facebook’s demographic targeting options, which demographics will the system target on its own?

The question is a crucial one, given that Facebook’s control over who sees which ads might determine who is provided with certain vital economic opportunities, from insurance to a new job to a credit card. This control is exercised entirely through algorithms whose inner workings are kept secret. Since Facebook won’t provide any meaningful answers about how the algorithms work, researchers such as Korolova and her colleagues have had to figure it out from the outside.

This time around, the team wanted to preempt claims that biased ad delivery could be explained by the fact that Facebook showed the ads to people who were simply more qualified for the advertised job, a possible legal defense against allegations of unlawful algorithmic bias under statutes like Title VII, which bars employment discrimination on the basis of protected characteristics like race and gender. “To the extent that the scope of Title VII may cover ad platforms, the distinction we make can eliminate the possibility of platforms using qualification as a legal argument against being held liable for discriminatory outcomes,” the team wrote.

As in 2019, Korolova and her team created a series of advertisements for real-world job openings and paid Facebook to display these job listings to as many people as possible given their budget, as opposed to specifying a certain demographic cohort whose eyeballs they wanted to zero in on. This essentially left the decision of “who sees what” entirely up to Facebook (and its opaque algorithms), thus helping to highlight the bias engineered into Facebook’s own code.

Even when controlling for job qualifications, the researchers found that Facebook automatically funneled gender-neutral ads for gender-neutral jobs to people on the basis of their gender.

For example, Korolova’s team purchased Facebook ad campaigns to promote two delivery driver job listings, one from Instacart and another from Domino’s. Both positions are roughly equivalent in terms of required qualifications, and for both companies, “there is data that shows the de facto gender distribution is skewed”: Most Domino’s drivers are men, and most Instacart drivers are women. By running these ads with a mandate only to maximize eyeballs, no matter whose, the team sought to “study whether ad delivery optimization algorithms reproduce these de facto skews, even though they are not justifiable on the basis of differences in qualification,” with the expectation of finding “a platform whose ad delivery optimization goes beyond what is justifiable by qualification and reproduces de facto skews to show the Domino’s ad to relatively more males than the Instacart ad.” The results showed exactly that.

The team found that, left to its own devices, Facebook’s ad delivery algorithm took the Domino’s and Instacart listings, along with the ads for software engineering and sales associate jobs at other companies used in later experiments, and showed them to online audiences that essentially reproduced the existing offline gender disparities: “The skew we observe on Facebook is in the same direction as the de facto skew, with the Domino’s ad delivered to a higher fraction of men than the Instacart ad.” And since the experiments were designed to take job qualification out of the picture, the team says, they strengthen “the previously raised arguments that Facebook’s ad delivery algorithms may be in violation of anti-discrimination laws.” As an added twist, the team ran the same set of ads on LinkedIn, but saw no evidence of systemic gender bias.
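
To make the "skew" being measured concrete: the core comparison is whether one ad's delivered audience contains a significantly larger share of men than its counterpart's. The sketch below illustrates that kind of comparison with a standard two-proportion test; the impression counts are invented for demonstration and are not figures from the USC study.

```python
# Illustrative sketch only: compare the share of men shown two paired job ads.
# All numbers below are hypothetical, not data from the USC paper.
from math import sqrt
from statistics import NormalDist

def delivery_skew(men_a, total_a, men_b, total_b):
    """Return the male-share gap between ad A and ad B and a two-sided p-value."""
    p_a = men_a / total_a          # share of ad A's delivered audience that was men
    p_b = men_b / total_b          # share of ad B's delivered audience that was men
    p_pool = (men_a + men_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a - p_b, p_value

# Invented counts: ad A reached 1,100 men out of 2,000 people shown the ad,
# ad B reached 950 men out of 2,000, even though both jobs require the same skills.
gap, p = delivery_skew(1100, 2000, 950, 2000)
print(f"male-share gap: {gap:.1%}, p-value: {p:.2g}")
```

A significant gap between two ads whose jobs require the same qualifications is exactly the pattern the researchers attribute to the delivery algorithm itself rather than to differences in who is qualified.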

Facebook spokesperson Tom Channick told The Intercept that “our system takes into account many signals to try and serve people ads they will be most interested in, but we understand the concerns raised in the report,” adding that “we’ve taken meaningful steps to address issues of discrimination in ads and have teams working on ads fairness today. We’re continuing to work closely with the civil rights community, regulators, and academics on these important matters.”

Though the USC team was able to cleverly expose the biased results of Facebook ads, their methodology hits a brick wall when it comes to answering why exactly this happens. This is by design: Facebook’s ad delivery algorithm, like the rest of the automated decision-making systems it employs across its billions of users, is a black box, completely opaque to anyone outside the company; the workers who do have access are bound by nondisclosure agreements and sworn to secrecy. One possible explanation for the team’s findings is that the ad delivery algorithm trains itself based on who has clicked on similar ads in the past — maybe men tend to click on Domino’s ads more than women. Korolova says “skew due to prior user behavior observed by Facebook is possible” but that “if despite the clear indication of the advertiser, we still observe skewed delivery due to historical click-through rates (as we do!), this outcome suggests that Facebook may be overruling advertiser desires for broad and diverse outreach in a way that is aligned with their own long-term business interests.”
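
To illustrate the reasoning behind that explanation, and only as a toy sketch since Facebook's actual system is not public: if delivery is optimized around a click-through-rate estimate learned from historical clicks that already skew by gender, a gender-neutral ad with no demographic targeting can still end up in front of one gender more than the other. The names and rates below are invented.

```python
# Toy illustration of the hypothesized mechanism, not Facebook's actual system:
# delivery that chases historically observed click-through rates can reproduce
# a gender skew even when the advertiser asks for the broadest possible reach.
historical_ctr = {
    # Invented rates: past clicks on similar delivery-driver ads, by gender.
    ("pizza_delivery", "men"): 0.030,
    ("pizza_delivery", "women"): 0.018,
}

def choose_audience(ad_topic, candidates):
    """Rank eligible users purely by click rates learned from past behavior."""
    return sorted(
        candidates,
        key=lambda user: historical_ctr[(ad_topic, user["gender"])],
        reverse=True,
    )

# A perfectly balanced pool of eligible users...
candidates = [{"id": i, "gender": "men" if i % 2 else "women"} for i in range(10)]
# ...still produces a male-heavy front of the delivery queue, because the
# optimizer follows historical clicks rather than the advertiser's intent.
print([u["gender"] for u in choose_audience("pizza_delivery", candidates)])
```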
