Twitter, Zoom Slapped With Allegations of Racial Bias in Algorithms; Twitter Says There Is More Work to Be Done

Twitter, Zoom Accused of Racial Bias in Algorithms

Highlights:

  • Zoom and Twitter hit by allegations of racial bias in their visual algorithms; Twitter says there is more work to be done
  • Twitter’s thumbnail-cropping algorithm appears to favour white faces in images
  • Zoom’s virtual backgrounds allegedly erase the faces of Black people

US-based tech giants Twitter and Zoom were accused over the weekend of racial bias in their visual algorithms. The allegations began when a Zoom user noticed that the video-calling platform appeared to remove the heads of people with darker skin when they used a virtual background, while it did not do the same to people with lighter skin.

Ironically, the very tweet posted to report the Zoom issue revealed a similar racial bias on Twitter, whose thumbnail-cropping algorithm favoured the face of a white person over that of a person with darker skin.

Twitter has since responded to the outrage, saying it is clear the company has more work to do.

The Zoom problem came to light first. On Saturday, researcher Colin Madland posted a thread on Twitter highlighting an issue with the video-conferencing app’s face-detection algorithm, which allegedly erases Black faces when a virtual background is applied.
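Virtual backgrounds of this kind typically work by running a person-segmentation model on each video frame and compositing the replacement background over every pixel the model does not classify as part of a person; a model that under-detects darker skin tones will therefore erase the face along with the room. Zoom’s pipeline is proprietary and unpublished, so the sketch below only illustrates the general technique, using Google’s MediaPipe selfie-segmentation model; the file names are hypothetical.

    # Minimal sketch of a segmentation-based virtual background, assuming
    # MediaPipe's selfie-segmentation model (pip install mediapipe opencv-python).
    # Zoom's actual pipeline is proprietary; this shows the general technique only.
    import cv2
    import mediapipe as mp
    import numpy as np

    segmenter = mp.solutions.selfie_segmentation.SelfieSegmentation(model_selection=1)

    frame = cv2.imread("webcam_frame.jpg")         # hypothetical input frame
    background = cv2.imread("virtual_office.jpg")  # hypothetical background image
    background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

    # The model returns a per-pixel probability that the pixel belongs to a person.
    result = segmenter.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    person_mask = result.segmentation_mask > 0.5

    # Composite: keep person pixels, replace everything else with the background.
    # If the model misses a face, that face is replaced too -- the failure mode
    # Madland described.
    output = np.where(person_mask[..., None], frame, background)
    cv2.imwrite("composited.jpg", output)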

Zoom did not respond to a request for clarification on the algorithm by the time of writing this article.

In the same thread, Madland posted photos of both participants in the call, and Twitter’s thumbnail-cropping algorithm appeared to favour his face over that of his Black colleague.

In response to Madland’s observations, Twitter Chief Design Officer Dantley Davis said, “It’s 100 percent our fault. No one should say otherwise. Now the next step is fixing it.”

Soon after, many Twitter users posted photos on the microblogging website highlighting the apparent bias. One example came from cryptographic engineer Tony Arcieri, who on Sunday tweeted headshots of former US President Barack Obama and Senate Majority Leader Mitch McConnell to see whether the platform’s algorithm would highlight the former or the latter.

Arcieri arranged the headshots in different positions within the images, but in every case Twitter’s preview showed McConnell over Obama.

However, once the engineer inverted the colours of the photos, Obama’s image showed up in the cropped view.

Tweet from Tony Arcieri (@bascule), September 19, 2020: “Let’s try inverting the colors… (h/t @KnabeWolf)” (https://twitter.com/bascule/status/1307454928806318080)
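The image-preparation side of this probe is easy to reproduce. The sketch below, a hedged example that assumes two local portrait files named mcconnell.jpg and obama.jpg, stacks the portraits into one tall test image and produces the kind of colour-inverted variant Arcieri posted; checking which face the preview picks still requires uploading the results to Twitter.

    # Build the kind of test image Arcieri used: two portraits separated by a
    # tall blank gap, plus a colour-inverted variant. Requires Pillow
    # (pip install Pillow). The file names are hypothetical.
    from PIL import Image, ImageOps

    top = Image.open("mcconnell.jpg").convert("RGB")
    bottom = Image.open("obama.jpg").convert("RGB")

    width = max(top.width, bottom.width)
    gap = 600  # the blank gap forces the cropper to pick one face for the preview

    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), "white")
    canvas.paste(top, ((width - top.width) // 2, 0))
    canvas.paste(bottom, ((width - bottom.width) // 2, top.height + gap))
    canvas.save("test_crop.png")

    # Inverting the colours changes which regions the model finds salient --
    # the step that flipped the result in Arcieri's follow-up tweet.
    ImageOps.invert(canvas).save("test_crop_inverted.png")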

Intertheory producer Kim Sherrell was also among those who found that the algorithm switches its preference once Obama’s image is replaced with a version featuring a higher-contrast smile.

Some users also found that the algorithm appears to favour brighter complexions even in the case of cartoons and animals.

Twitter spokesperson Liz Kelley responded to the tweets raising the racial bias allegations against the platform, saying, “We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our test, but it’s clear that we’ve got more analysis to do.” She added, “We’ll open source our work so others can review and replicate.”

In 2017, Twitter discontinued face detection for automatically cropping images in users’ timelines and deployed a saliency-detection algorithm aimed at cropping to the most “salient” regions of an image.
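Saliency models predict where a viewer’s eye is most likely to land and crop the image around the highest-scoring region. Twitter’s neural model has not been released, so the sketch below stands in with OpenCV’s classical spectral-residual saliency detector (from the opencv-contrib-python package) to show how a saliency map turns into a crop; the file name is hypothetical.

    # Illustration of saliency-driven cropping using OpenCV's spectral-residual
    # detector (pip install opencv-contrib-python). A classical stand-in for
    # Twitter's unreleased neural saliency model; the file name is hypothetical.
    import cv2
    import numpy as np

    image = cv2.imread("photo.jpg")

    # Compute a per-pixel saliency map with values in [0, 1].
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency_map = detector.computeSaliency(image)
    assert ok, "saliency computation failed"

    # Centre a fixed-height thumbnail strip on the most salient pixel. This is
    # the step where any bias in the saliency map becomes a biased crop.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    crop_h = image.shape[0] // 3
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    cv2.imwrite("thumbnail.jpg", image[top:top + crop_h, :])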

Twitter engineer Zehan Wang tweeted, “We’ll look into this. The algorithm does not do face detection at all (it actually replaced a previous algorithm which did). We conducted some bias studies before release back in 2017. At the time we found that there was no significant bias between ethnicities (or genders).”

About The Author

Robin Mishra
Robin Mishra is a content writer with TechTalkCounty who lives for technology news and gaming. He is fond of reading about new developments in the technology world and strives to get his hands on the latest gadgets in the market. When he is not busy with work, you can find him reading novels of different genres.