Highlights:
- Zoom, Twitter Hit by Allegations of Racial Bias in Algorithms, Twitter Says Work to Be Done
- Twitter seems to give preference to white people in image previews
- Zoom erases faces of black people when applying virtual backgrounds
US-based tech giants Twitter and Zoom were found over the weekend to have apparent racial biases in their visual algorithms. The allegations began when a Zoom user noticed that the video calling platform appeared to remove the heads of people with darker skin when they used a virtual background, while it did not do the same to people with lighter skin.
Ironically, the tweet posted to report the Zoom issue exposed a similar racial bias on Twitter itself, whose thumbnail cropping favoured the faces of white people over those of people with darker skin.
Twitter has responded to the ensuing outrage, saying it was clear it had more work to do.
The problem initially appeared to lie with Zoom’s virtual background algorithm. On Saturday, researcher Colin Madland posted a thread on Twitter highlighting how the video conferencing application’s face-detection algorithm allegedly erases black faces when a virtual background is applied.
Zoom did not respond to a request for clarification on the algorithm at the time of writing this article.
In the same tweet thread, when Madland posted photos of each user in the chat, Twitter’s image thumbnail cropping algorithm appeared to favour Madland over his black colleague.
In response to Madland’s observations, Twitter Chief Design Officer Dantley Davis said, “It’s 100 percent our fault. No one should say otherwise. Now the next step is fixing it.”
INVISIBLE MAN. @zoom_us virtual background amputates a Black colleague’s head until he places a pale globe behind.
[photos by @colinmadland shared with permission] pic.twitter.com/Y6jYFfVlOD
— Ruha Benjamin (@ruha9) September 19, 2020
Flipped the image….@Twitter is trash. pic.twitter.com/GxlNIEryFD
— Colin Madland (@colinmadland) September 19, 2020
Soon after, many Twitter users posted photos on the microblogging website that highlighted the apparent bias. One example came from cryptographic engineer Tony Arcieri, who, on Sunday, tweeted headshots of former US President Barack Obama and Senate Majority Leader Mitch McConnell to see which of the two the platform’s algorithm would highlight.
Arcieri arranged the headshots in several different layouts, but in every case, Twitter’s cropped preview showed McConnell over Obama.
"It's the red tie! Clearly the algorithm has a preference for red ties!"
— Tony “Abolish (Pol)ICE” Arcieri ? (@bascule) September 19, 2020
Well let's see… pic.twitter.com/l7qySd5sRW
However, once the engineer inverted the colours of the images, Obama’s photo showed up in the cropped preview.
Let's try inverting the colors… (h/t @KnabeWolf) pic.twitter.com/5hW4owmej2
— Tony “Abolish (Pol)ICE” Arcieri (@bascule) September 19, 2020
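For those curious how such a test is put together, a collage like Arcieri’s can be reproduced in a few lines of Python with Pillow. This is a rough sketch under stated assumptions: the filenames, the gap height, and the output names are illustrative choices, not details taken from the original tweets.

```python
# A rough sketch (Pillow) of assembling and colour-inverting a test collage.
# Filenames and the gap size are illustrative assumptions.
from PIL import Image, ImageOps

top = Image.open("mcconnell.jpg").convert("RGB")
bottom = Image.open("obama.jpg").convert("RGB")

# Match widths so the two portraits stack cleanly.
bottom = bottom.resize((top.width, int(bottom.height * top.width / bottom.width)))

# Insert a tall white gap so the preview crop cannot fit both faces
# and the algorithm is forced to "choose" one of them.
gap = 1500
canvas = Image.new("RGB", (top.width, top.height + gap + bottom.height), "white")
canvas.paste(top, (0, 0))
canvas.paste(bottom, (0, top.height + gap))
canvas.save("test_collage.png")

# The follow-up experiment: invert the colours and upload again.
ImageOps.invert(canvas).save("test_collage_inverted.png")
```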
Intertheory producer Kim Sherrell was among those who found that the algorithm’s preference flips when Obama’s image is swapped for a version with a higher-contrast smile.
Some users also found that the algorithm appears to focus on brighter complexions even in the case of cartoons and animals.
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
I tried it with dogs. Let's see. pic.twitter.com/xktmrNPtid
— – M A R K – (@MarkEMarkAU) September 20, 2020
Twitter spokesperson Liz Kelley responded to the tweets raising the racial bias allegations against the platform, saying, “We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our test, but it’s clear that we’ve got more analysis to do.” She added, “We’ll open source our work so others can review and replicate.”
In 2017, Twitter discontinued face detection for automatically cropping images in users’ timelines and deployed a saliency detection algorithm aimed at focusing on the most “salient” regions of an image.
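Twitter has not published the internals of that model, which is a trained neural network, but the general technique of saliency-based cropping can be sketched with a classical detector. The snippet below is a minimal illustration using the spectral-residual saliency implementation from opencv-contrib-python; the function name and crop dimensions are assumptions for the example, not Twitter’s actual parameters.

```python
# A minimal sketch of saliency-based cropping with a classical detector
# (opencv-contrib-python). Illustrative only; not Twitter's neural model.
import cv2

def saliency_crop(image_path: str, out_w: int = 600, out_h: int = 335):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # Compute a per-pixel saliency map (float values in [0, 1]).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = detector.computeSaliency(img)
    if not ok:
        raise RuntimeError("saliency computation failed")

    # Centre the crop window on the single most salient pixel,
    # clamping the window to the image bounds.
    _, _, _, (x, y) = cv2.minMaxLoc(sal_map)
    h, w = img.shape[:2]
    x0 = min(max(x - out_w // 2, 0), max(w - out_w, 0))
    y0 = min(max(y - out_h // 2, 0), max(h - out_h, 0))
    return img[y0:y0 + out_h, x0:x0 + out_w]

cv2.imwrite("cropped.png", saliency_crop("test_collage.png"))
```

A crop driven only by the brightest or highest-contrast region of a saliency map is exactly the kind of mechanism that could produce the skewed previews users reported, which is why Twitter’s promised bias analysis focuses on what its model treats as “salient”.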
Twitter engineer Zehan Wang tweeted, “We’ll look into this. The algorithm does not do face detection at all (it actually replaced a previous algorithm which did). We conducted some bias studies before release back in 2017. At the time we found that there was no significant bias between ethnicities (or genders).”