According to an investigation into algorithmic bias at Twitter, the company’s image cropping algorithm prefers younger, slimmer faces with lighter skin.
The company had previously apologized to users after reports of bias. This finding marks the conclusion of Twitter’s ‘algorithmic bug bounty’.
The company paid $3,500 to Bogdan Kulynych, a graduate student at EPFL in Switzerland, who demonstrated the bias as part of a competition held at the DEF CON security conference in Las Vegas. The algorithm is used to crop image previews so that they focus on the most interesting parts of pictures.
Kulynych was able to demonstrate the bias by artificially generating faces with different features and running them through Twitter’s cropping algorithm to see what the software would focus on.
Because the faces were artificially generated, it was possible to create pairs that were almost identical but differed in skin tone, width, gender or age. This showed that the algorithm favoured lighter, younger and slimmer faces over their counterparts.
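The comparison idea behind the experiment can be illustrated with a minimal sketch. Nothing below is Twitter’s actual code: the saliency scorer is a hypothetical placeholder and the “faces” are stand-in arrays rather than real generated images; a real test would score genuine image pairs with the cropping model itself.

```python
# Conceptual sketch of the pairwise-comparison approach described above.
# The saliency_score function is a placeholder, NOT Twitter's model.
import numpy as np


def saliency_score(image: np.ndarray) -> float:
    """Hypothetical stand-in for a saliency model's top score on an image.

    A real experiment would query the cropping model here; this placeholder
    uses mean brightness only so the example runs end to end.
    """
    return float(image.mean())


def crop_preference(face_a: np.ndarray, face_b: np.ndarray) -> str:
    """Return which of two near-identical faces the (placeholder) scorer favours."""
    return "A" if saliency_score(face_a) > saliency_score(face_b) else "B"


# Two synthetic "faces" that differ in a single attribute (brightness here,
# as a crude stand-in for varying one facial feature at a time).
rng = np.random.default_rng(0)
base = rng.random((64, 64))
lighter = np.clip(base + 0.2, 0.0, 1.0)
darker = np.clip(base - 0.2, 0.0, 1.0)

print("Preferred face:", crop_preference(lighter, darker))
```

Repeating such pairwise comparisons across many generated pairs, each varying only one attribute, is what allows the preference of the scoring model to be measured systematically.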
‘When we think about biases in our models, it’s not just about the academic or experimental… but how that also works with the way we think in society’, said Rumman Chowdhury, the head of Twitter’s AI ethics team.
‘I use the phrase “life imitating art imitating life”. We create these filters because we think that’s what “beautiful” is, and that ends up training our models and driving these unrealistic notions of what it means to be attractive.’
Twitter was criticized in 2020 over this same image cropping algorithm, with users alleging that it appeared to regularly focus on white faces over Black faces. The company initially apologized, saying: ‘Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do. We’ll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate’. Twitter’s own researchers, however, found only a mild bias in favour of white faces and of women’s faces.
The backlash prompted Twitter to launch the algorithmic harms bug bounty, which promised thousands of dollars in prizes for researchers who could illustrate harm done by the company’s image cropping algorithm.
Kulynych, the winner, said he had mixed feelings about the competition. ‘Algorithmic harms are not only “bugs”. Crucially, a lot of harmful tech is harmful not because of accidents, unintended mistakes, but rather by design. This comes from maximization of engagement and, in general, profit externalizing the costs to others. As an example, amplifying gentrification, driving down wages, spreading clickbait and misinformation are not necessarily due to “biased” algorithms’.
By Marvellous Iwendi.
Source: The Guardian