Twitter announced on Friday that it will pay a financial “bounty” to users and researchers who help the social media site eliminate algorithmic bias. This would be “the industry’s first algorithmic bias bounty competition,” with prizes of up to $3,500, according to the San Francisco tech firm. The competition is patterned after the “bug bounty” programmes that some websites and platforms run to uncover security weaknesses and vulnerabilities, according to Twitter executives Rumman Chowdhury and Jutta Williams. “Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” Chowdhury and Williams wrote in a blog post. “We want to change that.”
According to the pair, the hacker bounty model holds promise for detecting algorithmic bias.
“We’re inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public,” they wrote. “We want to cultivate a similar community… for proactive and collective identification of algorithmic harms.”
The move comes amid rising concern about automated algorithmic systems that, despite their creators’ best efforts, can encode racial or other forms of bias.
Twitter, which launched an algorithmic fairness initiative earlier this year, said in May that it was abandoning an automatic image-cropping system after an investigation found bias in the algorithm that controlled the feature.
The messaging platform discovered “unequal treatment based on demographic disparities,” with white individuals and men favored over Black people and women, as well as an “objectification” bias that focused on a woman’s chest or legs, dubbed the “male gaze.”