After accusations, Twitter will pay hackers to find biases in its automatic image crops


The Verge 30 July, 2021 - 06:02pm

The competition’s winners will be announced at Def Con

Those competing will have to submit a description of their findings, along with a dataset that can be run through the algorithm to demonstrate the issue. Twitter will then assign points based on the kind of harms found, how many people they could potentially affect, and more.

The winning team will be awarded $3,500, and there are separate $1,000 prizes for the most innovative and most generalizable findings. That amount has caused a bit of a stir on Twitter, with a few users saying it should have an extra zero. For context, Twitter’s normal bug bounty program would pay you $2,940 if you found a bug that let you perform actions for someone else (like retweeting a tweet or image) using cross-site scripting. Finding an OAuth issue that lets you take over someone’s Twitter account would net you $7,700.

Twitter has done its own research into its image-cropping algorithm before — in May, it published a paper investigating how the algorithm was biased, after accusations that its preview crops were racist. Twitter has mostly done away with algorithmically cropping previews since then, but cropping is still used on desktop, and a good cropping algorithm is a handy thing for a company like Twitter to have.

Opening up a competition lets Twitter get feedback from a much broader range of perspectives. For example, the Twitter team held a Twitter Space to discuss the competition, during which a team member mentioned getting questions about caste-based biases in the algorithm, something that may not be noticeable to software developers in California.

It’s also not just subconscious algorithmic bias Twitter is looking for. The rubric has point values for both intentional and unintentional harms. Twitter defines unintentional harms as crops that could result from a “well-intentioned” user posting a regular image on the platform, whereas intentional harms are problematic cropping behaviors that could be exploited by someone posting maliciously designed images.

Twitter says in its announcement blog that the competition is separate from its bug bounty program — if you submit a report about algorithmic biases to Twitter outside of the competition, the company says your report will be closed and marked as not applicable. If you’re interested in joining, you can head over to the competition’s HackerOne page to see the rules, criteria, and more. Submissions are open until August 6th at 11:59PM PT, and the winners of the challenge will be announced at the Def Con AI Village on August 9th.



Twitter to offer 'bounty' to find algorithmic bias

GMA News Online 31 July, 2021 - 03:55am

WASHINGTON - Twitter said Friday it would offer a cash "bounty" to users and researchers to help root out algorithmic bias on the social media platform.

The San Francisco tech firm said this would be "the industry's first algorithmic bias bounty competition," with prizes up to $3,500.

The competition is based on the "bug bounty" programs some websites and platforms offer to find security holes and vulnerabilities, according to Twitter executives Rumman Chowdhury and Jutta Williams.  

"Finding bias in machine learning models is difficult, and sometimes, companies find out about unintended ethical harms once they've already reached the public," Chowdhury and Williams wrote in a blog post.

They said the hacker bounty model offers promise in finding algorithmic bias.

"We're inspired by how the research and hacker communities helped the security field establish best practices for identifying and mitigating vulnerabilities in order to protect the public," they wrote.

"We want to cultivate a similar community... for proactive and collective identification of algorithmic harms."

The move comes amid growing concerns about automated algorithmic systems, which can incorporate racial and other forms of bias despite efforts to make them neutral.

Twitter, which earlier this year launched an algorithmic fairness initiative, said in May it was scrapping an automated image-cropping system after its review found bias in the algorithm controlling the function.

The messaging platform said it found the algorithm delivered "unequal treatment based on demographic differences," with white people and males favored over Black people and females, and "objectification" bias that focused on a woman's chest or legs, described as "male gaze." -- Agence France-Presse


Twitter is Offering Cash Prizes to People Who Can Help Spot AI Bias

HYPEBEAST 30 July, 2021 - 03:57pm


The bounty-style competition will offer prizes to participants who build their own assessment of the code responsible for automatically cropping images on the platform.

In May, Twitter addressed months of feedback from users who said that the image-cropping mechanism skewed toward favoring white people over Black people via a “saliency algorithm,” which determines the most important part of an image as users scroll through the timeline. The same algorithm also exhibited an “objectification” bias, with crops tending to focus on a woman’s chest or legs as a salient feature.

For the competition, Twitter has shared its saliency model and the code used to generate an image crop. Entries should utilize quantitative and qualitative methods in their approach and include a description of results and code that details the harm of the algorithm, as well as relevant image files that demonstrate the bias.
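To illustrate the general idea, a saliency-based crop can be sketched as follows: score every region of the image, then center the crop window on the highest-scoring point. This is a hypothetical, simplified sketch, not Twitter's released model or code; the function names and the grid representation are illustrative assumptions.

```python
# Illustrative sketch of saliency-based cropping (not Twitter's actual code):
# a per-pixel saliency map is computed by a model, and the crop window is
# centered on the most salient point, clamped to the image bounds.

def crop_from_saliency(saliency, crop_h, crop_w):
    """Return (top, left) of a crop_h x crop_w window centered on the
    highest-scoring cell of a 2D saliency grid, clamped to the image."""
    rows, cols = len(saliency), len(saliency[0])
    # Locate the most salient cell.
    best_r, best_c, best = 0, 0, float("-inf")
    for r in range(rows):
        for c in range(cols):
            if saliency[r][c] > best:
                best_r, best_c, best = r, c, saliency[r][c]
    # Center the window on that cell, then clamp so it stays in bounds.
    top = min(max(best_r - crop_h // 2, 0), rows - crop_h)
    left = min(max(best_c - crop_w // 2, 0), cols - crop_w)
    return top, left

# Example: a tall image whose most salient region is near the bottom,
# so the preview crop discards the top of the image.
saliency = [[0.0] * 4 for _ in range(8)]
saliency[6][1] = 0.9
print(crop_from_saliency(saliency, 4, 4))  # -> (4, 0)
```

The bias question the competition probes is upstream of this step: which regions the model scores as salient in the first place, and whether those scores systematically favor some people over others.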

A panel of judges will review entries and award cash prizes to five winning individuals or teams, with a first-place reward of $3,500 USD and bonus awards for the Most Innovative and Most Generalizable entries. Winners will also be invited to present their work at the DEF CON AI Village hosted by Twitter on August 8.

The Algorithmic Bias Bounty Challenge is accepting entries now through August 6. More details on the competition are available on its site.



Twitter offers bug bounty to spot AI bias so it can fix its algorithms

CNET 30 July, 2021 - 11:00am

Twitter has a new way to rid itself of artificial intelligence bias: pay outsiders to help it find problems. On Friday, the short-message app maker detailed a new bounty competition that offers prizes of up to $3,500 for showing Twitter how its technology incorrectly handles photos.

Earlier this year, Twitter confirmed a problem in its automatic photo cropping mechanism, concluding the software favored white people over Black people. The cropping mechanism, which Twitter calls its "saliency algorithm," is supposed to present the most important section of an image when you're scrolling through tweets.

Twitter's approach to tackling algorithmic bias -- asking outside experts and observers to study its code and results -- innovates on bug bounties, which have historically been used for reporting security vulnerabilities. Twitter says its bias bounty is an industry first and hopes other companies will follow suit. The competition is intended to help Twitter's internal efforts.

"It sparks more people to be involved who maybe didn't have resources and free time," said Rumman Chowdhury, director of Twitter's Machine Learning Ethics, Transparency and Accountability program. "We want to start cultivating and creating a community of ethical AI hackers."

Tackling algorithmic bias has become an increasingly important concern for technology. AI can cause problems, including denigrating particular populations or reinforcing stereotypes, if the software isn't trained effectively. Twitter's project is designed to solidify standards around ideas like representational harm.

AI has revolutionized computing by teaching devices how to make decisions based on real-world data instead of rigid programming rules. That helps with messy tasks like understanding speech, screening spam and identifying your face to unlock your phone.

The algorithms that power AI, however, can be opaque and reflect problems in training data. That's led to problems like Google mistakenly labeling Black people as gorillas in photos. Fixing AI problems is important as we rely on the technology to run more and more of our digital lives. It also can be important within companies: Google acknowledges that its handling of an AI ethics issue hurt its program's reputation.

Twitter's algorithmic bias bounty is similar to programs that many tech companies now offer to find security problems in their products. For example, Google has paid $29 million for 11,055 vulnerabilities found in Android, Chrome and other Google products over the last decade.

Startup HackerOne is helping to run Twitter's algorithmic bias bounty competition, sharing rules and accepting submissions. The deadline for entries is 11:59 p.m. PT on Aug. 6, and Twitter will announce winners Aug. 8.

AI shortcomings can be exploited in many ways, including specially crafted images that could turn Twitter's saliency software into an unwitting accomplice of an outside attack. Researchers might want to examine other algorithms for bias -- the tweets Twitter chooses to spotlight or omit from your feed, for example. For the moment, Twitter's bias bounty is limited to its cropping algorithm.
