After accusations, Twitter will pay hackers to find biases in its automatic image crops

Twitter is sponsoring a competition in the hope of finding biases in its image-cropping algorithm, and the best teams will receive cash rewards (via Engadget). Twitter hopes that by giving teams access to its code and image-cropping model, they will be able to identify ways the algorithm could cause harm (for example, cropping in a way that stereotypes or erases the image’s subject).

Teams that compete must submit a summary of their findings, along with a dataset that can be run through the algorithm to demonstrate the problem. Twitter will then allocate points based on the kind of harm discovered, its potential impact on people, and other factors.
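
To make the dataset requirement concrete, here is a minimal, hypothetical Python sketch of the kind of harness a team might submit: run a set of labeled images through a cropping model and compare how often each group’s subject survives the automatic crop. Everything here is an assumption for illustration (LabeledImage, saliency_crop, the 600x335 default crop size); a real entry would call Twitter’s released cropping model rather than the stand-in saliency_crop function.

# Hypothetical bias-probing harness: measure how often each group's subject
# stays inside the automatic crop. saliency_crop() is a placeholder for the
# real model.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple
import random

@dataclass
class LabeledImage:
    path: str
    group: str                               # label supplied by the team (e.g. demographic group)
    subject_box: Tuple[int, int, int, int]   # x, y, w, h of the subject in the image

def saliency_crop(path: str, crop_w: int, crop_h: int) -> Tuple[int, int]:
    """Stand-in for the real cropping model: returns the crop's top-left corner."""
    random.seed(path)                        # deterministic placeholder output per image
    return random.randint(0, 400), random.randint(0, 400)

def subject_retained(crop: Tuple[int, int], crop_w: int, crop_h: int,
                     box: Tuple[int, int, int, int]) -> bool:
    """True if the subject's bounding box falls fully inside the crop rectangle."""
    cx, cy = crop
    x, y, w, h = box
    return cx <= x and cy <= y and x + w <= cx + crop_w and y + h <= cy + crop_h

def retention_rate_by_group(images: List[LabeledImage],
                            crop_fn: Callable[[str, int, int], Tuple[int, int]],
                            crop_w: int = 600, crop_h: int = 335) -> Dict[str, float]:
    """Share of images per group whose subject survives the automatic crop."""
    totals: Dict[str, int] = {}
    kept: Dict[str, int] = {}
    for img in images:
        totals[img.group] = totals.get(img.group, 0) + 1
        crop = crop_fn(img.path, crop_w, crop_h)
        if subject_retained(crop, crop_w, crop_h, img.subject_box):
            kept[img.group] = kept.get(img.group, 0) + 1
    return {group: kept.get(group, 0) / n for group, n in totals.items()}

if __name__ == "__main__":
    demo = [
        LabeledImage("a1.jpg", "group_a", (100, 120, 200, 200)),
        LabeledImage("b1.jpg", "group_b", (350, 400, 200, 200)),
    ]
    print(retention_rate_by_group(demo, saliency_crop))

A large gap in retention rates between groups is the sort of finding a submission would then summarize, along with the images that produce it.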

The winning team will get $3,500, with $1,000 prizes awarded for the most innovative and most generalizable findings. That figure has sparked some debate on Twitter, with some users arguing it should have an extra zero. For comparison, Twitter’s standard bug bounty program would pay you $2,940 if you found a cross-site scripting bug that let you perform actions on someone else’s behalf (such as retweeting a tweet or image). You’d make $7,700 for an OAuth flaw that let you take over someone’s Twitter account.

Twitter had previously conducted its own research into its image-cropping algorithm, publishing a paper in May that examined how the system was biased in the wake of accusations that its preview crops were racist. Since then, Twitter has largely stopped cropping previews algorithmically, but the algorithm is still used on desktop, and a good cropping algorithm is a useful tool for a company like Twitter.

Opening the problem up to a competition lets Twitter gather feedback from a much wider range of people. For example, during a meeting the Twitter team held to discuss the competition, a team member said they were getting questions about caste-based biases in the algorithm, something software developers in California might not think to look for.

Twitter is also looking for more than just unintentional algorithmic bias; both unintentional and intentional harms carry point values on the scoring scale. According to Twitter, unintentional harms are problems that surface when the algorithm handles ordinary, well-intentioned images, while intentional harms are cropping behaviors that could be exploited by someone posting maliciously crafted images.

According to Twitter’s announcement blog post, the competition is separate from its bug bounty program; the company warns that if you submit a report about algorithmic biases outside of the competition, it will be closed and marked as not applicable. If you’re interested in participating, visit the competition’s HackerOne page for the rules, qualifications, and other details. Submissions are open until August 6th at 11:59PM PT, and the challenge winners will be announced on August 9th at the DEF CON AI Village.
