Uploading personal photos to the internet can feel like letting go. Who else will have access to them, what will they do with them—and which machine-learning algorithms will they help train?

The company Clearview has already supplied US law enforcement agencies with a facial recognition tool trained on photos of millions of people scraped from the public web. But that was likely just the start. Anyone with basic coding skills can now develop facial recognition software, meaning there is more potential than ever to abuse the tech in everything from sexual harassment and racial discrimination to political oppression and religious persecution.

A number of AI researchers are pushing back and developing ways to make sure AIs can’t learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference.

“I don’t like people taking things from me that they’re not supposed to have,” says Emily Wenger at the University of Chicago, who developed one of the first tools to do this, called Fawkes, with her colleagues last summer. “I guess a lot of us had a similar idea at the same time.”

Data poisoning isn’t new. Actions like deleting data that companies have on you, or deliberately polluting data sets with fake examples, can make it harder for companies to train accurate machine-learning models. But these efforts typically require collective action, with hundreds or thousands of people participating, to make an impact. The difference with these new techniques is that they work on a single person’s photos.

“This technology can be used as a key by an individual to lock their data,” says Daniel Ma at Deakin University in Australia. “It’s a new frontline defense for protecting people’s digital rights in the age of AI.”

Hiding in plain sight

Most of the tools, including Fawkes, take the same basic approach. They make tiny changes to an image that are hard to spot with the human eye but throw off an AI, causing it to misidentify who or what it sees in a photo. The technique is essentially a kind of adversarial attack, in which small alterations to input data can force deep-learning models to make big mistakes.
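
As a rough illustration of that adversarial-attack idea, the sketch below (not Fawkes itself, just the general trick) uses the classic fast-gradient-sign method in PyTorch: a per-pixel nudge of a couple of intensity levels, computed from a model’s own gradients, is often enough to flip its prediction. The model, image tensor, and epsilon budget are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=2/255):
    """Nudge each pixel by at most `epsilon` in the direction that raises
    the classifier's loss; the change is imperceptible to a person but can
    flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()  # one signed-gradient step
    return perturbed.clamp(0, 1).detach()            # keep pixels in the valid range
```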

Give Fawkes a bunch of selfies and it will add pixel-level perturbations to the images that stop state-of-the-art facial recognition systems from identifying who is in the photos. Unlike previous ways of doing this, such as wearing AI-spoofing face paint, it leaves the images looking unchanged to humans.
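
The Fawkes paper calls this “cloaking”: it optimizes a perturbation that pulls a photo’s face embedding toward a decoy identity, so a model trained on the cloaked photo associates the person with the wrong features. Below is a heavily simplified sketch of that idea in PyTorch, not the tool’s actual code; the feature extractor, decoy image, and per-pixel budget are assumptions for illustration (the real tool bounds the change with a perceptual similarity metric instead).

```python
import torch

def cloak(feature_extractor, selfie, decoy, steps=100, lr=0.01, budget=0.03):
    """Optimize a small perturbation that drags the selfie's face embedding
    toward a decoy identity while keeping the pixel change tiny."""
    delta = torch.zeros_like(selfie, requires_grad=True)
    target_embedding = feature_extractor(decoy).detach()
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        cloaked = (selfie + delta).clamp(0, 1)
        # Pull the cloaked image's embedding toward the decoy identity.
        loss = torch.norm(feature_extractor(cloaked) - target_embedding)
        loss.backward()
        optimizer.step()
        # Keep the perturbation small so the photo still looks unchanged.
        delta.data.clamp_(-budget, budget)
    return (selfie + delta).clamp(0, 1).detach()
```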

Wenger and her colleagues tested their tool against several widely used commercial facial recognition systems, including Amazon’s AWS Rekognition, Microsoft Azure, and Face++, developed by the Chinese company Megvii Technology. In a small experiment with a data set of 50 images, Fawkes was 100% effective against all of them, preventing models trained on tweaked images of people from later recognizing those people in fresh, untweaked photos.

————

By: Will Douglas Heaven
Title: How to stop AI from recognizing your face in selfies
Sourced From: www.technologyreview.com/2021/05/05/1024613/stop-ai-recognizing-your-face-selfies-machine-learning-facial-recognition-clearview/
Published Date: Wed, 05 May 2021 19:13:49 +0000