Unapproved photos used for AI facial recognition

A report revealed that photos of people’s faces are routinely taken from websites, without the subjects’ consent, to help develop face recognition algorithms.

Background

Facial recognition is a system built to identify a person from an image or video, and it is used across many applications and industries. Cameras paired with artificial intelligence facial recognition software can estimate a person's age, gender, and ethnicity. The same facial features can also be used to unearth other personal data associated with an individual, such as other photos featuring that person, blog posts, social networking profiles, Internet behavior, and travel patterns.
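To make the identification step concrete, the sketch below compares a face in an unknown photo against a known reference photo using the open-source face_recognition Python library. The library choice and file names are illustrative assumptions, not part of the report; commercial systems work on the same underlying principle of comparing numerical face encodings.

```python
# Minimal sketch of one-to-one face identification, assuming the
# open-source `face_recognition` library (pip install face_recognition).
# File names are hypothetical placeholders.
import face_recognition

# Load a reference photo of a known person and a new, unknown photo.
known_image = face_recognition.load_image_file("known_person.jpg")
unknown_image = face_recognition.load_image_file("unknown_photo.jpg")

# Each detected face is reduced to a 128-dimensional numerical encoding.
known_encodings = face_recognition.face_encodings(known_image)
unknown_encodings = face_recognition.face_encodings(unknown_image)

if known_encodings and unknown_encodings:
    # Faces "match" when the distance between their encodings falls
    # under a tolerance threshold (0.6 by default).
    match = face_recognition.compare_faces(
        [known_encodings[0]], unknown_encodings[0]
    )[0]
    distance = face_recognition.face_distance(
        [known_encodings[0]], unknown_encodings[0]
    )[0]
    print(f"Match: {match}, distance: {distance:.3f}")
else:
    print("No face detected in one of the images.")
```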

Analysis

People’s online photos are being used without consent to train face recognition AI.

In January, IBM released a data set of almost a million photos that had been scraped from the photo-sharing website Flickr and then annotated with details such as skin tone. The company pitched this as an effort to reduce bias in face recognition. However, it did not obtain consent from the people pictured, and it is almost impossible to get the photos removed.

The technology, which is imperfect but improving rapidly, relies on algorithms that must be fed thousands of images of a diverse array of faces. Increasingly, those photos come from the internet, where they are swept up by the millions without the knowledge of the people who posted them, categorized by age, gender, skin tone, and dozens of other metrics, and shared with researchers at universities and companies.
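As an illustration of what such categorization might look like, the sketch below defines a hypothetical annotation record for one scraped photo. The field names and example values are assumptions for illustration only, not IBM's actual dataset schema.

```python
# Hypothetical annotation record for one scraped photo; the fields
# mirror the kinds of metrics described above (age, gender, skin tone),
# not any specific dataset's real schema.
from dataclasses import dataclass

@dataclass
class FaceAnnotation:
    photo_url: str    # URL the image was scraped from
    age_range: str    # e.g. "25-34"
    gender: str       # e.g. "female"
    skin_tone: str    # e.g. a Fitzpatrick-scale label such as "Type IV"

# Example: one record as it might be shared with researchers.
sample = FaceAnnotation(
    photo_url="https://example.com/photo/12345",
    age_range="25-34",
    gender="female",
    skin_tone="Type IV",
)
print(sample)
```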

“None of the people I photographed had any idea their images were being used in this way,” said Greg Peverill-Conti, a Boston-based public relations executive who has more than 700 photos in IBM’s collection, known as a “training dataset.” John Smith, who oversees AI research at IBM, said that the company was committed to “protecting the privacy of individuals” and “will work with anyone who requests a URL to be removed from the dataset.”

Such widespread, nonconsensual use of face data is a particular concern for minorities who could be profiled and targeted, experts and advocates say.

Assessment

Our assessment is that people’s faces are being used without their consent to power technology that could eventually be used to surveil them. We feel that, like many emerging technologies, facial recognition lacks adequate regulation to govern its use.