Decisions in the UK and Australia, and lawsuits in the United States, could force facial-recognition providers to remove data from their machine-learning models.

New York-based Clearview AI is paying the price for launching a facial-recognition service built on publicly posted pictures: the company has become the focus of numerous privacy investigations and lawsuits alleging that it violated individuals' rights by collecting photos online and making them searchable.

On Monday, the United Kingdom's top privacy official announced a provisional fine of more than £17 million (about US $26.6 million) over the company's collection of facial data from images posted online without the subjects' consent. The ruling, which stems from a joint investigation with the Office of the Australian Information Commissioner (OAIC), also ordered the company to stop processing the data of UK citizens. A separate ruling is expected from the Australian government.

The decision comes three months after an Illinois court ruled that a lawsuit against Clearview AI for allegedly violating the state's Biometric Information Privacy Act (BIPA) could continue, rejecting a variety of legal defenses raised by the company.

"The Court favors [allowing the application of BIPA], fully recognizing that this may have an effect on Clearview's business model," Judge Pamela McLean Meyerson wrote in her ruling. "Inevitably, Clearview may experience 'reduced effectiveness' in its service as it attempts to address BIPA's requirements. That is a function of having forged ahead and blindly created billions of faceprints without regard to the legality of that process in all states."

Policy Catches Up With Technology
The privacy cases highlight the problems that occur when policy finally catches up to technology. BIPA, which Illinois passed in 2008 following the meltdown of the fingerprint-biometrics service Pay By Touch, requires that a private entity inform individuals when it intends to use biometric information or identifiers, set specific terms and uses for that information, and obtain permission from the subjects. The American Civil Liberties Union (ACLU) sued Clearview AI in May 2020 on behalf of Illinois residents, who under the law must be notified of any biometric data collection.

"[T]he involuntary capture of biometric identifiers — which cannot be changed — can pose greater risks to an individual’s security, privacy, and safety than the capture of other identifiers, such as names and addresses," the ACLU stated. "And capturing an individual’s faceprint — akin to generating their DNA profile from genetic material unavoidably shed on a water bottle, but unlike the publication or forwarding of a photo — is conduct, not speech, and so is appropriately regulated under the law."

The travails of Clearview AI underscore the problems that firms face when forging ahead with innovative technology. While the US does not have a federal biometric privacy law, five states have already passed such legislation, although only two of those laws have real teeth, such as the ability to bring private legal actions, says Christopher Ward, a partner with the law firm Foley & Lardner LLP who represents businesses and employers defending against biometrics-related lawsuits. In addition to Illinois, California will allow private lawsuits starting in 2022.

"The Illinois law is where all the action is — in the short term, the primary focus of the legal arena is going to be a money grab until the gravy train runs out," Ward says, adding that business law will always trail behind technology. "The law moves a lot more slowly than technology does, and we are still working on a lot of wage and hourly employment issues dating back to the New Deal."

A Trio of Lawsuits
At least three lawsuits currently target Clearview AI in Illinois courts, while Facebook settled a separate Illinois lawsuit over identifying people as part of its "tag suggestions" feature. Some 21 other states are considering, or have considered, legislation regarding the collection of biometric information and the use of biometric identifiers, Ward and an associate wrote in a legal analysis.

Despite the initial ruling in the Illinois court, the company raised another $30 million in a Series B funding round and is now valued at $130 million.

The company has pushed the boundaries of using the online world to affect the real world, where pictures posted on social media can suddenly lead to the identification of participants in the Jan. 6 Capitol riot. The UK Information Commissioner argued that people need to be aware of, and consent to, how their information is used.

"UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people's legal protections are respected and complied with," UK Information Commissioner Elizabeth Denham said in a statement. "[T]he evidence we've gathered and analyzed suggests Clearview AI Inc. were and may be continuing to process significant volumes of UK people’s information without their knowledge."

The UK ruling, issued as part of a preliminary enforcement notice, gives Clearview AI the opportunity to respond to and refute the allegations before a final decision. The company has already stopped doing business in the country.

"The UK ICO Commissioner's assertions are factually and legally incorrect," Clearview AI’s UK attorney Kelly Hagedorn, said in a statement sent to Dark Reading. "The company is considering an appeal and further action. Clearview AI provides publicly available information from the internet to law enforcement agencies."

The impact on facial-recognition databases is not yet clear. Nearly half of the 42 federal agencies that employ law enforcement officers currently use facial-recognition technology, according to a June 2021 report by the Government Accountability Office (GAO). Both the Illinois court ruling and the UK privacy commissioner's preliminary ruling raise the possibility that facial-recognition providers will need the permission of subjects before using their images as training data for the machine-learning models that power the technology.
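
Neither ruling spells out how a consent requirement would be enforced in software, but in engineering terms it amounts to gating the training pipeline on recorded consent. The Python sketch below is a hypothetical illustration of that gate, not Clearview AI's actual pipeline; the FaceRecord schema and consented_training_set filter are assumptions made for demonstration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class FaceRecord:
    """One candidate training image plus its consent metadata (hypothetical schema)."""
    subject_id: str
    image_path: str
    consent_given: bool                    # explicit opt-in recorded for this use
    consent_timestamp: Optional[datetime]  # when consent was captured, if ever

def consented_training_set(records: list[FaceRecord]) -> list[FaceRecord]:
    """Keep only images whose subjects gave explicit, recorded consent.

    Images lacking consent never reach faceprint generation or model
    training -- roughly the behavior BIPA-style rules would require.
    """
    return [r for r in records if r.consent_given and r.consent_timestamp is not None]

# Hypothetical usage: filter before any faceprint is generated.
records = [
    FaceRecord("u1", "imgs/u1.jpg", True, datetime.now(timezone.utc)),
    FaceRecord("u2", "imgs/u2.jpg", False, None),  # scraped image, no consent
]
training_set = consented_training_set(records)
assert all(r.consent_given for r in training_set)  # only u1 survives
```

A production system would also have to honor withdrawn consent, which is what makes the prospect of removing data from already-trained models, as the rulings contemplate, technically painful: deleting a source record is easy, but purging its influence from a trained model may require retraining.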

San Francisco; Portland, Oregon; and Portland, Maine, have banned the use of facial recognition in their cities.

Companies need to be aware of their risk before using facial recognition, attorney Ward says. "So far, the compliance piece of using this technology is not all that difficult" for businesses, he says. "You just need the proper notices and consent. It is really just an issue of having the knowledge of what you need to do, or having good advisers as part of your team."

About the Author

Robert Lemos, Contributing Writer

Veteran technology journalist of more than 20 years. Former research engineer. Written for more than two dozen publications, including CNET News.com, Dark Reading, MIT's Technology Review, Popular Science, and Wired News. Five awards for journalism, including Best Deadline Journalism (Online) in 2003 for coverage of the Blaster worm. Crunches numbers on various trends using Python and R. Recent reports include analyses of the shortage in cybersecurity workers and annual vulnerability trends.
