Clearview AI violated privacy rights


A recent joint decision of the Privacy Commissioners of Canada, Quebec, British Columbia, and Alberta found that Clearview AI’s operations amounted to mass surveillance and violated privacy rights.

Clearview AI scraped over 3 billion images of people from social media and other websites. It sold services to law enforcement and others to compare images from sources such as security camera footage and identify the people in them. According to the decision, Clearview AI had provided its services to 48 organizations in Canada (including the RCMP) but stopped providing its services in Canada after the privacy investigation began.

Is Clearview AI just a search engine?

Clearview AI took the position that consent to use people’s images was not required, for several reasons. Among them: the information was publicly available. It also claimed to be merely a search engine.

The publicly available argument goes like this: we have put our images on social media or consented to them appearing elsewhere on the web, so any use of them should be fine. The flaw in that argument is that when we posted those pictures, we did not agree to have them harvested into a massive database so we could be tracked and identified anywhere. In other words, we did not consent to be in a 24/7 police lineup. Just because we share some personal information doesn’t mean anyone can do whatever they want with it. While privacy legislation does include consent exceptions for publicly available information, those exceptions are narrow and specific.

The publicly available argument is reminiscent of the creepy and short-lived “Girls Around Me” app from a decade ago.

Another way to look at it: police can’t obtain and keep our fingerprints, DNA, or mugshots unless we have been charged with a serious crime. So why should they be able to use a database of our images for a similar purpose?

Opting Out

Faced with public pressure, Clearview AI adopted a process allowing individuals to ask what information it held about them and then request an opt-out or deletion. But that opt-out mechanism was not enough to comply with privacy laws.

In a previous post, I documented my correspondence with Clearview AI to find out whether it had images of me and to request their deletion. The Commissioners’ press release says:

“The privacy authorities recommended that Clearview stop offering its facial recognition services to Canadian clients; stop collecting images of individuals in Canada; and delete all previously collected images and biometric facial arrays of individuals in Canada.”

“However, Clearview disagreed with the findings of the investigation and did not demonstrate a willingness to follow the other recommendations. Should Clearview maintain its refusal, the four authorities will pursue other actions available under their respective Acts to bring Clearview into compliance with Canadian laws.”

Guidelines for law enforcement

The press release notes that the Federal Privacy Commissioner’s investigation into the RCMP’s use of Clearview AI is still underway, and that the Commissioners are developing guidance on law enforcement use of facial recognition. Stay tuned for those results.

Not all countries think about this the same way, though. For anyone wanting a deeper dive into how Clearview AI is treated in the United States, see this New York Times article, which discusses the founder’s far-right ties and how U.S. police forces have embraced the app.

David Canton is a business lawyer and trade-mark agent with a practice focusing on technology issues and technology companies. Connect with David on Twitter and LinkedIn.
