Hundreds of law enforcement agencies are using a little-known and secretive app that The New York Times says could “end privacy as we know it.”
The app is called Clearview AI. And it’s a facial recognition system built on a backbone of billions of images scraped from internet sites ranging from Facebook to Venmo, according to a new investigation by NYT privacy writer Kashmir Hill.
When a user uploads a photo to the app, it searches those billions of pictures and provides results containing publicly viewable images of that person, as well as the links where the images appeared.
And the NYT reports that police have already used the system to solve crimes ranging from shoplifting to murder. In one situation, the Indiana State Police were able to solve a case in 20 minutes.
But beyond the crime-fighting prowess of the app, there are undeniable privacy risks associated with the use of facial recognition by law enforcement.
Because of that, many technology companies capable of creating facial recognition technology have steered clear of it. Back in 2011, Google said it held back from creating facial recognition tech since it could be used “in a very bad way.”
Of course, Clearview’s image scraping may very well have violated the terms of service of many websites. And there are definite concerns about the security of the company’s database and servers — and whether employees could view their contents.
When the reporter asked police to scan her face as part of her investigation, Clearview could apparently see that officers were searching for her.
The NYT also had trouble pinning down who actually worked at the company. The only employee listed on LinkedIn turned out to be the app’s developer, Hoan Ton-That, using a fake name.
Despite the ethical and privacy questions, it appears that law enforcement agencies are rushing to use the service. More than 600 have signed up — even though they have “limited knowledge” of how it works and who’s behind it.
The New York Times’ full piece is well worth a read.