In clear view: Confronting Canadian police use of facial recognition technology

March 1, 2022

6-minute read

If you blinked, you would have missed it: This past June, Canada’s national police force was found to have broken the law by using facial recognition technology in violation of the most basic requirements of Canada’s privacy laws.

While the story’s low profile could be chalked up to summer timing, or to attention being focused on the pandemic, it unfortunately fits a long pattern: police and intelligence agencies in Canada skirting privacy and other laws under the guise of protecting public safety and national security, with few repercussions beyond a soundbite from the relevant officials and ministers promising to do better and assuring us that, of course, the protection of rights is paramount.

It was on June 10, 2021, that the Privacy Commissioner of Canada and several provincial and territorial counterparts released their finding that the RCMP had violated the federal Privacy Act by using controversial—and illegal—facial recognition software provided by Clearview AI. The commissioners had already found, in February 2021, that Clearview AI itself broke Canadian law, first by collecting over 3 billion facial images without the consent of a single person to populate its system, and then by selling access to its facial recognition database and software to police agencies and private companies across the country.

As federal Privacy Commissioner Daniel Therrien said when the decision on Clearview AI was released, “What Clearview does is mass surveillance and it is illegal. It is an affront to individuals’ privacy rights and inflicts broad-based harm on all members of society, who find themselves continually in a police lineup. This is completely unacceptable.”

When the RCMP’s ties to Clearview AI were originally revealed in early 2020, the force first denied, then downplayed, its use of software from a company already mired in controversy: from its co-founder’s cozy relationship with white supremacists, to its fast-and-loose relationship with the law in the United States, to its secretive contracting with hundreds of police forces across that country. What eventually came out was that the RCMP had been using the Clearview system for months and had made hundreds of searches—for what, we don’t know—while hiding this from the Privacy Commissioner, the media and the public. In fact, it was eventually revealed that the RCMP has an almost 20-year history of using facial recognition technology, without ever disclosing what technology it has used or how.

The Mounties’ recent foray into facial recognition isn’t limited to Clearview AI. Last year, the Tyee also revealed that the RCMP in B.C. had contracted with a US company for access to its “terrorist” facial recognition database. The company promised a databank of 700,000 images of “terrorists”; who these people are, how they were determined to be terrorists, and how accurate the company’s information is are all impossible to assess. The RCMP won’t reveal why or how it used this system either.

We’ve known for years that law enforcement and intelligence agencies have been dodging rules around surveillance and privacy protections. While the problem dates back much further, it has grown exponentially in the 20 years since the start of the War on Terror, following the attacks on the United States on September 11, 2001, which triggered an all-out effort by national security agencies to collect as much information as possible, with little regard for privacy or other rights. In 2013, National Security Agency (NSA) contractor and whistleblower Edward Snowden revealed what many suspected was bubbling below the surface: the US spy agency, along with allies in countries like the UK and Canada, was running vast, covert mass surveillance operations of questionable legality, out of view of politicians, oversight bodies and the public. In 2016, it was revealed that the Canadian Security Intelligence Service (CSIS) had been illegally retaining troves of Canadians’ data, completely unrelated to any threat, in order to engage in data analysis. While that data was “walled off,” the federal government’s response was not to forbid such data collection but to legalize it, with the passage of the National Security Act in 2019. The act established a series of strict safeguards around private data about Canadians, but fewer for foreign information, and it created a near open season for the widespread collection of vaguely defined “publicly available information.”

So the concerns around the RCMP and facial recognition—and the use of facial recognition surveillance in general—didn’t appear in a vacuum; it’s part of a long, ongoing debate about surveillance, privacy and the use of new technology in the pursuit of national security.

That said, facial recognition technology, and in particular facial recognition surveillance, presents its own particular set of hazards. That’s why organizations that study the issue across Canada—ranging from academic institutes to think tanks to human rights and civil liberties groups—have called for, at a minimum, a moratorium on law enforcement and intelligence agency use of the technology until there has been further public study and appropriate rules are put in place. We at the International Civil Liberties Monitoring Group, along with nearly 70 other organizations and experts on the issue, have demanded an outright ban on the use of facial recognition for surveillance purposes, along with a moratorium on all other uses of the technology.

Why is there such an urgent need for action?

First, facial recognition allows for mass, indiscriminate and warrantless surveillance. Both real-time (live) and after-the-fact facial recognition surveillance systems subject members of the public to intrusive and indiscriminate surveillance. This is true whether it is used to monitor travellers at an airport, individuals walking through a public square, or activists at a protest.

While police are generally required to obtain a warrant to surveil individuals, whether online or in public places, gaps in current laws leave it unclear whether this requirement applies to facial recognition surveillance. These gaps may also allow police and other agencies to deploy mass surveillance in the hope of identifying a single person of interest—putting all of us under the microscope.

Second, there is a dangerous lack of regulation of facial recognition technology in Canada, including around the transparency and accountability of law enforcement and intelligence agencies. The Privacy Commissioner of Canada has warned that current privacy laws are a “patchwork” that does not address the risks posed by facial recognition technology. We’ve seen this play out as the RCMP and police forces across the country lied about whether they use facial recognition technology, without repercussion. Some police forces even said they weren’t aware that their officers had started using it. Municipal, provincial and federal oversight boards and elected representatives certainly weren’t aware. And even once it was revealed that police services were using this technology—some of it illegal, in the case of Clearview AI—there was no fallout or accountability.

Third, facial recognition systems are inaccurate and biased. Multiple independent studies have shown that the algorithms on which some of the most widely used facial recognition matching technology is based are biased and inaccurate. This is especially true for people of colour, who already face heightened levels of surveillance and profiling by law enforcement and intelligence agencies in Canada.

Even if the algorithms could be improved, there are concerns about the databases used to match and identify faces. Some police forces, for instance, use mugshot databases as the comparison dataset. But these databases are flawed: their reliability is questionable, and their use risks deepening stigmatization, especially given the disproportionate policing of communities of colour across Canada.

Finally, facial recognition technology is a slippery slope. The full scope of law enforcement use of facial recognition technology in Canada is unknown. What we do know is that multiple police forces, at all levels and all across the country, are using various versions of the technology for multiple purposes. We also know that they have access to its most intrusive forms. For example, the Canada Border Services Agency (CBSA) ran a pilot project using real-time facial recognition surveillance at Toronto’s Pearson Airport for six months in 2016, with little to no public notice beyond the Privacy Impact Assessment on its website. Meanwhile, the Canadian Security Intelligence Service (CSIS) has refused to confirm whether or not it uses facial recognition technology in its work.

Even if we choose to believe that the current use of facial recognition technology by Canadian law enforcement is limited, the fact that it is unregulated means that even limited use has the potential for serious harm. Its use normalizes its role in society, allowing facial recognition to spread and gain acceptance over time, until it can no longer be put back in the box.

We have seen this in other jurisdictions: limited use of facial recognition by law enforcement in other countries has typically led to greater and much broader rollouts of the technology.

In the UK, facial recognition is already being used at sports matches, street festivals, protests, and even on the streets to constantly monitor passers-by.

It is easy to imagine that without proper scrutiny, public debate and regulation, the same will eventually come to Canada.


To send a message to the Minister of Public Safety calling for a ban on facial recognition surveillance and legislative reform, visit iclmg.ca/banfr.

