THEY LOVE TO KEEP AN EYE ON US. Bosses and the police always have. But not the way they can now.
New digital tools powered by AI (artificial intelligence) allow them to keep watch on many more of us, in many more places, at many more times, in many more ways, for about as long as they want.
The good news is that these digital surveillance tools help us combat the coronavirus. The worry is what else they will be used for once the virus is under control.
Workers on side
Workers are glad to have employers use digital tools such as thermal scanners, contact-tracing wearables and artificial intelligence-powered video surveillance software that monitors social distancing to keep workplaces safe.
Maclean’s magazine reports: “The Laborers’ International Union of North America is ... working with Richmond Hill, Ontario-based startup Facedrive to issue wristbands loaded with a contact tracing app to construction workers.”
“It’s the tool that enables them to feel safe in showing up that day,” says Kevin Davies, the chief marketing officer for Provision Analytics.
Toyota Canada credits several new digital procedures for the fact that, as of October, it had not recorded a single case of the virus being transmitted in its plants since bringing its 8,500 employees back to work in May. Whether that success is entirely due to the new digital tools, however, is an open question.
Unlikely to be removed
Technology experts are asking what will happen after the virus is gone and the security cameras remain wired into the walls.
People working from home, meanwhile, face digital monitoring of a different kind. Software cannot be used to keep home workers any safer, but it can be used, and is being used, to tattle on those who aren’t at their computers during work hours. There is a huge demand for this kind of software.
ActivTrak, a company that sells software that monitors how much time workers spend in various apps and includes an option to take screenshots at set intervals, says interest from new clients is double pre-pandemic levels.
Hubstaff, which provides similar software, says the number of Canadian customers it serves increased 293 per cent year over year, from March 2020 to early October.
“There’s a big risk that the pandemic is leading us down this route of more extensive and long-term data collection that will eventually be unconnected to the current COVID-19 crisis,” says Sarah Villeneuve, a policy analyst at a think tank at Ryerson University in Toronto. “It might set us up for workplace environments where this type of software is normal.”
Canadian employers have the right to introduce monitoring technology as long as their workers consent to it, but depending on employers to be scrupulous about workers’ rights is always perilous.
Feeling safe is not being safe
Brenda McPhail, director of the Canadian Civil Liberties Association’s Privacy, Surveillance and Technology Project, says feeling safe and being safe are two different things.
“In the security context, we talk about security theatre. I think increasingly what we’re seeing is health theatre. We’re seeing technologies being leveraged as a way of showing somebody is doing something that is protective, or has safety in mind,” McPhail says. “Sometimes it’s all just for the show, because there’s really no evidence that it’s going to be effective.”
“In times of crisis, we’re increasingly coming to see technology as a magic fix, a silver bullet,” McPhail says. “There’s a real question as to whether the potential benefits outweigh the incredible intrusion into people’s lives.”
Break the law to watch us
The tendency of authorities to ignore legal limits on what they can do in the name of public safety and security was exposed again on February 3, when federal and provincial privacy commissioners released a report finding that at least 48 agencies across the country, most of them police forces, broke the law when they used facial recognition software from Clearview AI.
The privacy watchdogs found Clearview AI’s technology enabled mass surveillance of Canadians and violated their rights.
Clearview AI’s collection of billions of images including those of Canadians was a “clear violation” of privacy rights in Canada, agreed federal, B.C., Quebec and Alberta privacy commissioners.
Both federal and provincial jurisdictions have commissioners in place who are supposed to review new technologies for potential threats to privacy before government agencies start using them.
The Tyee reports that police forces, including the RCMP, evade this oversight by claiming the software is merely being trialled or is not fully implemented, all while using it in active operations against Canadian citizens.
“Time and again we see organizations are trying to get away with it until they can’t,” said the executive director of the BC Freedom of Information and Privacy Association.
- 30 -