China, like Britain, is installing cameras in public places to track its citizens. Britain already has at least 1 surveillance camera for every 11 people, a ratio that is still rising. China wants in on the photographic fun. The Washington Post reports:
The intent is to connect the security cameras that already scan roads, shopping malls and transport hubs with private cameras on compounds and buildings, and integrate them into one nationwide surveillance and data-sharing platform.
It will use facial recognition and artificial intelligence to analyze and understand the mountain of incoming video evidence; to track suspects, spot suspicious behaviors and even predict crime; to coordinate the work of emergency services; and to monitor the comings and goings of the country’s 1.4 billion people, official documents and security industry reports show.
Computers Make Mistakes
“Artificial intelligence” (a.k.a. “deep learning”) always sounds scary, but it is nothing more than old-fashioned statistical modeling done on a large scale. Because it is a form of modeling, it is imperfect. That means an algorithm designed to look at a picture of Mr. Wong and say, “This is Mr. Wong”, will sometimes fail to. Sometimes it will say the picture is of you.
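How often will it say that? More often than intuition suggests, because genuine suspects are rare. A toy base-rate calculation makes the point; the population, suspect count, and accuracy figures below are illustrative assumptions, not numbers from the article.

```python
# Toy base-rate sketch: even an accurate face matcher produces
# mostly false positives when actual targets are rare.
# All numbers here are invented for illustration.

def false_positive_share(population, targets, tpr, fpr):
    """Fraction of positive matches that hit innocent people."""
    true_hits = targets * tpr               # suspects correctly matched
    false_hits = (population - targets) * fpr  # innocents wrongly matched
    return false_hits / (true_hits + false_hits)

# 1,000 genuine suspects in a city of 10 million,
# with a matcher that is right 99% of the time either way:
share = false_positive_share(10_000_000, 1_000, tpr=0.99, fpr=0.01)
print(f"{share:.1%} of matches are innocent people")  # prints "99.0% ..."
```

With those assumed numbers, roughly 99 of every 100 "matches" are innocent people, which is why a mistaken identification is not a rare edge case but the dominant outcome.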
What harm could there be in that?
Consider that you have been incorrectly identified as standing outside a certain building where known troublemakers have been seen. The algorithm that said you were there then looks to the “Police Cloud” database that has “such data as criminal and medical records, travel bookings, online purchase and even social media comments.”
The computer next looks up the “metadata” from your phone records. This tells exactly where you were when you made every call, whom you called and for how long, what devices you and the other party used, whether the call was followed by any data (say, a Snapchat), and so on. The only thing the computer does not admit to knowing is what you said.
The algorithm now updates your “social credit” score, lowering it. Not only does it ding your score, but the people you called also take a small hit.
The entire process is automatic, with no way to check errors, so you’ll never know why the hiring manager rejected your application. (You won’t know at Google, either.)
We’re All Guilty
There is another possibility. The facial-recognition algorithm does not make a mistake. It really was you standing there. You may have had an innocent explanation for being at that suspicious corner. But we’re talking public safety here. Why take a chance? A suspicious corner was involved. And it’s always better to be safe than sorry, isn’t it?
Here we recall the words […]
Click here to read the rest. Clear your cookies afterward to maintain plausible deniability.