I Saw You! Or Maybe I Didn’t

Unease about the growing use of facial recognition technology has clustered around some now-familiar difficulties: inaccurate results for non-white individuals; inadequate training of the personnel who “read” the results; adoption of privacy-invading systems without public knowledge or input; and inadequate monitoring of accuracy.

Despite these concerns, use of this technology is expanding in law enforcement, border control, airport screening, and even business, including retail. Remember Rite Aid? How is it really being used? What are the benefits, and what are the unintended consequences? These questions create a fertile arena for authors of crime fiction.

Properly implemented, facial recognition could help make policing more efficient. It has been used to identify a number of the January 6 rioters, whose participation was then verified by other evidence (often their own Facebook posts). The UK, whose cities are blanketed with camera surveillance, nevertheless still values the human element: a cadre of super-recognizers who “never forget a human face.” That would not be me.

Unfortunately, an identification facilitated by facial recognition technology sometimes trumps other evidence gathered in more traditional ways: the arrestee’s alibi, for example. No matter how sophisticated the technology becomes, it remains true that a person cannot be in two places at once.

Police departments say the technology is used only to generate leads, and that people should not be charged with a crime until there is corroborating evidence. In practice, though, the algorithm’s output often marks not the beginning of an investigation, but its end. That perception was borne out by a recent Washington Post story: a Texas man has filed a lawsuit against Macy’s (and others) claiming that overconfidence in the technology led to his arrest by the Houston police. While in custody, he was sexually assaulted. His is one of a half-dozen wrongful-arrest cases ongoing around the country.

In the U.S., a growing number of state and local law enforcement agencies have available to them the faces of nearly half the adult population. These photos come from various sources—including billions scraped from social media, as well as government-issued I.D. cards, mug shots, driver’s licenses, etc. (In which case I don’t need to worry, because my driver’s license photo looks nothing like me! Though probably it matches up with the eighty or so “nodal points” that define a particular face.) The Georgetown University Law Center’s project on privacy and technology calls this vast database “the perpetual line-up.” And you’re likely in it, no matter how law-abiding you are.
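The matching step itself is simple in principle: each face is reduced to a vector of measurements (those “nodal points,” or a learned embedding), and two faces “match” when their vectors are close enough. Here is a minimal, illustrative sketch of that idea; the four-dimensional vectors and the threshold are invented for demonstration, not taken from any real system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_candidate_match(probe, gallery_face, threshold=0.8):
    """Flag a gallery face as a possible match if similarity clears the threshold.

    The threshold is the critical policy knob: lower it and the system
    returns more candidates -- and more false positives.
    """
    return cosine_similarity(probe, gallery_face) >= threshold

# Toy 4-dimensional "faceprints" (real systems use far more dimensions).
probe     = [0.9, 0.1, 0.4, 0.7]
lookalike = [0.85, 0.15, 0.5, 0.65]  # a different person with similar features
stranger  = [0.1, 0.9, 0.8, 0.2]

print(is_candidate_match(probe, lookalike))  # True: a potential false positive
print(is_candidate_match(probe, stranger))   # False
```

The sketch shows why a “match” is never proof of identity: two different people whose measurements happen to land close together clear the same threshold as a true match, which is exactly how the misidentifications discussed below occur.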

Maybe you’re thinking, “So what? I’m not a criminal. This doesn’t affect me.” At least not until there’s a misidentification. Crime fiction writers should have a field day with this one. It’s one thing to attempt to verify someone’s identity during a traffic stop or arrest; it’s quite another to use the database for an after-the-fact fishing expedition. And fish will be caught, with grainy, out-of-focus, out-of-date candid photos used to generate lists of possible matches. Facebook for a while identified individuals in the photos on our news feeds. My friend’s wife was consistently identified as me. I didn’t think we looked at all alike, but the algorithm did, so I understand how real misidentification is.

Police departments in several major American cities are experimenting with street surveillance cameras that can continuously scan the faces of people in real time. More than a whiff of China here. The People’s Daily has reported that China’s facial recognition system needs only one second to scan the faces of its 1.4 billion people.

Warrants aren’t required to search facial databases. The investigations aren’t necessarily limited to serious crimes. Defendants may never be told that it was an algorithm, not a human witness, that identified them. People who don’t trust the justice system may prefer to take a plea deal rather than go to trial and risk a potentially longer sentence. This means the true rate of false-positive identifications is unknowable. All these aspects of the technology and its implementation, good and bad, lend themselves to situations crime writers can exploit.

Graphic by Mike MacKenzie (www.vpnsrus.com) under Creative Commons license 2.0 Generic.

Amazon: All About Customer Experience?

Millions of people have benefited from Amazon’s single-minded quest to create frictionless commerce. Pretty much everything it might occur to us to want—from a book to laundry detergent to a snow blower—arrives, if not overnight, well before we’ve forgotten ordering it. Customer reviews, price comparisons, and Q&As guide our choices and let us weigh in with praise or complaints.

Behind that wall of customer-facing information is a lot of other information. About us. Information we have trusted the company with. Yet it seems Amazon has done a remarkably poor job minding that particular store. In the current issue of Wired, Will Evans writes about “Amazon’s Dark Secret”—one that’s been obscured by Amazon’s disingenuous assertions that privacy is “sewn into” everything the company does. (Read the full eye-popping article from Reveal and Wired here.)

Too many of the company’s 575,000 employees worldwide have access to customer data. This has allowed low-level employees to snoop on purchases made by celebs, to use customer data to help third-party sellers sabotage their competitors, to mess with Amazon’s product review system, and to enable sale of low-quality knock-off products.

Our data were so readily available that, for years, Amazon didn’t even know where the relevant databases—including credit card numbers—were. Funny, hackers could find them. If a design team wanted a database, it was readily available to them. If they made a copy, no one in the company security apparatus knew. In short, “Amazon had thieves in its house and sensitive data streaming out beyond its walls.”

Management for years turned a blind eye to these problems. Raising a red flag was a good way for an employee, including members of the too-small security staff, to get shut down or shut out. The whole edifice became shakier when the EU established its General Data Protection Regulation, and Amazon, like every other company dealing with EU members’ citizens, had to comply by the May 2018 deadline.

Amazon spokespeople deny the general tenor of the article and emphasize progress that’s been made, but you might want to read the whole electrifying saga. Bits and pieces of this story have been coming out for several years, but, as with Gerald Posner’s excellent God’s Bankers, pulling them together into a coherent narrative, as Evans does here, makes for a compelling indictment.

Wikimedia Privacy & You


photo: SparkCBC, creative commons license

What is privacy in an era of NSA mega-sweeps, email hacking, and rampant security breaches? Sure, companies all have privacy policies, full of boilerplate, but what do they mean in practice? The recent Wikimedia Foundation transparency report shines a light on one tiny piece of our potentially massive digital persona. If you use Wikimedia often, as I do, you may not realize that it retains some non-public, user-identifiable information. Law enforcement and security agencies may be interested in those data.

Sometimes I joke about this, because, as a writer of crime thrillers, my search history would look highly suspicious. It has happened to writers, and here’s a case where a Long Island family’s Google searches got them into trouble. The UK’s Daily Mail has published a looooong list of search keywords and phrases of supposed interest to the Department of Homeland Security. Examples of suspect words: exercise (which I use mainly in the context of “I should get more”), prevention, organized crime (oops! a biggie for me), sick, smart. With such a “broad, vague, and ambiguous list,” as the Electronic Privacy Information Center termed it, adding Wikimedia searches to the data would generate a bazillion hits.

Wikimedia’s Privacy Practices

Wikimedia’s transparency report for the six-month period July to December 2015 is therefore a welcome peek behind the privacy curtain. It receives requests for user data from governments, individuals, and corporations, but it doesn’t collect much non-public data or retain it for long, so it often doesn’t even have what requesters want. Case closed. When it does, it will notify you before disclosing any information and may even assist you in fighting “invalid requests.”

Between July and December 2015, Wikimedia received 25 user data requests, 14 of which were from non-government entities. It produced the requested information for only one of them—in response to a court order from France, affecting one user account. This is of course a vanishingly small number of requests compared to what Facebook or Google receive.

Wikimedia also sometimes discloses information to the authorities on its own initiative. That happened a dozen times in the same six-month period. For example, it alerted authorities to a bomb threat originating from an IP address physically near the target site (an arrest and confession followed); reported a detailed threat against President Obama; and disclosed a credible suicide threat, with another positive outcome.

The Internet Never (?) Forgets

Also in that period, Wikimedia received 220 legal requests to alter content or remove information, granting none of them. It encourages complainers to work with the community to rectify what they perceive as errors or inaccuracies.

You may know about “Right To Be Forgotten” (RTBF) efforts, authorized under a 2014 European court decision involving Google Spain. Wikimedia opposes this movement and tends not to grant RTBF requests, though a workaround exists: having links to Wikipedia pages removed from search-engine results. (Here’s an example.)

Dig Deeper

Although Wikimedia’s efforts are a tiny finger in the dike, its commitment to privacy, and to letting users know about it, is laudable. Read more on this topic:


graphic: Bernard Goldbach, creative commons license