Earlier this week, Rana Foroohar wrote in the Financial Times that “Companies are the cops in our modern-day dystopia”:
The mass surveillance and technology depicted in the [2002 movie Minority Report] — location-based personalised advertising, facial recognition, newspapers that updated themselves — are ubiquitous today. The only thing director Steven Spielberg got wrong was the need for psychics. Instead, law enforcement can turn to data and technologies provided by companies like Google, Facebook, Amazon and intelligence group Palantir.
The dystopian perspective on these capabilities is worth remembering at a time when the private sector is being pulled ever more deeply into the business of crime fighting and intelligence gathering. Last week, the American Civil Liberties Union and several other rights groups called on Amazon to stop selling its Orwellian-sounding Rekognition image processing system to law enforcement officials, saying it was “primed for abuse in the hands of government”.
I have written a few posts already about the potential for governments and private companies to use new technologies such as cryptocurrencies, biometrics and data mining to engage in activities that we would normally associate with the fictional totalitarian regimes of George Orwell or Aldous Huxley. With regard to state actors like China using biometrics for crime prevention, I wrote:
But still, if we move to a system of Big Brother with ubiquitous cameras capturing our facial images 24/7 and the system is only 80% accurate, that leads to arguably an unbearably high threshold for potential abuse. Democracies are supposed to accept some criminals getting away with crime in exchange for the innocent not being locked up. It’s the authoritarian regimes who place law and order above the protection of the innocent.
Between companies, governments and new technologies, the potential for opportunities, efficiencies and abuse are endless. It is a Brave New World.
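The arithmetic behind that 80% figure is worth spelling out, because at the scale of ubiquitous cameras the false positives quickly swamp the true matches. Here is a rough back-of-the-envelope sketch in Python; the population size and number of genuine suspects are purely illustrative assumptions, not figures from the original post:

```python
# Back-of-the-envelope numbers -- purely illustrative, not from the original post.
population = 1_000_000        # people routinely scanned by ubiquitous cameras
true_suspects = 100           # genuinely wanted individuals in that population
accuracy = 0.80               # the 80% figure quoted above, applied to matches and rejections alike

true_positives = true_suspects * accuracy                          # suspects correctly flagged
false_positives = (population - true_suspects) * (1 - accuracy)    # innocent people wrongly flagged

print(f"Suspects correctly flagged:      {true_positives:,.0f}")
print(f"Innocent people wrongly flagged: {false_positives:,.0f}")
# With these assumptions roughly 80 real suspects are caught while about
# 200,000 innocent people are misidentified -- the false positives swamp the hits.
```

Under those (invented) assumptions, the system wrongly flags thousands of innocent people for every real suspect it catches, which is why an 80% accurate dragnet is so troubling in a democracy.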
And with regard to cryptocurrencies, I wrote:
Although neither George Orwell nor Aldous Huxley’s dystopian futures predicted a world governed by corporations as opposed to authoritarian governments, it may be more plausible to imagine a world where corporations control the money supply, not with coins and bills but cryptocurrencies. In fact, the fad amongst many technologists today is to encourage the disintermediation (or deregulation) of money by moving to Blockchain-based cryptocurrencies like Bitcoin. But instead of removing the middleman, we are more likely – contrary to the idealists’ ambitions — to open the door to empower big tech companies like Amazon, Facebook and Google to tokenize their platforms, replacing one currency regulator with corporate ones.
But private companies are able to do so much more with the data that we so generously (and often naively) hand them. The possibilities for abuse seem endless. To a large degree, the new GDPR mitigates this risk by giving the consumer visibility into and control over how her data is being used, and hopefully by building trust between consumers and their service providers. As stated here before, more important than complying with strict new laws, “to be commercially viable, these technologies need to gain consumers’ confidence and trust. Otherwise consumers will not be comfortable sharing their data and will simply not use the service.”
But what happens if consumers are not given a meaningful opportunity to grant or withhold consent when agreeing to use a service that shares their data? The first GDPR complaints have been filed precisely on these grounds:
Across four complaints, related to Facebook, Instagram, WhatsApp and Google’s Android operating system, European consumer rights organisation Noyb argues that the companies have forced users into agreeing to new terms of service, in breach of the requirement in the law that such consent should be freely given.
Especially after the Cambridge Analytica scandal, many people (including myself) have considered leaving the Facebook universe once and for all. But what if Facebook has become more than a voluntary social media platform? What if it has become a utility, comparable to a telephone line? You could live without a phone, but you would have trouble participating in the modern economy without access to the necessary communication channels. To a large extent, Facebook is now an essential tool to validate one’s identity and interact with one’s peers. While I may be able to give up my endless Facebook political debates cold turkey, how would I maintain contact with my 1,000+ personal and professional Facebook contacts if I left? If Facebook is a monopolistic utility, then how can I freely give my consent to its data policy when there is no meaningful alternative? Belgian MEP Guy Verhofstadt raised this point last week when Mark Zuckerberg appeared before the European Parliament:
“You cannot convince him because it is nonsense, naturally!” he scoffed. “You have given the example of Twitter, you have given the example I think also of Google as some of your competitors, but it’s like somebody who has a monopoly in making cars is saying, ‘Look, I have a monopoly making cars, but there is no problem. You can take a plane! You can take a train! You can even take your bike! So I have no monopoly.’ There is a problem there.”
Let me ask you: if you are a Facebook user, how easy is it to log off permanently?
Now imagine you do not use Facebook. You never signed up, or you did but have since closed your account. This week you attend an offsite team-building meeting with work colleagues from across the globe. Everyone takes loads of pictures, many of which include you. A couple of your colleagues post those photos on Facebook. As mentioned, you’re not a Facebook user, but Facebook uses face recognition technology to identify each person in your colleagues’ posts. Your face has been recognized and a shadow profile has been created for you. Facebook can now pick you out in any photo. Sometimes it will get it right, but other times it will mistake you for other people and other people for you.
Besides the fact that this is just plain creepy, Facebook could arguably sell data surrounding your shadow profile to third parties. For example, with your shadow profile, Facebook can recognize you in any past or future photos that anyone posts on the site, even if you are merely an unwitting bystander. It can use artificial intelligence and machine learning to draw conclusions from those photos, like your gender, what you were doing, what you like (were you holding a Coke or wearing a Real Madrid jersey in the photo?), where you were doing it, and then predict future behavior based on that data. All of this information can then be sold to third parties to target you for ads (or additional security checks) without you ever having consented or opted in or out of the service.
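To make the mechanics a bit more concrete: face recognition systems of this kind typically reduce each face to a numeric “embedding” and link a new photo to an existing profile whenever the embeddings are close enough. The sketch below is a generic illustration of that matching step, not Facebook’s actual system; the embeddings, profile name and threshold are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Compare two face embeddings; values near 1.0 mean 'very similar'."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding built from the faces in your colleagues' photos.
shadow_profiles = {
    "unknown_person_42": [0.12, 0.87, 0.33, 0.45],
}

def match_face(new_embedding, profiles, threshold=0.9):
    """Return the best-matching stored profile above the threshold, if any."""
    best_name, best_score = None, threshold
    for name, stored in profiles.items():
        score = cosine_similarity(new_embedding, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# A new photo is uploaded; any face close enough to a stored embedding is
# linked to the shadow profile -- including the occasional look-alike.
print(match_face([0.11, 0.85, 0.35, 0.44], shadow_profiles))
```

The same similarity threshold that lets the system find you in a crowd is what produces the occasional mistaken identity: set it looser and more look-alikes are linked to your profile, set it tighter and you slip through less often.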
Does this sound like science fiction? In fact, a California judge has certified a class action suit against Facebook for gathering biometric information about individuals without their consent. Now I don’t mean to pick on Facebook. Apple also uses biometric technology to create profiles of anyone you take pictures of on your iPhone. Google uses all sorts of artificial intelligence tools to profile its users, track their movements and behavior, and predict their future actions. What it knows about you and where you have been is astonishing. And what about Amazon Echo, which may be “recording every conversation in a person’s home and transmitting it to the cloud” without your consent? Even for services that we love, like the personalized Amazon recommendations — which have led me to fantastic books and music — by creating these profiles, companies also run the risk of conditioning consumers, creating societal bubbles, fomenting groupthink and promoting sectarianism. It is why fake news can become so viral and so effective as a political weapon and public relations strategy.
As stated above, the possibilities are endless, and as these technologies improve efficiencies and help decrease operating costs, it will become harder for consumers to opt out of services that amass large amounts of personal data. Handing over your personal data, including biometric data, will become mandatory in order to engage in all sorts of activities, from making purchases at the grocery store to entering public spaces like airports or concert halls, or even opening the door to your house.
So will private corporations armed with the latest technology take over the world and control our lives? Let’s not all start freaking out just yet. Since the dawn of time, technology has provoked controversy, but it has also proven to be a powerful force for good in the lives of mankind.
Just to give a simple example, think about the evolution over the past century in how we communicate and travel. When my great-grandparents emigrated from Europe to the United States at the beginning of the 20th century, they left their families behind forever. And by forever, I mean they would not only never see their parents and siblings again, but they would also never hear the sound of their voices again. Transatlantic passage was too expensive and communication was limited to snail mail. Today, I have done the reverse migration, living in Europe with my parents and siblings in the U.S. Yet I am able to interact with them constantly by video, email, chat, and social media, and I see them in person two to four times a year. With air travel, as Carole King sang, “you’re just time away,” and with everything else, it’s virtually real-time.
So yeah, I am a big fan of new technologies! I blog, I tweet, and I engage with my friends and colleagues through Facebook and LinkedIn. I even work for a leading global technology company that processes vast amounts of personal data. I want to see innovation that improves the lives of consumers and society. But as a lawyer, I cannot stop being a skeptic, questioning everything around me. New technologies, especially ones that amass large amounts of personal data, are powerful tools that can simplify and improve our lives. But unchecked, they give private companies and governments unprecedented control over even our most basic freedoms. If we become too complacent, then science fiction can become a dystopian reality.