Why Technology Favors Tyranny

You all are going to think I am the Grim Reaper of new technologies, crying that the sky is falling at every turn. Yes, I am using this blog as a forum – amongst other things – to discuss the difficult decisions that businesses, lawyers and society need to face when looking at how new technologies like Artificial Intelligence, Blockchain and Biometrics may impact our lives (examples here, here and here).

Working for a tech company that invests millions in innovation, I am very interested in seeing how we can use new technologies to improve society. But in order to do that, we need to be very vigilant. The consequences of not doing so could be disastrous and significantly change the course of humankind.

Am I exaggerating? In a must-read article in The Atlantic, Yuval Noah Harari (author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow) makes precisely that argument:

More practically, and more immediately, if we want to prevent the concentration of all wealth and power in the hands of a small elite, we must regulate the ownership of data. In ancient times, land was the most important asset, so politics was a struggle to control land. In the modern era, machines and factories became more important than land, so political struggles focused on controlling these vital means of production. In the 21st century, data will eclipse both land and machinery as the most important asset, so politics will be a struggle to control data’s flow.

Unfortunately, we don’t have much experience in regulating the ownership of data, which is inherently a far more difficult task than regulating land or machines. Data are everywhere and nowhere at the same time, they can move at the speed of light, and you can create as many copies of them as you want. Do the data collected about my DNA, my brain, and my life belong to me, or to the government, or to a corporation, or to the human collective?

. . . Currently, humans risk becoming similar to domesticated animals. We have bred docile cows that produce enormous amounts of milk but are otherwise far inferior to their wild ancestors. They are less agile, less curious, and less resourceful. We are now creating tame humans who produce enormous amounts of data and function as efficient chips in a huge data-processing mechanism, but they hardly maximize their human potential. If we are not careful, we will end up with downgraded humans misusing upgraded computers to wreak havoc on themselves and on the world.

If you find these prospects alarming—if you dislike the idea of living in a digital dictatorship or some similarly degraded form of society—then the most important contribution you can make is to find ways to prevent too much data from being concentrated in too few hands, and also find ways to keep distributed data processing more efficient than centralized data processing. These will not be easy tasks. But achieving them may be the best safeguard of democracy.

The world my children and their children will inhabit will be vastly different from ours in ways we cannot even begin to imagine.

Amazon Rekognition and A.I. Bias

Whenever I write about face recognition technologies, particularly in the context of policing, I raise the potential for false positives caused by unintentional biases embedded in the algorithms. Artificial intelligence tools, such as face recognition, are only as good as the algorithms behind them, and it is all too easy for developers to unknowingly program their own biases into an algorithm, with very negative consequences:

False positives can mean that certain people are regularly stopped and potentially harassed by the police. Now imagine that the biometric engineers who set the algorithms all come from the same racial and ethnic groups; whether on purpose or not, their biases will be factored into the accuracy of the results. This will likely translate into minority groups taking the brunt of the false positives. For artificial intelligence and machine learning to be effective, it needs to be accurate at least 80% of the time, and when that happens it will always be better than humans. But still, if we move to a system of Big Brother with ubiquitous cameras capturing our facial images 24/7 and the system is only 80% accurate, that leaves an arguably unbearable potential for abuse. Democracies are supposed to accept some criminals getting away with crime in exchange for the innocent not being locked up. It’s the authoritarian regimes that place law and order above the protection of the innocent.

Am I exaggerating? The American Civil Liberties Union (the ACLU) doesn’t think so. It recently tested Amazon’s Rekognition — which Amazon has been aggressively marketing to police forces — by running the face recognition tool on the faces of all 535 members of the U.S. Congress against a sample of 25,000 mugshots. The results?

. . . according to the ACLU’s report, the technology is far from perfect. Rekognition incorrectly identified more than two dozen lawmakers as people who have been arrested for a crime, and the false matches were disproportionately people of color, the ACLU said. Six members of the Congressional Black Caucus, including noted civil rights leader Rep. John Lewis, were each identified as a match for a mugshot in the Rekognition database.
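To put those numbers in perspective, here is a rough back-of-the-envelope calculation. The daily-scan scenario is my own assumption for illustration, not part of the ACLU study:

```python
# ACLU test as reported: all 535 members of Congress were run
# against a gallery of 25,000 mugshots, yielding 28 false matches.
members_scanned = 535
false_matches = 28

# Roughly 1 in 19 people was wrongly matched at least once.
false_match_rate = false_matches / members_scanned
print(f"False-match rate: {false_match_rate:.1%}")  # ~5.2%

# Hypothetical scaling: a city system scanning 1,000,000 faces a day
# at the same rate (an assumed number, purely for illustration).
daily_scans = 1_000_000
print(f"Expected false matches per day: {false_match_rate * daily_scans:,.0f}")
# ~52,000 innocent people flagged every single day
```

A 5% error rate sounds small until it is applied continuously to an entire population.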

This doesn’t mean that we should disregard the huge positive potential for biometrics, but we need to be smart about how and when it is used.

“These results are consistent with a broader pattern of results from the machine learning literature,” Kroll told BuzzFeed News. “Not only does face recognition of large sets of individuals remain difficult to do accurately, face recognition systems have been shown to perform much less well for women, people of color, and especially women of color.

“It is important when fielding advanced computer technologies to do so responsibly,” Kroll continued. “These results show that Rekognition shouldn’t be used for some applications in law enforcement as it is currently.”

Face recognition works best with small sets of people, where it is used for the benefit of consumers and where consumers have the opportunity to opt out of the service. It is definitely not reliable in the context of law enforcement, where decisions about you are being made without your knowledge, consent or control.
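That intuition (small galleries are manageable, population-scale dragnets are not) falls straight out of the arithmetic. A minimal sketch, assuming an illustrative per-comparison false match rate of 0.1%:

```python
# Probability of at least one false match grows quickly with the
# number of identities a probe face is compared against.
fmr = 0.001  # assumed per-comparison false match rate (illustrative)

for gallery_size in (10, 1_000, 25_000):
    p_any_false_match = 1 - (1 - fmr) ** gallery_size
    print(f"{gallery_size:>6} identities -> "
          f"{p_any_false_match:.1%} chance of a false match")
# 10 -> 1.0%, 1,000 -> 63.2%, 25,000 -> ~100%
```

A rate that is harmless for unlocking your own phone becomes near-certain error when the same matcher is pointed at a city.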

Unfortunately, when Amazon, or other companies, get it wrong, consumers lose confidence in the new technology. That negatively affects market perception of tools that have lots of useful applications and that – when designed with consumers’ best interests at heart – can better our lives.

Five Things Companies Can Do

Earlier this week I wrote a long-winded post describing steps companies can take – in light of recent concerns about companies misusing personal data – to make sure their technologies are offering us all something of value.

Here are the five things, in abbreviated form, that companies can start doing now:

  1. Privacy by Design (and security by design): Put the end user at the center of your technology’s architecture, minimize the amount of personal data you need to provide the service, give the end user control, and be transparent. If you concentrate on what the end user will be comfortable with and empower her with control over her data, then you are on the right track (see the data-minimization sketch after this list).
  2. Value Proposition: Make privacy protections and good practice a central point of differentiation. Make it core to your overall value proposition.
  3. Business Model: Rethink the business model. Propose different fee structures or revenue-sharing options that give end users more control and something of value in return for handing over their data.
  4. Product Ethics: Before thinking about the legality of a new product or service, focus on it from an ethical viewpoint. Consider a product ethics committee, including bringing in an ethicist. Look not just at data use but also at the potential for a product or service to be misused (even if hacked) with results that are contrary to the company’s values. Remember, the last thing you want is for your CEO to have to sit in front of lawmakers struggling to explain why your service was linked to a major human rights violation, political scandal, or massive leak of sensitive personal data.
  5. Data Use as a Corporate Social Responsibility: Make data use and innovation part of your company’s CSR policies where you commit to (i) not use the personal data and technology at your disposal in a way that has a negative effect on your community and stakeholders, and (ii) affirmatively use technology and innovation for the good of your community and stakeholders.
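To make the privacy-by-design point concrete, here is a minimal sketch of what data minimization can look like in practice. The fields, names and service are all invented for illustration:

```python
import hashlib
import os

# Hypothetical raw signup data; the service only needs an age bracket
# and a stable pseudonymous identifier, nothing more.
RAW_SIGNUP = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "birthdate": "1985-03-14",
    "gps_location": (40.7, -74.0),
}

def minimize(record: dict, salt: bytes) -> dict:
    """Keep only what the service needs, pseudonymized."""
    pseudonym = hashlib.sha256(salt + record["email"].encode()).hexdigest()
    return {
        "user_id": pseudonym,                    # no direct identifier stored
        "birth_year": record["birthdate"][:4],   # coarsened, not the full date
        # name and precise location are never stored at all
    }

salt = os.urandom(16)  # kept separately from the minimized data
print(minimize(RAW_SIGNUP, salt))
```

The design choice is simple: data you never collect cannot leak, be subpoenaed, or be misused.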

Put all together, the most important thing a company can do is take the time to have open, internal conversations about the effects that its products and services may have on users and society. That way senior management can make informed decisions in line with the company’s core values and identity. Lawyers don’t like surprises, and neither do their clients.

Brave New World, Inc.

Earlier this week, Rana Foroohar wrote in the Financial Times that “Companies are the cops in our modern-day dystopia”:

The mass surveillance and technology depicted in the [2002 movie Minority Report] — location-based personalised advertising, facial recognition, newspapers that updated themselves — are ubiquitous today. The only thing director Steven Spielberg got wrong was the need for psychics. Instead, law enforcement can turn to data and technologies provided by companies like Google, Facebook, Amazon and intelligence group Palantir.

The dystopian perspective on these capabilities is worth remembering at a time when the private sector is being pulled ever more deeply into the business of crime fighting and intelligence gathering. Last week, the American Civil Liberties Union and several other rights groups called on Amazon to stop selling its Orwellian-sounding Rekognition image processing system to law enforcement officials, saying it was “primed for abuse in the hands of government”.

I have written a few posts already about the potential for governments and private companies to use new technologies such as cryptocurrencies, biometrics and data mining to engage in activities that we would normally associate with the fictional totalitarian regimes of George Orwell or Aldous Huxley. With regard to state actors like China using biometrics for crime prevention, I wrote:

But still, if we move to a system of Big Brother with ubiquitous cameras capturing our facial images 24/7 and the system is only 80% accurate, that leaves an arguably unbearable potential for abuse. Democracies are supposed to accept some criminals getting away with crime in exchange for the innocent not being locked up. It’s the authoritarian regimes that place law and order above the protection of the innocent.

Between companies, governments and new technologies, the potential for opportunities, efficiencies and abuse is endless. It is a Brave New World.

And with regard to cryptocurrencies, I wrote:

Although neither George Orwell’s nor Aldous Huxley’s dystopian futures predicted a world governed by corporations as opposed to authoritarian governments, it may be more plausible to imagine a world where corporations control the money supply, not with coins and bills but with cryptocurrencies. In fact, the fad amongst many technologists today is to encourage the disintermediation (or deregulation) of money by moving to Blockchain-based cryptocurrencies like Bitcoin. But instead of removing the middleman, we are more likely – contrary to the idealists’ ambitions – to open the door for big tech companies like Amazon, Facebook and Google to tokenize their platforms, replacing government currency regulators with corporate ones.

But private companies are able to do so much more with the data that we so generously (and often naively) hand them. The possibilities for abuse seem endless. To a large degree, the new GDPR mitigates this risk by giving the consumer visibility into, and control over, how her data is being used, and hopefully by building trust between consumers and their service providers. As stated here before, more important than complying with strict new laws, “to be commercially viable, these technologies need to gain consumers’ confidence and trust. Otherwise consumers will not be comfortable sharing their data and will simply not use the service.”

But what happens if consumers are not given the opportunity to intelligently grant consent or agree to use a service that shares their data? The first GDPR complaints have been filed precisely on these grounds:

Across four complaints, related to Facebook, Instagram, WhatsApp and Google’s Android operating system, European consumer rights organisation Noyb argues that the companies have forced users into agreeing to new terms of service, in breach of the requirement in the law that such consent should be freely given.

Do We Want our IDs Verified on a Blockchain?

One of the use cases most commonly discussed today for Blockchain is identity verification or authentication. This could come in the form of storing bits of encrypted data on a Blockchain that would facilitate identifying individuals for any number of purposes: buying groceries, making online purchases, validating a state-issued ID (like a passport or driver’s license), checking in at a hotel, passing security at an airport, or voting in an election.

The argument, as always with Blockchain, is that by having a distributed database of encrypted and validated entries, you are able to create trusted and secure transactions, avoid fraud, reduce errors, save money, and leave an indelible trace of activities.
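As a concrete, and deliberately simplified, illustration: an identity scheme typically would not put the ID itself on-chain. It would anchor a hash of a signed attestation, which anyone holding the attestation can later verify without the ledger ever containing the raw data. A minimal sketch, with the “chain” reduced to an append-only list and all names and data invented:

```python
import hashlib
import json

# Toy "chain": an append-only list of digests. A real blockchain adds
# blocks, signatures, and consensus; this shows only the anchoring idea.
chain: list[str] = []

def digest(attestation: dict) -> str:
    return hashlib.sha256(
        json.dumps(attestation, sort_keys=True).encode()
    ).hexdigest()

def anchor(attestation: dict) -> None:
    """Record only the hash of the attestation, never the data itself."""
    chain.append(digest(attestation))

def verify(attestation: dict) -> bool:
    """Anyone holding the attestation can check it against the chain."""
    return digest(attestation) in chain

# Hypothetical attestation from an ID authority.
claim = {"holder": "Jane Doe", "claim": "passport-valid", "expires": "2030-01-01"}
anchor(claim)
print(verify(claim))                               # True
print(verify({**claim, "expires": "2099-01-01"}))  # False: tampered copy
```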

Personally, I think the Blockchain use case for identity verification is fantastic for voting, especially where we can quickly validate that a citizen is authorized to vote without revealing how she voted.

But what about other types of transactions? One area where I am struggling is whether consumers will be comfortable leaving immutable traces of their movements and activities on a Blockchain, even if their ID is revocable (meaning that the individual could change her passport, ID, or biometric). From a consumer-centric standpoint, one would think that a person would want to be able to remove, not just revoke, her biometric or public ID. Will consumers want the right to have their bad biometric selfies or other transactions “forgotten”?
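One partial answer to that tension is what is sometimes called crypto-shredding: put only encrypted data on the immutable ledger and keep the key off-chain, so that destroying the key renders the on-chain record permanently unreadable. A minimal sketch using the third-party Python cryptography library, with the biometric data invented:

```python
from cryptography.fernet import Fernet

# The ciphertext lives on the immutable chain; the key lives off-chain.
key = Fernet.generate_key()
biometric_template = b"face-template-bytes"  # placeholder stand-in data

on_chain_record = Fernet(key).encrypt(biometric_template)  # immutable forever

# Normal operation: whoever holds the key can still read the record.
assert Fernet(key).decrypt(on_chain_record) == biometric_template

# "Forgetting": destroy the off-chain key. The bytes on the chain remain,
# but no one can ever decrypt them again.
key = None
```

Whether regulators and consumers accept “permanently unreadable” as equivalent to “forgotten” is exactly the open question.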

Just because it can go into a Blockchain doesn’t automatically mean it should.

What do you think?

The Legal Implications are Not My First Concern

Whenever I look at a new product, business model or technology, the legal implications are never my first concern. I prefer to focus on whether there is a viable business model, whether we can actually deliver the product or service, and how end users will feel about it.

This short article lists the main legal implications of using Artificial Intelligence:

  • Personal Data
  • IP
  • Liability

To be honest, for those of us working with these issues every day, this article isn’t particularly informative. Whether we’re talking about AI, Blockchain, Biometrics or some other new service, I would argue that I am much less concerned about those issues than the article is, mainly because I work with very capable privacy and IP specialists and know that both of those issues can be addressed in the product’s design and in the contract drafting.

For privacy, what is very important is not so much the law but that, if your product involves processing personal data, the end users’ interests are at the heart of the design (i.e., what is called privacy by design).

With regard to liability, we will have worked closely with the business to define our risk profile, factoring it into the business case and then reflecting it in the liability clauses. In other words, the liability and indemnity clauses will look pretty much the same as they do in any other IT supply agreement.

What I will be most concerned about is reputation. Will our service actually work? Will the end users whose data is processed through our service feel comfortable with how their data is being used? Assuming we have leverage, we can draft circles around our contractual risk to protect our intellectual property, limit our liability in case of service failure, and define our privacy obligations. But what happens if our service doesn’t live up to expectations, or if users find it creepy? Will anyone want to contract with us in the future?

That’s reputation, pure and simple. And nothing you draft in a contract is going to save a bad reputation. So first figure out if you can deliver, put the end user at the center of the product architecture, get your business case in order, and then do the easy part, which is putting together the contract.

Big Brother, Cars, Face Recognition and Riding Like the Wind

Since its inception, the automobile has been a romantic figure in American popular culture and emblematic of the American way of life. In short, the automobile has been freedom incarnate. On our sixteenth birthdays, we Americans take the day off and go straight to the DMV to take our driver’s exam. With our newly minted licenses, we are set free from the bounds of our parents and their ever-watching eye. It is our first rite of passage.

As explained in an article in yesterday’s Washington Post, car companies can now store and mine lots of information about car owners, from how fast you drive to where and when you travel. That makes it much harder to use your wheels to be free. Your spouse or your parents may not know where you are going or where you have been, but your car company does. If you’re cheating, you had better hope your spouse doesn’t have a friend at your car company. And what if the police get a warrant to search your car company’s data on your vehicle’s performance? Forget contesting that speeding ticket in court. Who needs the Fifth Amendment when your car can incriminate you instead?

Am I overreacting? Maybe, but the U.S. Senate did just approve, with support from Democrats, the extension and expansion of Donald Trump’s ability to spy on U.S. citizens, and that includes reading your emails without a warrant. In fact, there were Democrats who said the matter didn’t even deserve to be debated. I would imagine that means mining data from our car companies as well.

Earlier this month, the Washington Post also reported on China’s intention to use facial recognition technology to keep a watchful eye on all citizens and help predict and combat crime. We should all be concerned about governments and private companies acting as Big Brother, but with facial recognition there is also the issue of accuracy.

False positives can mean that certain people are regularly stopped and potentially harassed by the police. Now imagine that the biometric engineers who set the algorithms all come from the same racial and ethnic groups; whether on purpose or not, their biases will be factored into the accuracy of the results. This will likely translate into minority groups taking the brunt of the false positives. For artificial intelligence and machine learning to be effective, it needs to be accurate at least 80% of the time, and when that happens it will always be better than humans. But still, if we move to a system of Big Brother with ubiquitous cameras capturing our facial images 24/7 and the system is only 80% accurate, that leaves an arguably unbearable potential for abuse. Democracies are supposed to accept some criminals getting away with crime in exchange for the innocent not being locked up. It’s the authoritarian regimes that place law and order above the protection of the innocent.
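The deeper problem with that 80% figure is the base rate: when nearly everyone being scanned is innocent, even a system that is right 80% of the time produces matches that are overwhelmingly wrong. The population and suspect counts below are assumptions for illustration, not data:

```python
# Base-rate arithmetic for a city-wide face recognition dragnet.
population = 1_000_000
suspects = 100                     # assumed true suspects in the crowd
innocents = population - suspects

sensitivity = 0.80                 # flags 80% of true suspects
false_positive_rate = 0.20         # wrongly flags 20% of innocents

true_hits = suspects * sensitivity             # 80
false_hits = innocents * false_positive_rate   # ~200,000

precision = true_hits / (true_hits + false_hits)
print(f"Share of flagged people who are actual suspects: {precision:.2%}")
# ~0.04%: virtually everyone the system flags is innocent
```

That is the arithmetic behind the worry: a dragnet like this does not so much catch criminals as manufacture suspects.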

Between companies, governments and new technologies, the potential for opportunities, efficiencies and abuse is endless. It is a Brave New World.