Location, Location, Location

Last week, I read an article in the New York Times about Amazon’s search for a second North American headquarters, dubbed HQ2. For companies large and small, tech and non-tech alike, where you put your headquarters is a key strategic decision. It helps you attract, hire, retain and train talent. It puts you close to distribution and travel hubs and close to customers, and it may better position you politically or improve your regulatory and tax outlook.

So, for example, if you are a tech company looking to recruit young talent: you want to be close to universities with quality students, and you want to be located where qualified talent wants to live after college or when they start raising a family. If you think it’s cheaper to build your headquarters in the boondocks, then you are going to have to pay your team more, and keep paying them more each year to retain them.

Just think about the wars between Google, Apple and Microsoft to entice the best talent. It’s a competitive world.

All of these issues are what Amazon is weighing now. The article lists the leading contenders, and happily for me, among the 20 finalists there are three sites – Northern Virginia, Washington DC, and Montgomery County, MD – all within the Washington Metropolitan Area, where I am from. The area has great universities, lots of diversity, domestic and international airports, an urban setting, and is the nation’s capital.

Similarly, I read today in TechCrunch that Google will open its AI headquarters in Paris, France. Paris is a great urban setting for young professionals, has access to universities and business schools, and is the center of French political power, but more importantly, as the article notes:

In recent years, Google faced a huge $1.3 billion fine for tax noncompliance in France. A court in Paris canceled the fine in July 2017. But it’s clear that France represents an important market and a regulatory risk for big tech companies. Hiring people in France, investing in France and “training” people about Google’s services is a great way to lobby the French government using a bottom-up approach.

That is smart politics especially when the Europeans are giving U.S. tech companies the stink-eye.

Finally, if I were looking for inexpensive, quality developers, I would be focusing on smaller cities in Spain. Spain is full of great young talent who are willing to stay local if the opportunities are right. They are also much cheaper than other EU nationals and likely easier to manage than their counterparts in developing markets. Plus Northern Europeans are looking for any excuse to move to Spain. They just need a job. If I were the regional government in places like Zaragoza, Murcia, Alicante or Valencia, I would be bending over backwards to find the right incentives to bring tech employers to my neck of the woods.


The Legal Implications are Not My First Concern

Whenever I look at a new product, business model or technology, the legal implications are never my first concern. I prefer to focus on whether there is a viable business model, whether we can actually deliver the product or service, and how end users will feel about it.

This short article lists the main legal implications of using Artificial Intelligence:

  • Personal Data
  • IP
  • Liability

To be honest, for those of us who work with these issues every day, this article isn’t particularly informative. Whether we’re talking about AI, Blockchain, Biometrics or some other new service, I would argue that I am much less concerned about those issues than the article is, mainly because I work with very capable privacy and IP specialists and know that both of those issues can be addressed in the product’s design and contract drafting.

For privacy, what matters most is not so much the law but that, if your product involves processing personal data, the end users’ interests are at the heart of the design (i.e., what is called privacy by design).

With regards to liability, we will have worked closely with the business to define our risk profile, factoring it into the business case and then reflecting that in the liability clauses. In other words, the liabilities and indemnities clauses will look pretty much the same as they do in any other IT supply agreement.

What I will be most concerned about is reputation. Will our service actually work? Will end users whose data is being processed through our service feel comfortable with their data being used? Assuming we have leverage, we can draft circles around our contractual risk to protect our intellectual property, limit our liability in case of service failure, and define our privacy obligations. But what happens if our service doesn’t live up to expectations or if users find it creepy? Will anyone want to contract with us in the future?

That’s reputation, pure and simple. And nothing you draft in a contract is going to save a bad reputation. So first figure out if you can deliver, put the end user at the center of the product architecture, get your business case in order, and then you can do the easy part which is to put together the contract.

Ten Things: Making Legal the Department of Yes

My boss just recommended that I check out the Ten Things You Need to Know as In-House Counsel blog, written by Sterling Miller, a General Counsel with over 25 years of in-house experience. I am not from the West Coast, so I don’t use “awesome” lightly, and this blog is “awesome”.

So far my favorite post (which I immediately shared with my own team) is his “Ten Things: Making Legal the Department of Yes”. My team has, without making a list and without knowing that such a list existed, consciously made an effort to implement each of those recommendations. We started two years ago and have largely succeeded, but it is very important to go back, remind ourselves what we are doing and refresh our efforts.

Big Brother, Cars, Face Recognition and Riding Like the Wind

Since its inception, the automobile has been a romantic figure in American popular culture and emblematic of the American way of life. In short, the automobile has been freedom incarnate. On our sixteenth birthdays, we Americans take the day off and go straight to the DMV to take our driver’s exam. With our newly minted license, we are set free from the bounds of our parents and their ever-watching eye. It is our first rite of passage.

As explained in an article in yesterday’s Washington Post, car companies can now store and mine lots of information about car owners, from how fast you are driving to where and when you are traveling. That makes it much harder for you to use your wheels to be free. Your spouse or your parents may not know where you are going or where you have been, but your car company does. If you’re cheating, you better hope your spouse doesn’t have a friend at your car company. And what if the police get a warrant to search your car company’s data on your vehicle’s performance? Forget contesting that speeding ticket in court. Who needs the Fifth Amendment when your car can incriminate you instead?

Am I overreacting? Maybe, but the U.S. Senate did just approve, with support from Democrats, the extension and expansion of Donald Trump’s ability to spy on U.S. citizens, and that includes reading your emails without a warrant. In fact, some Democrats said the matter didn’t even deserve to be debated. I would imagine that extends to mining data from our car companies as well.

Earlier this month, the Washington Post also reported on China’s intention to use facial recognition technology to keep a watchful eye on all citizens to help predict and combat crime. We should all be concerned about the government and private companies as Big Brother, but with facial recognition there is also the issue of accuracy.


False positives can mean that certain people are regularly stopped and potentially harassed by the police. Now imagine that the biometric engineers who set the algorithms all come from the same racial and ethnic groups; whether on purpose or not, their biases will be factored into the accuracy of the results. This will likely translate into minority groups taking the brunt of the false positives. For artificial intelligence and machine learning to be effective, it needs to be accurate at least 80% of the time. When that happens it will always be better than humans. But still, if we move to a system of Big Brother with ubiquitous cameras capturing our facial images 24/7 and the system is only 80% accurate, that leaves an unbearably large margin for potential abuse. Democracies are supposed to accept some criminals getting away with crime in exchange for the innocent not being locked up. It’s the authoritarian regimes that place law and order above the protection of the innocent.
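To see why even a reasonably accurate system produces an unbearable number of false positives at surveillance scale, a rough base-rate calculation helps. This is a minimal sketch: the population size, number of actual suspects, and the 80% accuracy figures are illustrative assumptions, not data from the articles discussed.

```python
# Base-rate arithmetic: when the people a face-recognition system is
# looking for are rare, even a decent accuracy rate means most of the
# people it flags are innocent.

def match_outcomes(population, suspects, sensitivity, specificity):
    """Return (true_positives, false_positives) for one full scan.

    sensitivity: fraction of actual suspects correctly flagged.
    specificity: fraction of innocent people correctly NOT flagged.
    """
    innocents = population - suspects
    true_positives = suspects * sensitivity
    false_positives = innocents * (1 - specificity)
    return true_positives, false_positives

# Illustrative numbers: 1,000,000 people scanned, 100 actual suspects,
# and a system that is "80% accurate" in both directions.
tp, fp = match_outcomes(1_000_000, 100, sensitivity=0.80, specificity=0.80)
print(f"Suspects correctly flagged: {tp:.0f}")   # 80
print(f"Innocents wrongly flagged: {fp:.0f}")    # 199980
```

Under these assumptions, for every genuine suspect the system catches it flags roughly 2,500 innocent people, which is the arithmetic behind the worry about abuse above.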

Between companies, governments and new technologies, the potential for opportunities, efficiencies and abuse is endless. It is a Brave New World.