“You should presume that someday, we will be able to make machines that can reason, think and do things better than we can,” Google co-founder Sergey Brin said in a conversation with Khosla Ventures founder Vinod Khosla. To someone as smart as Brin, that comment is as normal as sipping on his super-green juice, but to someone who is not from this landmass we call Silicon Valley or part of the tech-set, that comment is about the futility of their future.
And more often than not, the reality of the Silicon Valley giants, who are really the gatekeepers of the future, is increasingly in conflict with the reality of the rest of the world. What heightens that conflict is the opaque and often tone-deaf response from companies big and small.
Silicon Valley (both the idea and the landmass) means that we always try to live in the future. We imagine what the future looks like and then we try and build it. Sometimes that future delights us and we embrace it wholeheartedly, as with iPhones and Android-based smartphones. And sometimes that future seems so dystopian that society is scared and unnerved by the unknown.
That Uncanny Feeling
Facebook’s emotional experiments are an example of that future. Sara Watson, a fellow at the Berkman Center for Internet and Society, in an essay about data and advertising, brought up the 1970s concept of the Uncanny Valley, aka “the unsettling feeling some technology gives us.” Watson continues in her essay: “Technologies that are simultaneously familiar and alien evoke a sense of dread. In the same way, when our data doesn’t match our understanding of ourselves, the uncanny emerges.”
That uncanny feeling is what we are confronted with in Facebook’s emotional manipulation through algorithms. It is not necessarily the experiment itself, but what the experiment portends: a future where machines manipulate our wants and our desires and preempt our needs and emotions. We are scared because we will lose the illusion that we are the ones making the decisions that run our lives. There is no coming back once we cross that threshold.
Facebook’s emotion-driven-engagement experiments are a tiny glimpse of what really awaits us: a data-driven and algorithmic future, where machines make decisions on our behalf, nudging us into making decisions. As I pointed out in my recent FastCompany magazine column, the new machine age is already underway, unseen by us. “It is not really just a human world,” said Sean Gourley, cofounder and CTO of Quid, who points out that our connected world is producing so much data that it is beyond human cognitive abilities, and machines are going to be part of making sense of it all.

So the real question is: what will we do, and what should we — the technology industry and we the people — do? From my perspective, we need to start with the raw material of this algorithmic future: data. Whether it is the billions of photos that carry a payload of emotions, relationships and location data, or status updates announcing a new arrival, or those searches for discount Prada shoes, or a look-up about a medical condition — there is someone somewhere vacuuming up our data droppings and turning them into fodder for their money machine.
For sale, our data
Forbes tells us that even seemingly benign apps like Google-owned Waze, Moovit or Strava are selling our activity and behavior data to someone somewhere. Sure, they aren’t selling any specific person’s information, but who is to say that they won’t do so in the future, or won’t use the data they have collected differently? I am actually amazed that cities are willing to trade data that impacts their citizenry, such as photos from traffic cameras, to a privately owned company (in this case, Google) without so much as a debate. I am sure a new parking lot gets more attention from the legislators.
Further down in the story, a Waze spokesperson remarked that the company can tell what speed you drove from a “point a” to a “point b.” What if they sell that data to an insurance company, which then uses that information to raise your premiums?
Did you know, at the time of signing up, that Strava, that lovable cycling and running activity tracker, shares real-time user data and sells it to municipalities for 80 cents a year? In what universe does it make sense for the company to do that without asking, and to have a company spokesperson blatantly admit to a Forbes reporter that the default is opt-in — a malaise popularized by Facebook? Because not doing so would mean actually explaining to people what they intend to do with all that personal information.
And to be honest, that is the crux of the problem — we, the citizens, don’t really know what these data-hoarding companies, big and small, are really going to do with all the data they have about us in their databases. How does a big company like Google use the data that resides in its various databases — Nest, Dropcam, Waze, Android, Google Maps, Google Mail and Google Search — in tandem?
A few weeks ago, when reading The New York Times interview with Google co-founder and CEO Larry Page, I kept hoping that the interviewer would dig deeper into Google’s stance on privacy, data gathering and what it plans to do with all the information it is gathering about us. The what and why of Google’s grand vision for the data it collects is an important issue, and it would be nice to know what Google really intends to do with it.
The reality is that with all this information out there about me, we should have a talk about things such as our rights as citizens over that data. I am not saying let’s all go back to the villages and caves, but instead, why not have a conversation (one that is not hysterical but also not dismissive) about these issues around data, expectations of privacy and transparency?
When Facebook released its Home app, I was unsettled by it, mostly because it took away any notion of privacy. That post might just as well have been about Google or Amazon or Apple. Data from GPS sensors is enough to quickly deduce your home location, work location and sleep patterns. Add data from the Waze app or Google Maps, and Google can figure out what route you take. From accelerometer and gyroscope data, a company can deduce some physical ailments. New sensor processors can add even more human-like abilities to our phones.
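To show just how little it takes, here is a minimal sketch (the data, grid size and nighttime window are my own illustrative assumptions, not any company’s actual method) that guesses a “home” location from a log of timestamped GPS fixes simply by finding the cell most often visited at night:

```python
from collections import Counter

def guess_home(fixes, night=(22, 6)):
    """Guess a 'home' location from (hour, lat, lon) GPS fixes.

    Rounds coordinates to roughly 1 km grid cells and returns the
    cell most often occupied during nighttime hours -- a crude
    stand-in for the clustering a real analytics pipeline would use.
    """
    start, end = night
    counts = Counter(
        (round(lat, 2), round(lon, 2))     # snap to a coarse grid cell
        for hour, lat, lon in fixes
        if hour >= start or hour < end     # keep nighttime fixes only
    )
    cell, _ = counts.most_common(1)[0]
    return cell

# A fabricated log: nights near one spot, daytime fixes elsewhere.
log = [(23, 37.7701, -122.4202), (2, 37.7698, -122.4199),
       (3, 37.7703, -122.4201), (14, 37.7899, -122.4001),
       (10, 37.7850, -122.4100)]
print(guess_home(log))  # -> (37.77, -122.42)
```

A real pipeline with weeks of data and proper clustering would do far better than this toy, which is exactly the point: if a dozen lines can find your bed, imagine what a data-science team can find.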
Look, I am actually delighted about the possibilities of what can happen with all that data and all those sensors. I can’t wait for the future of better medicine to arrive. I also can’t wait for Google cars to become commonplace. What I do care about is that all these changes are happening with nary a thought about their impact on our society. If we as an industry are change agents and want to talk about an age of abundance 50 years from now, we can’t ignore that the next 50 years might mean a tear in our social fabric.
It is important for us to talk about the societal impact of what Google is doing, or what Facebook can do with all that data. If it can influence emotions (for increased engagement), can it compromise the political process? What’s more, Facebook has built a facial recognition system that trumps the FBI’s — think about that for a minute.
Can we trust these Medici of modern times to regulate themselves and do the right thing? How long before the pressure of Wall Street and its incessant quarterly demands makes Facebook or Google go to unthinkable places? These are issues of our times — something I had initially discussed in my posts about data darwinism and its impact on society.
Automation Ahead
Automation of our society is going to cause displacement, no different from the mechanization of our society in the past. There were no protections then, but hopefully, a century later, we should be smarter about dealing with the pending change. People look at Uber and the issues around it as specific to a single company. That is not true — drones, driverless cars, dynamic pricing of vital services and privatization of vital civic services are all part of the change driven by automation and computer-driven efficiencies. Just as computers made corporations efficient — a euphemism for employing fewer people and making more money — our society is getting more “efficient,” thanks to the machines.
“There is an increasing realization of the pain brought about by all these changes, especially the number of industries being disrupted and the many jobs that have been lost and will never come back. We hope that, as in the past, new industries will give rise to exciting new jobs,” writes Irving Wladawsky-Berger, a veteran technologist who worked for IBM. “But no one knows for sure. It’s important that we collaborate across disciplines – technology, business, social sciences, humanities – to better understand and anticipate where the journey might take us.”
And that is exactly the problem — no one really wants to take that humanistic approach. The hybridization of man and machine has begun in earnest. Google, Facebook and Amazon know that and are quite far ahead of the rest of the world. Take Facebook, for instance, which knows how to manipulate our emotions based on the information it surfaces for us. Or how about Amazon’s future ability to predict our commercial needs?
What about those new voice-processing chips inside our smartphones that will constantly listen to what is happening in the world around us and help create magical experiences for us? What about the data collected from the other sensors on our phones? What are the rules around the privacy of that information? Who is making those rules?
John Foreman, a data scientist at MailChimp, in an eloquent essay, pointed out that “humans are bad at discerning the value of their data” and that the “personal data just appears out of nowhere, exhaust out of life’s tailpipe” and thus we are willing to trade it for something that seems less valuable. Foreman’s argument points out the futility — we are trading our freedoms in the data age for some minor gains.
In March 2013, in his keynote at Gigaom’s Structure Data conference, Quid’s Gourley estimated that it costs Facebook $1.20 a year to generate over $6 per year in revenues. We are willing to trade our data for less than what it costs to get a cup of coffee at Starbucks. “Our past data betrays our future actions, and rather than put us in a police state, corporations have realized that if they say just the right thing, we’ll put the chains on ourselves,” Foreman writes. “In the hands of machine learning models, we become nothing more than a ball of probabilistic mechanisms to be manipulated with carefully designed inputs that lead to anticipated output.”
Moral Imperative
Many of these technologies will indeed make it easier for us to live in the future, but what about the side effects and impacts of these technologies on our society, its fabric and the economy at large? It is rather irresponsible that we are not pushing back by asking tougher questions of the companies that are likely to dominate our future, because if we don’t, we will fail to have a proper public discourse, and will deserve the bleak future we fear the most.
The sad part is that the legislators and judiciary bodies of our nations are woefully under-equipped to deal with the monumental change that we as a society are experiencing. In a way, I feel, Silicon Valley and the companies that control the future need to step back, become accountable to themselves, and develop a moral imperative. My good friend Reilly Brennan, a Stanford d.school professor, points out that it is all about consumer trust. The concept of Waze working with municipal groups should in theory be a good thing, but we are all highly skeptical and suspicious of the motives of data collectors.
Like I said, a lack of clarity around data intentions is to blame. And the only way I see to overcome that challenge is if companies themselves come up with a clear, coherent and transparent approach to data. Instead of arcane Terms of Service, we need plain and simple Terms of Trust. To paraphrase Peter “Spiderman” Parker’s Uncle Ben: with big data comes big responsibility. The question is, will the gatekeepers of the future rise to the challenge?
It sounds like at some point the idea of open data might be opposed by ‘some’, I mean government. Imagine a rogue government developing software that hacks, scans and analyzes your security system and acts accordingly?
One very fine article, sir. Have you ever considered that one of the underlying problems in the way we use data is that we have no real idea of who we actually are as human beings? I suggest you read a short little book by the philosopher Alan Watts called The Wisdom of Insecurity.
http://www.amazon.com/Database-Nation-Death-Privacy-Century/dp/0596001053 – a book from 14 years ago that wrote about most of this. We ignored it then. We will ignore it now.
Om for President!
goooooooooooooooooooooooood
“The question is will the gatekeepers of the future rise to the challenge?” Only if the users themselves find the wherewithal to rise to the challenge. For they guard one side of the gate and access by “big data”.
I have written about the dangers of the data being gathered and stored and the potential implications of the ASSUMPTIONS that can be made based upon it since at least 2008. Few are taking this nearly as seriously as they should.
Beyond your suggestion that an insurance company could raise your rates or refuse to insure you because cameras indicate you drove faster than the posted limit, imagine the uproar if a cash-strapped government decides to start sending retroactive speeding tickets with fines of however much it chooses. There is nothing to stop that from happening now, as far as I know.
All this data will create a society that pushes people into a black market against their will. It can be used to prevent you from getting a job, renting a home, obtaining health care so you will have no choice but to operate outside the system. Unfortunately, “we the people” no longer have any say in anything as money has corrupted everything. The only exception is possibly very locally – which is why the U.S. was designed as it was – to prevent what is happening right now.
Some will “opt-out” and leave technology behind to live in very remote rural areas where they can still ALMOST have privacy. I miss the days when you could use your backyard swimming pool or jacuzzi and not have to worry about your photo being plastered online for anyone to see. Today, you can never know if there is a drone, or someone with a camera phone or a satellite looking down snapping your photo. Perhaps that was always a risk – but less so when images could not be uploaded and shared where anyone can find them.
Opting out will eventually mean not being able to buy anything. It already means not traveling. Did you know Walmart is testing cashless stores? In them, you cannot pay with cash. One day there may be almost nowhere you can pay with cash. Those who want to opt out had better be planning ahead. Some will live the simple life, where the only necessities are access to clean water, locally grown food and shelter.
“And the only way I see to overcome that challenge is if companies themselves come up with a clear, coherent and transparent approach to data.”
That is a big ask. If it does happen it will be on their terms with the foxes minding the chickens. Politicians and governments are decades behind the curve. Maybe the EU will get on top of it with extensions to their Data Protection laws but so far it hasn’t in any meaningful way. A simple law has to be passed to state that personal data belongs to the person and only that person can give permission to use it. With such a law enacted, a new technology industry would grow and prosper.
It feels like we are slowly developing a mindset to protect our personal data, and in maybe the next few years some new startup, or one of the existing ones, will discover a way to protect our data and will charge us for the service. We will be happy to grow their business in order to save the most important part of our lives.
Thx Om, always a pleasure!
You ask: “What are the rules around the privacy of that information? Who is making those rules?”
I think that, as shown by Facebook multiple times, the companies will inexorably abuse this information and harm will be done. It is in our nature to abuse others until we are stopped by our environment or society.
Unfortunately as a society we will react only after the fact, just like when we go to the doctor when we are already sick instead of going for regular preventive care.
In the mean time new startups that can disrupt the original disruptors (Google, Facebook, Amazon, Apple, etc.) must be supported in their free enterprises by government and us as consumers.
I think that we have to become antifragile, the title of a book. Big data is not going to break us any more than did the Civil War cannon that was said to control the weather. It’s all chaos theory from now on, and upon entering the zettabyte age, no one can even translate it anyway. Let’s agree on something we can control, like the weather?
I’ve been struggling with this issue for a while, because of the NSA surveillance, and the TSA. That’s where my data might go that might be the most harmful. There’s no denying the convenience of Global Entry when I travel, however, so I’ve just traded my fingerprints and retina to the TSA. Now what does it matter that I’ve traded Waze my driver data or someone else my health data.
Caveat: I am not a job seeker and I don’t live in China. If I were, or if I did, I bet I’d have completely different opinions on this. The greatest point you make in this thoughtful piece is that we have to have a discussion.
We have the capacity now for mutually assured destruction without an external enemy. And because you’ve been around the Valley as long as I have you know that things have really changed. The threats are much greater now.
Only eleven comments? I can’t believe it.
Om, this is such an important and under-appreciated subject. Big Data [usage and policy] is probably neck-n-neck with climate change as being the one of the biggest challenges of our time.
The thought that I could look up “bankruptcy” (or comment somewhere using the word lol), and that could have an impact on interest rates offered to me is HUGE.
I have become increasingly aware of how what I do online profiles me, and it really seems counterproductive to the utopian aspects of the Web. But of course, I benefit from a lot of the profiling.
I made an audio version of this post (kind of funny me thinks):
All I can say is this: as a computer scientist, I know AI is a world away, and that keeps people feeling safe — “the computers can’t think yet, so what if they know everything about us.” Well, as this essay clearly shows, the computers aren’t the issue but rather what the humans are going to do with the data, and they definitely can think! I personally believe it’s time we had machine laws that govern the conduct and use of all machines, and then a subset of those laws centered on what is allowable programming, application and data use. This is now overdue. Imagine if a serial killer could program Candy Crush Saga. Or even worse, imagine if terrorists got hold of Facebook, Google and Apple access codes. How would we stop them, or would we even know?
I do not believe the responsibility of Big Data, Robots, 3D, etc. will ever capture common sense. It is in the spirit of the human.
Great article; it really destroys any illusion of privacy we have in this age. If one day robots can think and act like humans, would they be able to feel too?
Reblogged this on AB's Reflections and commented:
Of course the smart governments can have a field day with all the data being collected, and would be more reluctant than ever that one of the essential services get blocked by other states…
It’s fascinating how we’ve been conditioned to immediately place the fear of ‘big government’ at the top of the list of horrors. I at least have SOME voice in government. I have none in corporations and the role they’ve taken in manipulating government.
Could not agree more.
It is funny how anarchy becomes instantly acceptable by just getting online.
To change that, the internet has to rebuild itself.
Undoubtedly it has the power to improve our life experience; only, in anarchy it is on people to take responsibility and initiative.
Sharing for exchange of value or common good is great, but the moment this data collection is linked to a person, this is stalking.
Entrance to the internet should be the opposite of now: private first, shared by choice.
The beautiful thing is, we can do it right now.
2 points…
1) The claim that ‘no personally identifying information is collected’ is meaningless. Add the ultra-precise predictive capabilities of big data to the very personal info (location, local search history, etc.) that they DO have, and the distinction becomes meaningless.
2) I remember 40 years ago when the big debate was about how we’d deal with all the leisure time we’d all have resulting from automation. The real result? As you note, those efficiency gains have largely gone to the 1%. And the need for 2 worker families and 24/7 work tethering is producing the opposite of a leisure ‘problem’.
My own thought is that, just as the costs of economic ‘externalities’ need to be included into the up-front price of all products, the gains from automation need to be allocated to those displaced, up front as well.
As a science fiction writer and, more generally, someone who studies how it is that major transformations happen in human society, it is interesting to think about what our concept of ‘privacy’ will look like once the Data Age has fully emerged. Based on history, it will surely not be the same as it is in America or Western Europe at the moment. Whatever it is, it will not be either the utopia or dystopia that we see right now, but a shift in focus. I suspect that the word ‘privacy’ will remain; it will just have a different meaning in the future.