Last night I heard Yuval Noah Harari, the author of Sapiens: A Brief History of Humankind, talk about his new book Homo Deus: A Brief History of Tomorrow. It’s his attempt to answer the question “What next?”, which is the question he is asked most often by people who have read his first book.
I’ve summarised his core argument in a blog posted on the BMJ website: http://blogs.bmj.com/bmj/2016/09/07/richard-smith-how-humans-might-divide-into-a-superclass-and-a-useless-class/
In essence he thinks that a superclass of humans may, with the assistance of artificial intelligence, become gods (achieving immortality, creating life). Meanwhile, because machines will take over everything, the rest of us will become a “useless class.”
But here are other ideas that Harari shared in 90 interesting minutes.
Philosophical problems become practical problems
In the age of machines philosophical problems will turn into practical problems. Consider the problem that philosophy students have discussed for years: you are driving a car that is about to crash into a car containing five people. If you continue they will die; if you swerve you will die.
Philosophy students do not agree on a solution, but it doesn’t matter, both because the problem is hypothetical and because, even if it were real, we know that what humans say they would do in theory has little relation to what they do in the real world.
But for engineers designing a car, or the machines that will design cars in the future, it’s a real problem. How should they program the car carrying the single driver? Let the market decide, joked Harari: in a cheaper car you’ll die; in an expensive car the five will die.
Elections don’t include what matters most
Elections don’t include the things that matter most but the things that people understand. The most important development of the past 20 years is the Internet. Many crucial decisions had to be made about access, security, privacy, governance, and many other issues. These decisions were made by small groups of people. “I never had to vote about the Internet,” said Harari.
The election in the US is about issues like immigration that will have nothing like the impact of machines replacing people.
We are selling our personal information for baubles as Native Americans sold Manhattan for beads
Our most valuable asset is our personal information, but we give it away to Facebook and Google in exchange for being able to post pictures of our holidays and look at videos of cats. We are like the Native Americans who sold Manhattan for beads.
Making decisions: moving from feelings to algorithms
The core message from Harari’s first book was that one group of apes, Homo sapiens, managed to dominate other species by evolving to work together in large groups. And sharing fictions was what allowed humans to come together in groups larger than the 150 who could know each other. Those fictions might be religions, ideologies like nationalism, or the rule of law.
Liberal humanism is now the dominant ideology or fiction in much of the world, and so decisions are made based on feelings. British people voted for Brexit based on feelings. The next American president will be elected based on feelings. In economics the customer is always right, so if customers feel that they want to smoke cigarettes and drink sugary drinks then they will. Ethical and aesthetic questions are also decided by feelings.
Better, said Harari, to base decisions on feelings than on the Bible. The Bible was put together centuries ago by a small group of men, while our feelings are an (albeit inefficient) accumulation of ideas, thoughts, and feelings of millions over centuries.
Basing decision making on feelings becomes a problem when people’s feelings conflict, and anyway, argues Harari, modern science has shown that free will is a fairy story. People’s feelings are simply biochemical algorithms. Until now nobody could read those algorithms, but biology and computing are advancing to the point where we can. The machine will “understand me better than I understand myself.”
Electronic books read us better than we read them
Electronic books are already reading us better than we are reading them. Amazon knows what you are reading on your Kindle, when you start, when you stop, how fast you read, and what you highlight. Soon, with face recognition software, it will assess your emotional reaction to every sentence, or it may connect to a device that records your heart rate, breathing rate, blood pressure, sweating, and so on. We forget most of what we read, but Amazon never forgets.
Through such mechanisms algorithms and artificial intelligence will know much more about us, including our feelings, than we know about ourselves. So decisions will be based on algorithms, not our feelings.
Artificial intelligence as therapist
Surely, asked somebody in the audience, machines will never be able to replace therapists when treating people with, say, depression?
Harari thought it possible. Therapists take years to train but are probably not truly effective until they have treated many patients. Machines will be able to take in the knowledge behind treatments in seconds, and they can then be connected to information about thousands, even millions, of people who have been treated effectively.
“Emotional intelligence” can, to put it crudely, be faked. Harari is also convinced that machines don’t need to develop consciousness in order to treat patients, take over from human beings, or do most of what humans can do. Intelligence and consciousness are different constructs, and the intelligence of machines can far exceed that of humans. Consciousness is not needed to rule the world; intelligence is.
And a machine therapist will have the advantage that it is everywhere all the time and very cheap. Indeed, it will be able to know that you are having a “nervous breakdown” before you do.
It’s not hard, concluded Harari, to be better than human beings, but he conceded that whether machines will make better therapists than humans is an empirical question that should be tested in a trial.