End of liberalism

3 minute read

I’m about to finish reading “Homo Deus” by Yuval Noah Harari. One of the ideas the author brings up is that, because of advances in computer algorithms, an AI agent will be able to represent our interests better than we can ourselves.

This is not an apocalyptic scenario. Algorithms won’t revolt and enslave us. Rather, they will be so good at making decisions for us that it would be madness not to follow their advice.

The book gives several examples of how state-of-the-art algorithms are capable of making better decisions. It even goes as far as proposing that Google make voting decisions instead of us, because it knows my “real feelings and interests”.

Of course, this is a very dangerous idea. But I did not find any indication that the statement was not a genuine opinion. People should not trust “algorithms” too much, because in the end it is other people who write the algorithms. A more elaborate take on this can be found in Cathy O’Neil’s “Weapons of Math Destruction”, which in my opinion exaggerates, but raises an important point for discussion. Here I give a short response to Harari’s thought that bothered me so much.

Google and other Internet giants probably have a lot of information about me. Maybe Google sometimes even knows my feelings and interests better than I do. But one should not forget that Google acts in the interest of Google, whereas I act in my own interest. If Google can vote instead of me, then my vote is cast to honor the “feelings and interests” of Google (or Sergey Brin), not mine.

History already knows examples of overly powerful corporations that did not stop themselves from using completely immoral means to get more profit. The United Fruit Company is a textbook example: large-scale bribery of foreign governments, involvement in the shooting of striking workers, and the organization of a private army to overthrow a legitimate government are not even the end of the list. If we voluntarily give control over our fate to a corporation, how naive is it to expect that it would not covertly try to gain the same power that others were killing for?

“Know thyself” is the principle Harari advocates. He builds a beautiful concept of a society with perfect knowledge, as seen from his ivory tower, but forgets the importance of “control thyself”. Knowing oneself is essentially a humanist idea. Ironically, right before “Homo Deus” I started reading the “Dialogues” by Hryhorii Skovoroda. Despite the identical formulation, in the Enlightenment era “knowing yourself” was inseparable from taking responsibility for one’s own actions, in contrast to relying on a machine with perfect rationality. Unfortunately, I had a hard time reading dialogues from the eighteenth century, since the arguments were heavily based on an Orthodox Christian worldview. Being an atheist meant that most of the arguments simply passed me by. But I cannot disagree with the conclusion.

The Enlightenment taught us that people are responsible for their own actions because they have “free will”. In the eighteenth century, free will originated from a God-given soul. As Harari correctly points out, however, modern science finds no part of the human body that could be responsible for making a human free. The soul is not measurable, and it is known for a fact that people do not have full conscious control over themselves. All of this has affected how people interpret “free will” nowadays, with some even saying there is no such thing at all.

If we finally accept that free will is a bogus concept, how would our society change? If people have no freedom over themselves, then maybe it is indeed up to Google to decide how we should live? Google has its own interests, and people have theirs; the two are not aligned, but Google is wiser and smarter. So maybe it is up to Google to decide? And is there still a place for democracy and liberalism?