When I think about how to address the potential risks of advanced technology, I see that China is now in a much better position to regulate, because regulation there is much stronger in general. If a really dangerous situation arises, they can stop it. In the US, by contrast, almost everyone may agree that something is going to be bad, yet it could still take decades to stop the process. This inability to contain such externalities effectively is further evidence of a weakness of capitalism.
— Modern economists debate a great deal about different systems. For example, the term "state capitalism" was coined for China as opposed to traditional market capitalism. But if capitalism has its challenges, and the planned Soviet economy, as we know for certain, does not work, then what system, in your opinion, would be best for a country, or even for coordination at the global level?
— I try not to hold very strong views. As a technologist, I'm fascinated by blockchain. In this framing, the most important thing blockchain has brought to the world is the ability to agree globally on a piece of data without trusting any single party to maintain it. I have run a couple of workshops with AI safety and blockchain people to explore whether there are positive use cases, now possible to implement, that would let people coordinate at the lowest cost in terms of how much trust they need to invest in the system.
In general, I believe that in global governance, concepts such as cooperation and transparency of processes are important, and blockchain, at least at some level and from some angle, brings these two things together.
But I don't really hold any strong opinion about a particular system, I just think it's important for us to increase our ability to coordinate.
— Do you feel that there is still no well-structured idea for a global governance mechanism?
— The world has never been coordinated globally. There are sometimes serious "tragedy of the commons" situations between states.
For example, take the arms race. The reason you are in an arms race is that other states are engaged in one, so it is a vicious circle. That is why international treaties are so important: they limit the dynamics that push you toward competition. This is the classic "prisoner's dilemma" or "tragedy of the commons".
It is in everyone's common interest that no one invest, for example, in ever more powerful weapons, but at the same time it is even more profitable for one group to be the only one that does. This means you are in a situation that either has no stable Nash equilibrium or has a bad one.
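The arms-race logic described here can be made concrete as a 2x2 game. The payoff numbers below are purely illustrative assumptions (not from the interview), chosen so that mutual disarmament is best collectively, while arming is each state's best reply no matter what the other does:

```python
from itertools import product

STRATEGIES = ["disarm", "arm"]

# PAYOFF[(a, b)] = (payoff to state A, payoff to state B).
# Illustrative values only: mutual restraint beats a mutual arms race,
# but being the sole armed state is most profitable of all.
PAYOFF = {
    ("disarm", "disarm"): (3, 3),  # mutual restraint: best shared outcome
    ("disarm", "arm"):    (0, 4),  # only B arms: B dominates
    ("arm",    "disarm"): (4, 0),  # only A arms: A dominates
    ("arm",    "arm"):    (1, 1),  # mutual arms race: wasteful
}

def is_nash(a, b):
    """Neither state can gain by unilaterally switching strategies."""
    pa, pb = PAYOFF[(a, b)]
    best_a = all(PAYOFF[(alt, b)][0] <= pa for alt in STRATEGIES)
    best_b = all(PAYOFF[(a, alt)][1] <= pb for alt in STRATEGIES)
    return best_a and best_b

equilibria = [cell for cell in product(STRATEGIES, STRATEGIES) if is_nash(*cell)]
print(equilibria)  # [('arm', 'arm')] -- the "bad" Nash equilibrium
```

The only equilibrium is the mutual arms race, even though both states would prefer mutual disarmament; that is exactly the "bad Nash equilibrium" a treaty tries to escape.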
— You had an idea about the Global Preference Discovery System.
— I used this as a working term in some preliminary work that I did. Right now, governance in a democracy, for example, conflates two different concerns: one is figuring out what people want from the future, and the other is how to get it.
Politicians usually say they know how to solve both: we want X and we are going to do Y to achieve it, so vote for us. But I think it would be useful to have a system in which people vote on what they think a good future looks like, so that X is determined independently. The easiest way to do this is a system of regular polls, for example, randomly asking people about their lives.
If we had sufficient randomized information about how people around the world are feeling right now, we could build a World Wellness Index and use it to make predictions. For example, what would happen to this global index if the US completely opened its borders? This could be an interesting tool for policy debates and regulation, one that decouples the question of what we want from the future from the deliberation about how to get to that bright future.
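A minimal sketch of how such an index might be aggregated, under my own assumptions (a 0-10 self-reported well-being scale and simple random sampling; the interview specifies neither): poll a random subset of the population and report the mean with a rough confidence interval.

```python
import random
import statistics

def poll_sample(population, k, seed=0):
    """Randomly poll k people; each response is a 0-10 well-being score."""
    rng = random.Random(seed)
    return rng.sample(population, k)

def wellness_index(responses):
    """Index = mean score, plus a rough 95% confidence interval."""
    mean = statistics.fmean(responses)
    sem = statistics.stdev(responses) / len(responses) ** 0.5
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# Toy "world": 100,000 simulated self-reported scores, clipped to 0-10.
rng = random.Random(42)
world = [min(10.0, max(0.0, rng.gauss(6.0, 2.0))) for _ in range(100_000)]

index, (lo, hi) = wellness_index(poll_sample(world, k=1_000, seed=1))
print(f"index={index:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

A real system would of course need stratified sampling and the anti-manipulation safeguards discussed below, but this shows how a small random sample can estimate a global sentiment figure with quantified uncertainty.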
— I understand the mechanism, though, to be honest, this is the second time I have been deeply impressed by the fact that these issues are discussed in a closed technology community, as well as by the way they are discussed. And I am a Ukrainian economist; it is not easy to impress me at all.
— Recently in the rationalist community, Scott Alexander, a very well-known figure in the community, and Glen Weyl, a researcher from Microsoft, had a super interesting and friendly debate on the issue you are talking about. Glen Weyl said: yes, I am a mechanism designer, but I do not trust mechanisms, because at the end of the day you can shoot yourself in the foot with them. Scott Alexander's response, perhaps overly critical but still very good, was that this doesn't mean we should always rely on our gut feeling instead.
At the end of the day, it is a spectrum of how much you trust people versus how much you trust mechanisms. If you only trust people, then you still need some constraints that guide them, such as the rule of law. And that is already a mechanism; democracy is already a mechanism. It's important to strike a balance.
— The concern is that a very small group of technology specialists is designing a mechanism for answering such important questions as what universal human values are.
— I don't think I should oversell this thing; it was just an example of a mechanism that the world doesn't have yet, but could have, and I think it could be very useful.
When I talk about a mechanism for preference discovery, I want to point out that we need mechanisms that give people more voice in outcomes, and I am open to any ideas. The current tools we have are several hundred years old and were invented and put in place when horses were still our main means of transportation. We could do better at giving people more opportunities to express their opinions, but at the same time I understand very well the dangers associated with such a system: exaggeration, delusion, or manipulation that makes it appear that people want, for example, to maximize the profits of one particular company.
One can look at the example of Amazon's review and preference discovery system. In theory, the system is supposed to discover preferences from the information entered, but in practice it gets gamed all the time. It is very important to verify that the system is robust and resilient enough to actually do what it was designed to do.
— Thank you for reassuring me a bit that you are aware of this risk. It was also fascinating to learn about the AI alignment movement. How would you describe the goals of the movement for the general public?
— There are several ways to look at it. If we think of AIs as machines to which we delegate our human decisions, then we need to ensure that our ideas about what a good future looks like are fully transferred to them.
What is really important to realize is that AIs are more alien than aliens. They have no biological background: they did not evolve biologically, and they have no concern for their environment.
We shouldn't underestimate the difficulty of transferring human values to AIs, because AIs are as autistic as one can get. People are prone to think that AIs are basically human, and that as they get smarter they will become even more human-like. No, they will not.
— Why are you so sure about it?
— There are very good arguments for it: for example, humans were shaped by biological evolution in a social context, in groups. There is a great book on this called "Moral Tribes: Emotion, Reason, and the Gap Between Us and Them" by Joshua Greene from Harvard.
Greene looks at how human morality developed, and it was indeed in small tribes of 50 to 100 people. Morality was something that appeared automatically because it helped those tribes become more competitive: people looked after each other and behaved altruistically. At the individual level, altruism involves self-sacrifice, but at the tribal level it makes the group more competitive.