I want to return to the idea that by sharing AI, we might not suffer the worst of its negative consequences. Isn’t there a risk that by making it more available, you’ll be increasing the potential dangers?
Altman: I wish I could count the hours that I have spent with Elon debating this topic and with others as well and I am still not a hundred percent certain. You can never be a hundred percent certain, right? But play out the different scenarios. Security through secrecy on technology has just not worked very often. If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who? There are lots of bad humans in the world and yet humanity has continued to thrive. However, what would happen if one of those humans were a billion times more powerful than another human?
Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.
Steven Levy
There’s certainly some validity to the idea that dangerous technology should be developed under constant public scrutiny, but that’s where the real problems start. First, we have to assume the public can understand the issues and has the power to prevent hazardous developments, yet on a subject as complex as artificial intelligence I doubt anyone beyond a handful of experts can grasp the full implications. Scientists have been warning about the negative effects of climate change for decades, and political institutions have only just started to act; a response that delayed would prove ineffectual against the rapidly evolving algorithms that will precede true AI.
Secondly, how exactly will OpenAI empower people to have AI? If we draw an analogy with the development of computers, the first machines were huge mainframes operated by dozens of people to perform simple calculations. It seems safe to assume AI will start in a similar manner, requiring far more computing power than a single personal computer can supply. With the hardware centralized under the control of large corporations and governments, I don’t see how the general public could tap into ‘AI power’. Unless it’s distributed somehow, like the Internet; but in that case it will probably be autonomous from the start, and the race to control AI will already be lost. In the end, I think this initiative will only serve to accelerate ‘closed’ AI research, whose practitioners will be able to draw on the results OpenAI shares in the open while keeping their own work secret.
But I worry it’s worse than either of those two things. I got a chance to talk to some people involved in the field, and the impression I came away with was of a competition heating up. Various teams led by various Dr. Amorals are rushing forward more quickly and determinedly than anyone expected at this stage, so much so that it’s unclear how any Dr. Good could expect both to match their pace and to remain as careful as the situation demands. There was always a lurking fear that this would happen. I guess I hoped that everyone involved was smart enough to be good cooperators. I guess I was wrong. Instead we’ve reverted to type and ended up in the classic situation where the competition for speed is so intense that every other value gets thrown under the bus just to avoid being overtaken.
In this context, the OpenAI project seems more like an act of desperation: Dr. Good needing some kind of high-risk, high-reward strategy to push himself ahead and allow at least some amount of safety research to take place. Maybe getting the cooperation of the academic and open-source community will do that. I won’t question the decisions of people smarter and better informed than I am if that’s how their strategy talks worked out. I guess I just have to hope that the OpenAI leaders know what they’re doing, don’t skimp on safety research, and have a process for deciding which results shouldn’t be shared too quickly. But I’m terrified that it’s come to this. It suggests that we really and truly do not have what it takes, that we’re just going to blunder our way into extinction because cooperation problems are too hard for us.
Scott S Alexander