Former senior OpenAI employee: AI has a real chance of causing mass human deaths

In recent months, the rapid rise of new artificial intelligence models such as OpenAI’s ChatGPT has led some technologists and researchers to wonder whether artificial intelligence will soon surpass human capabilities. A former key researcher at OpenAI says this future is highly likely, but he also warns that the chance of human-level or superhuman-level AI controlling or even eradicating humanity is not zero.

Paul Christiano, who formerly led the language model alignment team on OpenAI’s safety side, was interviewed by the tech podcast Bankless in late April. The most worrying scenario, he said, is “full-fledged artificial intelligence taking over humans”. In the interview, he warned that advanced artificial intelligence has a considerable chance of triggering a catastrophe that destroys the world in the not-too-distant future.

Christiano said: “Overall, once artificial intelligence systems reach human level, there is a more than 50% probability that we will face doomsday soon afterward. I think there is a 10% to 20% probability that artificial intelligence will take over and cause many or most humans to die.”

Christiano left OpenAI in 2021. He explained his departure in an “Ask Me Anything” thread on LessWrong, a community blog founded by artificial intelligence researcher Eliezer Yudkowsky, who has been warning for years that superhuman AI could destroy humanity. Christiano wrote at the time that he wanted to “study more conceptual/theoretical issues in AI alignment,” a subfield of AI safety research that seeks to ensure AI systems are aligned with human interests and ethical guidelines. OpenAI “isn’t the best” fit for research in this area, he said.

Christiano currently runs the Alignment Research Center, a nonprofit organization dedicated to researching theoretical AI alignment strategies. Research in this area has garnered much attention in recent months as companies race to roll out increasingly complex AI models. In March of this year, OpenAI released GPT-4, an update to the model that drives ChatGPT, which itself had only been released publicly in November 2022. Meanwhile, technology giants such as Google and Microsoft have entered an artificial intelligence arms race, releasing AI models that support commercial applications in hopes of securing a foothold in this emerging market.

But the artificial intelligence systems released to the public still produce many errors and much disinformation, so Christiano and many other experts warn against rushing. Elon Musk, an OpenAI co-founder who parted ways with the company in 2018, was among 1,100 technologists who in March signed an open letter calling for a six-month moratorium on the development of advanced AI models more powerful than GPT-4 and a refocusing of research on improving the reliability of existing systems. (Musk later announced that he would launch TruthGPT, a rival to ChatGPT, which he said would be dedicated to “seeking the truth,” not profit.)

One concern raised in the open letter is that existing AI models could pave the way for superintelligent models that threaten human civilization. While existing generative AI systems such as ChatGPT are capable of specific tasks, they are still far from human-level intelligence. A possible future AI with that level of general capability is called artificial general intelligence (AGI).

Experts disagree on the timeline for developing general artificial intelligence. Some believe it may take decades; others say it will never arrive. However, the rapid progress of artificial intelligence has begun to shift positions. According to a research report released by Stanford University in April this year, about 57% of the artificial intelligence and computer science researchers surveyed said that AI is rapidly moving toward general artificial intelligence, and 36% of respondents said that handing important decisions over to advanced AI could bring humanity “a catastrophe at the level of a nuclear explosion.”

Experts have also warned that even if a more powerful artificial intelligence is neutral when developed, it could become very dangerous in malicious hands. Geoffrey Hinton, a former Google researcher known as the “godfather of artificial intelligence,” told the New York Times this week: “It is difficult for us to find a way to prevent criminals from using artificial intelligence to do evil. I don’t think you should continue to scale up AI until you’ve determined that you can control it.”

Christiano said in the interview that if artificial intelligence develops to the point where human society cannot function normally without it, human civilization itself will be under threat: deprived of those services, humanity would become extremely vulnerable.

He said: “The most likely way for humanity to die out is not that AI suddenly kills everyone, but that we deploy AI so widely that it becomes ubiquitous. If for some reason these AI systems then try to kill humans, they will certainly be able to do it. Let’s just hope that doesn’t happen.”

However, some refute this interpretation of artificial intelligence. Certain experts believe that although AI designed to complete specific tasks will inevitably appear, the limitations of computers in interpreting lived experience make it technically infeasible to develop a general artificial intelligence that rivals humans.

Responding to the latest dire warnings about artificial intelligence, entrepreneur and computer scientist Perry Metzger tweeted in April that while “highly superhuman” AI might emerge, it could take years or even decades for general artificial intelligence to reach a level where it could defy its creators, who would in the meantime have time to tune the AI in the right direction. Replying to Metzger’s tweet, Yann LeCun, a computer scientist at New York University, wrote that the fatalistic scenario of AGI becoming dangerously, uncontrollably capable overnight is “fundamentally impossible.” LeCun has led Meta’s artificial intelligence research since 2013.
