100+ CEOs and scientists, including ChatGPT boss Sam Altman, warn of ‘risk of extinction’ from future AI systems

Photo credit: Andrea De Santis

Even as many industries — including the music business — explore AI in the hopes of finding efficiencies and new abilities, concerns about future applications of the technology continue to grow.

In the latest example, more than 100 AI experts and high-tech entrepreneurs have signed a one-sentence statement warning about the potential dangers of artificial intelligence.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement issued on Tuesday (May 30) by the Center for AI Safety (CAIS).

Among the notable people who put their signature to the statement were the CEOs of major AI labs, including Sam Altman, the CEO of ChatGPT developer OpenAI; Demis Hassabis, the CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic. Musician Grimes is also a signatory.

Altman recently testified before the US Congress, arguing for regulation of AI technology, including the licensing of AI developers.

Other signatories include Geoffrey Hinton and Yoshua Bengio, two of the three people referred to as the “godfathers of AI.” Along with Yann LeCun, they won the 2018 Turing Award for their work on machine learning.

LeCun, who works at Facebook owner Meta, didn’t sign the letter, and a press release from CAIS singled out Meta for its absence.

Hinton recently caught the public’s attention when he resigned from his position at Google in order to focus his efforts on warning the public about the dangers of AI. Hinton told media he now regrets his work in the AI field.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

The CAIS statement follows an earlier open letter, signed by CEOs and AI experts including Tesla CEO and Twitter owner Elon Musk, that called on AI labs to “immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter, issued in March, added: “This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”


In a press release accompanying Tuesday’s statement, CAIS drew a parallel between the creation of large language model AI systems and the development of the nuclear bomb in the 1940s, and suggested that, just as the atomic bomb was accompanied by serious debate and discussion about containing its risks, so too should the development of AI be accompanied by serious debate about its potential impacts.

“We need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb,” said Dan Hendrycks, CAIS Director.

Over the past few years – and especially in the past six months – generative AI has taken rapid hold in various industries, not least in the music business.

AI music creation sites like Boomy have generated millions of tracks, and some executives worry about the flood of music – some of it AI-generated – that is making its way onto streaming platforms. At last count, an estimated 120,000 new tracks were being uploaded to music streaming services every day.


In the broader public sphere, attention on generative AI has focused on chatbots such as ChatGPT, which appeared on the scene at the end of 2022 and reached 100 million active users within a few months.


Some have expressed concern about the potential impact on the workforce. One report from investment bank Goldman Sachs estimated that the equivalent of 300 million full-time jobs could be eliminated by large language model AI tech.

However, as chatbots become more ubiquitous, users are finding flaws in them that raise questions about just how “intelligent” – and how useful – these apps will really prove to be in the longer run.

In a recent Twitter thread, Calvin Howell, a professor at Duke University in North Carolina, described asking his students to use ChatGPT to write an essay, and then to grade that essay, looking out for false information and other problems.

“All 63 essays had hallucinated information. Fake quotes, fake sources, or real sources misunderstood and mischaracterized,” Howell wrote. “Every single assignment. I was stunned — I figured the rate would be high, but not that high.

“The biggest takeaway from this was that the students all learned that it isn’t fully reliable. Before doing it, many of them were under the impression it was always right.”

In another instance, a lawyer arguing before a court in New York was forced to admit he used ChatGPT to write his briefs, after it was discovered that the chatbot invented case precedents out of thin air.

Lawyer Steven Schwartz told the court that he had never used ChatGPT for legal research prior to the case, and “was unaware of the possibility that its content could be false”.


The revelation that ChatGPT is capable of fabricating information echoes a warning from Noam Chomsky, the famed linguistics professor, who argued in a New York Times essay this spring that large language model apps like ChatGPT are likely to prove defective by their very nature.

“However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects,” Chomsky wrote with co-authors Ian Roberts and Jeffrey Watumull.

“The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.”

Chomsky and his co-authors added: “ChatGPT and similar programs are, by design, unlimited in what they can ‘learn’ (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.”

Music Business Worldwide
