
In a recent development, the Center for AI Safety, a nonprofit organization dedicated to reducing societal-scale risks from artificial intelligence, has published an open letter. Signed by numerous prominent figures in the AI community, the statement calls for the risks posed by AI to be treated as a global priority.

Signatories of the Open Letter

Among the key signatories is Sam Altman, CEO of OpenAI, the AI firm behind ChatGPT. Another is Demis Hassabis, head of Google's AI lab, Google DeepMind. The appeal has also been backed by leaders from companies such as Microsoft, Inflection AI, Stability AI, Skype, and Notion, alongside academics, professors, and researchers.

The statement is succinct yet powerful:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Concerns Over Future AI Developments

While current AI systems are mainly focused on text generation (ChatGPT), image generation (Stable Diffusion), and seemingly innocuous tasks such as speech recognition (Whisper), future AI developments might pose substantial risks to humanity.

Back in March, more than 1,000 researchers and technology leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging governments to address the risks of AI systems approaching human-level intelligence.