AI giants support warning about advanced AI as an ‘extinction’ risk

AI Scientists and Tech CEOs Urge Focus on Existential AI Risk

A group of AI scientists, academics, tech CEOs, and public figures has signed a statement hosted on the website of the Center for AI Safety, a California-based non-profit, urging global attention to the existential risks posed by AI. The signatories, who include OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and Skype co-founder Jaan Tallinn, among others, place AI risk alongside societal-scale disasters such as nuclear war and ask policymakers to mitigate doomsday, extinction-level AI risk.

Section 1: What is the Statement?

The statement seeks global attention on existential AI risk, urging that mitigating the risk of extinction from AI be treated as a global priority alongside societal-scale risks such as nuclear war. The signatories want to keep their message about AI’s most profound risks from being drowned out by discussion of other important AI risks.

Section 2: AI Risk and Public Opinion

Expanded access to generative AI tools has produced a recent surge in AI hype, along with a flurry of discussion about the risk of superintelligent killer AIs. So far these remain theoretical risks: the systems in question do not yet exist. Meanwhile, hysterical headlines have overshadowed deeper scrutiny of existing harms, such as the use of copyrighted data, privacy violations from the scraping of personal data online, and the lack of transparency about the data AI giants use to train these tools.

Section 3: The Motivation Behind Focusing on Existential AI Risk

AI giants, including OpenAI, DeepMind, Stability AI, and Anthropic, have an interest in steering attention away from fundamental questions of competition and antitrust. Their sudden keen interest in existential AI risk is convenient because such risks sit furthest in a theoretical future, drawing regulators’ eyes away from the problems surrounding AI that we see today. Rather than calling for a development pause, the latest statement’s goal is to lobby policymakers for effective mitigation.

Conclusion

Many AI industry players are aware of AI’s potential to cause profound harm. The signatories seek to address the existential risks posed by AI and call for global attention to this issue. However, policymakers must also weigh the other significant risks associated with AI, including misinformation, systemic bias, malicious use, cyberattacks, and weaponization. From a risk-management perspective, we must address not only hypothetical future harms but the AI industry’s current problems as well.

FAQs

1. What is the Center for AI Safety?
The Center for AI Safety (CAIS) is a San Francisco-based, privately funded non-profit that funds research, encourages field-building, and advocates for policy changes to reduce societal-scale risks from AI.

2. Who are the signatories to the statement?
Signatories include OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Skype co-founder Jaan Tallinn, AI pioneer Geoffrey Hinton, MIT’s Max Tegmark, and musician Grimes, among others.

3. What motivated the signatories to focus on existential AI risk?
AI giants want to redirect regulators’ attention away from fundamental questions of competition and antitrust and toward speculative future risks such as doomsday, extinction-level AI.

4. What other significant risks are associated with AI?
Beyond existential risk, policymakers also need to consider other significant risks associated with AI, including misinformation, systemic bias, malicious use, cyberattacks, and weaponization.
