Artificial intelligence: Experts are warning it may lead to human extinction

Artificial intelligence: Experts are warning that it could lead to human extinction, so should we be worried?
© Photo by ThisIsEngineering on Pexels.com

Artificial intelligence is one of the hottest topics in technology right now, but some experts warn that it could have catastrophic consequences for humankind. Here is what they say and how worried we should be.

Worries about artificial intelligence (AI) and how it will impact our future have made headlines in recent months. Some people have voiced fears, and rightly so, that AI-powered robots will take their jobs. Others support the technology, believing it will speed up innovation.


Even Elon Musk, the American entrepreneur who owns Tesla and Twitter, has publicly spoken out about the hazards of AI. It is worth mentioning that the billionaire’s complicated relationship with the technology began when he invested in AI start-ups such as OpenAI years ago, and that he is rumoured to be working on a rival to ChatGPT.


Elon Musk is also among those who signed a previous open letter on AI threats, which urged a six-month pause on all development to assess how beneficial the technology will be, not just for businesses and their profits but for humankind as a whole.

While the world is divided over whether AI will ultimately benefit or destroy humanity, here is what the experts actually say.


AI professionals sign an open letter with a terrifying warning

This week, dozens of AI industry leaders and public figures, including OpenAI's chief executive Sam Altman and Google DeepMind's chief executive Demis Hassabis, signed another open letter in a bid to raise the alarm about the consequences of rapid AI development.

Open letters have been signed before, but the latest one, published on the website of the Center for AI Safety, contains only a single sentence, and it delivers a terrifying warning:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Some of the most respected figures in technology believe that the imminent AI revolution could lead to irreversible changes and, ultimately, the destruction of humanity.


They urge policymakers to create laws and regulations to eliminate the dangers that come with the industry before it’s too late.



What exactly did the experts say about AI’s threat?

Dan Hendrycks, the executive director of the Center for AI Safety, warned of the risks of AI-driven systemic bias, misinformation, malicious use, cyberattacks, and weaponization, calling the situation surrounding it ‘reminiscent of atomic scientists issuing warnings about the very technologies they’ve created’.

Dr Geoffrey Hinton, who earlier quit Google over AI dangers and supported the open letter, warned that the technology ‘worked better than it was expected a few years ago' and could become ‘more intelligent than us’, potentially ‘taking control’.


Hinton, Yoshua Bengio, professor of computer science at the University of Montreal, and NYU professor Yann LeCun are often described as the ‘godfathers of AI’ for their groundbreaking work in the field. They jointly won the 2018 Turing Award for their outstanding contributions to computer science.


But while Hinton and Bengio voiced their concerns over the rapid advance of the technology, Prof LeCun, who still works at Meta, said the apocalyptic warnings were exaggerated.

Is there evidence AI is a threat?

Earlier this year, UK Prime Minister Rishi Sunak met OpenAI’s Sam Altman, Google DeepMind's chief executive Demis Hassabis and Anthropic's Dario Amodei to discuss ‘what are the guardrails that we need to put in place’ to regulate AI and keep people safe.


The PM said:

People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars. I want them to be reassured that the government is looking very carefully at this.

The G7 has also recently created a working group on AI.

While opinions are still divided over the benefits or harm of the technology, Cynthia Rudin, a computer science professor and AI researcher at Duke University, wondered whether governments really need more evidence of AI's potential harm before becoming more proactive.

She said:

Do we really need more evidence that AI’s negative impact could be as big as nuclear war?


Sources used:

- CNN Business: 'Experts are warning AI could lead to human extinction. Are we taking it seriously enough?'

- BBC News: 'Artificial intelligence could lead to extinction, experts warn'
