https://sputnikglobe.com/20230615/almost-half-of-ceos-think-ai-could-destroy-humanity-in-5-10-years---poll-1111184159.html
Almost Half of CEOs Think 'AI Could Destroy Humanity in 5-10 Years' - Poll
12:35 GMT 15.06.2023 (Updated: 13:12 GMT 24.06.2023) Top business leaders appear to share the concerns about artificial intelligence (AI) voiced by tech gurus and scholars. A growing number of technology leaders, including Tesla CEO Elon Musk, are unnerved by how smart AI tools are becoming and have warned of the potential dangers.
Forty-two percent of CEOs surveyed at the recent Yale CEO Summit are convinced that artificial intelligence (AI) could potentially destroy humanity as soon as five to ten years from now.
The poll was carried out at the semiannual event held by the Chief Executive Leadership Institute for business leaders, political leaders, and scholars. The results, shared by a US media report after the virtual meeting wrapped up, were described as "pretty dark and alarming" by Yale professor Jeffrey Sonnenfeld, who heads the institute.
A total of 119 CEOs responded to the survey, including Coca-Cola CEO James Quincey, Walmart CEO Doug McMillon, media CEOs, and leaders of IT companies such as Zoom and Xerox, along with the heads of pharmaceutical and manufacturing companies.
Of those questioned, 34% believed that the tremendous strides AI technology is making could result in it destroying humanity within ten years. A smaller share of respondents (8%) believed humankind could face such an existential threat, and lose, within the next five years.
At the same time, 58% of respondents said they were "not worried," believing such a scenario "could never happen," even though hundreds of artificial intelligence researchers and technology executives had recently signed a stark warning that AI carries a risk of human extinction.
In a separate question, 58% of the surveyed CEOs said the concerns regarding AI were not overstated, while 42% dismissed the warnings of a potential catastrophe linked to AI's advance as overstated.
Previously, AI industry leaders and scholars signed an open letter urging swift steps to mitigate the risks ostensibly linked to the technology. The letter was backed by some of the industry's biggest players, with signatories including OpenAI CEO and ChatGPT creator Sam Altman; Geoffrey Hinton, the "godfather of AI"; Dan Hendrycks, director of the Center for AI Safety; and top executives from Microsoft and Google.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," said the statement published on May 30.
Dan Hendrycks tweeted that the situation was "reminiscent of atomic scientists issuing warnings about the very technologies they've created."
That open letter was preceded by an April message, signed by Tesla CEO Elon Musk and a handful of other prominent figures in the field, advocating a pause in AI research.