Between 2015 and 2017, I spent a considerable amount of time around the campuses of Google and other tech giants in Palo Alto, Mountain View, and Campbell, California.
I spoke with dozens of engineers at companies involved in the early stages of artificial intelligence development.
I can distinctly remember speaking with the CEO of a small Israeli AI start-up who told me he was already working on technology that could be frightening if it was not developed carefully, with safeguards kept in place.
Fast forward to 2023, and an increasing number of people in the tech sector are sounding the alarm that the technology is growing out of control and, if it succeeds in becoming fully autonomous, could usher in a Terminator-type scenario for humanity.
AI theorist and provocateur Eliezer Yudkowsky has made a bold prediction that artificial intelligence will inevitably lead to the demise of humanity, Knewz has learned.
Yudkowsky has long been a staunch believer in the “AI apocalypse” scenario. His views have gained wider acceptance in recent years as advancements in AI technology have accelerated, prompting even the most prominent computer scientists to question the potential consequences.
Yudkowsky is particularly concerned about the rapidly evolving capability of large language models, such as OpenAI’s ChatGPT. He views these models as a significant threat, capable of “surpassing human intelligence” and potentially causing “irreparable harm.”
“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” Yudkowsky said on an episode of the Bloomberg series AI IRL.
“The state of affairs is that we approximately have no idea what’s going on in GPT-4,” he continued.
“We have theories but no ability to actually look at the enormous matrices of fractional numbers being multiplied and added in there, and what those numbers mean.”
Many other tech leaders and experts in the AI industry have expressed similar concerns, some even advocating a temporary halt on developing systems more capable than GPT-4.
One industry figure who exemplifies this stance is OpenAI CEO Sam Altman.
In a form of hypocrisy, however, even as he voices concerns about the potential destructive power of AI, Altman’s company has accepted billions of dollars in funding from Microsoft.
Others in the AI space believe the fears of AI are exaggerated and unwarranted.
The Guardian reports:
The man once described as the father of artificial intelligence is breaking ranks with many of his contemporaries who are fearful of the AI arms race, saying what is coming is inevitable and we should learn to embrace it.
Prof Jürgen Schmidhuber’s work on neural networks in the 1990s was developed into language-processing models that went on to be used in technologies such as Google Translate and Apple’s Siri. The New York Times in 2016 said when AI matures it might call Schmidhuber “Dad”.
That maturity has arrived, and while some AI pioneers are looking upon their creations in horror – calling for a handbrake on the acceleration and proliferation of the technology – Schmidhuber says those calls are misguided.
Sasha Luccioni, a researcher at the AI startup Hugging Face, insists that the AI ‘doomsday’ remarks are “dangerous” and a distraction from AI’s more immediate consequences such as plagiarism, displacement of human workers, and its effect on the environment.
“Companies who are adding fuel to the fire are using this as a way to duck out of their responsibility,” Luccioni told Bloomberg. “If we’re talking about existential risks, we’re not looking at accountability.”
Milton Ezrati, writing in Forbes, penned a piece titled “Fears Of AI Are Greatly Exaggerated”:
“All this recent anxiety echoes past reactions to earlier innovations, whether spinning and weaving machines in the late 1700s or railroads or telephones or computers or a long list of other innovations. But for all the fear expressed at each stage in this process, the innovations, as they destroyed some jobs, helped create as many or more new jobs too.”
“And because the innovations have expanded productive capacities, these disruptive transitions have always occurred amid a greater material abundance than previously,” he added.
As Knewz previously reported, Elon Musk, the CEO of Tesla and SpaceX and owner of Twitter, announced the debut of his new AI company, xAI, to take on “woke” alternatives.
Musk is positioning xAI to be a competitor to established companies like OpenAI, Google, and Anthropic, which are responsible for leading chatbots like ChatGPT, Bard, and Claude.
During an interview with Fox News, Musk shared details of his plans for a new AI tool called “TruthGPT,” adding that he feared existing AI companies are prioritizing systems that are “politically correct.”