Sam Altman’s Ouster Followed Dangerous AI Breakthrough Claim: Reuters


On Wednesday, Sam Altman was reinstated as CEO of OpenAI after 72 hours of turmoil. But even as the dust settles at the AI firm, a new report by Reuters suggests that the stated justification for Altman’s removal—that he was “not forthcoming”—may have to do with OpenAI reaching a major milestone in the push toward artificial general intelligence (AGI).

According to the news agency, sources familiar with the situation said researchers sent a letter to the OpenAI board of directors warning of a new AI discovery that could threaten humanity, which then prompted the board to remove Altman from his leadership position.

These unnamed sources told Reuters that OpenAI CTO Mira Murati told employees that the breakthrough, a project described as Q* (pronounced “Q-Star”), was the reason for the move against Altman. The decision was reportedly made without the participation of board chairman Greg Brockman, who resigned from OpenAI in protest.

The turmoil at OpenAI was framed as an ideological battle between those who wanted to accelerate AI development and those who wished to decelerate work in favor of more responsible, thoughtful progress—a camp colloquially known as “decels.” After the launch of GPT-4, several prominent tech industry figures signed an open letter demanding that OpenAI slow its development of future AI models.

But as Decrypt reported over the weekend, AI experts theorized that OpenAI researchers had hit a major milestone that could not be disclosed publicly, which forced a showdown between OpenAI’s nonprofit, humanist origins and its massively successful for-profit corporate future.

On Saturday, less than 24 hours after the coup, word began to spread that OpenAI was looking to arrange a deal to bring Altman back as hundreds of OpenAI employees threatened to quit. Competitors opened their arms and wallets to receive them.

OpenAI has not yet responded to Decrypt’s request for comment.

Artificial general intelligence refers to AI that can understand, learn, and apply its intelligence to solve any problem, much like a human being. AGI can generalize its learning and reasoning to various tasks, adapting to new situations and jobs it wasn’t explicitly programmed for.

Until recently, the idea of AGI (or the Singularity) was thought to be decades away, but with advances in AI, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Bard, experts believe we are years, not decades, away from the milestone.

“I would say now, three to eight years is my take, and the reason is partly that large language models like Meta’s Llama 2 and OpenAI’s GPT-4 help and are genuine progress,” SingularityNET CEO and AI pioneer Ben Goertzel previously told Decrypt. “These systems have greatly increased the enthusiasm of the world for AGI, so you’ll have more resources, both money and just human energy—more smart young people want to plunge into work and working on AGI.”

Edited by Ryan Ozawa.
