Geoffrey Hinton has thrown his weight behind Elon Musk in his fight to stop ChatGPT-maker OpenAI’s shift from a nonprofit to a for-profit structure. The 2024 Nobel Prize winner in Physics wants to stop OpenAI’s move, citing risks to its mission of ensuring that artificial general intelligence (AGI) benefits humanity. Musk amplified Hinton’s concerns in a post on X, sharing a screenshot of a Google search for Hinton and highlighting that he is a Nobel laureate, underscoring the worry he shares with the “Godfather of AI.”
In an open letter to the Attorneys General of California and Delaware, Hinton urged them to halt OpenAI’s restructuring. Hinton, revered as the “Godfather of AI” for his contributions to artificial neural networks, warned that the shift could prioritize profits over safety, undermining the ethical development of AGI.
‘Godfather of AI’ and ex-OpenAI employees say AGI is dangerous
Hinton, joined by over 30 AI experts and former OpenAI staff, described AGI as “the most important and potentially dangerous technology of our time,” emphasizing the need for robust safety structures. His letter, backed by Encode, criticizes OpenAI for abandoning its original safety-focused nonprofit charter, a move he believes could have catastrophic consequences.
Musk, who co-founded OpenAI but later sued the company over its alleged betrayal of its nonprofit roots, has long advocated for truth-seeking AI. He reiterated that stance at the Breakthrough Prize ceremony in 2024, stressing that AI should prioritize curiosity and humanity’s well-being.
OpenAI, the creator of ChatGPT, announced plans in December 2024 to restructure, reducing its nonprofit arm’s authority in order to attract significant investments, including $40 billion from Japan’s SoftBank. The company insists it will maintain its mission by converting its for-profit arm into a public-benefit corporation, similar to the models used by Anthropic and Musk’s xAI. However, critics, including Hinton and Musk, argue this hybrid structure risks diluting safety oversight at a time when Hinton estimates a 10-20% chance of AI surpassing human control within decades.