- Elon Musk and Sam Altman’s trial veered into AI extinction debate
- The judge shut down claims AI could pose a real-world threat
- The case could reshape OpenAI and the future of ChatGPT
“This is a real risk, we all could die as a result of artificial intelligence.”
That stark warning cut through a tense courtroom this week as Elon Musk’s legal battle with Sam Altman took an unexpected turn — briefly shifting from a corporate dispute into a debate about whether AI could wipe out humanity.
Judge Yvonne Gonzalez Rogers quickly shut it down, reminding Musk’s attorney, Steven Molo, to stay focused on the issue at trial before delivering a withering rebuttal:
“It’s ironic your client, despite these risks, is creating a company that is in the exact space,” Rogers said. “There are some people who do not want to put the future of humanity in Mr Musk’s hands. But we’re not going to get into that business.”
The Musk vs Altman feud
The Musk vs OpenAI trial is the latest chapter in a years-long feud between the rival CEOs. Much of it has played out through public comments and online jabs, but it has now escalated into a month-long federal court case in California.
At the heart of Musk’s claim is the allegation that OpenAI — the company he co-founded in 2015 — drifted away from its original non-profit mission. He argues that Altman betrayed public trust by turning the organization into a profit-driven company.
Musk has also named OpenAI president Greg Brockman and Microsoft as part of the case, claiming they played a role in the company’s shift toward commercialization — allegations Microsoft denies.
The judge is right, of course. This case isn’t about whether AI should exist. It’s about the future direction of OpenAI. A Musk victory could trigger a major shake-up at the company and potentially even lead to Altman’s removal as CEO.
But the fact that extinction came up at all points to the real story here — whether AI could pose an existential threat to humanity.
An old debate
The technology being debated in abstract terms is already here, embedded in tools like ChatGPT and rapidly spreading into everyday life. The people at the center of the case are the same figures shaping the future of AI itself, and moments like this week’s courtroom exchange point to unresolved questions that go well beyond a corporate battle.
Even as AI becomes more embedded in everyday products, there is still no consensus among its creators about how risky it really is. Some frame it as a transformative tool that will improve productivity, creativity, and access to information. Others continue to warn, sometimes in uncompromising terms, about long-term dangers that are harder to define, let alone regulate.
The same companies racing to roll out smarter, faster AI tools are also, at times, the ones raising concerns about where that race could lead. That tension is not new — but it is rarely expressed this directly, and almost never in a legal setting like this.
The trial is expected to run for several weeks, with billions of dollars and the future structure of OpenAI on the line. But it also captures the central contradiction of the AI era right now: the people building the technology are still debating how dangerous it might be — even as they continue to build it at speed.