Hello and welcome to Eye on AI. In this edition…The Pentagon fight with Anthropic raises three crucial questions…OpenAI raises $110 billion in new funding…Meta experiments with an AI shopping assistant…LLMs can identify pseudonymous internet users at scale…data centers on the front lines in the Iran war.
The most important story in AI at the moment, without a doubt, is the fight between the U.S. Department of War and Anthropic. If you haven’t been following the drama, you can catch up on the story by reading coverage from me and my Fortune colleagues here, here, here, here, here and here.
This story raises at least three critical questions: Who should have control over how AI is used in a democratic society? How should that control be exercised? And what should the consequences be for a company that disagrees with the government’s policy?
Whatever you think of OpenAI CEO Sam Altman and his decision to swoop in and sign a deal with the Pentagon—including a contractual obligation to allow the military to use OpenAI’s AI models “for any lawful purpose,” terms Anthropic had refused to accept—Altman correctly identified what’s at stake in this fight.
In an “Ask Me Anything” session on X over the weekend, Altman said:
A really important point: we are not elected. We have a democratic process where we do elect our leaders. We have expertise with the technology and understand its limitations, but I think you should be terrified of a private company deciding on what is and isn’t ethical in the most important areas. Seems fine for us to decide how ChatGPT should respond to a controversial question. But I really don’t want us to decide what to do if a nuke is coming towards the US.
This was the crux of the Pentagon’s stated objection to Anthropic’s existing contract. The military did not think it was right to have a private company dictating policies to an elected government.
AI moves lightning fast, Congress at a snail’s pace
Most Americans might agree with the Pentagon’s position—in principle. In practice, it is complicated by three things. First, AI technology is moving extremely fast, but the mechanisms of democratic control—legislation, Congressional oversight, elections—move extremely slowly. In the three years since ChatGPT debuted, Congress has not passed any federal AI legislation. The Trump administration has dismantled the limited AI regulations put in place by its predecessor, while also acting to punish states that pass their own AI rules.
So while many people might agree that policies on the government’s AI use ought to be set by elected officials, there is the practical issue of what to do when those elected representatives fail to act. Trying to arrive at AI policy through contractual negotiations between labs and the government is a poor substitute for true democratic governance, but it might be better than no governance at all. The controversy over Anthropic’s Pentagon contract should be a wake-up call for Congress to act.
Second, the trend in Washington over the past several decades has been to interpret existing laws broadly in order to expand the government’s power to use technology to surveil its citizens. (The story has been one of the executive branch gradually clawing back the surveillance powers it lost through Congressional action after the scandals exposed by Watergate and the Church Committee hearings in the mid-1970s.) Many activities of the military are also cloaked in secrecy that makes democratic oversight and accountability difficult. This constant pushing at the boundaries of what the law will allow has made the public distrustful of the government’s intentions. So it’s not surprising that some people may actually have more faith in a seemingly well-intentioned and brilliant, but unelected, technology executive, such as Anthropic’s Dario Amodei, to do the right thing and set the right policies.
Finally, there is the issue many Americans have with this specific government. The Trump administration has repeatedly taken unprecedented actions to punish domestic dissent, often on flimsy legal justifications or none at all, and has repeatedly deployed the military at home to intimidate or punish perceived opposition. It has also launched several military actions overseas with little to no legal justification. So is it any wonder that many question whether this particular administration should be given the power to use AI for anything its own lawyers believe is legal?
Is the nationalization of AI inevitable?
Even if you think the Pentagon is correct that democratic governments, not private companies, should decide how AI is used, the next question becomes how that control should be exercised. Altman put his finger on the ultimate question hanging over the industry: if frontier AI is a strategic technology, why doesn’t the government simply nationalize it? After all, many other breakthroughs with big strategic implications—from the Manhattan Project to the space race to early efforts to develop AI—were government-funded and largely government-directed. As Altman said, “it has seemed to me for a long time it might be better if building AGI were a government project,” though he added it “doesn’t seem super likely on current trajectory.”
The Pentagon’s current approach comes close to nationalization by other means. One option the DoW threatened was using the Defense Production Act, a Cold War-era law, to compel Anthropic to deliver an AI model on its preferred terms—a sort of soft nationalization of Anthropic’s production pipeline. And the retaliatory decision to label Anthropic a “supply chain risk” is designed in part to intimidate other AI companies into accepting the Pentagon’s preferred contract terms, which again seems nationalization-adjacent.
What should the cost of dissent be in a democracy?
Finally, this brings us to the question of what an appropriate punishment should be for an AI company that refuses to agree to the government’s preferred contract terms. As Dean Ball, an AI policy expert who worked briefly for the Trump administration on its AI Action Plan, has said, the government seems within its rights to cancel its $200 million contract with Anthropic.
But the decision to go much further and label Anthropic a “supply chain risk” strikes at the heart of private property rights and free speech in a liberal democracy. The designation—intended for technologies that could help a foreign adversary sabotage critical defense systems—had never before been applied to a U.S. company, let alone used to punish a company for refusing contract terms the U.S. military desired. The decision, Ball has said, amounts to “attempted corporate murder,” since under the SCR designation any company doing business with the Pentagon would be barred from any commercial relationship with Anthropic. If that interpretation stands—and many legal scholars have said it will not—it could be a mortal blow to Anthropic, which depends for revenue, cloud computing infrastructure, and venture capital backing on large Fortune 500 companies that also do work for the Pentagon. Should the punishment for disagreeing with the government be the death of your business? That certainly seems un-American.
Altman has claimed he struck his deal with the Pentagon in part to de-escalate the tension between the government and AI companies, saying that “a close partnership between governments and the companies building this technology is super important.” While I’m unsure of Altman’s true motives, I agree with him on this last point. At a time when AI potentially threatens unprecedented changes to the economy and society, fomenting distrust and conflict between the government and the people building advanced AI systems seems like a pretty bad idea.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
Anthropic’s Claude overtakes ChatGPT in App Store as users boycott over OpenAI’s $200 million Pentagon contract—by Marco Quiroz-Gutierrez
Iran has the intent—and increasingly the tools—for AI-powered cyberattacks—by Sharon Goldman
Exclusive: CrowdStrike and SentinelOne veterans raise $34M to tackle enterprise AI’s governance gap—by Beatrice Nolan
OpenAI’s Pentagon deal raises new questions about AI and mass surveillance—by Beatrice Nolan
The week the AI scare turned real and America realized maybe it isn’t ready for what’s coming—by Nick Lichtenberg
AI IN THE NEWS
OpenAI closes a $110 billion funding round that values it at $730 billion. The round includes $30 billion from SoftBank, $50 billion from Amazon, and $30 billion from Nvidia. The Amazon investment is tied partly to OpenAI purchasing Amazon’s Trainium chips and comes in tranches contingent on OpenAI hitting certain milestones around either achieving artificial general intelligence (AGI) or holding an initial public stock offering. The agreement to build on Amazon’s AWS marks a strategic shift for OpenAI, which has historically relied on Microsoft’s Azure cloud and Nvidia GPUs, even as OpenAI says its Microsoft partnership remains central. OpenAI has said it may attempt an IPO as soon as later this year. Read more from the Wall Street Journal here.
Government agencies begin dropping Anthropic following Pentagon “supply chain risk” designation and Trump announcement. The U.S. Treasury Department, State Department, and the Department of Health and Human Services all announced they were ending their use of Anthropic’s Claude model following a directive issued Friday by U.S. President Donald Trump. Trump’s announcement came in the closing hours of Anthropic’s contentious negotiations with the Pentagon, which ultimately collapsed, leading the U.S. to label the AI company a “supply chain risk.” Anthropic had been gaining significant federal business. Now the government is switching to AI models from OpenAI, Google, and in some cases, xAI. See more here from Reuters.
Meta is testing an AI shopping assistant. That’s according to a story in Bloomberg, which says the social media giant is hoping to create an AI shopping tool that can rival the e-commerce offerings being incorporated into OpenAI’s ChatGPT and Google’s Gemini. The Meta feature, now rolling out to some U.S. web users, provides product recommendations in a carousel format with images, prices, brand details, and brief explanations, and tailors suggestions based on inferred data such as location and gender, though purchases must be completed on external merchant sites. CEO Mark Zuckerberg has framed the move as part of Meta’s push toward “personal superintelligence,” hinting that future agentic shopping tools could deepen ties between its AI products and its advertising ecosystem.
Thinking Machines loses two more founding team members. Christian Gibson and Noah Shpak, two members of the founding team at the high-profile AI “neolab” founded by former OpenAI CTO Mira Murati, have quietly left to join Meta. That’s according to Business Insider. Their departures add to a broader wave of exits from the San Francisco-based company, which raised a $2 billion seed round at a $12 billion valuation but has struggled to retain key personnel as rivals like Meta and OpenAI poach engineers.
Correction, March 4: An earlier version of this newsletter misstated OpenAI’s valuation from its latest fundraising. It is $730 billion (pre-money), not $730 million.
EYE ON AI RESEARCH
AI can unmask anonymous internet users at scale. That’s according to a recently published paper from researchers at ETH Zurich, Anthropic, and MATS (the ML Alignment & Theory Scholars program). The researchers found that an AI agent given full internet access could re-identify pseudonymous individuals who had been interviewed for 10 minutes by Anthropic’s Claude model, by analyzing those interviews and cross-referencing them against posts on forums such as Hacker News and Reddit, as well as LinkedIn profiles. The AI was able to do this in minutes, whereas each identification would have taken a human investigator hours. The LLM-based methods performed substantially better than previous machine learning methods, achieving 90% precision (meaning that of the people it identified, it was correct 90% of the time) and 68% recall (meaning that it failed to find the identity in 32% of cases). The findings have big implications for online privacy. This capability was one of the issues Anthropic raised in its negotiations with the Pentagon: re-identifying anonymous internet users from publicly available or commercially purchasable information could not easily be done at scale before, yet it might not meet the classic definition of “mass surveillance.” You can read the research paper here.
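For readers who want those two metrics unpacked, here is a minimal sketch of the arithmetic in Python. The counts are hypothetical, chosen only because they roughly reproduce the paper’s reported 90% precision and 68% recall; they are not the study’s actual data.

```python
# Precision/recall arithmetic behind the paper's headline numbers.
# All counts below are hypothetical illustrations, not the study's data.

def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts."""
    precision = true_pos / (true_pos + false_pos)  # of the identifications made, how many were correct
    recall = true_pos / (true_pos + false_neg)     # of the true identities, how many were found
    return precision, recall

# Suppose the agent correctly identifies 68 of 100 pseudonymous users
# and makes 7 incorrect identifications along the way:
p, r = precision_recall(true_pos=68, false_pos=7, false_neg=32)
print(f"precision = {p:.0%}, recall = {r:.0%}")  # precision ≈ 91%, recall = 68%
```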
AI CALENDAR
March 2-5: Mobile World Congress, Barcelona, Spain.
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX 2026, San Francisco.
BRAIN FOOD
As AI becomes increasingly important to fighting wars, do data centers become prime targets? That’s what some people are asking after Amazon reported that two of its AWS data centers in the UAE and one in Bahrain had been struck by Iranian missiles or drones, taking them out of service. The attacks forced users to switch to services hosted in more distant regions, resulted in temporary service outages, and may have introduced additional latency into cloud-based applications.
It is not known exactly why the Iranians struck the data centers. It could be that they were merely trying to disrupt internet services as a way of punishing Gulf States that hosted U.S. military bases. But Yanis Varoufakis, the economist and former Greek finance minister, was among those speculating that Iran hit the facilities in an effort to disrupt the U.S. military’s use of Anthropic’s Claude AI models.
Despite the Pentagon labeling Anthropic a “supply chain risk” and saying the military would cease using Anthropic’s Claude “immediately,” the Wall Street Journal and Axios have reported that the military is using Claude for help with target processing as part of Operation Epic Fury, its war against Iran. It is also known that at least some of the classified networks the military runs Claude on are hosted by AWS.
So it stands to reason, Varoufakis and others speculate, that Iran attacked the data centers in an effort to disrupt the U.S. military’s use of Claude. Whether or not that is true in this case, it is likely that in future conflicts data centers, even those very distant from the front lines, will become targets because of how critical AI is becoming to warfighting.