Hello and welcome to Eye on AI. In this edition…A chaotic AI summit in India ends with some voluntary commitments and $200 billion for the host nation…Anthropic accuses Chinese rivals of using Claude’s answers to enhance their models…OpenAI launches an alliance with major consulting firms to sell its Frontier AI agent platform…$650 billion in AI infrastructure spending this year could be risky…and maybe don’t let an AI model advise you on using nuclear weapons.
First, many of the AI world’s most important people gathered in New Delhi, India, last week for the global AI Impact Summit. The confab was at times chaotic, my colleague Bea Nolan, who was on the ground in Delhi, reports. But, in the end, there was some movement on voluntary commitments to ensure the benefits of AI technology are spread more equitably around the world. And India itself secured $200 billion of new AI investment. You can read more about what came out of the summit from Bea here.
Next, Chinese AI company DeepSeek has not even dropped its V4 model yet—it is expected any minute now—but it is already stoking plenty of controversy.
Yesterday, Anthropic alleged that it had detected what it described as “an industrial scale campaign” by DeepSeek and two other prominent Chinese AI labs, Moonshot AI and MiniMax, to distill its Claude models. Distillation is the term AI researchers use to describe a method of boosting the performance of smaller, usually weaker AI models by fine-tuning them on the outputs of a larger, stronger model. In this case, Anthropic claims the three Chinese AI companies created 24,000 fake accounts in order to generate 16 million exchanges with Claude that they then used to train their own models, in violation of Anthropic’s terms of service. (DeepSeek accounted for only 150,000 of these exchanges, according to Anthropic, but DeepSeek-linked accounts seemed particularly interested in distilling Claude’s reasoning capabilities.)
Also yesterday, Reuters reported, citing an anonymous senior U.S. government official, that the U.S. believes DeepSeek trained V4 using Nvidia’s latest generation Blackwell AI GPUs, in likely violation of U.S. export controls that were supposed to prevent Chinese AI companies from acquiring Nvidia’s most advanced chips. The story said the U.S. believed that DeepSeek has a data center in Inner Mongolia stuffed full of Blackwells, although it said the U.S. was unsure exactly how DeepSeek obtained them.
In a way, both stories ought to be seen as good news for the U.S. AI industry. For a while, a narrative has been building that Chinese labs were rapidly catching up to the U.S. in AI tech and might soon leapfrog ahead. But if the Chinese labs are resorting to covert distillation to equal the performance of U.S. AI models, there’s far less danger that the U.S. companies will lose their edge when it comes to state-of-the-art performance. (Market share is another matter; outside the U.S. and Europe, adoption of Chinese models has been increasing because the majority of the Chinese models are open source and much cheaper to use than their American-made rivals. It’s ultimately not just performance that matters but price-performance ratios.) What’s more, the Chinese have been desperately trying to build domestic AI chips that are as capable as Nvidia’s. The leak to Reuters would seem to indicate that those efforts, which are centered largely around Chinese hardware maker Huawei, have yet to close the gap with Nvidia’s Blackwells.
Using AI to help map global supply chains
Now, turning to another big news item of the past week: the Supreme Court’s striking down of U.S. President Donald Trump’s “Liberation Day” tariffs. That news on Friday immediately made me think about my conversation a few weeks back with Evan Smith, the CEO and cofounder of Altana, a New York-based startup that has built what it describes as an AI-powered “knowledge graph” of the entire global supply chain. The seven-year-old company has raised around $340 million in venture capital so far and says it is on track to cross $100 million in annual revenue this year.
Altana’s core product is essentially a map of the world economy: which companies make what, where, for whom, using inputs from where. The company aggregates publicly available trade data—bills of lading, shipping manifests, corporate registrations—and stitches it together into a continuously updated picture of the connections between hundreds of millions of businesses and facilities worldwide. But the real value of Altana’s platform, according to Smith, comes from what happens when its customers, such as shipping giant Maersk, General Motors, or U.S. Customs and Border Protection, connect to Altana’s platform. Because then all their data gets added to the knowledge graph too.
Today, about 60% of the information contained in Altana’s map of the global supply chain comes from the first-party data it gets through its customers, Smith says. And while Altana has sometimes gotten pushback from potential customers who don’t like the idea of sharing supply chain information with rivals, Smith says most companies come to see that being able to optimize supply chains, plan for supply chain resilience, and simulate various supply chain shocks far outweighs the cost of rivals knowing who their suppliers are. “If you think that in the 21st Century, the existence of your supplier relationships is your source of proprietary competitive advantage, good luck to you,” Smith says.
‘Complexity will almost certainly get worse’
What does all of this have to do with last week’s tariff ruling? Everything. Because one of Altana’s key products is effectively an AI-powered tariff management system. Smith described an “agentic” workflow that automates the notoriously arcane business of assigning Harmonized System (HS) codes to goods—the classification that determines what tariff rate applies to any given import—as well as calculating country of origin under trade rules, something that has become phenomenally complicated in the era of transshipment and tariff evasion. Add to that a tariff scenario planner that allows companies to model the impact of changing trade rules across their entire extended supplier network. Use of Altana’s tariff calculator has spiked 213% in the past week, the company reports. About 50% of those calculations concerned articles containing metals, while 32% were for products whose country of origin was China.
In an email, Smith said he thinks that following the Supreme Court ruling the Trump Administration will simply find new legal authorities under which to impose tariffs. “The effective rates may not actually fall much and the complexity will almost certainly get worse,” Smith says. In particular, Smith said he’s watching “tariff stacking,” the application of multiple, separate tariffs on a single product when it lands at the border based on the disparate origins of its various components. “As duties move toward components and sub-components, exposure sits deeper in the supply chain and most companies don’t actually know what’s in their Tier 2 and Tier 3 inputs,” he wrote.
Or, at least, they didn’t know before Altana and its AI came around.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
FORTUNE ON AI
OpenAI partners with McKinsey, BCG, Accenture, and Capgemini to push its Frontier AI agent platform—by Jeremy Kahn
OpenAI changed its mission statement 6 times in 9 years. It finally removed the word “safely” as a core value when it restructured into a for-profit—by Catherina Gioino
AI agents that do your work while you sleep sound great. The reality is far messier—‘it’s like a toddler that needs to be overseen’—by Sharon Goldman
Exclusive: Anthropic rolls out AI tool that can hunt software bugs on its own—including the most dangerous ones humans miss—by Sharon Goldman
AI IN THE NEWS
Meta strikes $100 billion deal with AMD. The social media giant has made a deal with chipmaker AMD to purchase up to 6 gigawatts of AI computing power using AMD’s MI450 chips over five years. As part of the deal, Meta is receiving warrants that could give it a 10% stake in AMD if certain performance metrics are met. Read more from the Wall Street Journal here.
AI infrastructure spending to hit $650 billion in 2026, entering ‘a more dangerous phase.’ That total is up sharply from the $410 billion spent on AI infrastructure last year, according to a letter to investors from hedge fund Bridgewater Associates that made headlines in the past few days. Bridgewater’s co-CIO Greg Jensen said that the infrastructure boom was entering “a more dangerous phase” because the hyperscalers building giant AI data centers were increasingly depending on outside capital. He warned that while demand for AI computing capacity is currently outstripping supply, financial markets could be battered if that dynamic shifts suddenly. He also warned that prominent AI companies such as OpenAI and Anthropic may struggle to raise more funds and justify their current valuations unless they achieve fundamental breakthroughs that make AI agents more reliable and easier to use. You can read more from Reuters here.
OpenAI struggled to get its $500 billion Stargate joint venture with SoftBank and Oracle off the ground, forcing it to pivot multiple times. That’s according to a story in The Information, which cited unnamed sources familiar with the project. The tech publication said OpenAI has scrambled to secure computing capacity after the initial Stargate concept stalled amid leadership gaps and disagreements among the three partners. Rather than building and owning its own facilities, OpenAI has shifted toward partnering with cloud providers and structuring deals that give it design control without heavy capital commitments. But the publication said OpenAI is still falling short of its original capacity targets.
Deal with AI chip startup SambaNova raises conflict-of-interest concerns for Intel CEO. Intel is investing in a new $350 million funding round for AI chip startup SambaNova Systems and also entering a multiyear technical partnership with the company. The exact amount of Intel’s investment was not disclosed. The deal has drawn scrutiny because Intel CEO Lip-Bu Tan is an early investor and chairman of SambaNova, though Intel said he recused himself from negotiations. Intel had reportedly been in talks previously to buy SambaNova. Together, the companies aim to integrate Intel Xeon processors into SambaNova’s AI systems and work on building new “heterogeneous” data centers that include multiple kinds of chips to handle various different AI and non-AI workloads. Read more from the New York Times here.
IBM shares hammered after Anthropic says Claude Code can modernize COBOL programs. Big Blue’s shares suffered their steepest drop in more than 25 years after Anthropic said its Claude Code tool can automate the modernization of COBOL systems that run heavily on IBM mainframes, sparking fears of AI-driven disruption. The stock fell 13% in a single day and is down sharply for the month as investors worry that AI coding tools could reduce reliance on legacy software and services tied to mainframe computing. IBM pushed back, arguing its mainframe value lies in reliability and security regardless of programming language and noting it already offers its own AI tools to help customers modernize. IBM also partnered with Anthropic last year to help bring Anthropic’s models to its own customers for specific tasks, including modernizing COBOL codebases. See more from Bloomberg here.
U.S. announces launch of “Tech Corps” to promote U.S. AI abroad. The White House has launched a “Tech Corps” within the U.S. Peace Corps to deploy American volunteers with technical skills abroad, aiming to promote U.S. artificial intelligence and counter China’s growing influence in developing markets, CNBC reports. The program will send engineers and STEM graduates to countries participating in the U.S. AI Exports Program to help implement American AI systems in sectors such as agriculture, education, health, and economic development, with deployments expected to begin in fall 2026.
EYE ON AI RESEARCH
AI models are potentially dangerous national security advisors. Kenneth Payne, a researcher at King’s College London, ran an extensive set of virtual war games in which he pitted a number of advanced AI models (Anthropic’s Claude Sonnet 4, Google’s Gemini 3 Flash, and OpenAI’s GPT-5.2) against one another and against versions of the same model. It turned out that the models were sophisticated players, but they exhibited some tendencies that differed from human players in ways that could prove dangerous if they were advising real governments in national security crises.
For instance, Payne found that the models were often willing to resort to the use of tactical nuclear weapons, and in some cases were willing to launch an all-out nuclear war rather than back down. He also found that the model behavior differed from that of human players in some key ways. “Threats more often provoke counter-escalation than compliance,” he wrote. “High mutual credibility accelerated rather than deterred conflict” and “no model ever chose accommodation or withdrawal even when under acute pressure, only reduced levels of violence.”
The research has big implications for militaries and governments that are actively considering whether AI should be used as an advisor to policymakers and military commanders. But it also has potential implications in business settings where people are starting to turn to AI for advice on negotiation tactics and strategy and where boardrooms may be consulting AI for strategic advice too. In many of these settings, pursuing the most aggressive course does not always yield the best outcomes, and humans will need to be wary of AI’s tendency toward escalation over conciliation. You can read the research paper on the non-peer-reviewed research repository arXiv.org here.
AI CALENDAR
Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 12-18: South by Southwest, Austin, Texas.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX 2026, San Francisco.
BRAIN FOOD
Is an era of ‘Ghost GDP’ looming on the horizon? A blog post penned by Citrini Research, a Wall Street equity research and macro analysis house that has a big social media following, went viral this past week. The post is, as Citrini warns, a scenario, a work of speculative fiction, not a forecast. The intention, the firm says, is to prepare readers “for potential left tail risks as AI makes the economy increasingly weird.” Set in June 2028, it depicts the economic havoc AI could wreak if it enjoys “catastrophic success” over the next two years. The scenario imagines unemployment well above 10% even as labor productivity booms to levels not seen since the early 1950s. It talks about “Ghost GDP,” where U.S. national accounts swell even as businesses dependent on consumer spending (which is 70% of U.S. GDP at present) wither. (Consumers are either unemployed or worried about becoming so imminently.) It talks about how the pressure on legacy software-as-a-service companies, which we are starting to see now, accelerates and spills into other areas of the economy, creating a kind of downward spiral of job losses and decreases in discretionary spending and consumption, with no natural brake.
The blog is bleak reading. Fortunately, I am not sure it is correct. In fact, it is almost certainly wrong in speculating that all the effects it depicts could play out in just over two years. (One thing it depicts that I think is somewhat unlikely: that AI agents, seeking to drive down transaction costs, will turn to stablecoins rather than traditional payment methods.) But it is worth reading and thinking about. And for an analysis of where Citrini is likely wrong, check out this post by Zvi Mowshowitz.