There’s no denying that Apple’s Siri digital assistant didn’t exactly hold a place of honor at this year’s WWDC 2025 keynote. Apple mentioned it, and reiterated that it was taking longer than anticipated to bring everyone the Siri it promised a year ago, saying the full Apple Intelligence-powered Siri would arrive “in the coming year.”
Apple has since confirmed this means 2026. That means we won’t be seeing the kind of deep integration that would have let Siri use what it knew about you and your iOS-running iPhone to become a better digital companion in 2025. It won’t, as part of the just-announced iOS 26, use app intents to understand what’s happening on the screen and take action on your behalf based on that.
I have my theories about the reason for the delay, most of which revolve around the tension between delivering a rich AI experience and Apple’s core principles regarding privacy. They often seem at cross purposes. This, though, is guesswork. Only Apple can tell us exactly what’s going on – and now they have.
I, along with Tom’s Guide Global Editor-in-Chief Mark Spoonauer, sat down shortly after the keynote with Apple’s Senior Vice President of Software Engineering Craig Federighi and Senior Vice President of Worldwide Marketing Greg Joswiak for a wide-ranging podcast discussion about virtually everything Apple unveiled during its 90-minute keynote.
We started by asking Federighi about what Apple delivered regarding Apple Intelligence, as well as the status of Siri, and what iPhone users might expect this year or next. Federighi was surprisingly transparent, offering a window into Apple’s strategic thinking when it comes to Apple Intelligence, Siri, and AI.
Far from nothing
Federighi started by walking us through all that Apple has delivered with Apple Intelligence thus far, and, to be fair, it’s a considerable amount.
“We were very focused on creating a broad platform for really integrated personal experiences into the OS,” recalled Federighi, referring to the original Apple Intelligence announcement at WWDC 2024.
At the time, Apple demonstrated Writing Tools, summarizations, notifications, memory movies, semantic search of the Photos library, and Clean Up for photos. It delivered on all those features, but even as Apple was building those tools, it recognized, Federighi told us, that “we could, on that foundation of large language models on device, Private Cloud Compute as a foundation for even more intelligence, [and] semantic indexing on device to retrieve key knowledge, build a better Siri.”
Over-confidence?
A year ago, Apple’s confidence in its ability to build such a Siri led it to demonstrate a platform that could handle more conversational context, misspeaking, Type to Siri, and a significantly redesigned UI. Again, all things Apple delivered.
“We also talked about […] things like being able to invoke a broader range of actions across your device by app intents being orchestrated by Siri to let it do more things,” added Federighi. “We also talked about the ability to use personal knowledge from that semantic index, so if you ask for things like, ‘What’s that podcast that Joz sent me?’ that we could find it, whether it was in your messages or in your email, and call it out, and then maybe even act on it using those app intents. That piece is the piece that we have not delivered, yet.”
This is known history. Apple overpromised and underdelivered, failing to ship the vaguely promised end-of-year Apple Intelligence Siri update in 2024 and admitting by spring 2025 that it would not be ready any time soon. As to why, it has been, up to now, a bit of a mystery. Apple is not in the habit of demonstrating technology or products that it does not know for certain it will be able to deliver on schedule.
Federighi, however, explained in some detail where things went awry, and how Apple plans to progress from here.
“We found that when we were developing this feature that we had, really, two phases, two versions of the ultimate architecture that we were going to create,” he explained. “Version one we had working here at the time that we were getting close to the conference, and had, at the time, high confidence that we could deliver it. We thought by December, and if not, we figured by spring, until we announced it as part of WWDC. Because we knew the world wanted a really complete picture of, ‘What’s Apple thinking about the implications of Apple Intelligence and where is it going?'”
A tale of two architectures
As Apple was working on a V1 of the Siri architecture, it was also working on what Federighi called V2, “a deeper end-to-end architecture that we knew was ultimately what we wanted to create, to get to a full set of capabilities that we wanted for Siri.”
What everyone saw during WWDC 2024 were videos of that V1 architecture, and that was the foundation for work that began in earnest after the WWDC 2024 reveal, in preparation for the full Apple Intelligence Siri launch.
“We set about for months, making it work better and better across more app intents, better and better for doing search,” Federighi added. “But fundamentally, we found that the limitations of the V1 architecture weren’t getting us to the quality level that we knew our customers needed and expected. We realized that V1 architecture, you know, we could push and push and push and put in more time, but if we tried to push that out in the state it was going to be in, it would not meet our customer expectations or Apple standards, and that we had to move to the V2 architecture.
“As soon as we realized that, and that was during the spring, we let the world know that we weren’t going to be able to put that out, and we were going to keep working on really shifting to the new architecture and releasing something.”
We realized that […] if we tried to push that out in the state it was going to be in, it would not meet our customer expectations or Apple standards, and that we had to move to the V2 architecture.
Craig Federighi, Apple
That switch, though, and what Apple learned along the way, meant that Apple would not make the same mistake again and promise a new Siri for a date that it could not guarantee to hit. Instead, Apple won’t “precommunicate a date,” explained Federighi, “until we have, in-house, the V2 architecture delivering not just in a form that we can demonstrate for you all…”
He then joked that, while, actually, he “could” demonstrate a working V2 model, he was not going to do it. Then he added, more seriously, “We have, you know, the V2 architecture, of course, working in-house, but we’re not yet to the point where it’s delivering at the quality level that I think makes it a great Apple feature, and so we’re not announcing the date for when that’s happening. We will announce the date when we’re ready to seed it, and you’re all ready to be able to experience it.”
I asked Federighi if, by V2 architecture, he was talking about a wholesale rebuilding of Siri, but Federighi disabused me of that notion.
“I should say the V2 architecture is not, it wasn’t a start-over. The V1 architecture was sort of half of the V2 architecture, and now we extend it across, sort of make it a pure architecture that extends across the entire Siri experience. So we’ve been very much building up upon what we have been building for V1, but now extending it more completely, and that more homogeneous end-to-end architecture gives us much higher quality and much better capability. And so that’s what we’re building now.”
A different AI strategy
Some might view Apple’s failure to deliver the full Siri on its original schedule as a strategic stumble. But Apple’s approach to AI and products is utterly different from that of OpenAI with ChatGPT or Google with Gemini. It does not revolve around a singular product or a powerful chatbot. Siri is not necessarily the centerpiece we all imagined.
Federighi doesn’t dispute that “AI is this transformational technology […] All that’s growing out of this architecture is going to have decades-long impact across the industry and the economy, and much like the internet, much like mobility, and it’s going to touch Apple’s products and it’s going to touch experiences that are well outside of Apple products.”
Apple clearly wants to be part of this revolution, but on its terms and in ways that most benefit its users while, of course, protecting their privacy. Siri, though, was never the end game, as Federighi explained.
AI is this transformational technology […] and it’s going to touch Apple’s products and it’s going to touch experiences that are well outside of Apple products.
Craig Federighi, Apple
“When we started with Apple Intelligence, we were very clear: this wasn’t about just building a chatbot. So, seemingly, when some of these Siri capabilities I mentioned didn’t show up, people were like, ‘What happened, Apple? I thought you were going to give us your chatbot.’ That was never the goal, and it remains not our primary goal.”
So what is the goal? I think it may be fairly obvious from the WWDC 2025 keynote. Apple is intent on integrating Apple Intelligence across all its platforms. Instead of heading over to a singular app like ChatGPT for your AI needs, Apple’s putting it, in a way, everywhere. It’s done, Federighi explains, “in a way that meets you where you are, not that you’re going off to some chat experience in order to get things done.”
Apple understands the allure of conversational bots. “I know a lot of people find it to be a really powerful way to gather their thoughts, brainstorm […] So, sure, these are great things,” Federighi says. “Are they the most important thing for Apple to develop? Well, time will tell where we go there, but that’s not the main thing we set out to do at this time.”
Check back soon for a link to the TechRadar and Tom’s Guide podcast featuring the full interview with Federighi and Joswiak.