Apple's Siri AI Overhaul Keeps Slipping. Here's the Latest.
Apple's biggest Siri update since 2011 promises on-screen awareness, personal context, and cross-app actions. But delays and a Google Gemini deal raise questions.
• Apple's major Siri AI overhaul is planned for 2026, powered by a partnership with Google's Gemini models
• New features include on-screen awareness, personal context from device data, and cross-app actions
• The upgrade has been delayed multiple times — originally announced at WWDC 2024, it's now expected in a spring 2026 iOS update
• Apple's cautious approach: it pulled Siri ads after internal testing showed the upgrade worked correctly only about two-thirds of the time
• Apple Intelligence's on-device processing means most AI features work without sending data to the cloud
What's Inside
- What's Actually Coming to Siri
- Why Apple Keeps Delaying
- Apple Intelligence: The Foundation
- The Google Gemini Partnership
- The Updated Timeline
- How Apple's Approach Differs from Google and OpenAI
- What Works Now (and What Doesn't)
- FAQ
What's Actually Coming to Siri
Apple previewed the next generation of Siri at WWDC 2024, and the promised features represent the biggest update to the voice assistant since its 2011 launch. Here's what Apple has committed to delivering:
On-Screen Awareness
Siri will understand what's currently displayed on your screen. If you're looking at a restaurant in Safari, you can say "Add this to my calendar for Friday" and Siri will know which restaurant, pull the address, and create the event. If someone texts you a new address, you can say "Get directions there" without copying and pasting. This is the feature that turns Siri from a command executor into a context-aware assistant.
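Apple's developer materials point to the existing App Intents framework as the plumbing here: apps describe their content as entities the system can reference. Below is a minimal sketch of what that might look like for the restaurant example. The type, its fields, and the lookup logic are illustrative assumptions, not a confirmed schema:

```swift
import AppIntents

// Hypothetical entity: how a restaurant app might describe a venue
// to the system so Siri can resolve "this" when it is on screen.
struct RestaurantEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Restaurant"
    static var defaultQuery = RestaurantQuery()

    var id: String       // the app's own stable identifier
    var name: String
    var address: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)", subtitle: "\(address)")
    }
}

struct RestaurantQuery: EntityQuery {
    // Siri hands back identifiers; the app resolves them to entities.
    func entities(for identifiers: [String]) async throws -> [RestaurantEntity] {
        // Stub: a real app would look these up in its own data store.
        []
    }
}
```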
Personal Context
Siri will access and reason over your personal data — emails, messages, photos, calendar events, notes — to answer questions like "When does my flight land?" (pulling from a confirmation email) or "Show me the photos from mom's birthday" (recognizing faces and dates). Apple emphasizes that this processing happens on-device, falling back to its Private Cloud Compute framework for heavier queries, so your personal data isn't sent to third-party servers.
Cross-App Actions
Instead of Siri only controlling individual apps in isolation, the upgraded version can chain actions across multiple apps. "Send the document I was working on to the people in my 3 PM meeting" requires Siri to identify the document (Pages or Files), find the meeting (Calendar), look up the attendees' email addresses (Contacts), and send the email (Mail). This multi-step, cross-app execution is what separates a voice assistant from a voice-controlled AI agent.
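Each step in a chain like that maps naturally onto an App Intents action, which is how apps expose actions to the system today. Here is a rough sketch of one link, the final "send" step. The intent name and parameters are invented for illustration, and how Siri composes multiple intents end to end hasn't been documented:

```swift
import AppIntents

// Illustrative single link in a cross-app chain. In the scenario above,
// Siri would fill these parameters from other apps' data: the document
// from Files or Pages, the recipients from Calendar and Contacts.
struct SendDocumentIntent: AppIntent {
    static var title: LocalizedStringResource = "Send Document"

    @Parameter(title: "Document Name")
    var documentName: String

    @Parameter(title: "Recipient Emails")
    var recipients: [String]

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // Stub: a real implementation would hand off to Mail here.
        .result(dialog: "Sent \(documentName) to \(recipients.count) recipients.")
    }
}
```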
Natural Conversation
Siri's conversational layer is getting a significant upgrade: a more natural-sounding voice, better error recovery (understanding you even when you stumble over words), and context retention across turns. You'll be able to ask follow-up questions without repeating yourself — "What's the weather in Tokyo?" followed by "And what about next week?" — rather than spelling out "What's the weather in Tokyo next week?"
Why Apple Keeps Delaying
The Siri AI overhaul was first announced in June 2024. It was supposed to ship with iOS 18. Then iOS 18.4. Then a vague "2025" timeline. Now, as of late 2025, the target is spring 2026 — and even that date isn't guaranteed.
What happened? According to Bloomberg's reporting, internal testing revealed the upgraded Siri worked correctly only about two-thirds of the time. For a company that ships to over a billion devices, a failure rate of roughly one in three on a core feature is unacceptable. Apple pulled a series of TV advertisements that had been promoting the unreleased Siri capabilities — a rare public admission that the product wasn't ready.
The technical challenges are real:
- On-device processing constraints: Running sophisticated AI models on iPhone hardware (even the A-series chips) requires extreme optimization. The models need to be small enough to run locally but capable enough to be genuinely useful; the memory-budget sketch after this list shows why.
- Privacy requirements: Apple's commitment to on-device processing means it can't simply route queries to massive cloud models like ChatGPT or Gemini. It needs models that work within Apple's Private Cloud Compute framework.
- Reliability bar: When Siri fails, it fails publicly — in front of the user, often in embarrassing ways. Apple's quality bar for shipping is higher than most competitors because the consequences of failure are more visible.
- Multi-app coordination: Getting Siri to reliably execute actions across multiple apps requires deep integration with every app's data model and API surface. Third-party app integration adds another layer of complexity.
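To make the first constraint concrete, here is back-of-the-envelope arithmetic for fitting a language model's weights into iPhone memory. The 3-billion-parameter figure is in the ballpark of what Apple has described for its on-device model, but treat all numbers as illustrative, not official specs:

```swift
import Foundation

// Rough memory budget for model weights at different precisions.
// The 3B-parameter size is an assumption for illustration.
let parameters = 3_000_000_000.0

let precisions: [(name: String, bytesPerWeight: Double)] = [
    ("float16 (unquantized)", 2.0),
    ("int8 quantized", 1.0),
    ("4-bit quantized", 0.5),
]

for p in precisions {
    let gigabytes = parameters * p.bytesPerWeight / 1_073_741_824
    print("\(p.name): ~\(String(format: "%.1f", gigabytes)) GB of weights")
}
// Output: ~5.6 GB, ~2.8 GB, ~1.4 GB. Only the 4-bit budget fits
// comfortably alongside iOS and other apps, and quantizing that far
// costs accuracy: hence the "extreme optimization" above.
```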
Apple's approach stands in contrast to that of OpenAI and Google, which ship AI features quickly and iterate publicly. Apple prefers to delay until the experience meets its standard. Whether that's prudent or overly cautious depends on your perspective — but it explains the timeline.
Apple Intelligence: The Foundation
The Siri upgrade is built on Apple Intelligence, Apple's broader AI platform that debuted with iOS 18 in fall 2024. Understanding Apple Intelligence helps explain what Siri can (and can't) do.
Apple Intelligence includes three tiers (a conceptual routing sketch follows this list):
- On-device models: Small language models running directly on iPhone, iPad, and Mac. These handle text summarization, writing assistance, notification prioritization, and basic Siri improvements. No internet connection required.
- Private Cloud Compute: For tasks too complex for on-device models, Apple runs larger models on dedicated Apple silicon servers. The key promise: your data is processed but never stored on these servers, and Apple can't access it.
- Third-party integrations: ChatGPT integration (already live) routes certain queries to OpenAI's models when Siri determines it can't handle them locally. The user is always asked before data is sent externally.
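As a conceptual illustration of how these three tiers fit together, the routing might look like the sketch below. Every type, threshold, and rule here is invented for the sketch; Apple hasn't published how queries are actually triaged:

```swift
// Conceptual sketch of tiered routing in Apple Intelligence.
// All types and rules are hypothetical; Apple has not published this logic.
enum ComputeTier {
    case onDevice       // small local model, no network required
    case privateCloud   // larger model on Apple silicon servers, stateless
    case thirdParty     // e.g. ChatGPT, only with explicit user consent
}

struct Query {
    let text: String
    let estimatedComplexity: Double // hypothetical difficulty score, 0...1
    let needsWorldKnowledge: Bool   // facts beyond personal/device data
}

func route(_ query: Query, userAllowsThirdParty: Bool) -> ComputeTier? {
    if query.estimatedComplexity < 0.5 {
        return .onDevice // cheap and private: handle locally
    }
    if !query.needsWorldKnowledge {
        // Too heavy for the local model, but still personal data:
        // Private Cloud Compute processes it without storing it.
        return .privateCloud
    }
    // World-knowledge queries can go to an external model,
    // but only after the user explicitly agrees.
    return userAllowsThirdParty ? .thirdParty : nil
}
```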
The existing Apple Intelligence features that are already shipping — writing tools, notification summaries, Clean Up in Photos, Genmoji — work well within their scope. The Siri upgrade extends this foundation with the personal context, screen awareness, and cross-app capabilities described above.
The Google Gemini Partnership
In early 2026, reports emerged that Apple had signed a multi-year deal with Google to use Gemini models as the foundation for its next-generation AI features, including the Siri overhaul. The deal is reportedly worth approximately $1 billion per year.
This is significant for several reasons:
- Apple chose Google over OpenAI for its foundational models. The existing ChatGPT integration remains as a third-party option, but the core AI engine is shifting to Gemini.
- Gemini's strengths align with Apple's needs: strong multimodal understanding (important for on-screen awareness), efficient inference (important for on-device processing), and long context windows (important for personal data reasoning).
- Privacy implications: Apple will need to ensure that Gemini-powered features comply with its on-device and Private Cloud Compute privacy framework. How Google's models integrate with Apple's privacy architecture is still unclear.
The partnership reflects a broader industry trend: device manufacturers are licensing foundation models from AI labs rather than building their own from scratch. Samsung uses Google's models. Microsoft integrates OpenAI's. Apple is now following the same playbook — and paying a premium for it.
For developers building iOS apps, the Gemini integration could be relevant. If Apple exposes Gemini capabilities through SiriKit or a new framework, third-party apps could access more powerful AI features than what's possible with on-device models alone. Apple hasn't confirmed developer access to Gemini-powered APIs, but the possibility would significantly expand what iOS apps can do with AI.
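For comparison, Apple already exposes its on-device model to apps through the Foundation Models framework that shipped with iOS 26; if Gemini-backed access ever arrives, it might plausibly follow the same shape. A minimal example of the existing on-device API (the Gemini angle is pure speculation, and nothing below involves Google's models):

```swift
import FoundationModels

// Existing surface area: iOS 26's Foundation Models framework calls
// Apple's on-device model. A Gemini-backed tier behind a similar API
// is speculation only; none has been announced.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in one sentence."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```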
The financial scale of the deal also suggests Apple views this as a long-term strategic commitment, not an experiment. At $1 billion annually, Apple is betting that Gemini's capabilities will be central to its AI strategy for years. This has implications for Siri's competitive positioning — Apple is outsourcing its AI brain to Google, which is simultaneously its biggest competitor in the AI assistant space.
The Updated Timeline
| Date | Milestone | Status |
|---|---|---|
| June 2024 | Siri AI overhaul announced at WWDC | Completed |
| Sept 2024 | iOS 18 ships with basic Apple Intelligence | Completed |
| Dec 2024 | iOS 18.2 adds ChatGPT integration | Completed |
| 2025 | Siri overhaul delayed from iOS 18.x timeline | Confirmed delay |
| Jan 2026 | Apple-Google Gemini partnership reported | Reported |
| Feb 2026 | Apple confirms Siri upgrade still coming in 2026 | Confirmed |
| Spring 2026 | iOS 26.4 or 26.5 — Siri upgrade expected | Target (may slip) |
| June 2026 | WWDC 2026 — likely to showcase full capabilities | Expected |
The spring 2026 target for the initial release is credible but not certain. Apple CEO Tim Cook confirmed in February 2026 that the upgrade remains on track for 2026 delivery. However, some features may roll out gradually across multiple iOS updates rather than arriving all at once — Apple has a track record of staging feature releases to manage quality.
How Apple's Approach Differs from Google and OpenAI
| Company | Approach |
|---|---|
| Apple | Privacy-first, on-device processing, delayed until reliable. Deep hardware-software integration. Cautious. |
| Google | Cloud-first, ships fast, iterates publicly. Google Assistant + Gemini integration aggressive. Data-driven. |
| OpenAI | Model-first, ChatGPT + GPT-5 as the platform. Voice mode already live. Moving toward device integration. |
Google's approach is most aggressive — Gemini is already deeply integrated into Android, Google Search, and Workspace. Google Assistant handles cross-app actions today, though not always reliably. OpenAI's ChatGPT voice mode provides natural conversation but lacks device-level integration (it can't control your phone's apps directly).
Apple's advantage, when the upgrade ships, will be the deepest possible integration. No other company controls the hardware, operating system, and app frameworks the way Apple does. A Siri that truly understands your entire device — every app, every piece of data, every screen — would be something neither Google nor OpenAI can fully replicate on Apple hardware.
The risk is timing. Every month Apple delays, Google and OpenAI's offerings become more capable. Users who might have waited for Apple's solution are already using ChatGPT, Claude, or Gemini as their default AI assistant. Switching costs are low, but habit formation is real.
What Works Now (and What Doesn't)
Working today with Apple Intelligence:
- Writing tools (rewrite, proofread, summarize) across all text fields
- Notification summaries and priority sorting
- Clean Up in Photos (object removal)
- Genmoji and Image Playground (image generation)
- ChatGPT integration through Siri (with user permission)
- Basic improved Siri natural language understanding
Not yet available:
- On-screen awareness (Siri understanding what's on your display)
- Personal context (Siri reasoning over your emails, messages, calendar)
- Cross-app actions (multi-step tasks across different apps)
- Gemini-powered features
- "World Knowledge Answers" (AI-generated summaries for factual queries)
The gap between what Apple has promised and what's currently available is substantial. For iPhone users who want AI assistant capabilities today, the ChatGPT integration is the most capable option — ironic, given that Apple's own solution was supposed to be the star.
FAQ
Which iPhones will support the new Siri?
Apple Intelligence requires iPhone 15 Pro or later (A17 Pro chip or newer). The full Siri upgrade with personal context and on-screen awareness will likely have the same requirement. Older iPhones won't get these features regardless of iOS version.
Will the new Siri work offline?
Basic Siri commands and some Apple Intelligence features work offline using on-device models. However, the more advanced features (personal context reasoning, complex queries, Gemini-powered responses) will require an internet connection, either through Apple's Private Cloud Compute or the Gemini partnership.
Is Apple abandoning ChatGPT for Gemini?
No. The ChatGPT integration remains as a user-facing option. The Gemini partnership is about Apple's foundational AI models — the infrastructure that powers Apple Intelligence and Siri internally. Users will likely still be able to route specific queries to ChatGPT when desired, but the default AI engine shifts to Gemini-based models.
How does this compare to Samsung Galaxy AI?
Samsung's Galaxy AI, powered by Google's models, already offers many features Apple has only promised: on-screen translation, AI-powered search, photo editing, and voice assistant improvements. However, Samsung's implementation relies heavily on cloud processing. Apple's privacy-focused, on-device approach offers stronger data protection — if it can match the capability when it finally ships.
Should I wait for the Siri upgrade to buy a new iPhone?
No. Buy based on current features, not promised ones. Every iPhone 15 Pro and newer will support the Siri upgrade when it arrives. There's no hardware advantage to waiting — the AI capabilities are software updates delivered to existing devices.
Sources
- Apple Intelligence — Apple Official
- Apple Targets Spring 2026 for Siri AI Upgrade — Bloomberg
- Apple Picks Google's Gemini to Power Siri — CNBC
- Apple Intelligence Siri On Track for 2026 — AppleInsider
- 7 Features to Expect from Siri's 2026 Upgrade — Cult of Mac
- Siri 2.0 Confirmed for 2026 — Tom's Guide