- Google plans to announce at Google I/O that it is making AI the main way people interact with their phones
- Android 17 and Gemini will handle everyday tasks automatically
- Apps will still exist, but mostly in the background
Google's big I/O conference is coming soon, and the tech giant plans to embed its AI models so deeply that they will all but replace standard apps. The company will show off updates spanning Android 17, Chrome, and Gemini, but they all point toward replacing tapping through an app's menus with a straightforward, direct request that the AI can interpret and carry out on its own.
For the average person, that means the phone in your hand is about to feel a little less like a collection of apps and a little more like something that works things out for you.
Think about how many small actions go into a simple task like ordering food or replying to messages. You might be bouncing between apps, copying information, and making decisions at every step. Google’s new approach is designed to cut out those middle steps.
With Android 17, that takes the form of what the company calls agentic automation. You tell your phone what you want, and it figures out how to do it. Instead of opening three apps to plan a dinner and a movie, you might just ask for something fun and nearby, and the system pulls together options, checks your schedule, and helps you make a choice.
The difference is not just speed but attention: you can focus on outcomes while the phone handles all the tasks in between. Google's "Adaptive Everywhere" plan extends beyond a single device, with AI agents following you digitally from one to the next. You might start planning something on your phone, continue it on a laptop, and pick it up again later in your car or on a larger screen at home. The AI keeps track of what you were doing, so you do not have to start over.
Invisible apps
The changes Google has in mind won't eliminate apps, but you might find them occupying less of your mind. Google is reversing the usual order of picking an app, then starting a task. Instead, you'll start by asking the device to do a task, and the AI will work out which apps to use without you ever seeing them.
In Chrome, for example, new AI features will help organize information and assist with tasks that stretch across multiple sites. Gemini sits at the hub, connecting everything and deciding how to complete what you ask.
Google clearly hopes this will simplify matters for users, since every interaction will have the same basic shape regardless of which apps the AI uses. But some may find it eerie to give up control and let the AI anticipate and complete actions on their behalf.
There are still limits to how far this can go. Systems that take on more responsibility need to be accurate and reliable, especially when they are dealing with personal information. There is also an adjustment in how people think about using technology. Describing what you want is different from navigating step by step. It takes a little time to trust that the system will do the right thing.
Apps will still be there, doing what they have always done. You just may not notice them as much. And once that becomes the normal way of doing things, going back to tapping through menus might start to feel like switching to dial-up internet.

Eric Hal Schwartz