Google Gemini Can Now Order Your Uber and DoorDash, but It Is Slow, Clunky, and Impressive
First real test of AI phone automation shows the future is coming whether phones are ready or not
The feature, currently in beta, represents the first time a mainstream AI assistant has been able to physically navigate and interact with third-party apps rather than simply answering questions or setting timers. Gemini watches the screen, taps buttons, fills in forms, and completes multi-step tasks like ordering food or booking a ride.
The current implementation is limited to a small subset of supported apps, and the process is noticeably slower than doing it yourself. But reviewers note that watching an AI navigate real app interfaces in real time — handling menus, confirming orders, and dealing with edge cases — feels fundamentally different from any previous assistant experience.
The rollout comes as Google, Apple, and other platform makers race to deliver on the promise of truly agentic AI that can act on behalf of users rather than just inform them.
Analysis
Why This Matters
This is the first credible demonstration of AI app automation shipping to real users on real phones. It bridges the gap between AI demos and actual utility.
Background
AI assistants have been promising to do things for you for years. Siri, Google Assistant, and Alexa have all been limited to simple commands and integrations. Gemini's task automation represents a genuine step change in capability.
Key Perspectives
The Verge's hands-on testing found the experience compelling despite its rough edges. The sluggish performance points to significant processing overhead, but the accuracy with which Gemini navigates complex app UIs is noteworthy.
What to Watch
How quickly Google expands the supported app list, and whether Apple responds with similar capabilities in iOS 27. The speed problem needs solving before mainstream adoption is realistic.