Mark your calendar: Google is staging two live shows this month that could reshape how phones, laptops and even your sunglasses think.
Two events, one big theme
Google will stream an Android-focused show on May 12 and follow with the main Google I/O keynote on May 19. The earlier stream — billed as The Android Show — is explicitly positioned to tease a banner year for Android, so expect a concentrated look at OS-level tweaks, multitasking changes and phone-forward features. The May 12 pre-I/O livestream is expected to set the stage for I/O's bigger AI stories a week later.
The main keynote on May 19 is likely to be Gemini-heavy. Last year’s I/O leaned hard into AI, and this year’s roadmap looks like the next chapter: more capable models, deeper device integrations and a push into new hardware categories.
Android 17: polish, multitasking and timing
Public betas have been teasing a handful of concrete Android 17 upgrades: a new multitasking UI, improved screen recording, floating app bubbles and even hints at faster charging behaviors. Those changes are incremental but practical, and they point to Google focusing on day-to-day polish rather than a radical UX overhaul. The beta track has been steady, and the release cadence suggests a stable rollout could land in June. Recent beta notes have highlighted changes like floating apps and faster charging.
Expect the Android Show to spotlight features that matter to users now, while I/O itself will likely place those features in the context of a broader AI story.
Gemini 3.x or 4.0? The upgrade everyone’s watching
Gemini sits at the center of Google’s push. Rumors hint at a major model upgrade — whether it’s branded Gemini 4.0 or a substantial 3.x release — and Google’s slide deck will probably frame the new model as the connective tissue across phones, cars, XR headsets and Chrome/Android-based PCs.
Beyond raw capability, the real headline is agentic behavior: local assistants that act on your behalf, run tasks across apps, and proactively surface context-aware help. We’ve already seen early agentic hints land on Pixels via system updates; at I/O Google is likely to clarify what those agents can do, where decisions are made (local vs cloud), and how developers can build on them.
Glasses, headsets and Android XR: fashion meets AI
If you thought smart glasses were niche, 2026 looks different. Google is broadening the concept: lightweight, display-free audio glasses for calls and translation; single-lens display eyewear for notifications and captioning; and larger, developer-friendly display glasses that behave like pocketable mixed-reality computers.
Partners include Warby Parker, Gentle Monster, Kering and Samsung — a fashion-and-tech approach that echoes earlier Android hardware partnerships. Samsung already shipped the Galaxy XR headset last fall, and Project Aura — Xreal’s pocket-sized processing puck paired with display glasses — shows how varied the form factors will be. Some devices will be designed for all-day wear, others as ‘headphones for your eyes’ with serious visual horsepower for short sessions.
Gemini will be the assistant powering many of these experiences: live translation, contextual pop-ups, on-device captioning, and scene-aware note-taking. But hardware variety raises questions: how many apps will meaningfully integrate with glasses, how seamless will phone pairing be, and, crucially, how comfortable will the public feel about cameras on faces again?
Aluminium OS and Google’s PC ambition
Less flashy but strategically important is Aluminium OS, Google’s rumored Android-based operating system for PCs. Alleged wallpaper leaks and executive hints have set expectations for a 2026 reveal. Think of Aluminium as Google’s attempt to make Android a genuine cross-device platform — one that can stretch from phones to XR headsets to clamshell laptops — rather than a mobile-only ecosystem. Expect I/O to show what the company means by convergence, and possibly name partner hardware.
Privacy, social acceptance and the missing price tags
A lot of the tech is exciting on paper. The sticky parts are policy, social norms and cost. Meta’s earlier stumbles with glasses underline how easily wearable cameras can rattle the public; Google must explain data collection, where processing occurs, and how long recordings persist. Pricing remains the elephant in the room — “sometime in 2026” is the best answer for now — and the variety of partners suggests a wide price range, not a single mass-market price point.
Why this matters
Taken together, the announcements expected at these two events are more than incremental OS updates. They signal a shift in how Google imagines AI interacting with hardware: quieter, more agentic assistants; eyes-up information through glasses; and a push to unify computing across form factors with Aluminium OS. If Google pulls this off, your next phone might act more like a hub for an ecosystem of AI-enabled devices.
There’s a lot to untangle over two weeks of streams. Whether you care about practical Android tweaks, an upgraded Gemini, or the prospect of fashionable smart glasses, Google’s I/O is shaping up to be a festival of small improvements stitched into a much bigger ambition. Tune in, and judge how much of the future you want on your face and in your pocket.