Less than a week after Gemini 2.5 Pro Canvas was released to the public, Google has turned its attention back to Gemini Live, a new feature that lets users share their screen and camera feed in real time with the AI assistant.
As of April 7th, Gemini Live is available on Pixel 9 and Galaxy S25 series devices, and is open to Gemini Advanced subscribers on other Android phones. It is the company’s most direct move yet to bring visual, real-time AI interaction to smartphones.
📣 It’s here: Ask Gemini about what you see. Share your screen and camera with Gemini Live to brainstorm, troubleshoot and more.
Rolling out to Pixel 9 and Samsung Galaxy S25 devices today, and available to all Advanced users of @Android in the Gemini app: …pic.twitter.com/fjtd4qhvjz
– Google Gemini App (@geminiapp) April 7, 2025
Gemini Live makes AI more visual
First teased as Project Astra at I/O 2024, Gemini Live changes the way people interact with AI. Instead of relying on typed questions or static photos, users can now show the AI what they are seeing. Point the camera at your meal, a gadget, or your surroundings, and Gemini responds with relevant information. Alternatively, it can respond in real time as you share your screen while browsing, writing, or coding.
According to Google’s original post, the idea is to help you brainstorm ideas, learn about your environment, and get help with what’s on your screen. This opens up everyday use cases: point the camera at your dinner plate to get recipe ideas, ask for feedback on a document, or get help debugging the code you are working on.
Who gets it first?
The rollout began with Google’s Pixel 9 and Samsung’s Galaxy S25 series. If you are using either device, you can now access it via the Gemini app at no additional cost. For everyone else on Android, Gemini Live is available through a Gemini Advanced subscription, part of Google’s $19.99 AI Premium plan under Google One. You need Android 10 or later to use it.
However, the rollout doesn’t reach everyone at once. According to reports from 9to5Google and Android Authority, several users had to force-close the Gemini app to trigger the update. So if the feature hasn’t appeared yet, that might help.
The subscription requirement sparked mixed reactions. Some users praised it (“This is great,” @ai_for_success posted on X), while others questioned why a visual AI assistant should live behind a paywall when it is being promoted as the next big thing.
How it works
On supported devices, holding the power button launches Gemini. From there, users can tap either the camera icon or “Share screen with Live.” Android prompts for screen-sharing permission (there is currently no single-app option), and once you start, a persistent notification shows that the session is active. Camera mode streams what you’re looking at directly to the AI assistant.
CNET called this update a major shift from static images to more fluid, real-time interactions. For example, when one user pointed the camera at a Jigglypuff toy, Gemini identified it and provided extra context, according to Droid Life. Other users shared their screens to get help with shopping lists, coding tasks, and reviewing online content.
From Astra to Android
The feature is powered by Project Astra and Google’s Gemini 2.0 models, and is rumoured to be an important part of Google’s future AR plans. We are still waiting to see what that looks like, especially amid the rumors of smart glasses, but Gemini Live is clearly a step in that direction.
OpenAI’s ChatGPT introduced similar camera-based tools last year, but Gemini has one advantage: deep integration into the Android ecosystem. That makes it feel like part of your phone, not just another app.
What people are saying
The online response has been enthusiastic. Tech reviewers and early users say the assistant feels more “present” during conversations, especially when dealing with real objects and visual content. It also supports over 45 languages, helping it respond more naturally across regions.
That said, the price of full access is still a sticking point. The subscription model may keep some users from trying the feature, at least for now.
Looking ahead
Gemini Live brings a new dimension to how people use AI on their phones. Whether you’re troubleshooting, learning something new, or just looking into something on the go, the ability to show the AI what you’re dealing with could be a turning point in how these assistants fit into daily life.
For now, it’s only available on Android and is rolling out gradually. But if you are a Pixel 9 or Galaxy S25 user, or you are paying for Gemini Advanced, it’s already in your hands.