Along with the announcement of the Pixel 9 series today, Google is also introducing a number of new AI features for the Pixel 9 and other Android smartphones.
According to Google, starting today, Gemini’s overlay can be brought up on top of another app, and you can ask Gemini questions about what’s on your screen. This feature actually first launched with the Galaxy Z Fold 6 last month. Now it’s coming to even more devices. For example, you can bring up a document and ask Gemini questions about it or even have it summarized. This works really well on foldables like the new Google Pixel 9 Pro Fold.
Google has also noted that it has completely rebuilt the assistant experience with Gemini, allowing you to speak to Gemini naturally, like you would with another person. It will understand your intent and follow your train of thought.
Finally, there’s Gemini Live. This feature offers a mobile conversational experience that lets you chat with Gemini about whatever might be on your mind. You can ask Gemini complex questions, explore new ideas, or even brainstorm potential jobs well-suited for your skillset or degree. Gemini Live will start rolling out today in English to all Gemini Advanced subscribers on Android. It will be coming in more languages in “the coming weeks”.
Your data with Gemini is safe and secure
Google is also reassuring everyone that your data is secure with Gemini, touting that only Gemini can do everything with a secure, all-in-one approach that doesn’t require handing off your data to a third-party AI provider, as you would with ChatGPT. This is clearly a shot at Apple Intelligence, which is partnering with OpenAI.
Google is also touting that, for some of the most sensitive use cases, your data never leaves your phone. Gemini Nano is built into the Pixel 9, powering features like Call Notes, which summarizes audio from a phone call, and Pixel Screenshots, which saves and organizes your screenshots.