Summary

  • Google Gemini’s latest update allows chaining actions across apps, enhancing multistep actions like drafting a text for the Messages app.
  • Gemini Live now supports user files, images, and YouTube links for insight delivered through real-time conversations.
  • Project Astra will introduce screen sharing and live video streaming to the Gemini app in the coming months.

Samsung just pulled the wraps off this year’s flagship Galaxy S25 series, and the launch was accompanied by a bunch of announcements from Google as well. Some of these new features bring Gemini closer to becoming the Assistant replacement we always wanted, and they aren’t even exclusive to the latest crop of Samsung phones. They nudge the AI closer to agent-like status, where you can delegate tasks for execution and take the burden off your shoulders.



Google Gemini took giant strides in 2024, with the launch of Gemini 2.0 stepping up capabilities from multimodal prompting to more Search-like functionality. Gemini now allows you to “chain” prompts and queries with other services and apps. You can ask the assistant to look up nearby restaurants on Google Maps, or create a text message from scratch. You can do this on any phone with Gemini, depending on the installed extensions.

The prompt  “Find nearby weekend activities and text them to Sarah” is added to the Gemini prompt box followed by Gemini finding a short list of activities that fit the criteria

Source: Google

Gemini can now chain actions across apps

While the Messages feature overlaps with Magic Compose, it’s an effort to centralize access to overlapping AI tools. Such integration is also a vast improvement over the more complicated itinerary-planning features introduced last year with Gemini extensions. Speaking of which, the company also announced Galaxy S25-exclusive Gemini integrations with the default Calendar, Reminder, Clock, and Notes apps.

An image of a dog is added to Gemini Live and the user asks, “How’s my composition?”

Source: Google

Files and YouTube link support in Gemini Live

Gemini Live is the real-time voice chatbot version of the same AI, and Google just announced that you can now pull in a few more external items for it to chew through. Specifically, the company has added support for user-uploaded images and files, so you can have summaries read out to you, followed by a back-and-forth conversation about a file’s contents.

Moreover, Gemini Live also allows pulling in a YouTube URL now, so you can chat with the AI about said video, be it for topical discussion, a summary of the comments, or a broader overview of the subject for additional research. We spotted this capability in development earlier in January, and it is rolling out already to the new S25, last year’s Galaxy S24, and the Google Pixel 9.

Project Astra from the future is here

Big things visible in the distance

A Google presentation of its Project Astra

Source: Google

Project Astra was demoed alongside other key Gemini advancements early last year as the next logical step after Gemini Live, offering a full audio-video experience. What we would’ve imagined as the distant future of AI is already starting to take shape: Google just confirmed that Astra features like screen sharing and real-time live video streaming are coming to the Gemini app for Android in the coming months. Once they arrive, you can simply point your phone’s camera at things and quiz the AI about them.

The company hasn’t committed to a date yet, but made it clear the new Samsung Galaxy S25 series and Pixel devices will be the first to receive this feature. Hopefully, it will trickle down to older devices compatible with the app in due course.