Google Photos was released 10 years ago this week, if you can believe it. The Android ecosystem staple debuted during Google’s 2015 I/O keynote on May 28, 2015. In the decade since, Photos has racked up a billion and a half users and nine trillion stored photos and videos. It’s a great product with clear benefits and huge reach — and my personal quintessential Google service.
I’ve been thinking a lot about that era of Google’s long-running annual I/O conference in the wake of this year’s I/O. Ten years ago, I/O brought us Google Photos. The year before that, we got juicy pre-release info on the new Android Wear, Android Auto, and Android TV. In 2013, Google Play Music made its debut on the I/O stage. But in 2025, deep in its AI era, what Google had to say at its Shoreline Amphitheater left me feeling cold.
Welcome to Compiler, your weekly digest of Google’s goings-on. I spend my days as Google Editor reading and writing about what Google’s up to across Android, Pixel, Gemini, and more, and I talk about it all right here in this column. Here’s what’s been on my mind this week.
In Google’s own list of 100 announcements made at I/O this year, 92 are AI-related. Some of those announcements are at least user-facing — for example, Search’s new AI Mode is now widely available to users in the US.
AI Mode swaps out the classic 10-blue-links search format that sends users to third-party sites — thereby generating revenue for those sites — for a chatbot-style interface that directly provides answers informed by content from the web. (This functionality has been described in a terse statement from the News/Media Alliance as “the definition of theft,” a characterization CEO Sundar Pichai pushed back on in an interview with The Verge’s Nilay Patel: “More than any other company, we think about, we prioritize, sending traffic to the web. No one sends traffic to the web in the way we do,” Pichai said.)
Google also showed off Android XR, its new operating system built for headsets and smart glasses that start hitting store shelves later this year, and demoed real-time AI-powered translation on smart glasses and in video calls. And there was a segment about a Project Astra prototype meant to show AI’s potential to help low-vision users, using Gemini to translate visual information from a phone camera into spoken-word audio. These types of demos — showing new products and services with clear use cases for regular people — would’ve fit in perfectly at I/O in years past. A lot of the rest, not so much.
Automating away the good stuff
A big chunk of this year’s I/O keynote was dedicated to generative AI features that have no clear benefit to most people, announced to what felt like misplaced fanfare. Before the keynote kicked off, singer/songwriter/producer Chaz Bear (better known as Toro y Moi) demoed Google’s new Lyria RealTime AI music utility in a sleepy DJ set he encouraged the audience to talk over. Toward the end of his time on stage, Chaz noted that the music industry was embracing AI with or without him, and that it was his job to “keep up.” (Knowing I’m a fan, my partner asked me after the show how the gen-AI music was. That I couldn’t remember seems like an answer in itself.)
In the back half of the keynote, Google talked up the newest version of its AI video generation tool, Veo 3. From a technical perspective, Veo 3 is a marvel: the clips it generates feature the most convincing AI video we’ve seen to date, and it can even create video with sound — including dialogue. Google hinted at Hollywood ambitions, announcing a partnership with Darren Aronofsky’s Primordial Soup that will see three short films, made in part using Veo 3, premiere at Tribeca later this month.
Primordial Soup describes itself as “a new venture dedicated to storytelling innovation,” and Google research scientist Jason Baldridge said on stage that “generative media is expanding the boundaries of creativity,” explaining how Google’s latest content generation tools “empower” creators. Even read charitably, this all feels disingenuous to me.
Generative AI tools exist to outsource human labor to computers. If you need to reformat a spreadsheet, or analyze large sets of data, or create a transcript from an audio recording, an AI tool can theoretically save you effort by automating that drudgery away. You’ll want to double-check the results, but time saved on tasks nobody wants to perform is easy to see as a net positive.
Music production and filmmaking, on the other hand, aren’t bothersome tasks to check off a to-do list. AI tools like Veo 3 don’t spit out fully formed work and, at least for now, any musical project or film created with AI still requires substantial human effort. But for people who create media because they actually want to, the process is part of the point — I’ve never spoken to an artist who wished making art could be more efficient.
The specific human experience that informs a given artist’s work gives it meaning, and the human labor that goes into creating a film, or an album, or a painting is part of what makes that work valuable in a material sense. Abdicating any part of the creative process to AI makes for an inherently lesser finished product than something that was made entirely by human labor, even if the finished product can look a little glossier at first blush with AI’s help.
It may save time and money to have Veo 3 generate effects for content like Primordial Soup’s Ancestra versus paying human VFX artists, sure, but AI-generated visuals will always be a probabilistic average of existing media that made its way into the AI’s training data — hardly pushing creative boundaries. There are loads of human VFX artists who would jump at the chance to work on films screening at Tribeca, too.
To me, it seems obvious that offloading that work, that privilege, to algorithms only stands to benefit the people and organizations who finance movie production to turn a profit — not the artisans who create films or the consumers who watch them. You can extend this to any and all commercial media experimenting with AI.
This is turning into a pattern
I can’t help but think of Google’s widely panned Gemini TV spot that ran for a time during the 2024 Olympics, in which an apparently busy dad had Gemini help his daughter write a fan letter to hurdler Sydney McLaughlin-Levrone — offloading to an AI app the kind of meaningful bonding experience both people would have carried for the rest of their lives. That ad was roundly criticized and quickly pulled from the air, but it seems like the underlying attitude has remained in place at Google.
Case in point: also announced at I/O this year, Gmail is getting a feature that lets it draft emails for you in a way that tries to mimic the vernacular and tone of your own writing. That alone is creepy on its face, but the way Pichai framed it on stage was downright bizarre.
Pressed for time as executives tend to be, Pichai said that he didn’t have the opportunity to send a thoughtful reply to an email from a pal who wanted advice — but that now, he “can be a better friend” by firing off an automated, ambling response that superficially looks like something the real Pichai would have written. Wouldn’t you be offended to receive a long-winded, machine-generated email, sent in earnest by someone you consider a friend?
I/O ain’t what it used to be — and neither is Google
Reveals at I/O in years past — big-ticket announcements about products like Android Wear, Google Photos, and Google Home — got me excited about what technology could do for me; specifically, Google’s technology. By contrast, I struggle to see how most of what Google talked about at I/O this year benefits me as a customer at all, and that’s to say nothing of the very real resource costs of Google’s ever-expanding AI operations.
At I/O ’25, Google seemed excited to share ways its AI can help us speed past the things that actually matter in life. But what are we supposed to be so eager to get to on the other side of chores like talking to our friends and creating art? Aren’t these the types of activities we wish we had more time for, the types of things we once hoped AI would empower us to spend time on? As a longtime observer and customer, it’s getting harder for me to see the benefit in what Google’s cooking up — and I don’t think I’m alone in that feeling.