Summary

  • Apple introduces Apple Intelligence with AI features like image generation and writing tools for iOS, iPadOS, and macOS.
  • Apple promises that Apple Intelligence will leverage user data to respond to nuanced requests, aiming to save time and effort.
  • Apple Intelligence is coming in beta to some recent Apple devices this fall.



At WWDC today, Apple announced that it’s getting into the AI game with a suite of features it’s calling Apple Intelligence. Apple Intelligence comprises a bunch of AI-powered functionality across iOS, iPadOS, and macOS, including expected features like image generation and writing help. But the big-ticket promise is that, like much of the Gemini functionality Google highlighted at I/O this year, Apple Intelligence will leverage all of a user’s data across Apple’s different platforms to respond to nuanced requests in ways that should, ideally, save users time and effort.

A lot of Apple Intelligence will seem familiar to anyone who’s been following the AI space. It’ll include system-level writing tools on the upcoming macOS Sequoia that help you tailor the tone of your emails; on iPhone, it’ll let you send unnerving AI-generated art in the Messages app. Image generation will also be built into Notes and Keynote, among other apps.


More interestingly, Apple says that Apple Intelligence will be able to hook into all your various Apple services to perform actions the old Siri could never hope to. In one example, an AI-assisted Siri (complete with a new, rainbow-gradient app icon and matching animations) was able to search a user’s emails and access info online to answer the question “When is my mom’s flight landing?”



Craig Federighi says that Apple Intelligence is built around local processing, so it can be “aware of your personal data without collecting your personal data.” That on-device processing is only possible on devices running an M-series chipset or on the iPhone 15 Pro, which is powered by the A17 Pro — and even on those, some requests are still processed in the cloud. In those cases, your device will “send only the data that’s relevant to your task to be processed on Apple silicon servers,” where, Federighi says, it’s only used to process your request and isn’t stored.

I think this kind of ecosystem-wide access to your personal data is a much more appealing AI proposition for the average user than the ability to generate text or images. With Apple Intelligence, Siri will be able to call up very specific images from your Apple Photos library — kind of like the Ask Photos feature Google announced at I/O, but without having to open the Photos app.

Apple also announced ChatGPT integration that lets you send requests Siri couldn’t figure out on its own to GPT-4o. Siri will ask permission before pinging ChatGPT, and Apple says that your requests won’t be logged.



Apple Intelligence: Beta access coming this fall

Apple Intelligence is coming to iPhone 15 Pro and Pro Max, plus iPads and Mac computers with M-series chipsets, starting this fall with iOS 18, iPadOS 18, and macOS Sequoia. “Some features, additional languages, and platforms” are to follow in 2025.

Apple’s late entry into the AI market doesn’t seem especially groundbreaking, but, with the perennial AI caveat that it all has to actually work, Apple Intelligence’s ability to gather information and complete tasks across different apps could be a real boon for people invested in Apple’s ecosystem. We’ll be keeping an eye on how this all plays out, especially as Google works to inject Gemini into every corner of its own ecosystem.
