For the past week, the tech (and gambling) sphere has been buzzing with anticipation over Google’s latest Gemini model. The speculation surrounding Gemini 3, however, finally ends now.

The tech giant, in a new post on its Keyword blog, just made its most intelligent large language model official, alongside ‘Generative Interfaces’ and an all-new ‘Gemini Agent.’

Gemini 3 is official

There’s a lot that’s new with Gemini 3, but more importantly, this marks the first time Google has brought its new flagship AI model straight to Google Search (starting with AI Mode) on day one.

According to the tech giant, the new model sets a new bar for AI performance. “You’ll notice the responses are more helpful, better formatted and more concise,” wrote Google, adding that Gemini 3 is the best model in the world for multimodal tasks. This means that for tasks like photo analysis, reasoning, document analysis, and transcribing lecture notes, you’ll notice better performance from Gemini 3 than from its predecessors (and potentially even its competitors).

On paper, Gemini 3 Pro boasts a score of 1501 on LMArena, ranking higher than Gemini 2.5 Pro’s 1451 score.

Google AI Pro and Ultra subscribers in the US can start experimenting with Gemini 3 Pro starting today. To do so, head to Google Search > AI Mode > select ‘Thinking’ from the model drop-down.

The model will expand to everyone in the US “soon,” with AI Pro and Ultra plan holders retaining higher usage limits.

Generative interfaces end Gemini’s static UI

Think of generative interfaces as dynamic, prompt-based UIs that change depending on your specific request. The new feature is powered by two experiments: visual layout and dynamic view.

A GIF highlighting Gemini's visual layout.
Credit: Google

The former kicks in when manually selected. Instead of answering your query in plain text, visual layout triggers an immersive, magazine-style view, complete with photos and modules. For reference, a prompt like “plan a 3-day trip to Rome next summer” will produce a visual itinerary, something like this:

Dynamic view, on the other hand, changes the entire Gemini user interface. Leveraging Gemini 3’s agentic coding capabilities, the feature designs and codes a custom UI in real time, suited to your prompt. For example, prompting something like “explain the Van Gogh Gallery with life context for each piece” will generate something like this:

A GIF highlighting Gemini's new Dynamic View feature.
Credit: Google

Visual layout and dynamic view are rolling out now.

Gemini Agent arrives

Likely the most ambitious of the bunch, Gemini Agent, as Google describes it, is “an experimental feature that handles multi-step tasks directly inside Gemini.”

The agent, which can connect to your Google apps like Calendar, reminders, and more, can do a lot. For example, you can simply ask it to “organize my inbox,” and it will go through your to-dos and even draft replies to emails for your approval. Alternatively, you can give the agent complex, multi-step tasks to fulfill. Think something like “Research and help me book a mid-size SUV for my trip next week under $80/day using details from my email.” The agent would locate flight information from Gmail, compare rentals within budget, and prepare the booking for you.

Powered by Gemini 3, the agent, which needs to be manually selected from the Gemini app’s ‘Tools’ menu, can take action across other Gemini tools like Deep Research, Canvas, connected Workspace apps, live browsing, and more.

Gemini Agent is available to try out starting today, but only for US-based Google AI Ultra subscribers on the web. The tech giant did not hint at when the feature might expand to more users.