Summary

  • Gemma 3 by Google offers portability by running on a single GPU/TPU, unlike models that typically demand workstation-grade hardware.
  • Gemma 3 shares tech with Gemini 2.0, supports 35 languages, and offers models with up to 27B parameters.
  • It can be trained and customized through Vertex AI and Google Colab, and it launches alongside an image safety checker called ShieldGemma 2.

Google’s Gemini AI model has locked horns with consumer and enterprise-focused rivals like OpenAI’s ChatGPT, but there is also a bustling market of researchers and developers who create new applications for the evolving technology and push the envelope of what’s possible with AI. Google tapped into this with its first two Gemma models, announced in February last year. Now, the company has announced Gemma 3 for researchers.


Just over a year after the company announced the first two Gemma models, Gemma 3 has been unveiled, and the focus is squarely on portability (via 9to5Google). The model can run on whatever device a researcher needs it to, including phones and computers, but Google’s headlining feature this time is Gemma’s ability to run on just a single GPU or TPU. That’s remarkable because running comparably capable models typically requires workstation-grade hardware with multiple GPUs.

Google’s latest model reportedly shares the research and technology that powers the consumer-ready Gemini 2.0 models. As a result, it supports 35 natural languages out of the box (with pre-trained support for over 140) and boasts a 128,000-token context window. It is available in 1B, 4B, 12B, and 27B parameter sizes, and all but the 1B model accept both text and images as input.

Advanced capabilities and safe operation with ShieldGemma 2

Available through Vertex AI and Google Colab

Google achieved the portability Gemma 3 boasts through quantization. Official quantized versions shrink the model size and computational hardware requirements while maintaining accuracy. Google recommends working with NVIDIA GPUs, since the hardware partner has optimized Gemma 3 models to maximize performance on everything from the Jetson Nano to Blackwell GPUs.
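For developers curious what running a quantized model on a single GPU looks like in practice, here is a minimal sketch using the Hugging Face transformers and bitsandbytes libraries. The repo name and 4-bit settings below are illustrative assumptions, not Google's official quantized release.

```python
# Sketch: load a small Gemma 3 checkpoint in 4-bit precision on one GPU.
# The model ID "google/gemma-3-1b-it" and the quantization settings are
# assumptions for illustration; a license acceptance and access token may be required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-3-1b-it"  # assumed Hugging Face repo name (text-only 1B instruct model)

# 4-bit quantization keeps the weights small enough to fit on a single consumer GPU.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place the quantized layers on the available GPU automatically
)

prompt = "Explain in one sentence what quantization does to a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```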

Interested developers can train and customize the new Gemma models through Vertex AI and Google Colab. Google is claiming state-of-the-art performance that outshines Llama 405B, DeepSeek-V3, and o3-mini.

Alongside this dynamic new model, Google also unveiled ShieldGemma 2. In a push for responsible AI image generation, ShieldGemma 2 is an image safety checker that uses a 4B-parameter model to spit out a safety label when content is dangerous, sexually explicit, or violent.

Gemma 3 is available for download through Kaggle and Hugging Face, and devs can find more info at Google AI Studio too.
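As a quick starting point, the snippet below sketches how the weights could be pulled locally with the huggingface_hub client. The repo ID is an assumption, and Gemma downloads require accepting Google's license and authenticating with an access token.

```python
# Sketch: download Gemma 3 model files from Hugging Face to a local cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="google/gemma-3-1b-it",  # assumed repo name for the 1B instruct model
    # token="hf_..."  # needed after accepting the Gemma license on Hugging Face
)
print(f"Model files downloaded to: {local_dir}")
```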