Google’s AI Edge Gallery is an experimental, open-source Android app that lets you run generative AI models directly on your phone, with no internet connection needed once the models are downloaded. It’s part of a broader shift toward edge AI, where computation happens on-device instead of in the cloud. The main advantages: your data stays local, responses are faster, and you don’t need a live network connection to use advanced AI tools.
The app functions as a local model runner and lightweight interface for exploring generative AI. You can use it to chat with language models like Google’s own Gemma 3 and Gemma 3n, analyze images, run multi-turn conversations, and test prompts. Everything runs through a set of simple interfaces—like Prompt Lab, AI Chat, and Ask Image—designed for experimentation and quick testing.
AI Edge Gallery connects to public repositories like Hugging Face, letting you browse and download open-source models that are optimized for mobile use. These models ship in on-device formats such as LiteRT (the successor to TensorFlow Lite), and which models run well depends on your hardware: performance varies, but newer or higher-end phones can handle many of the supported models smoothly.
Importantly, the app is fully open source under the Apache 2.0 license. The source code and instructions are available on GitHub, which makes it accessible to developers who want to experiment, extend its capabilities, or embed local AI into their own apps. It’s not available on the Play Store yet, so you’ll need to sideload the APK if you want to try it.
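Since the app isn’t distributed through the Play Store, the usual install path is sideloading with `adb`. A minimal sketch, assuming you’ve downloaded a release APK from the GitHub page and enabled USB debugging on the phone (the APK filename below is a placeholder, not the actual release name):

```shell
# Confirm the phone is connected and authorized for debugging.
adb devices

# Install the APK; -r replaces an existing install while keeping app data.
# "ai-edge-gallery.apk" is a hypothetical filename -- use the file you downloaded.
adb install -r ai-edge-gallery.apk
```

Alternatively, you can copy the APK to the phone and open it from a file manager, after allowing installs from unknown sources for that app.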
As of June 2025, AI Edge Gallery is still early-stage, but it’s a significant move toward making AI more private, portable, and developer-friendly—especially for those exploring what’s possible without the cloud. You can explore or install it here: https://github.com/google-ai-edge/gallery.