# Google AI & Vertex AI Gemini API Dart Client
Dart client for the Google AI Gemini Developer API and Vertex AI Gemini API with text generation, image generation, tool calling, grounding tools, Live API WebSocket sessions, service tier routing, and embeddings. It gives Dart and Flutter applications a pure Dart, type-safe client across iOS, Android, macOS, Windows, Linux, Web, and server-side Dart.
> **Note**
>
> The official `google_generative_ai` Dart package has been deprecated in favor of `firebase_ai`. However, since `firebase_ai` is a Flutter package rather than a pure Dart package, this unofficial client bridges the gap by providing a pure Dart, fully type-safe API client for both Google AI and Vertex AI.
> **Tip**
>
> Coding agents: start with `llms.txt`. It links to the package docs, examples, and optional references in a compact format.
## Features

### Core Gemini APIs
- Text generation, image generation, streaming, token counting, and multimodal prompts
- Embeddings, model discovery, and context caching
- Tool calling plus structured outputs through typed schemas
- Service tier selection (`standard`, `flex`, `priority`) per request
- Long-running operations, pagination helpers, and retries
### Grounding and retrieval tools
- Google Search, URL Context, Google Maps, and File Search tools
- Files, cached contents, corpora, file search stores, and batch operations
- Interactions and Live API WebSocket flows with code execution and MCP server integration
### Google AI and Vertex AI support
- Google AI API key workflows for hosted Gemini access
- Vertex AI project and location routing with OAuth auth providers
- Auth tokens and tuned models for ephemeral auth and tuned model workflows
- One Dart client surface for server apps, CLIs, and Flutter codebases
### Why choose this client?
- Pure Dart with no Flutter dependency — works in mobile apps, backends, and CLIs.
- Type-safe request and response models with minimal dependencies (`http`, `logging`, `meta`).
- Streaming, retries, interceptors, and error handling built into the client.
- One package supports both Google AI and Vertex AI without duplicated abstractions.
- Strict semver versioning so downstream packages can depend on stable, predictable version ranges.
## Quickstart
```yaml
dependencies:
  googleai_dart: ^5.0.0
```
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.generateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('Explain why Dart works well for APIs.')],
      ),
    );
    print(response.text);
  } finally {
    client.close();
  }
}
```
## Configuration
Configure Google AI, Vertex AI, auth providers, and retries
Use GoogleAIClient.fromEnvironment() for the default GOOGLE_GENAI_API_KEY workflow. Switch to GoogleAIConfig.googleAI(...) or GoogleAIConfig.vertexAI(...) when you need alternate auth placement, custom headers, or Vertex-specific project routing.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final googleClient = GoogleAIClient(
    config: GoogleAIConfig.googleAI(
      authProvider: ApiKeyProvider('YOUR_API_KEY'),
      timeout: const Duration(minutes: 2),
      retryPolicy: RetryPolicy.defaultPolicy,
    ),
  );

  final vertexClient = GoogleAIClient(
    config: GoogleAIConfig.vertexAI(
      projectId: 'your-project-id',
      location: 'us-central1',
      authProvider: BearerTokenProvider('YOUR_ACCESS_TOKEN'),
    ),
  );

  googleClient.close();
  vertexClient.close();
}
```
Environment variable: `GOOGLE_GENAI_API_KEY`
Use explicit configuration on web builds where runtime environment variables are not available.
API Versions:
Google AI supports both stable and beta API versions, and googleai_dart exposes them through the same config object.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final stableClient = GoogleAIClient(
    config: GoogleAIConfig.googleAI(
      apiVersion: ApiVersion.v1,
      authProvider: ApiKeyProvider('YOUR_API_KEY'),
    ),
  );

  final betaClient = GoogleAIClient(
    config: GoogleAIConfig.googleAI(
      apiVersion: ApiVersion.v1beta,
      authProvider: ApiKeyProvider('YOUR_API_KEY'),
    ),
  );

  stableClient.close();
  betaClient.close();
}
```
`v1` is the stable choice for production rollouts. `v1beta` exposes preview features earlier and is the default for Google AI.
Vertex AI:
Use Vertex AI when you need OAuth-based auth, GCP project scoping, or enterprise controls such as regional routing and broader Google Cloud integration.
```dart
import 'package:googleai_dart/googleai_dart.dart';

class MyOAuthProvider implements AuthProvider {
  @override
  Future<AuthCredentials> getCredentials() async {
    return BearerTokenCredentials('YOUR_ACCESS_TOKEN');
  }
}

Future<void> main() async {
  final client = GoogleAIClient(
    config: GoogleAIConfig.vertexAI(
      projectId: 'your-project-id',
      location: 'us-central1',
      authProvider: MyOAuthProvider(),
    ),
  );
  client.close();
}
```
Vertex AI setup requirements:
- A GCP project with Vertex AI enabled
- OAuth 2.0 credentials or a service account flow
- A valid project ID and location such as `us-central1` or `global` (the `global` location uses `aiplatform.googleapis.com` instead of a regional endpoint)
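For instance, a sketch of routing to the global endpoint, assuming the same `GoogleAIConfig.vertexAI` constructor shown above accepts `'global'` as a location:

```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  // The 'global' location routes through aiplatform.googleapis.com rather
  // than a regional host such as us-central1-aiplatform.googleapis.com.
  final client = GoogleAIClient(
    config: GoogleAIConfig.vertexAI(
      projectId: 'your-project-id',
      location: 'global',
      authProvider: BearerTokenProvider('YOUR_ACCESS_TOKEN'),
    ),
  );
  client.close();
}
```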
## Usage

### How do I generate text with Gemini?
client.models.generateContent(...) is the core entry point for most Gemini use cases. The response.text extension keeps simple text generation ergonomic for Dart and Flutter code.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.generateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('Explain what hot restart does in Flutter.')],
        // Optional: route to a specific service tier (standard, flex, priority)
        serviceTier: ServiceTier.flex,
      ),
    );
    print(response.text);
  } finally {
    client.close();
  }
}
```
### How do I stream Gemini output?
Streaming uses the same request type as normal generation, so you can switch between buffered and incremental output without changing the rest of your app code.
```dart
import 'dart:io';

import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    await for (final chunk in client.models.streamGenerateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('Write a short poem about Dart streams.')],
      ),
    )) {
      final text = chunk.text;
      if (text != null) {
        stdout.write(text);
      }
    }
  } finally {
    client.close();
  }
}
```
### How do I generate images?
Image generation uses responseModalities on the same generation endpoint, which keeps multimodal workflows inside one Gemini client. The response.data helper gives access to the generated image bytes.
```dart
import 'dart:convert';

import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.generateContent(
      model: 'gemini-2.5-flash-image',
      request: GenerateContentRequest(
        contents: [Content.text('A clean geometric poster about Flutter')],
        generationConfig: const GenerationConfig(
          responseModalities: ['TEXT', 'IMAGE'],
        ),
      ),
    );
    final imageData = response.data;
    if (imageData != null) {
      print(base64Decode(imageData).length);
    }
  } finally {
    client.close();
  }
}
```
### How do I use tool calling?
Gemini tool calling uses typed FunctionDeclaration definitions inside Tool objects. This keeps the tool schema local to the request and easy to share across Dart services.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.generateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('What is the weather in Madrid?')],
        tools: [
          Tool(
            functionDeclarations: [
              FunctionDeclaration(
                name: 'get_weather',
                description: 'Get current weather',
                parameters: Schema(
                  type: SchemaType.object,
                  properties: {
                    'location': Schema(type: SchemaType.string),
                  },
                  required: ['location'],
                ),
              ),
            ],
          ),
        ],
      ),
    );
    print(response.text);
  } finally {
    client.close();
  }
}
```
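When the model decides to call your function, your app executes it and sends the result back in a follow-up request. The accessor and constructor names below (`functionCalls`, `Content.functionResponse`) are assumptions for illustration, not confirmed package API; `function_calling_example.dart` in the repo shows the canonical round trip.

```dart
import 'package:googleai_dart/googleai_dart.dart';

// Hypothetical sketch: `functionCalls` and `Content.functionResponse`
// are illustrative names, not confirmed package API.
Future<void> answerToolCall(
  GoogleAIClient client,
  GenerateContentResponse response,
  GenerateContentRequest originalRequest,
) async {
  final calls = response.functionCalls ?? const [];
  if (calls.isEmpty || calls.first.name != 'get_weather') return;
  final call = calls.first;

  // Run your real weather lookup, then return the result to the model.
  final result = {'location': call.args?['location'], 'tempC': 21};
  final followUp = await client.models.generateContent(
    model: 'gemini-2.5-flash',
    request: GenerateContentRequest(
      contents: [
        ...originalRequest.contents,
        Content.functionResponse(name: call.name, response: result),
      ],
      tools: originalRequest.tools,
    ),
  );
  print(followUp.text);
}
```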
### How do I ground responses with Google data?
Grounding tools let Gemini call Google Search, URL Context, Maps, or File Search without leaving the same client surface. Use these when you need fresher answers or source-aware responses in Dart and Flutter apps.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.generateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('What are the latest Dart language updates?')],
        tools: [Tool(googleSearch: GoogleSearch())],
      ),
    );
    print(response.text);
  } finally {
    client.close();
  }
}
```
### How do I create embeddings?
Embeddings are a first-class resource and support multimodal models. This makes retrieval and semantic search pipelines straightforward to build in pure Dart.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final response = await client.models.embedContent(
      model: 'gemini-embedding-2-preview',
      request: EmbedContentRequest(
        content: Content.text('Dart language'),
        taskType: TaskType.retrievalDocument,
      ),
    );
    print(response.embedding.values.length);
  } finally {
    client.close();
  }
}
```
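Since an embedding response is just a vector of doubles, a retrieval pipeline only needs a similarity function on top. A minimal pure-Dart sketch, with toy vectors standing in for real `embedContent` results:

```dart
import 'dart:math';

/// Cosine similarity between two equal-length embedding vectors.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}

void main() {
  // Toy vectors standing in for real embedding values.
  final query = [0.1, 0.9, 0.2];
  final docs = {
    'dart-intro': [0.1, 0.8, 0.3],
    'cooking': [0.9, 0.1, 0.0],
  };
  // Rank documents by similarity to the query, most similar first.
  final ranked = docs.entries.toList()
    ..sort((x, y) => cosineSimilarity(query, y.value)
        .compareTo(cosineSimilarity(query, x.value)));
  print(ranked.first.key); // prints "dart-intro"
}
```

In a real pipeline you would embed each document once with `TaskType.retrievalDocument`, embed queries with a query task type, and rank with the same similarity function.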
### How do I upload files for prompts?
The Google AI Files API is useful for large prompts and multimodal workflows. For Vertex AI, use Cloud Storage URIs instead of this resource.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    final file = await client.files.upload(
      filePath: '/path/to/image.jpg',
      mimeType: 'image/jpeg',
      displayName: 'Sample image',
    );
    print(file.uri);
  } finally {
    client.close();
  }
}
```
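Once uploaded, the file URI is typically referenced from a prompt part. The `Content.multi`, `Part.text`, and `Part.fileData` names below are hypothetical illustrations, not confirmed package API; `files_example.dart` in the repo shows the canonical pattern.

```dart
import 'package:googleai_dart/googleai_dart.dart';

// Hypothetical sketch: `Content.multi` and `Part.fileData` are
// illustrative names, not confirmed package API.
Future<void> describeUploadedImage(GoogleAIClient client, String fileUri) async {
  final response = await client.models.generateContent(
    model: 'gemini-2.5-flash',
    request: GenerateContentRequest(
      contents: [
        Content.multi([
          Part.text('Describe this image.'),
          Part.fileData(fileUri: fileUri, mimeType: 'image/jpeg'),
        ]),
      ],
    ),
  );
  print(response.text);
}
```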
### How do I use the Live API?
The Live API gives you bidirectional WebSocket sessions for text and audio. It supports audio input at 16kHz PCM, audio output at 24kHz PCM, session resumption with resumption tokens, and VAD (voice activity detection). Use createLiveClient() when you need realtime interactions beyond regular streaming responses.
```dart
import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  final liveClient = client.createLiveClient();
  try {
    final session = await liveClient.connect(
      model: 'gemini-2.0-flash-live-001',
    );
    session.sendText('Hello! Tell me a short joke.');
    await session.close();
  } finally {
    await liveClient.close();
    client.close();
  }
}
```
## Error Handling
Handle API failures, rate limits, canceled requests, and live session errors
googleai_dart throws typed exceptions for REST and Live API failures, which keeps retries and fallbacks explicit. Catch ApiException and its subclasses first, then fall back to GoogleAIException for other client-side failures.
```dart
import 'dart:io';

import 'package:googleai_dart/googleai_dart.dart';

Future<void> main() async {
  final client = GoogleAIClient.fromEnvironment();
  try {
    await client.models.generateContent(
      model: 'gemini-2.5-flash',
      request: GenerateContentRequest(
        contents: [Content.text('Ping')],
      ),
    );
  } on RateLimitException catch (error) {
    stderr.writeln('Retry after: ${error.retryAfter}');
  } on ApiException catch (error) {
    stderr.writeln('Gemini API error ${error.statusCode}: ${error.message}');
  } on GoogleAIException catch (error) {
    stderr.writeln('Google AI client error: $error');
  } finally {
    client.close();
  }
}
```
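These typed exceptions also support a manual retry loop when you want behavior beyond the built-in `RetryPolicy`. The sketch below assumes `retryAfter` is a nullable `Duration`; verify against the package docs before relying on it.

```dart
import 'dart:io';

import 'package:googleai_dart/googleai_dart.dart';

// Sketch of a manual retry loop with exponential backoff.
// Assumes `retryAfter` is a nullable Duration; the client's built-in
// RetryPolicy may already cover this for you.
Future<GenerateContentResponse> generateWithRetry(
  GoogleAIClient client,
  GenerateContentRequest request, {
  int maxAttempts = 3,
}) async {
  for (var attempt = 1; ; attempt++) {
    try {
      return await client.models.generateContent(
        model: 'gemini-2.5-flash',
        request: request,
      );
    } on RateLimitException catch (error) {
      if (attempt >= maxAttempts) rethrow;
      // Honor the server's hint when present, else back off exponentially.
      final wait = error.retryAfter ?? Duration(seconds: 1 << attempt);
      stderr.writeln('Rate limited; retrying in $wait');
      await Future<void>.delayed(wait);
    }
  }
}
```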
## Examples
See the `example/` directory for complete examples:

| Example | Description |
|---|---|
| `abort_example.dart` | Request cancellation with abort triggers |
| `api_versions_example.dart` | API version selection (v1 vs v1beta) |
| `auth_tokens_example.dart` | Ephemeral token authentication |
| `batch_example.dart` | Batch operations |
| `batches_example.dart` | Batch resource management |
| `cached_contents_example.dart` | Context caching |
| `caching_example.dart` | Context caching API usage |
| `complete_api_example.dart` | Complete API coverage demo |
| `corpora_example.dart` | Corpus management for semantic retrieval |
| `documents_example.dart` | Document management within corpora |
| `embeddings_example.dart` | Text embeddings |
| `error_handling_example.dart` | Exception handling patterns |
| `example.dart` | Quick-start usage |
| `file_search_example.dart` | File search with semantic retrieval |
| `file_search_stores_example.dart` | File search store management |
| `files_example.dart` | File uploads for prompts |
| `function_calling_example.dart` | Tool calling |
| `generate_answer_example.dart` | Grounded question answering (RAG) |
| `generate_content.dart` | Basic text generation |
| `generated_files_example.dart` | Generated files for video outputs |
| `google_maps_example.dart` | Google Maps grounding |
| `google_search_example.dart` | Grounding with Google Search |
| `image_generation_example.dart` | Image generation |
| `interactions_example.dart` | Server-side conversation state management |
| `live_example.dart` | Live API WebSocket sessions |
| `models_example.dart` | List and inspect models |
| `oauth_refresh_example.dart` | OAuth token refresh during retries |
| `operations_example.dart` | Long-running operations management |
| `pagination_example.dart` | Paginated list results |
| `permissions_example.dart` | Permission management for resources |
| `prediction_example.dart` | Video generation with Veo model |
| `streaming_example.dart` | Streaming responses |
| `tuned_model_generation_example.dart` | Generate content with tuned models |
| `tuned_models_example.dart` | Tuned model workflows |
| `url_context_example.dart` | URL content fetching and analysis |
| `vertex_ai_example.dart` | Vertex AI configuration |
## API Coverage
| API | Status |
|---|---|
| Models | ✅ Full |
| Tuned Models | ✅ Full |
| Files | ✅ Full |
| Generated Files | ✅ Full |
| Cached Contents | ✅ Full |
| Batches | ✅ Full |
| Corpora | ✅ Full |
| File Search Stores | ✅ Full |
| Interactions (Experimental) | ✅ Full |
| Auth Tokens | ✅ Full |
| Live API (WebSocket) | ✅ Full |
## Official Documentation
- API reference
- Google AI Gemini API docs
- Vertex AI Gemini API docs
- Google GenAI Python SDK
- Google GenAI JS/TS SDK
## Sponsor
If these packages are useful to you or your company, please consider sponsoring the project. Development and maintenance are provided to the community for free, but integration tests against real APIs and the tooling required to build and verify releases still have real costs. Your support, at any level, helps keep these packages maintained and free for the Dart & Flutter community.
## License
This package is licensed under the MIT License.
This is a community-maintained package and is not affiliated with or endorsed by Google.