Google Gen AI SDK¶
Documentation: https://googleapis.github.io/python-genai/
https://github.com/googleapis/python-genai
Google Gen AI Python SDK provides an interface for developers to integrate Google’s generative models into their Python applications. It supports the Gemini Developer API and Vertex AI APIs.
Installation¶
pip install google-genai
With uv:
uv pip install google-genai
Imports¶
from google import genai
from google.genai import types
Create a client¶
Run one of the following code blocks to create a client for the service you are using (Gemini Developer API or Vertex AI).
from google import genai
# Only run this block for Gemini Developer API
client = genai.Client(api_key='GEMINI_API_KEY')
from google import genai
# Only run this block for Vertex AI API
client = genai.Client(
vertexai=True, project='your-project-id', location='us-central1'
)
(Optional) Using environment variables:
You can create a client by configuring the necessary environment variables. Configuration setup instructions depend on whether you’re using the Gemini Developer API or the Gemini API in Vertex AI.
Gemini Developer API: Set the GEMINI_API_KEY or GOOGLE_API_KEY. It will automatically be picked up by the client. It’s recommended that you set only one of those variables, but if both are set, GOOGLE_API_KEY takes precedence.
export GEMINI_API_KEY='your-api-key'
Gemini API on Vertex AI: Set GOOGLE_GENAI_USE_VERTEXAI, GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_LOCATION, as shown below:
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='us-central1'
from google import genai
client = genai.Client()
Close a client¶
Explicitly close the sync client to ensure that resources, such as the underlying HTTP connections, are properly cleaned up and closed.
from google.genai import Client
client = Client()
response_1 = client.models.generate_content(
model=MODEL_ID,
contents='Hello',
)
response_2 = client.models.generate_content(
model=MODEL_ID,
contents='Ask a question',
)
# Close the sync client to release resources.
client.close()
To explicitly close the async client:
from google.genai import Client
aclient = Client(
vertexai=True, project='my-project-id', location='us-central1'
).aio
response_1 = await aclient.models.generate_content(
model=MODEL_ID,
contents='Hello',
)
response_2 = await aclient.models.generate_content(
model=MODEL_ID,
contents='Ask a question',
)
# Close the async client to release resources.
await aclient.aclose()
Client context managers¶
The sync client context manager closes the underlying sync client when exiting the with block.
from google.genai import Client
with Client() as client:
response_1 = client.models.generate_content(
model=MODEL_ID,
contents='Hello',
)
response_2 = client.models.generate_content(
model=MODEL_ID,
contents='Ask a question',
)
The async client context manager closes the underlying async client when exiting the with block.
from google.genai import Client
async with Client().aio as aclient:
response_1 = await aclient.models.generate_content(
model=MODEL_ID,
contents='Hello',
)
response_2 = await aclient.models.generate_content(
model=MODEL_ID,
contents='Ask a question',
)
API Selection¶
By default, the SDK uses the beta API endpoints provided by Google to support preview features in the APIs. The stable API endpoints can be selected by setting the API version to v1.
To set the API version, use http_options. For example, to set the API version to v1 for Vertex AI:
from google import genai
from google.genai import types
client = genai.Client(
vertexai=True,
project='your-project-id',
location='us-central1',
http_options=types.HttpOptions(api_version='v1')
)
To set the API version to v1alpha for the Gemini Developer API:
from google import genai
from google.genai import types
# Only run this block for Gemini Developer API
client = genai.Client(
api_key='GEMINI_API_KEY',
http_options=types.HttpOptions(api_version='v1alpha')
)
Faster async client option: Aiohttp¶
By default, the SDK uses httpx for both the sync and async client implementations. For faster async performance, you may install google-genai[aiohttp]. The Gen AI SDK configures trust_env=True to match the default behavior of httpx. Additional arguments for aiohttp.ClientSession.request() (see the _RequestOptions args) can be passed as follows:
http_options = types.HttpOptions(
async_client_args={'cookies': ..., 'ssl': ...},
)
client=Client(..., http_options=http_options)
Proxy¶
Both the httpx and aiohttp libraries read proxy settings from environment variables (via urllib.request.getproxies). Before client initialization, you can set the proxy (and, optionally, SSL_CERT_FILE) environment variables:
export HTTPS_PROXY='http://username:password@proxy_uri:port'
export SSL_CERT_FILE='client.pem'
If you need a socks5 proxy, httpx supports it when passed via arguments to httpx.Client(). Install httpx[socks] to use it, then pass the proxy as follows:
http_options = types.HttpOptions(
client_args={'proxy': 'socks5://user:pass@host:port'},
async_client_args={'proxy': 'socks5://user:pass@host:port'},
)
client=Client(..., http_options=http_options)
Custom base url¶
In some cases you might need a custom base URL (for example, an API gateway proxy server) and to bypass some authentication checks for project, location, or API key. You can pass the custom base URL like this:
base_url = 'https://test-api-gateway-proxy.com'
client = Client(
vertexai=True, # Currently only vertexai=True is supported
http_options={
'base_url': base_url,
'headers': {'Authorization': 'Bearer test_token'},
},
)
Types¶
Parameter types can be specified as either dictionaries (TypedDict) or Pydantic models.
Pydantic model types are available in the types module.
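For example, these two forms of a generation config are interchangeable (a minimal sketch using fields shown later in this guide):
from google.genai import types
# Dictionary form (TypedDict-style).
config_dict = {'temperature': 0.2, 'max_output_tokens': 100}
# Equivalent Pydantic model form.
config_typed = types.GenerateContentConfig(
    temperature=0.2,
    max_output_tokens=100,
)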
Models¶
The client.models module exposes model inference and model getters. See the ‘Create a client’ section above to initialize a client.
Generate Content¶
with text content input (text output)¶
response = client.models.generate_content(
model='gemini-2.5-flash', contents='Why is the sky blue?'
)
print(response.text)
with text content input (image output)¶
from google.genai import types
response = client.models.generate_content(
model='gemini-2.5-flash-image',
contents='A cartoon infographic for flying sneakers',
config=types.GenerateContentConfig(
response_modalities=["IMAGE"],
image_config=types.ImageConfig(
aspect_ratio="9:16",
),
),
)
for part in response.parts:
if part.inline_data:
generated_image = part.as_image()
generated_image.show()
with uploaded file (Gemini Developer API only)¶
Download the file in the console:
!wget -q https://storage.googleapis.com/generativeai-downloads/data/a11.txt
Python code:
file = client.files.upload(file='a11.txt')
response = client.models.generate_content(
model='gemini-2.5-flash',
contents=['Could you summarize this file?', file]
)
print(response.text)
How to structure contents argument for generate_content¶
The SDK always converts the inputs to the contents argument into list[types.Content]. The following shows some common ways to provide your inputs.
Provide a list[types.Content]¶
This is the canonical way to provide contents; the SDK will not do any conversion.
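For example, a multi-turn history can be passed as an explicit list:
from google.genai import types
contents = [
    types.Content(
        role='user',
        parts=[types.Part.from_text(text='Why is the sky blue?')]
    ),
    types.Content(
        role='model',
        parts=[types.Part.from_text(text='Rayleigh scattering of sunlight.')]
    ),
    types.Content(
        role='user',
        parts=[types.Part.from_text(text='Can you explain that more simply?')]
    ),
]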
Provide a types.Content instance¶
from google.genai import types
contents = types.Content(
role='user',
parts=[types.Part.from_text(text='Why is the sky blue?')]
)
The SDK converts this to:
[
types.Content(
role='user',
parts=[types.Part.from_text(text='Why is the sky blue?')]
)
]
Provide a string¶
contents='Why is the sky blue?'
The SDK will assume this is a text part, and it converts this into the following:
[
types.UserContent(
parts=[
types.Part.from_text(text='Why is the sky blue?')
]
)
]
types.UserContent is a subclass of types.Content that fixes the role field to user.
Provide a list of string¶
contents=['Why is the sky blue?', 'Why is the cloud white?']
The SDK assumes these are two text parts and converts them into a single content, like the following:
[
types.UserContent(
parts=[
types.Part.from_text(text='Why is the sky blue?'),
types.Part.from_text(text='Why is the cloud white?'),
]
)
]
types.UserContent is a subclass of types.Content whose role field is fixed to user.
Provide a function call part¶
from google.genai import types
contents = types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
)
The SDK converts a function call part to a content with a model role:
[
types.ModelContent(
parts=[
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
)
]
)
]
types.ModelContent is a subclass of types.Content whose role field is fixed to model.
Provide a list of function call parts¶
from google.genai import types
contents = [
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
),
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'New York'}
),
]
The SDK converts a list of function call parts into a single content with a model role:
[
types.ModelContent(
parts=[
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'Boston'}
),
types.Part.from_function_call(
name='get_weather_by_location',
args={'location': 'New York'}
)
]
)
]
types.ModelContent is a subclass of types.Content whose role field is fixed to model.
Provide a non function call part¶
from google.genai import types
contents = types.Part.from_uri(
    file_uri='gs://generativeai-downloads/images/scones.jpg',
    mime_type='image/jpeg',
)
The SDK converts all non function call parts into a content with a user role.
[
types.UserContent(parts=[
types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
)
])
]
Provide a list of non function call parts¶
from google.genai import types
contents = [
    types.Part.from_text(text='What is this image about?'),
    types.Part.from_uri(
        file_uri='gs://generativeai-downloads/images/scones.jpg',
        mime_type='image/jpeg',
)
]
The SDK will convert the list of parts into a content with a user role:
[
types.UserContent(
parts=[
        types.Part.from_text(text='What is this image about?'),
        types.Part.from_uri(
            file_uri='gs://generativeai-downloads/images/scones.jpg',
            mime_type='image/jpeg',
)
]
)
]
Mix types in contents¶
You can also provide a list of types.ContentUnion. The SDK leaves items of types.Content as is, groups consecutive non function call parts into a single types.UserContent, and groups consecutive function call parts into a single types.ModelContent.
If you put a list within a list, the inner list can only contain types.PartUnion items. The SDK will convert the inner list into a single types.UserContent.
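A minimal sketch of such a mixed list: per the rules above, the string becomes one types.UserContent, the function call part becomes a types.ModelContent, and the function response part becomes another types.UserContent.
from google.genai import types
contents = [
    'What is the weather like in Boston?',
    types.Part.from_function_call(
        name='get_weather_by_location',
        args={'location': 'Boston'}
    ),
    types.Part.from_function_response(
        name='get_weather_by_location',
        response={'result': 'sunny'}
    ),
]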
System Instructions and Other Configs¶
The output of the model can be influenced by several optional settings available in generate_content’s config parameter. For example, increasing max_output_tokens is essential for longer model responses. To make a model more deterministic, lower the temperature parameter to reduce randomness, with values near 0 minimizing variability. Capabilities and parameter defaults for each model are shown in the Vertex AI docs and Gemini API docs respectively.
from google.genai import types
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents='high',
config=types.GenerateContentConfig(
system_instruction='I say high, you say low',
max_output_tokens=3,
temperature=0.3,
),
)
print(response.text)
Typed Config¶
All API methods support Pydantic types for parameters as well as dictionaries. You can get the types from google.genai.types.
from google.genai import types
response = client.models.generate_content(
model='gemini-2.0-flash-001',
contents=types.Part.from_text(text='Why is the sky blue?'),
config=types.GenerateContentConfig(
temperature=0,
top_p=0.95,
top_k=20,
candidate_count=1,
seed=5,
max_output_tokens=100,
stop_sequences=['STOP!'],
presence_penalty=0.0,
frequency_penalty=0.0,
),
)
print(response.text)
List Base Models¶
To retrieve tuned models, see: List Tuned Models
for model in client.models.list():
print(model)
pager = client.models.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
List Base Models (Asynchronous)¶
async for model in await client.aio.models.list():
    print(model)
async_pager = await client.aio.models.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Safety Settings¶
from google.genai import types
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='Say something bad.',
config=types.GenerateContentConfig(
safety_settings=[
types.SafetySetting(
category='HARM_CATEGORY_HATE_SPEECH',
threshold='BLOCK_ONLY_HIGH',
)
]
),
)
print(response.text)
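The per-category ratings are returned on each candidate; a minimal sketch of reading them back (safety_ratings may be None when nothing was flagged):
for rating in response.candidates[0].safety_ratings or []:
    print(rating.category, rating.probability)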
Function Calling¶
Automatic Python function Support:¶
You can pass a Python function directly as a tool; by default, the SDK automatically calls it and sends the function response back to the model.
from google.genai import types
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return 'sunny'
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What is the weather like in Boston?',
config=types.GenerateContentConfig(
tools=[get_current_weather],
),
)
print(response.text)
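The intermediate tool turns that the SDK executed on your behalf are recorded on the response; a minimal sketch of inspecting them:
# automatic_function_calling_history holds the intermediate contents
# (function calls and function responses) from the automatic loop.
for content in response.automatic_function_calling_history or []:
    print(content.role, content.parts)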
Disabling automatic function calling¶
If you pass a Python function as a tool directly but do not want it invoked automatically, you can disable automatic function calling as follows:
from google.genai import types
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What is the weather like in Boston?',
config=types.GenerateContentConfig(
tools=[get_current_weather],
automatic_function_calling=types.AutomaticFunctionCallingConfig(
disable=True
),
),
)
With automatic function calling disabled, you will get a list of function call parts in the response:
function_calls: Optional[List[types.FunctionCall]] = response.function_calls
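A minimal sketch of dispatching those calls yourself using the function defined above:
for function_call in response.function_calls or []:
    if function_call.name == 'get_current_weather':
        # FunctionCall.args is a dict of the model-provided arguments.
        print(get_current_weather(**function_call.args))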
Manually declare and invoke a function for function calling¶
If you don’t want to use the automatic function support, you can manually declare the function and invoke it.
The following example shows how to declare a function and pass it as a tool. Then you will receive a function call part in the response.
from google.genai import types
function = types.FunctionDeclaration(
name='get_current_weather',
description='Get the current weather in a given location',
parameters_json_schema={
'type': 'object',
'properties': {
'location': {
'type': 'string',
'description': 'The city and state, e.g. San Francisco, CA',
}
},
'required': ['location'],
},
)
tool = types.Tool(function_declarations=[function])
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What is the weather like in Boston?',
config=types.GenerateContentConfig(
tools=[tool],
),
)
print(response.function_calls[0])
After you receive the function call part from the model, you can invoke the function, get the function response, and pass the response back to the model. The following example shows how to do this for a simple function invocation.
from google.genai import types
user_prompt_content = types.Content(
role='user',
parts=[types.Part.from_text(text='What is the weather like in Boston?')],
)
function_call_part = response.function_calls[0]
function_call_content = response.candidates[0].content
try:
    function_result = get_current_weather(
        **function_call_part.args
    )
function_response = {'result': function_result}
except Exception as e:
    # Instead of raising the exception, you can let the model handle it.
    function_response = {'error': str(e)}
function_response_part = types.Part.from_function_response(
name=function_call_part.name,
response=function_response,
)
function_response_content = types.Content(
role='tool', parts=[function_response_part]
)
response = client.models.generate_content(
model='gemini-2.5-flash',
contents=[
user_prompt_content,
function_call_content,
function_response_content,
],
config=types.GenerateContentConfig(
tools=[tool],
),
)
print(response.text)
Function calling with ANY tools config mode¶
If you configure the function calling mode to be ANY, the model will always return function call parts. If you also pass a Python function as a tool, the SDK will by default perform automatic function calling until the number of remote calls exceeds the maximum remote calls for automatic function calling (default: 10).
If you’d like to disable automatic function calling in ANY mode:
from google.genai import types
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "sunny"
response = client.models.generate_content(
model="gemini-2.5-flash",
contents="What is the weather like in Boston?",
config=types.GenerateContentConfig(
tools=[get_current_weather],
automatic_function_calling=types.AutomaticFunctionCallingConfig(
disable=True
),
tool_config=types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode='ANY')
),
),
)
If you’d like to allow x automatic function call turns, configure the maximum remote calls to be x + 1. For example, to allow one turn of automatic function calling:
from google.genai import types
def get_current_weather(location: str) -> str:
"""Returns the current weather.
Args:
location: The city and state, e.g. San Francisco, CA
"""
return "sunny"
response = client.models.generate_content(
model="gemini-2.5-flash",
contents="What is the weather like in Boston?",
config=types.GenerateContentConfig(
tools=[get_current_weather],
automatic_function_calling=types.AutomaticFunctionCallingConfig(
maximum_remote_calls=2
),
tool_config=types.ToolConfig(
function_calling_config=types.FunctionCallingConfig(mode='ANY')
),
),
)
Model Context Protocol (MCP) support (experimental)¶
Built-in MCP support is an experimental feature. You can pass a local MCP server as a tool directly.
import os
import asyncio
from datetime import datetime
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from google import genai
client = genai.Client()
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="npx", # Executable
args=["-y", "@philschmid/weather-mcp"], # MCP Server
env=None, # Optional environment variables
)
async def run():
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
# Prompt to get the weather for the current day in London.
prompt = f"What is the weather in London in {datetime.now().strftime('%Y-%m-%d')}?"
# Initialize the connection between client and server
await session.initialize()
# Send request to the model with MCP function declarations
response = await client.aio.models.generate_content(
model="gemini-2.5-flash",
contents=prompt,
config=genai.types.GenerateContentConfig(
temperature=0,
tools=[session], # uses the session, will automatically call the tool using automatic function calling
),
)
print(response.text)
# Start the asyncio event loop and run the main function
asyncio.run(run())
JSON Response Schema¶
However you define your schema, don’t duplicate it in your input prompt, including by giving examples of expected JSON output. If you do, the generated output might be lower in quality.
JSON Schema support¶
Schemas can be provided as standard JSON schema.
user_profile = {
'properties': {
'age': {
'anyOf': [
{'maximum': 20, 'minimum': 0, 'type': 'integer'},
{'type': 'null'},
],
'title': 'Age',
},
'username': {
'description': "User's unique name",
'title': 'Username',
'type': 'string',
},
},
'required': ['username', 'age'],
'title': 'User Schema',
'type': 'object',
}
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='Give me a random user profile.',
config={
'response_mime_type': 'application/json',
'response_json_schema': user_profile
},
)
print(response.parsed)
Pydantic Model Schema support¶
Schemas can be provided as Pydantic Models.
from pydantic import BaseModel
from google.genai import types
class CountryInfo(BaseModel):
name: str
population: int
capital: str
continent: str
gdp: int
official_language: str
total_area_sq_mi: int
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema=CountryInfo,
),
)
print(response.text)
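When a Pydantic model is used as the response_schema, response.parsed holds an instance of that model (assuming the generated JSON validates against it):
country_info: CountryInfo = response.parsed
print(country_info.name, country_info.capital)
Alternatively, the schema can be provided as a plain dictionary, as in the next example.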
from google.genai import types
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='Give me information for the United States.',
config=types.GenerateContentConfig(
response_mime_type='application/json',
response_schema={
'required': [
'name',
'population',
'capital',
'continent',
'gdp',
'official_language',
'total_area_sq_mi',
],
'properties': {
'name': {'type': 'STRING'},
'population': {'type': 'INTEGER'},
'capital': {'type': 'STRING'},
'continent': {'type': 'STRING'},
'gdp': {'type': 'INTEGER'},
'official_language': {'type': 'STRING'},
'total_area_sq_mi': {'type': 'INTEGER'},
},
'type': 'OBJECT',
},
),
)
print(response.text)
Enum Response Schema¶
Text Response¶
You can set response_mime_type to 'text/x.enum' to return one of the enum values as the response.
from enum import Enum
class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'text/x.enum',
'response_schema': InstrumentEnum,
},
)
print(response.text)
JSON Response¶
You can also set response_mime_type to 'application/json'; the response will be identical but wrapped in quotes.
from enum import Enum
class InstrumentEnum(Enum):
PERCUSSION = 'Percussion'
STRING = 'String'
WOODWIND = 'Woodwind'
BRASS = 'Brass'
KEYBOARD = 'Keyboard'
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='What instrument plays multiple notes at once?',
config={
'response_mime_type': 'application/json',
'response_schema': InstrumentEnum,
},
)
print(response.text)
Generate Content (Synchronous Streaming)¶
Generate content in a streaming format so that the model output streams back to you rather than being returned as one chunk.
Streaming for text content¶
for chunk in client.models.generate_content_stream(
model='gemini-2.5-flash', contents='Tell me a story in 300 words.'
):
print(chunk.text, end='')
Streaming for image content¶
If your image is stored in Google Cloud Storage, you can use the from_uri class method to create a Part object.
from google.genai import types
for chunk in client.models.generate_content_stream(
model='gemini-2.5-flash',
contents=[
'What is this image about?',
types.Part.from_uri(
file_uri='gs://generativeai-downloads/images/scones.jpg',
mime_type='image/jpeg',
),
],
):
print(chunk.text, end='')
If your image is stored in your local file system, you can read it in as bytes
data and use the from_bytes class method to create a Part object.
from google.genai import types
YOUR_IMAGE_PATH = 'your_image_path'
YOUR_IMAGE_MIME_TYPE = 'your_image_mime_type'
with open(YOUR_IMAGE_PATH, 'rb') as f:
image_bytes = f.read()
for chunk in client.models.generate_content_stream(
model='gemini-2.5-flash',
contents=[
'What is this image about?',
types.Part.from_bytes(data=image_bytes, mime_type=YOUR_IMAGE_MIME_TYPE),
],
):
print(chunk.text, end='')
Generate Content (Asynchronous Non Streaming)¶
client.aio exposes all the analogous async methods that are available on client; this applies to all modules. For example, client.aio.models.generate_content is the async version of client.models.generate_content.
response = await client.aio.models.generate_content(
model='gemini-2.5-flash', contents='Tell me a story in 300 words.'
)
print(response.text)
Generate Content (Asynchronous Streaming)¶
async for chunk in await client.aio.models.generate_content_stream(
model='gemini-2.5-flash', contents='Tell me a story in 300 words.'
):
print(chunk.text, end='')
Count Tokens and Compute Tokens¶
response = client.models.count_tokens(
model='gemini-2.5-flash',
contents='why is the sky blue?',
)
print(response)
Compute Tokens¶
Compute tokens is only supported in Vertex AI.
response = client.models.compute_tokens(
model='gemini-2.5-flash',
contents='why is the sky blue?',
)
print(response)
Count Tokens (Asynchronous)¶
response = await client.aio.models.count_tokens(
model='gemini-2.5-flash',
contents='why is the sky blue?',
)
print(response)
Local Count Tokens¶
tokenizer = genai.LocalTokenizer(model_name='gemini-2.5-flash')
result = tokenizer.count_tokens("What is your name?")
Local Compute Tokens¶
tokenizer = genai.LocalTokenizer(model_name='gemini-2.5-flash')
result = tokenizer.compute_tokens("What is your name?")
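Both helpers run locally rather than calling the API; a minimal sketch of reading the count result (assuming the result exposes a total_tokens field, mirroring count_tokens responses):
tokenizer = genai.LocalTokenizer(model_name='gemini-2.5-flash')
result = tokenizer.count_tokens('What is your name?')
print(result.total_tokens)  # assumed field name, mirroring CountTokensResponse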
Embed Content¶
response = client.models.embed_content(
model='gemini-embedding-001',
contents='why is the sky blue?',
)
print(response)
from google.genai import types
# multiple contents with config
response = client.models.embed_content(
model='gemini-embedding-001',
contents=['why is the sky blue?', 'What is your age?'],
config=types.EmbedContentConfig(output_dimensionality=10),
)
print(response)
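Each returned embedding exposes its vector via values; a minimal sketch:
for embedding in response.embeddings:
    # With output_dimensionality=10 above, each vector has 10 floats.
    print(len(embedding.values))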
Imagen¶
Generate Images¶
Support for generating images in the Gemini Developer API is behind an allowlist.
from google.genai import types
# Generate Image
response1 = client.models.generate_images(
model='imagen-3.0-generate-002',
prompt='An umbrella in the foreground, and a rainy night sky in the background',
config=types.GenerateImagesConfig(
number_of_images=1,
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response1.generated_images[0].image.show()
Upscale Image¶
Upscale image is only supported in Vertex AI.
from google.genai import types
# Upscale the generated image from above
response2 = client.models.upscale_image(
model='imagen-3.0-generate-002',
image=response1.generated_images[0].image,
upscale_factor='x2',
config=types.UpscaleImageConfig(
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response2.generated_images[0].image.show()
Edit Image¶
Edit image uses a separate model from generate and upscale.
Edit image is only supported in Vertex AI.
# Edit the generated image from above
from google.genai import types
from google.genai.types import RawReferenceImage, MaskReferenceImage
raw_ref_image = RawReferenceImage(
reference_id=1,
reference_image=response1.generated_images[0].image,
)
# Model computes a mask of the background
mask_ref_image = MaskReferenceImage(
reference_id=2,
config=types.MaskReferenceConfig(
mask_mode='MASK_MODE_BACKGROUND',
mask_dilation=0,
),
)
response3 = client.models.edit_image(
model='imagen-3.0-capability-001',
prompt='Sunlight and clear sky',
reference_images=[raw_ref_image, mask_ref_image],
config=types.EditImageConfig(
edit_mode='EDIT_MODE_INPAINT_INSERTION',
number_of_images=1,
include_rai_reason=True,
output_mime_type='image/jpeg',
),
)
response3.generated_images[0].image.show()
Veo¶
Support for generating videos is in public preview.
Generate Videos (Text to Video)¶
import time
from google.genai import types
# Create operation
operation = client.models.generate_videos(
model='veo-2.0-generate-001',
prompt='A neon hologram of a cat driving at top speed',
config=types.GenerateVideosConfig(
number_of_videos=1,
duration_seconds=5,
enhance_prompt=True,
),
)
# Poll operation
while not operation.done:
time.sleep(20)
operation = client.operations.get(operation)
video = operation.response.generated_videos[0].video
video.show()
Generate Videos (Image to Video)¶
import time
from google.genai import types
# Read local image (uses mimetypes.guess_type to infer mime type)
image = types.Image.from_file("local/path/file.png")
# Create operation
operation = client.models.generate_videos(
model='veo-2.0-generate-001',
# Prompt is optional if image is provided
prompt='Night sky',
image=image,
config=types.GenerateVideosConfig(
number_of_videos=1,
duration_seconds=5,
enhance_prompt=True,
# Can also pass an Image into last_frame for frame interpolation
),
)
# Poll operation
while not operation.done:
time.sleep(20)
operation = client.operations.get(operation)
video = operation.response.generated_videos[0].video
video.show()
Generate Videos (Video to Video)¶
Currently, only Vertex AI supports Video to Video generation (Video extension).
import time
from google.genai import types
# Read local video (uses mimetypes.guess_type to infer mime type)
video = types.Video.from_file("local/path/video.mp4")
# Create operation
operation = client.models.generate_videos(
model='veo-2.0-generate-001',
# Prompt is optional if Video is provided
prompt='Night sky',
# Input video must be in GCS
video=types.Video(
uri="gs://bucket-name/inputs/videos/cat_driving.mp4",
),
config=types.GenerateVideosConfig(
number_of_videos=1,
duration_seconds=5,
enhance_prompt=True,
),
)
# Poll operation
while not operation.done:
time.sleep(20)
operation = client.operations.get(operation)
video = operation.response.generated_videos[0].video
video.show()
Chats¶
Create a chat session to start a multi-turn conversation with the model. Then, call chat.send_message multiple times within the same session so that the model can reflect on its previous responses (i.e., engage in an ongoing conversation). See the ‘Create a client’ section above to initialize a client.
Send Message (Synchronous Non-Streaming)¶
chat = client.chats.create(model='gemini-2.5-flash')
response = chat.send_message('tell me a story')
print(response.text)
response = chat.send_message('summarize the story you told me in 1 sentence')
print(response.text)
Send Message (Synchronous Streaming)¶
chat = client.chats.create(model='gemini-2.5-flash')
for chunk in chat.send_message_stream('tell me a story'):
print(chunk.text)
Send Message (Asynchronous Non-Streaming)¶
chat = client.aio.chats.create(model='gemini-2.5-flash')
response = await chat.send_message('tell me a story')
print(response.text)
Send Message (Asynchronous Streaming)¶
chat = client.aio.chats.create(model='gemini-2.5-flash')
async for chunk in await chat.send_message_stream('tell me a story'):
print(chunk.text)
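To review the accumulated turns, read them back from the chat object; a sketch assuming get_history() returning list[types.Content]:
chat = client.chats.create(model='gemini-2.5-flash')
chat.send_message('tell me a story')
for content in chat.get_history():
    print(content.role)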
Files¶
Files are only supported in Gemini Developer API. See the ‘Create a client’ section above to initialize a client. First, download the sample PDFs used below:
gsutil cp gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf .
gsutil cp gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf .
Upload¶
file1 = client.files.upload(file='2312.11805v3.pdf')
file2 = client.files.upload(file='2403.05530.pdf')
print(file1)
print(file2)
Get¶
file1 = client.files.upload(file='2312.11805v3.pdf')
file_info = client.files.get(name=file1.name)
Delete¶
file3 = client.files.upload(file='2312.11805v3.pdf')
client.files.delete(name=file3.name)
Caches¶
client.caches contains the control plane APIs for cached content. See the ‘Create a client’ section above to initialize a client.
Create¶
from google.genai import types
if client.vertexai:
file_uris = [
'gs://cloud-samples-data/generative-ai/pdf/2312.11805v3.pdf',
'gs://cloud-samples-data/generative-ai/pdf/2403.05530.pdf',
]
else:
file_uris = [file1.uri, file2.uri]
cached_content = client.caches.create(
model='gemini-2.5-flash',
config=types.CreateCachedContentConfig(
contents=[
types.Content(
role='user',
parts=[
types.Part.from_uri(
file_uri=file_uris[0], mime_type='application/pdf'
),
types.Part.from_uri(
file_uri=file_uris[1],
mime_type='application/pdf',
),
],
)
],
system_instruction='What is the sum of the two pdfs?',
display_name='test cache',
ttl='3600s',
),
)
Get¶
cached_content = client.caches.get(name=cached_content.name)
Generate Content with Caches¶
from google.genai import types
response = client.models.generate_content(
model='gemini-2.5-flash',
contents='Summarize the pdfs',
config=types.GenerateContentConfig(
cached_content=cached_content.name,
),
)
print(response.text)
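A cache can also be updated, for example to extend its TTL, or deleted when no longer needed; a sketch assuming types.UpdateCachedContentConfig:
from google.genai import types
client.caches.update(
    name=cached_content.name,
    config=types.UpdateCachedContentConfig(ttl='7200s'),  # assumed config type
)
client.caches.delete(name=cached_content.name)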
Tunings¶
client.tunings contains tuning job APIs and supports supervised fine-tuning through tune. Tuning is only supported in Vertex AI. See the ‘Create a client’ section above to initialize a client.
Tune¶
Vertex AI supports tuning from a GCS source or from a Vertex AI Multimodal Dataset.
from google.genai import types
model = 'gemini-2.5-flash'
training_dataset = types.TuningDataset(
    # or pass a Vertex AI Multimodal Dataset resource name instead of gcs_uri
gcs_uri='gs://cloud-samples-data/ai-platform/generative_ai/gemini-1_5/text/sft_train_data.jsonl',
)
from google.genai import types
tuning_job = client.tunings.tune(
base_model=model,
training_dataset=training_dataset,
config=types.CreateTuningJobConfig(
epoch_count=1, tuned_model_display_name='test_dataset_examples model'
),
)
print(tuning_job)
Get Tuning Job¶
tuning_job = client.tunings.get(name=tuning_job.name)
print(tuning_job)
import time
completed_states = set(
[
'JOB_STATE_SUCCEEDED',
'JOB_STATE_FAILED',
'JOB_STATE_CANCELLED',
]
)
while tuning_job.state not in completed_states:
print(tuning_job.state)
tuning_job = client.tunings.get(name=tuning_job.name)
time.sleep(10)
Use Tuned Model¶
response = client.models.generate_content(
model=tuning_job.tuned_model.endpoint,
contents='why is the sky blue?',
)
print(response.text)
Get Tuned Model¶
tuned_model = client.models.get(model=tuning_job.tuned_model.model)
print(tuned_model)
Update Tuned Model¶
from google.genai import types
tuned_model = client.models.update(
model=tuning_job.tuned_model.model,
config=types.UpdateModelConfig(
display_name='my tuned model', description='my tuned model description'
),
)
print(tuned_model)
List Tuned Models¶
To retrieve base models, see: List Base Models
for model in client.models.list(config={'page_size': 10, 'query_base': False}):
print(model)
pager = client.models.list(config={'page_size': 10, 'query_base': False})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
List Tuned Models (Asynchronous)¶
async for model in await client.aio.models.list(config={'page_size': 10, 'query_base': False}):
    print(model)
async_pager = await client.aio.models.list(config={'page_size': 10, 'query_base': False})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Update Tuned Model¶
from google.genai import types
model = pager[0]
model = client.models.update(
model=model.name,
config=types.UpdateModelConfig(
display_name='my tuned model', description='my tuned model description'
),
)
print(model)
List Tuning Jobs¶
for job in client.tunings.list(config={'page_size': 10}):
print(job)
pager = client.tunings.list(config={'page_size': 10})
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
List Tuning Jobs (Asynchronous)¶
async for job in await client.aio.tunings.list(config={'page_size': 10}):
print(job)
async_pager = await client.aio.tunings.list(config={'page_size': 10})
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Batch Prediction¶
Create a batch job. See the ‘Create a client’ section above to initialize a client.
Create¶
The Vertex AI client supports using a BigQuery table or a GCS file as the source.
# Specify model and source file only, destination and job display name will be auto-populated
job = client.batches.create(
model='gemini-2.5-flash',
src='bq://my-project.my-dataset.my-table', # or "gs://path/to/input/data"
)
print(job)
Gemini Developer API¶
# Create a batch job with inlined requests
batch_job = client.batches.create(
model="gemini-2.5-flash",
src=[{
"contents": [{
"parts": [{
"text": "Hello!",
}],
"role": "user",
}],
"config": {"response_modalities": ["text"]},
}],
)
batch_job
To create a batch job from a file, you first need to upload a JSON file. For example, myrequests.json:
{"key":"request_1", "request": {"contents": [{"parts": [{"text":
"Explain how AI works in a few words"}]}], "generation_config": {"response_modalities": ["TEXT"]}}}
{"key":"request_2", "request": {"contents": [{"parts": [{"text": "Explain how Crypto works in a few words"}]}]}}
Then upload the file.
from google.genai import types
# Upload a file to the Gemini Developer API
file_name = client.files.upload(
file='myrequests.json',
config=types.UploadFileConfig(display_name='test-json'),
)
# Create a batch job that reads requests from the uploaded file
batch_job = client.batches.create(
    model="gemini-2.0-flash",
    src=file_name.name,
)
# Get a job by name
job = client.batches.get(name=job.name)
job.state
import time
completed_states = set(
[
'JOB_STATE_SUCCEEDED',
'JOB_STATE_FAILED',
'JOB_STATE_CANCELLED',
'JOB_STATE_PAUSED',
]
)
while job.state not in completed_states:
print(job.state)
job = client.batches.get(name=job.name)
time.sleep(30)
job
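A job that has not finished can also be cancelled; a minimal sketch:
client.batches.cancel(name=job.name)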
List¶
from google.genai import types
for job in client.batches.list(config=types.ListBatchJobsConfig(page_size=10)):
print(job)
List Batch Jobs with Pager¶
from google.genai import types
pager = client.batches.list(config=types.ListBatchJobsConfig(page_size=10))
print(pager.page_size)
print(pager[0])
pager.next_page()
print(pager[0])
List Batch Jobs (Asynchronous)¶
from google.genai import types
async for job in await client.aio.batches.list(
config=types.ListBatchJobsConfig(page_size=10)
):
print(job)
List Batch Jobs with Pager (Asynchronous)¶
from google.genai import types
async_pager = await client.aio.batches.list(
config=types.ListBatchJobsConfig(page_size=10)
)
print(async_pager.page_size)
print(async_pager[0])
await async_pager.next_page()
print(async_pager[0])
Delete¶
# Delete the job resource
delete_job = client.batches.delete(name=job.name)
delete_job
Error Handling¶
To handle errors raised by the model service, the SDK provides the APIError class.
from google.genai import errors
try:
client.models.generate_content(
model="invalid-model-name",
contents="What is your name?",
)
except errors.APIError as e:
print(e.code) # 404
print(e.message)
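APIError has more specific subclasses that can be caught separately; a sketch assuming errors.ClientError for 4xx responses and errors.ServerError for 5xx responses:
from google.genai import errors
try:
    client.models.generate_content(
        model="invalid-model-name",
        contents="What is your name?",
    )
except errors.ClientError as e:
    print('Client error:', e.code, e.message)
except errors.ServerError as e:
    print('Server error:', e.code, e.message)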
Extra Request Body¶
The extra_body field in HttpOptions accepts a dictionary of additional JSON
properties to include in the request body. This can be used to access new or
experimental backend features that are not yet formally supported in the SDK.
The structure of the dictionary must match the backend API’s request structure.
VertexAI backend API docs: https://cloud.google.com/vertex-ai/docs/reference/rest
GeminiAPI backend API docs: https://ai.google.dev/api/rest
from google.genai import types
response = client.models.generate_content(
model="gemini-2.5-pro",
contents="What is the weather in Boston? and how about Sunnyvale?",
config=types.GenerateContentConfig(
tools=[get_current_weather],
http_options=types.HttpOptions(extra_body={'tool_config': {'function_calling_config': {'mode': 'COMPOSITIONAL'}}}),
),
)
Reference¶
- Submodules
- genai.client module
- genai.batches module
- genai.caches module
- genai.chats module
- genai.files module
- genai.live module
- genai.models module
- genai.tokens module
- genai.tunings module
- genai.types module
ActivityEndActivityEndDictActivityHandlingActivityStartActivityStartDictAdapterSizeApiAuthApiAuthApiKeyConfigApiAuthApiKeyConfigDictApiAuthDictApiKeyConfigApiKeyConfigDictApiSpecAudioChunkAudioChunkDictAudioTranscriptionConfigAudioTranscriptionConfigDictAuthConfigAuthConfigDictAuthConfigGoogleServiceAccountConfigAuthConfigGoogleServiceAccountConfigDictAuthConfigHttpBasicAuthConfigAuthConfigHttpBasicAuthConfigDictAuthConfigOauthConfigAuthConfigOauthConfigDictAuthConfigOidcConfigAuthConfigOidcConfigDictAuthTokenAuthTokenDictAuthTypeAutomaticActivityDetectionAutomaticActivityDetectionDictAutomaticFunctionCallingConfigAutomaticFunctionCallingConfigDictAutoraterConfigAutoraterConfigDictBatchJobBatchJobDestinationBatchJobDestinationDictBatchJobDictBatchJobSourceBatchJobSourceDictBehaviorBleuSpecBleuSpecDictBlobBlobDictBlockedReasonCachedContentCachedContentDictCachedContentUsageMetadataCachedContentUsageMetadataDictCancelBatchJobConfigCancelBatchJobConfigDictCancelTuningJobConfigCancelTuningJobConfigDictCandidateCandidateDictCandidateDict.avg_logprobsCandidateDict.citation_metadataCandidateDict.contentCandidateDict.finish_messageCandidateDict.finish_reasonCandidateDict.grounding_metadataCandidateDict.indexCandidateDict.logprobs_resultCandidateDict.safety_ratingsCandidateDict.token_countCandidateDict.url_context_metadata
CheckpointCheckpointDictCitationCitationDictCitationMetadataCitationMetadataDictCodeExecutionResultCodeExecutionResultDictComputeTokensConfigComputeTokensConfigDictComputeTokensResponseComputeTokensResponseDictComputeTokensResultComputeTokensResultDictComputerUseComputerUseDictContentContentDictContentEmbeddingContentEmbeddingDictContentEmbeddingStatisticsContentEmbeddingStatisticsDictContentReferenceImageContentReferenceImageDictContextWindowCompressionConfigContextWindowCompressionConfigDictControlReferenceConfigControlReferenceConfigDictControlReferenceImageControlReferenceImageDictControlReferenceTypeCountTokensConfigCountTokensConfigDictCountTokensResponseCountTokensResponseDictCountTokensResultCountTokensResultDictCreateAuthTokenConfigCreateAuthTokenConfigDictCreateAuthTokenParametersCreateAuthTokenParametersDictCreateBatchJobConfigCreateBatchJobConfigDictCreateCachedContentConfigCreateCachedContentConfig.contentsCreateCachedContentConfig.display_nameCreateCachedContentConfig.expire_timeCreateCachedContentConfig.http_optionsCreateCachedContentConfig.kms_key_nameCreateCachedContentConfig.system_instructionCreateCachedContentConfig.tool_configCreateCachedContentConfig.toolsCreateCachedContentConfig.ttl
CreateCachedContentConfigDictCreateCachedContentConfigDict.contentsCreateCachedContentConfigDict.display_nameCreateCachedContentConfigDict.expire_timeCreateCachedContentConfigDict.http_optionsCreateCachedContentConfigDict.kms_key_nameCreateCachedContentConfigDict.system_instructionCreateCachedContentConfigDict.tool_configCreateCachedContentConfigDict.toolsCreateCachedContentConfigDict.ttl
CreateEmbeddingsBatchJobConfigCreateEmbeddingsBatchJobConfigDictCreateFileConfigCreateFileConfigDictCreateFileResponseCreateFileResponseDictCreateTuningJobConfigCreateTuningJobConfig.adapter_sizeCreateTuningJobConfig.batch_sizeCreateTuningJobConfig.betaCreateTuningJobConfig.descriptionCreateTuningJobConfig.epoch_countCreateTuningJobConfig.evaluation_configCreateTuningJobConfig.export_last_checkpoint_onlyCreateTuningJobConfig.http_optionsCreateTuningJobConfig.labelsCreateTuningJobConfig.learning_rateCreateTuningJobConfig.learning_rate_multiplierCreateTuningJobConfig.methodCreateTuningJobConfig.pre_tuned_model_checkpoint_idCreateTuningJobConfig.tuned_model_display_nameCreateTuningJobConfig.validation_dataset
CreateTuningJobConfigDictCreateTuningJobConfigDict.adapter_sizeCreateTuningJobConfigDict.batch_sizeCreateTuningJobConfigDict.betaCreateTuningJobConfigDict.descriptionCreateTuningJobConfigDict.epoch_countCreateTuningJobConfigDict.evaluation_configCreateTuningJobConfigDict.export_last_checkpoint_onlyCreateTuningJobConfigDict.http_optionsCreateTuningJobConfigDict.labelsCreateTuningJobConfigDict.learning_rateCreateTuningJobConfigDict.learning_rate_multiplierCreateTuningJobConfigDict.methodCreateTuningJobConfigDict.pre_tuned_model_checkpoint_idCreateTuningJobConfigDict.tuned_model_display_nameCreateTuningJobConfigDict.validation_dataset
CreateTuningJobParametersCreateTuningJobParametersDictCustomOutputFormatConfigCustomOutputFormatConfigDictDatasetDistributionDatasetDistributionDictDatasetDistributionDistributionBucketDatasetDistributionDistributionBucketDictDatasetStatsDatasetStats.total_billable_character_countDatasetStats.total_tuning_character_countDatasetStats.tuning_dataset_example_countDatasetStats.tuning_step_countDatasetStats.user_dataset_examplesDatasetStats.user_input_token_distributionDatasetStats.user_message_per_example_distributionDatasetStats.user_output_token_distribution
DatasetStatsDictDatasetStatsDict.total_billable_character_countDatasetStatsDict.total_tuning_character_countDatasetStatsDict.tuning_dataset_example_countDatasetStatsDict.tuning_step_countDatasetStatsDict.user_dataset_examplesDatasetStatsDict.user_input_token_distributionDatasetStatsDict.user_message_per_example_distributionDatasetStatsDict.user_output_token_distribution
DeleteBatchJobConfigDeleteBatchJobConfigDictDeleteCachedContentConfigDeleteCachedContentConfigDictDeleteCachedContentResponseDeleteCachedContentResponseDictDeleteFileConfigDeleteFileConfigDictDeleteFileResponseDeleteFileResponseDictDeleteModelConfigDeleteModelConfigDictDeleteModelResponseDeleteModelResponseDictDeleteResourceJobDeleteResourceJobDictDistillationDataStatsDistillationDataStatsDictDownloadFileConfigDownloadFileConfigDictDynamicRetrievalConfigDynamicRetrievalConfigDictDynamicRetrievalConfigModeEditImageConfigEditImageConfig.add_watermarkEditImageConfig.aspect_ratioEditImageConfig.base_stepsEditImageConfig.edit_modeEditImageConfig.guidance_scaleEditImageConfig.http_optionsEditImageConfig.include_rai_reasonEditImageConfig.include_safety_attributesEditImageConfig.labelsEditImageConfig.languageEditImageConfig.negative_promptEditImageConfig.number_of_imagesEditImageConfig.output_compression_qualityEditImageConfig.output_gcs_uriEditImageConfig.output_mime_typeEditImageConfig.person_generationEditImageConfig.safety_filter_levelEditImageConfig.seed
EditImageConfigDictEditImageConfigDict.add_watermarkEditImageConfigDict.aspect_ratioEditImageConfigDict.base_stepsEditImageConfigDict.edit_modeEditImageConfigDict.guidance_scaleEditImageConfigDict.http_optionsEditImageConfigDict.include_rai_reasonEditImageConfigDict.include_safety_attributesEditImageConfigDict.labelsEditImageConfigDict.languageEditImageConfigDict.negative_promptEditImageConfigDict.number_of_imagesEditImageConfigDict.output_compression_qualityEditImageConfigDict.output_gcs_uriEditImageConfigDict.output_mime_typeEditImageConfigDict.person_generationEditImageConfigDict.safety_filter_levelEditImageConfigDict.seed
EditImageResponseEditImageResponseDictEditModeEmbedContentBatchEmbedContentBatchDictEmbedContentConfigEmbedContentConfigDictEmbedContentMetadataEmbedContentMetadataDictEmbedContentResponseEmbedContentResponseDictEmbeddingsBatchJobSourceEmbeddingsBatchJobSourceDictEncryptionSpecEncryptionSpecDictEndSensitivityEndpointEndpointDictEnterpriseWebSearchEnterpriseWebSearchDictEntityLabelEntityLabelDictEnvironmentEvaluationConfigEvaluationConfigDictExecutableCodeExecutableCodeDictExternalApiExternalApiDictExternalApiElasticSearchParamsExternalApiElasticSearchParamsDictExternalApiSimpleSearchParamsExternalApiSimpleSearchParamsDictFeatureSelectionPreferenceFetchPredictOperationConfigFetchPredictOperationConfigDictFileFileDataFileDataDictFileDictFileSourceFileStateFileStatusFileStatusDictFinishReasonFinishReason.BLOCKLISTFinishReason.FINISH_REASON_UNSPECIFIEDFinishReason.IMAGE_PROHIBITED_CONTENTFinishReason.IMAGE_SAFETYFinishReason.LANGUAGEFinishReason.MALFORMED_FUNCTION_CALLFinishReason.MAX_TOKENSFinishReason.NO_IMAGEFinishReason.OTHERFinishReason.PROHIBITED_CONTENTFinishReason.RECITATIONFinishReason.SAFETYFinishReason.SPIIFinishReason.STOPFinishReason.UNEXPECTED_TOOL_CALL
FunctionCallFunctionCallDictFunctionCallingConfigFunctionCallingConfigDictFunctionCallingConfigModeFunctionDeclarationFunctionDeclaration.behaviorFunctionDeclaration.descriptionFunctionDeclaration.nameFunctionDeclaration.parametersFunctionDeclaration.parameters_json_schemaFunctionDeclaration.responseFunctionDeclaration.response_json_schemaFunctionDeclaration.from_callable()FunctionDeclaration.from_callable_with_api_option()
FunctionDeclarationDictFunctionResponseFunctionResponseBlobFunctionResponseBlobDictFunctionResponseDictFunctionResponseFileDataFunctionResponseFileDataDictFunctionResponsePartFunctionResponsePartDictFunctionResponseSchedulingGcsDestinationGcsDestinationDictGeminiPreferenceExampleGeminiPreferenceExampleCompletionGeminiPreferenceExampleCompletionDictGeminiPreferenceExampleDictGenerateContentConfigGenerateContentConfig.audio_timestampGenerateContentConfig.automatic_function_callingGenerateContentConfig.cached_contentGenerateContentConfig.candidate_countGenerateContentConfig.frequency_penaltyGenerateContentConfig.http_optionsGenerateContentConfig.image_configGenerateContentConfig.labelsGenerateContentConfig.logprobsGenerateContentConfig.max_output_tokensGenerateContentConfig.media_resolutionGenerateContentConfig.model_selection_configGenerateContentConfig.presence_penaltyGenerateContentConfig.response_json_schemaGenerateContentConfig.response_logprobsGenerateContentConfig.response_mime_typeGenerateContentConfig.response_modalitiesGenerateContentConfig.response_schemaGenerateContentConfig.routing_configGenerateContentConfig.safety_settingsGenerateContentConfig.seedGenerateContentConfig.should_return_http_responseGenerateContentConfig.speech_configGenerateContentConfig.stop_sequencesGenerateContentConfig.system_instructionGenerateContentConfig.temperatureGenerateContentConfig.thinking_configGenerateContentConfig.tool_configGenerateContentConfig.toolsGenerateContentConfig.top_kGenerateContentConfig.top_p
GenerateContentConfigDictGenerateContentConfigDict.audio_timestampGenerateContentConfigDict.automatic_function_callingGenerateContentConfigDict.cached_contentGenerateContentConfigDict.candidate_countGenerateContentConfigDict.frequency_penaltyGenerateContentConfigDict.http_optionsGenerateContentConfigDict.image_configGenerateContentConfigDict.labelsGenerateContentConfigDict.logprobsGenerateContentConfigDict.max_output_tokensGenerateContentConfigDict.media_resolutionGenerateContentConfigDict.model_selection_configGenerateContentConfigDict.presence_penaltyGenerateContentConfigDict.response_json_schemaGenerateContentConfigDict.response_logprobsGenerateContentConfigDict.response_mime_typeGenerateContentConfigDict.response_modalitiesGenerateContentConfigDict.response_schemaGenerateContentConfigDict.routing_configGenerateContentConfigDict.safety_settingsGenerateContentConfigDict.seedGenerateContentConfigDict.should_return_http_responseGenerateContentConfigDict.speech_configGenerateContentConfigDict.stop_sequencesGenerateContentConfigDict.system_instructionGenerateContentConfigDict.temperatureGenerateContentConfigDict.thinking_configGenerateContentConfigDict.tool_configGenerateContentConfigDict.toolsGenerateContentConfigDict.top_kGenerateContentConfigDict.top_p
GenerateContentResponseGenerateContentResponse.automatic_function_calling_historyGenerateContentResponse.candidatesGenerateContentResponse.create_timeGenerateContentResponse.model_versionGenerateContentResponse.parsedGenerateContentResponse.prompt_feedbackGenerateContentResponse.response_idGenerateContentResponse.sdk_http_responseGenerateContentResponse.usage_metadataGenerateContentResponse.code_execution_resultGenerateContentResponse.executable_codeGenerateContentResponse.function_callsGenerateContentResponse.partsGenerateContentResponse.text
GenerateContentResponseDictGenerateContentResponsePromptFeedbackGenerateContentResponsePromptFeedbackDictGenerateContentResponseUsageMetadataGenerateContentResponseUsageMetadata.cache_tokens_detailsGenerateContentResponseUsageMetadata.cached_content_token_countGenerateContentResponseUsageMetadata.candidates_token_countGenerateContentResponseUsageMetadata.candidates_tokens_detailsGenerateContentResponseUsageMetadata.prompt_token_countGenerateContentResponseUsageMetadata.prompt_tokens_detailsGenerateContentResponseUsageMetadata.thoughts_token_countGenerateContentResponseUsageMetadata.tool_use_prompt_token_countGenerateContentResponseUsageMetadata.tool_use_prompt_tokens_detailsGenerateContentResponseUsageMetadata.total_token_countGenerateContentResponseUsageMetadata.traffic_type
GenerateContentResponseUsageMetadataDictGenerateContentResponseUsageMetadataDict.cache_tokens_detailsGenerateContentResponseUsageMetadataDict.cached_content_token_countGenerateContentResponseUsageMetadataDict.candidates_token_countGenerateContentResponseUsageMetadataDict.candidates_tokens_detailsGenerateContentResponseUsageMetadataDict.prompt_token_countGenerateContentResponseUsageMetadataDict.prompt_tokens_detailsGenerateContentResponseUsageMetadataDict.thoughts_token_countGenerateContentResponseUsageMetadataDict.tool_use_prompt_token_countGenerateContentResponseUsageMetadataDict.tool_use_prompt_tokens_detailsGenerateContentResponseUsageMetadataDict.total_token_countGenerateContentResponseUsageMetadataDict.traffic_type
GenerateImagesConfigGenerateImagesConfig.add_watermarkGenerateImagesConfig.aspect_ratioGenerateImagesConfig.enhance_promptGenerateImagesConfig.guidance_scaleGenerateImagesConfig.http_optionsGenerateImagesConfig.image_sizeGenerateImagesConfig.include_rai_reasonGenerateImagesConfig.include_safety_attributesGenerateImagesConfig.labelsGenerateImagesConfig.languageGenerateImagesConfig.negative_promptGenerateImagesConfig.number_of_imagesGenerateImagesConfig.output_compression_qualityGenerateImagesConfig.output_gcs_uriGenerateImagesConfig.output_mime_typeGenerateImagesConfig.person_generationGenerateImagesConfig.safety_filter_levelGenerateImagesConfig.seed
GenerateImagesConfigDictGenerateImagesConfigDict.add_watermarkGenerateImagesConfigDict.aspect_ratioGenerateImagesConfigDict.enhance_promptGenerateImagesConfigDict.guidance_scaleGenerateImagesConfigDict.http_optionsGenerateImagesConfigDict.image_sizeGenerateImagesConfigDict.include_rai_reasonGenerateImagesConfigDict.include_safety_attributesGenerateImagesConfigDict.labelsGenerateImagesConfigDict.languageGenerateImagesConfigDict.negative_promptGenerateImagesConfigDict.number_of_imagesGenerateImagesConfigDict.output_compression_qualityGenerateImagesConfigDict.output_gcs_uriGenerateImagesConfigDict.output_mime_typeGenerateImagesConfigDict.person_generationGenerateImagesConfigDict.safety_filter_levelGenerateImagesConfigDict.seed
GenerateImagesResponseGenerateImagesResponseDictGenerateVideosConfigGenerateVideosConfig.aspect_ratioGenerateVideosConfig.compression_qualityGenerateVideosConfig.duration_secondsGenerateVideosConfig.enhance_promptGenerateVideosConfig.fpsGenerateVideosConfig.generate_audioGenerateVideosConfig.http_optionsGenerateVideosConfig.last_frameGenerateVideosConfig.maskGenerateVideosConfig.negative_promptGenerateVideosConfig.number_of_videosGenerateVideosConfig.output_gcs_uriGenerateVideosConfig.person_generationGenerateVideosConfig.pubsub_topicGenerateVideosConfig.reference_imagesGenerateVideosConfig.resolutionGenerateVideosConfig.seed
GenerateVideosConfigDictGenerateVideosConfigDict.aspect_ratioGenerateVideosConfigDict.compression_qualityGenerateVideosConfigDict.duration_secondsGenerateVideosConfigDict.enhance_promptGenerateVideosConfigDict.fpsGenerateVideosConfigDict.generate_audioGenerateVideosConfigDict.http_optionsGenerateVideosConfigDict.last_frameGenerateVideosConfigDict.maskGenerateVideosConfigDict.negative_promptGenerateVideosConfigDict.number_of_videosGenerateVideosConfigDict.output_gcs_uriGenerateVideosConfigDict.person_generationGenerateVideosConfigDict.pubsub_topicGenerateVideosConfigDict.reference_imagesGenerateVideosConfigDict.resolutionGenerateVideosConfigDict.seed
GenerateVideosOperationGenerateVideosResponseGenerateVideosResponseDictGenerateVideosSourceGenerateVideosSourceDictGeneratedImageGeneratedImageDictGeneratedImageMaskGeneratedImageMaskDictGeneratedVideoGeneratedVideoDictGenerationConfigGenerationConfig.audio_timestampGenerationConfig.candidate_countGenerationConfig.enable_affective_dialogGenerationConfig.enable_enhanced_civic_answersGenerationConfig.frequency_penaltyGenerationConfig.logprobsGenerationConfig.max_output_tokensGenerationConfig.media_resolutionGenerationConfig.model_selection_configGenerationConfig.presence_penaltyGenerationConfig.response_json_schemaGenerationConfig.response_logprobsGenerationConfig.response_mime_typeGenerationConfig.response_modalitiesGenerationConfig.response_schemaGenerationConfig.routing_configGenerationConfig.seedGenerationConfig.speech_configGenerationConfig.stop_sequencesGenerationConfig.temperatureGenerationConfig.thinking_configGenerationConfig.top_kGenerationConfig.top_p
GenerationConfigDictGenerationConfigDict.audio_timestampGenerationConfigDict.candidate_countGenerationConfigDict.enable_affective_dialogGenerationConfigDict.enable_enhanced_civic_answersGenerationConfigDict.frequency_penaltyGenerationConfigDict.logprobsGenerationConfigDict.max_output_tokensGenerationConfigDict.media_resolutionGenerationConfigDict.model_selection_configGenerationConfigDict.presence_penaltyGenerationConfigDict.response_json_schemaGenerationConfigDict.response_logprobsGenerationConfigDict.response_mime_typeGenerationConfigDict.response_modalitiesGenerationConfigDict.response_schemaGenerationConfigDict.routing_configGenerationConfigDict.seedGenerationConfigDict.speech_configGenerationConfigDict.stop_sequencesGenerationConfigDict.temperatureGenerationConfigDict.thinking_configGenerationConfigDict.top_kGenerationConfigDict.top_p
GenerationConfigRoutingConfig, GenerationConfigRoutingConfigAutoRoutingMode, GenerationConfigRoutingConfigAutoRoutingModeDict, GenerationConfigRoutingConfigDict, GenerationConfigRoutingConfigManualRoutingMode, GenerationConfigRoutingConfigManualRoutingModeDict, GenerationConfigThinkingConfig, GenerationConfigThinkingConfigDict
GetBatchJobConfig, GetBatchJobConfigDict, GetCachedContentConfig, GetCachedContentConfigDict, GetFileConfig, GetFileConfigDict, GetModelConfig, GetModelConfigDict, GetOperationConfig, GetOperationConfigDict, GetTuningJobConfig, GetTuningJobConfigDict
GoogleMaps, GoogleMapsDict, GoogleRpcStatus, GoogleRpcStatusDict, GoogleSearch, GoogleSearchDict, GoogleSearchRetrieval, GoogleSearchRetrievalDict, GoogleTypeDate, GoogleTypeDateDict
GroundingChunk, GroundingChunkDict, GroundingChunkMaps, GroundingChunkMapsDict, GroundingChunkMapsPlaceAnswerSources, GroundingChunkMapsPlaceAnswerSourcesAuthorAttribution, GroundingChunkMapsPlaceAnswerSourcesAuthorAttributionDict, GroundingChunkMapsPlaceAnswerSourcesDict
GroundingChunkMapsPlaceAnswerSourcesReviewSnippet and GroundingChunkMapsPlaceAnswerSourcesReviewSnippetDict (author_attribution, flag_content_uri, google_maps_uri, relative_publish_time_description, review, review_id, title)
GroundingChunkRetrievedContext, GroundingChunkRetrievedContextDict, GroundingChunkWeb, GroundingChunkWebDict
GroundingMetadata and GroundingMetadataDict (google_maps_widget_context_token, grounding_chunks, grounding_supports, retrieval_metadata, retrieval_queries, search_entry_point, source_flagging_uris, web_search_queries)
GroundingMetadataSourceFlaggingUri, GroundingMetadataSourceFlaggingUriDict, GroundingSupport, GroundingSupportDict
HarmBlockMethod, HarmBlockThreshold
HarmCategory (HARM_CATEGORY_CIVIC_INTEGRITY, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_IMAGE_DANGEROUS_CONTENT, HARM_CATEGORY_IMAGE_HARASSMENT, HARM_CATEGORY_IMAGE_HATE, HARM_CATEGORY_IMAGE_SEXUALLY_EXPLICIT, HARM_CATEGORY_JAILBREAK, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_UNSPECIFIED)
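GoogleSearch and the GroundingMetadata types above come into play when search grounding is enabled on a request; a minimal sketch:
from google import genai
from google.genai import types
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Who won the most recent UEFA Champions League final?',
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
# Grounding sources and the search queries the model issued.
metadata = response.candidates[0].grounding_metadata
if metadata:
    print(metadata.web_search_queries)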
HarmProbability, HarmSeverity
HttpOptions, HttpOptionsDict, HttpResponse, HttpResponseDict, HttpRetryOptions, HttpRetryOptionsDict
Image, ImageConfig, ImageConfigDict, ImageDict, ImagePromptLanguage
InlinedEmbedContentResponse, InlinedEmbedContentResponseDict, InlinedRequest, InlinedRequestDict, InlinedResponse, InlinedResponseDict
Interval, IntervalDict
JSONSchema (additional_properties, any_of, default, defs, description, enum, format, items, max_items, max_length, max_properties, maximum, min_items, min_length, min_properties, minimum, pattern, properties, ref, required, title, type, unique_items)
JSONSchemaType, JobError, JobErrorDict
JobState (JOB_STATE_CANCELLED, JOB_STATE_CANCELLING, JOB_STATE_EXPIRED, JOB_STATE_FAILED, JOB_STATE_PARTIALLY_SUCCEEDED, JOB_STATE_PAUSED, JOB_STATE_PENDING, JOB_STATE_QUEUED, JOB_STATE_RUNNING, JOB_STATE_SUCCEEDED, JOB_STATE_UNSPECIFIED, JOB_STATE_UPDATING)
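The schema types above back structured output; the usual pattern is to hand a Pydantic model (or a plain JSON schema) to response_schema. A minimal sketch:
from pydantic import BaseModel
from google import genai
from google.genai import types
class CountryInfo(BaseModel):
    name: str
    capital: str
    population: int
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Give me information about Japan.',
    config=types.GenerateContentConfig(
        response_mime_type='application/json',
        response_schema=CountryInfo,
    ),
)
print(response.parsed)  # an instance of CountryInfo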
Language, LatLng, LatLngDict
ListBatchJobsConfig, ListBatchJobsConfigDict, ListBatchJobsResponse, ListBatchJobsResponseDict, ListCachedContentsConfig, ListCachedContentsConfigDict, ListCachedContentsResponse, ListCachedContentsResponseDict, ListFilesConfig, ListFilesConfigDict, ListFilesResponse, ListFilesResponseDict, ListModelsConfig, ListModelsConfigDict, ListModelsResponse, ListModelsResponseDict, ListTuningJobsConfig, ListTuningJobsConfigDict, ListTuningJobsResponse, ListTuningJobsResponseDict
LiveClientContent, LiveClientContentDict, LiveClientMessage, LiveClientMessageDict, LiveClientRealtimeInput, LiveClientRealtimeInputDict
LiveClientSetup and LiveClientSetupDict (context_window_compression, generation_config, input_audio_transcription, model, output_audio_transcription, proactivity, session_resumption, system_instruction, tools)
LiveClientToolResponse, LiveClientToolResponseDict
LiveConnectConfig and LiveConnectConfigDict (context_window_compression, enable_affective_dialog, generation_config, http_options, input_audio_transcription, max_output_tokens, media_resolution, output_audio_transcription, proactivity, realtime_input_config, response_modalities, seed, session_resumption, speech_config, system_instruction, temperature, thinking_config, tools, top_k, top_p)
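LiveConnectConfig configures a realtime session opened through the async live client; a minimal text-in, text-out sketch (the live-capable model name is an example):
import asyncio
from google import genai
from google.genai import types
client = genai.Client()
async def main():
    config = types.LiveConnectConfig(response_modalities=['TEXT'])
    async with client.aio.live.connect(
        model='gemini-2.0-flash-live-001',  # example live-capable model
        config=config,
    ) as session:
        await session.send_client_content(
            turns=types.Content(role='user', parts=[types.Part(text='Hello')])
        )
        # Stream server messages until the turn completes.
        async for message in session.receive():
            if message.text:
                print(message.text)
asyncio.run(main())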
LiveConnectConstraints, LiveConnectConstraintsDict, LiveConnectParameters, LiveConnectParametersDict
LiveMusicClientContent, LiveMusicClientContentDict, LiveMusicClientMessage, LiveMusicClientMessageDict, LiveMusicClientSetup, LiveMusicClientSetupDict, LiveMusicConnectParameters, LiveMusicConnectParametersDict, LiveMusicFilteredPrompt, LiveMusicFilteredPromptDict
LiveMusicGenerationConfig and LiveMusicGenerationConfigDict (bpm, brightness, density, guidance, music_generation_mode, mute_bass, mute_drums, only_bass_and_drums, scale, seed, temperature, top_k)
LiveMusicPlaybackControl, LiveMusicServerContent, LiveMusicServerContentDict, LiveMusicServerMessage, LiveMusicServerMessageDict, LiveMusicServerSetupComplete, LiveMusicServerSetupCompleteDict, LiveMusicSetConfigParameters, LiveMusicSetConfigParametersDict, LiveMusicSetWeightedPromptsParameters, LiveMusicSetWeightedPromptsParametersDict, LiveMusicSourceMetadata, LiveMusicSourceMetadataDict
LiveSendRealtimeInputParameters; LiveSendRealtimeInputParametersDict (activity_end, activity_start, audio, audio_stream_end, media, text, video)
LiveServerContent and LiveServerContentDict (generation_complete, grounding_metadata, input_transcription, interrupted, model_turn, output_transcription, turn_complete, turn_complete_reason, url_context_metadata, waiting_for_input)
LiveServerGoAway, LiveServerGoAwayDict, LiveServerMessage, LiveServerMessageDict, LiveServerSessionResumptionUpdate, LiveServerSessionResumptionUpdateDict, LiveServerSetupComplete, LiveServerSetupCompleteDict, LiveServerToolCall, LiveServerToolCallCancellation, LiveServerToolCallCancellationDict, LiveServerToolCallDict
LogprobsResult, LogprobsResultCandidate, LogprobsResultCandidateDict, LogprobsResultDict, LogprobsResultTopCandidates, LogprobsResultTopCandidatesDict
MaskReferenceConfig, MaskReferenceConfigDict, MaskReferenceImage, MaskReferenceImageDict, MaskReferenceMode, MediaModality, MediaResolution, Metric, MetricDict, Modality, ModalityTokenCount, ModalityTokenCountDict, Mode, Model, ModelContent, ModelDict, ModelSelectionConfig, ModelSelectionConfigDict, MultiSpeakerVoiceConfig, MultiSpeakerVoiceConfigDict, MusicGenerationMode
Operation, Outcome, OutputConfig, OutputConfigDict
PairwiseMetricSpec, PairwiseMetricSpecDict
Part (code_execution_result, executable_code, file_data, function_call, function_response, inline_data, text, thought, thought_signature, video_metadata; from_bytes(), from_code_execution_result(), from_executable_code(), from_function_call(), from_function_response(), from_text(), from_uri(), as_image())
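The Part factory methods above are the usual way to assemble multimodal request content; a minimal sketch (the GCS URI is a placeholder):
from google import genai
from google.genai import types
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents=[
        types.Part.from_text(text='What is shown in this image?'),
        types.Part.from_uri(
            file_uri='gs://your-bucket/your-image.jpg',  # placeholder URI
            mime_type='image/jpeg',
        ),
    ],
)
print(response.text)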
PartDict, PartnerModelTuningSpec, PartnerModelTuningSpecDict, PersonGeneration
PhishBlockThreshold (BLOCK_HIGHER_AND_ABOVE, BLOCK_HIGH_AND_ABOVE, BLOCK_LOW_AND_ABOVE, BLOCK_MEDIUM_AND_ABOVE, BLOCK_ONLY_EXTREMELY_HIGH, BLOCK_VERY_HIGH_AND_ABOVE, PHISH_BLOCK_THRESHOLD_UNSPECIFIED)
PointwiseMetricSpec, PointwiseMetricSpecDict, PreTunedModel, PreTunedModelDict, PrebuiltVoiceConfig, PrebuiltVoiceConfigDict
PreferenceOptimizationDataStats and PreferenceOptimizationDataStatsDict (score_variance_per_example_distribution, scores_distribution, total_billable_token_count, tuning_dataset_example_count, tuning_step_count, user_dataset_examples, user_input_token_distribution, user_output_token_distribution)
PreferenceOptimizationHyperParameters, PreferenceOptimizationHyperParametersDict, PreferenceOptimizationSpec, PreferenceOptimizationSpecDict, ProactivityConfig, ProactivityConfigDict, ProductImage, ProductImageDict, ProjectOperation, ProjectOperationDict
RagChunk, RagChunkDict, RagChunkPageSpan, RagChunkPageSpanDict, RagRetrievalConfig, RagRetrievalConfigDict, RagRetrievalConfigFilter, RagRetrievalConfigFilterDict, RagRetrievalConfigHybridSearch, RagRetrievalConfigHybridSearchDict, RagRetrievalConfigRanking, RagRetrievalConfigRankingDict, RagRetrievalConfigRankingLlmRanker, RagRetrievalConfigRankingLlmRankerDict, RagRetrievalConfigRankingRankService, RagRetrievalConfigRankingRankServiceDict
RawReferenceImage, RawReferenceImageDict, RealtimeInputConfig, RealtimeInputConfigDict
RecontextImageConfig and RecontextImageConfigDict (add_watermark, base_steps, enhance_prompt, http_options, labels, number_of_images, output_compression_quality, output_gcs_uri, output_mime_type, person_generation, safety_filter_level, seed)
RecontextImageResponse, RecontextImageResponseDict, RecontextImageSource, RecontextImageSourceDict
ReplayFile, ReplayFileDict, ReplayInteraction, ReplayInteractionDict, ReplayRequest, ReplayRequestDict, ReplayResponse, ReplayResponseDict
Retrieval, RetrievalConfig, RetrievalConfigDict, RetrievalDict, RetrievalMetadata, RetrievalMetadataDict, RougeSpec, RougeSpecDict
SafetyAttributes, SafetyAttributesDict, SafetyFilterLevel, SafetyRating, SafetyRatingDict, SafetySetting, SafetySettingDict
Scale (A_FLAT_MAJOR_F_MINOR, A_MAJOR_G_FLAT_MINOR, B_FLAT_MAJOR_G_MINOR, B_MAJOR_A_FLAT_MINOR, C_MAJOR_A_MINOR, D_FLAT_MAJOR_B_FLAT_MINOR, D_MAJOR_B_MINOR, E_FLAT_MAJOR_C_MINOR, E_MAJOR_D_FLAT_MINOR, F_MAJOR_D_MINOR, G_FLAT_MAJOR_E_FLAT_MINOR, G_MAJOR_E_MINOR, SCALE_UNSPECIFIED)
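SafetySetting pairs a HarmCategory with a HarmBlockThreshold and is passed per request; a minimal sketch:
from google import genai
from google.genai import types
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Say something nice about my neighbour.',
    config=types.GenerateContentConfig(
        safety_settings=[
            types.SafetySetting(
                category=types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold=types.HarmBlockThreshold.BLOCK_ONLY_HIGH,
            )
        ],
    ),
)
# Each candidate carries SafetyRating entries for the evaluated categories.
print(response.candidates[0].safety_ratings)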
Schema (additional_properties, any_of, default, defs, description, enum, example, format, items, max_items, max_length, max_properties, maximum, min_items, min_length, min_properties, minimum, nullable, pattern, properties, property_ordering, ref, required, title, type; from_json_schema(), json_schema)
SchemaDict (additional_properties, any_of, default, defs, description, enum, example, format, max_items, max_length, max_properties, maximum, min_items, min_length, min_properties, minimum, nullable, pattern, properties, property_ordering, ref, required, title, type)
ScribbleImage, ScribbleImageDict, SearchEntryPoint, SearchEntryPointDict, Segment, SegmentDict, SegmentImageConfig, SegmentImageConfigDict, SegmentImageResponse, SegmentImageResponseDict, SegmentImageSource, SegmentImageSourceDict, SegmentMode
SessionResumptionConfig, SessionResumptionConfigDict, SingleEmbedContentResponse, SingleEmbedContentResponseDict, SlidingWindow, SlidingWindowDict
SpeakerVoiceConfig, SpeakerVoiceConfigDict, SpeechConfig, SpeechConfigDict, StartSensitivity
StyleReferenceConfig, StyleReferenceConfigDict, StyleReferenceImage, StyleReferenceImageDict, SubjectReferenceConfig, SubjectReferenceConfigDict, SubjectReferenceImage, SubjectReferenceImageDict, SubjectReferenceType
SupervisedHyperParameters, SupervisedHyperParametersDict
SupervisedTuningDataStats and SupervisedTuningDataStatsDict (dropped_example_reasons, total_billable_character_count, total_billable_token_count, total_truncated_example_count, total_tuning_character_count, truncated_example_indices, tuning_dataset_example_count, tuning_step_count, user_dataset_examples, user_input_token_distribution, user_message_per_example_distribution, user_output_token_distribution)
SupervisedTuningDatasetDistribution and SupervisedTuningDatasetDistributionDict (billable_sum, buckets, max, mean, median, min, p5, p95, sum)
SupervisedTuningDatasetDistributionDatasetBucket, SupervisedTuningDatasetDistributionDatasetBucketDict
SupervisedTuningSpec, SupervisedTuningSpecDict, TestTableFile, TestTableFileDict, TestTableItem, TestTableItemDict, ThinkingConfig, ThinkingConfigDict, TokensInfo, TokensInfoDict
Tool, ToolCodeExecution, ToolCodeExecutionDict, ToolConfig, ToolConfigDict, ToolDict, TrafficType, Transcription, TranscriptionDict
TunedModel, TunedModelCheckpoint, TunedModelCheckpointDict, TunedModelDict, TunedModelInfo, TunedModelInfoDict, TuningDataStats, TuningDataStatsDict, TuningDataset, TuningDatasetDict, TuningExample, TuningExampleDict
TuningJob (base_model, create_time, custom_base_model, description, encryption_spec, end_time, error, evaluation_config, experiment, labels, name, output_uri, partner_model_tuning_spec, pipeline_job, pre_tuned_model, preference_optimization_spec, sdk_http_response, service_account, start_time, state, supervised_tuning_spec, tuned_model, tuned_model_display_name, tuning_data_stats, update_time, veo_tuning_spec, has_ended, has_succeeded)
TuningJobDict (base_model, create_time, custom_base_model, description, encryption_spec, end_time, error, evaluation_config, experiment, labels, name, output_uri, partner_model_tuning_spec, pipeline_job, pre_tuned_model, preference_optimization_spec, sdk_http_response, service_account, start_time, state, supervised_tuning_spec, tuned_model, tuned_model_display_name, tuning_data_stats, update_time, veo_tuning_spec)
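TuningJob ties together the tuning specs and the JobState enum listed earlier; a minimal polling sketch on Vertex AI, with placeholder project and dataset URIs and an example base model:
import time
from google import genai
from google.genai import types
client = genai.Client(
    vertexai=True, project='your-project-id', location='us-central1'
)
tuning_job = client.tunings.tune(
    base_model='gemini-2.0-flash-001',  # example base model
    training_dataset=types.TuningDataset(
        gcs_uri='gs://your-bucket/training_data.jsonl',  # placeholder URI
    ),
)
# Poll until the job leaves the active states (has_ended checks state).
while not tuning_job.has_ended:
    time.sleep(30)
    tuning_job = client.tunings.get(name=tuning_job.name)
print(tuning_job.state)
if tuning_job.has_succeeded:
    print(tuning_job.tuned_model.model)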
TuningMethod, TuningMode, TuningOperation, TuningOperationDict, TuningTask, TuningValidationDataset, TuningValidationDatasetDict, TurnCompleteReason, TurnCoverage, Type
UpdateCachedContentConfig, UpdateCachedContentConfigDict, UpdateModelConfig, UpdateModelConfigDict, UploadFileConfig, UploadFileConfigDict
UpscaleImageConfig and UpscaleImageConfigDict (enhance_input_image, http_options, image_preservation_factor, include_rai_reason, labels, output_compression_quality, output_gcs_uri, output_mime_type, person_generation, safety_filter_level)
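upscale_image (Vertex AI only) takes an earlier generation result together with UpscaleImageConfig; a minimal sketch reusing a generated image, with example model names:
from google import genai
from google.genai import types
client = genai.Client(
    vertexai=True, project='your-project-id', location='us-central1'
)
generated = client.models.generate_images(
    model='imagen-3.0-generate-002',  # example model name
    prompt='An umbrella in the foreground, rainy night sky',
)
response = client.models.upscale_image(
    model='imagen-3.0-generate-002',  # example model name
    image=generated.generated_images[0].image,
    upscale_factor='x2',
    config=types.UpscaleImageConfig(
        include_rai_reason=True,
        output_mime_type='image/jpeg',
    ),
)
response.generated_images[0].image.show()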
UpscaleImageParameters, UpscaleImageParametersDict, UpscaleImageResponse, UpscaleImageResponseDict
UrlContext, UrlContextDict, UrlContextMetadata, UrlContextMetadataDict, UrlMetadata, UrlMetadataDict, UrlRetrievalStatus
UsageMetadata and UsageMetadataDict (cache_tokens_details, cached_content_token_count, prompt_token_count, prompt_tokens_details, response_token_count, response_tokens_details, thoughts_token_count, tool_use_prompt_token_count, tool_use_prompt_tokens_details, total_token_count, traffic_type)
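Token accounting of this shape is surfaced on responses through usage_metadata; a minimal sketch reading the counts from a generate_content response (whose usage_metadata carries the same prompt and total token fields):
from google import genai
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.0-flash-001',
    contents='Why is the sky blue?',
)
usage = response.usage_metadata
print(usage.prompt_token_count, usage.total_token_count)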
UserContent
VeoHyperParameters, VeoHyperParametersDict, VeoTuningSpec, VeoTuningSpecDict
VertexAISearch, VertexAISearchDataStoreSpec, VertexAISearchDataStoreSpecDict, VertexAISearchDict, VertexRagStore, VertexRagStoreDict, VertexRagStoreRagResource, VertexRagStoreRagResourceDict
Video, VideoCompressionQuality, VideoDict, VideoGenerationMask, VideoGenerationMaskDict, VideoGenerationMaskMode, VideoGenerationReferenceImage, VideoGenerationReferenceImageDict, VideoGenerationReferenceType, VideoMetadata, VideoMetadataDict
VoiceConfig, VoiceConfigDict
WeightedPrompt, WeightedPromptDict
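VoiceConfig and the related speech types listed earlier (SpeechConfig, PrebuiltVoiceConfig) select a voice for audio output; a minimal sketch, assuming a TTS-capable model such as the example name below:
from google import genai
from google.genai import types
client = genai.Client()
response = client.models.generate_content(
    model='gemini-2.5-flash-preview-tts',  # example TTS-capable model name
    contents='Say cheerfully: have a wonderful day!',
    config=types.GenerateContentConfig(
        response_modalities=['AUDIO'],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name='Kore')
            )
        ),
    ),
)
# Raw audio bytes come back as inline data on the first part.
audio_bytes = response.candidates[0].content.parts[0].inline_data.data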