Google’s ADK: Agent Development Kit
A guide to getting started with Google ADK (Agent Development Kit), explained simply. It also covers some debugging for when errors happen.
What is the Agent Development Kit?
The Agent Development Kit (ADK) is a modular and adaptable framework for building and deploying AI agents. It’s designed to work seamlessly with leading large language models (LLMs) and open-source generative AI tools, with deep integration into the Google ecosystem and Gemini models. Whether you’re starting with basic agents powered by Gemini and Google AI tools or building advanced agent architectures with custom orchestration, ADK provides both the simplicity to get started and the flexibility to scale.
Core Pillars of ADK: Build, Interact, Evaluate, Deploy
ADK supports the complete lifecycle of agent development with a robust set of capabilities:
- Multi-Agent by Design: Create modular and scalable applications that support coordination and task delegation across multiple agents.
- Rich Model Ecosystem: Select the models that best suit your use case. Mix and match models to customize your agent interactions.
- Extensive Tool Ecosystem: Empower your agents with a wide range of functionalities. Leverage built-in tools like Search and Code Execution, integrate Model Context Protocol (MCP) tools, or plug in external libraries such as LangChain and LlamaIndex.
- Built-in Streaming: Engage with agents through real-time, human-like interactions.
In this article we’ll learn how to build a simple agent and enable low-latency, bidirectional voice and video communication using ADK Streaming. We’ll walk through installing ADK, setting up a basic “Google Search” agent, and running it with the ADK web tool. You’ll also explore how to create your own asynchronous web application using ADK Streaming and FastAPI.
Set up a Gemini API Key
To run your agent, you’ll need to set up a Gemini API Key.
- Get an API key from Google AI Studio.
- Keep this API key handy, as we will need it in the environment (.env) file.
Create a virtual environment in your project
python -m venv adkenv
# activate on Windows
.\adkenv\scripts\activate
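On macOS or Linux, the activation command differs:
# activate on macOS/Linux
source adkenv/bin/activate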
Install Google ADK
pip install google-adk
Once the command finishes, you will see the installation output. This means the installation has succeeded. Some warnings may come up depending on your Python version, so you may want to handle them accordingly.
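If you want to double-check which version was installed, you can ask pip:
pip show google-adk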
Project Structure
Below is the folder structure we will implement. Start by creating the following structure with empty files inside the main application folder. In this case I used googleadk-stream:
googleadk-stream/ # Project folder
└── app/ # the web app folder
├── .env # Gemini API key
└── google_search_agent/ # Agent folder
├── __init__.py # Python package
└── agent.py # Agent definition
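If you'd rather script this than create the files by hand, here is a small Python sketch (run from the directory where you want the project to live) that builds the same layout:
from pathlib import Path

# create the nested project folders
agent_dir = Path("googleadk-stream/app/google_search_agent")
agent_dir.mkdir(parents=True, exist_ok=True)

# create the empty files we'll fill in next
(agent_dir.parent / ".env").touch()
(agent_dir / "__init__.py").touch()
(agent_dir / "agent.py").touch()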
Let’s update agent.py
Purpose:
Define a root agent that answers questions using the built-in Google Search tool.
Implementation:
We will first import the foundational classes provided by Google ADK: Agent, which gives us the agent abstraction, and google_search from the tools module. If we were building a weather app, a weather API could serve as the tool, and a separate agent can even act as a tool by itself.
from google.adk.agents import Agent
from google.adk.tools import google_search
Update the code in agent.py to add the agent definition and the tools for the agent. agent.py is where all your agent(s)' logic is stored, and you must have a root_agent defined. The Agent class and the google_search tool handle the complex interactions with the LLM and grounding with the search API, allowing you to focus on the agent's purpose and behavior.
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    # unique name of the agent
    name="first_search_agent",
    # the LLM to use; make sure you check the exact model name
    model="gemini-2.0-flash-live-001",  # Google AI Studio
    # short description of the agent's purpose
    description="Agent that answers questions using Google Search.",
    # instructions to set the agent's behaviour
    instruction="You are a researcher. You always stick to the facts.",
    # tools for this agent; google_search is a prebuilt tool,
    # but you can also create your own tool, or use another agent as a tool
    tools=[google_search],
)
Update __init__.py
from . import agent
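As an optional sanity check, you can confirm the package imports cleanly before launching anything. From the app directory:
python -c "from google_search_agent import agent; print(agent.root_agent.name)"
If the structure is correct, this prints the agent name with no errors.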
Update the .env file with the Google API key that you captured in the prerequisites.
GOOGLE_API_KEY=PASTE_YOUR_API_KEY_HERE
GOOGLE_GENAI_USE_VERTEXAI=0
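If you want to verify the key is readable before starting the server, here is a tiny sketch using python-dotenv (install it with pip install python-dotenv if it isn't already present); run it from the app folder:
from dotenv import load_dotenv
import os

load_dotenv()  # reads .env from the current directory
print("GOOGLE_API_KEY set:", bool(os.getenv("GOOGLE_API_KEY")))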
Let’s test the agent with adk web
Now we’re ready to try the agent. Run the following command to launch the dev UI. First, make sure to set the current directory to app:
cd app
Then, run the dev UI:
adk web
You will see output confirming that the ADK web server has started.
Open the URL provided (usually http://localhost:8000 or http://127.0.0.1:8000) directly in your browser. This connection stays entirely on your local machine. Select google_search_agent.
The dev UI will load with a chat panel for your selected agent.
Interact via Text
Try entering a prompt such as:
- What is the weather in Australia?
Error:
I got this error when I typed “What is the weather in Australia?”:
googleadk-stream\adkenv\Lib\site-packages\google\genai\errors.py", line 129, in raise_for_async_response
raise ClientError(status_code, response_json, response)
google.genai.errors.ClientError: 404 NOT_FOUND. {'error': {'code': 404, 'message': 'models/gemini-2.0-flash-live-001 is not found for API version v1beta, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.', 'status': 'NOT_FOUND'}}
INFO: 127.0.0.1:52345 - "GET /apps/google_search_agent/users/user/sessions/6e072547-d2cb-4067-b242-ce8357f0fac7 HTTP/1.1" 200 OK
This clearly means that “models/gemini-2.0-flash-live-001” is not a valid model name for this API, so I need to find the right one.
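One way to get the right name is to list the models available to your key. Here is a minimal sketch using the google-genai SDK (it's installed alongside ADK; this assumes GOOGLE_API_KEY is set in your environment):
from google import genai

# the client picks up the GOOGLE_API_KEY environment variable automatically
client = genai.Client()
for model in client.models.list():
    print(model.name)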
In agent.py, update the model as shown below (this is what worked in my case).
model="gemini-2.0-flash", #
Save and run the adk web command again to restart the server.
Your agent will respond using the google_search tool to fetch the most up-to-date information.
Interact via Voice and Video
To engage with the agent using voice, click the microphone icon, then ask a question aloud. The agent will reply in real time using voice output.
You can also click the camera icon to activate video input. Try asking, “What do you see?” and the agent will describe what’s visible through the camera.
The live interaction did not work for me, as the model does not support voice output. We will have to look for the right model and update the code. But we still have a solid foundation that we can enhance further.
Let’s customize and enhance this application further in the next article. Until then, happy agenting!
Reference:
https://developers.googleblog.com/en/agent-development-kit-easy-to-build-multi-agent-applications/