Building a Travel Concierge with Google ADK
This guide demonstrates how to enhance Google's ADK Travel Concierge sample by integrating it with LangDB. By leveraging LangDB's AI gateway, you can offload complex tool logic to virtual MCPs, dynamically select models, and gain full end-to-end observability into your agent's operations. This approach simplifies the agent's code, improves security by centralizing API key management, and provides powerful tracing capabilities.
Overview
The Travel Concierge is a sophisticated agent system built with Google ADK that handles the complete travel lifecycle through specialized sub-agents.
Travel Concierge Architecture
The system consists of the following agents:
Main Agent: Orchestrates the workflow and routes user queries to specialized sub-agents.
Inspiration Agent: Helps users discover travel destinations based on preferences.
Planning Agent: Creates detailed itineraries and travel plans.
Booking Agent: Assists with booking flights, hotels, and activities.
Pre-Trip Agent: Provides preparation guidance before departure.
In-Trip Agent: Offers real-time assistance during travel.
Post-Trip Agent: Collects feedback and helps with post-travel tasks.
Key Benefits
With LangDB, this multi-agent system gains significant capabilities:
Seamless Integration: All LLM calls flow through a centralized AI gateway
Tool Abstraction: Virtual MCPs simplify integration with external services
End-to-End Tracing: Complete visibility into agent interactions and tool usage
Flexible Model Selection: Choose any model without code changes
Centralized Configuration: Tools, guardrails, and monitoring managed in one place
Installation
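As a rough sketch (package names are assumptions; check the LangDB docs for the exact packages and extras), install the ADK alongside LangDB's Python client:

```bash
pip install google-adk pylangdb
```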
Environment Variables
Create a .env file in your project root with the following variables:
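At a minimum the gateway needs credentials; the variable names below are assumptions, so confirm them against your LangDB project settings:

```env
# LangDB credentials (variable names assumed; see your LangDB project settings)
LANGDB_API_KEY=your_langdb_api_key
LANGDB_PROJECT_ID=your_langdb_project_id
```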
Project Structure
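The layout below is illustrative, mirroring the upstream ADK travel-concierge sample plus the .env file above; your tree may differ:

```
travel_concierge/
├── agent.py          # root agent; calls init() before any ADK imports
├── prompt.py
└── sub_agents/
    ├── inspiration/
    ├── planning/
    ├── booking/
    ├── pre_trip/
    ├── in_trip/
    └── post_trip/
.env                  # LangDB credentials
```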
How the Integration Works
Seamless LLM Integration
The key to enhancing Google's ADK sample is directing all LLM calls through a centralized AI gateway. This is achieved through the Python SDK:
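A minimal sketch of the entry point, assuming LangDB's Python SDK exposes an ADK-aware `init()` under `pylangdb.adk` (the exact import path may differ in your SDK version):

```python
# Call init() before importing anything from google.adk so the patch can
# intercept the ADK's model-calling layer.
from pylangdb.adk import init  # assumed import path for LangDB's ADK integration

init()  # reads the LangDB credentials (assumed env var names) from the environment

from google.adk.agents import Agent  # ADK imports only after init()
```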
By calling init() before any Google ADK imports, the integration:
Patches the ADK's underlying model calling mechanisms
Routes all LLM requests through LangDB's API
Attaches tracing metadata to each request
Captures all agent-to-agent handoffs and tool calls
This is a significant architectural improvement over the original sample, which would require manual instrumentation to achieve the same observability.
Virtual Model References
Instead of hardcoding model names or building complex tool integration, we reference LangDB virtual models in the agent definitions:
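A hedged sketch of what the root agent definition can look like; the agent name and instruction are illustrative:

```python
from google.adk.agents import Agent

# "langdb/travel_concierge_main" is a LangDB Virtual Model reference, not a
# provider model name; sub-agents would be attached via sub_agents=[...].
root_agent = Agent(
    name="travel_concierge",
    model="langdb/travel_concierge_main",
    instruction="Understand the traveller's request and route it to the right sub-agent.",
)
```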
Notice the model="langdb/travel_concierge_main" parameter. This doesn't point directly to a specific LLM but to a LangDB Virtual Model that can have:
A specific underlying LLM (like Google's Gemini Pro)
Attached tools and MCPs
Guardrails for input/output validation
Custom handling and retry logic
This approach offloads significant complexity from the application code to the LangDB AI gateway.
Simplified Sub-Agent Implementation
The inspiration_agent uses a virtual model that connects to a Google Maps MCP. The agent code itself only needs to reference the tools, not implement them:
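One plausible shape for that agent, using the travel_inspiration_agent virtual model configured later in this guide; the instruction text is illustrative, and the Google Maps tooling it relies on is attached to the virtual model in LangDB rather than implemented here:

```python
from google.adk.agents import Agent

# No tool implementation in application code: the MCP tooling is attached to
# the "langdb/travel_inspiration_agent" virtual model on the LangDB side.
inspiration_agent = Agent(
    name="inspiration_agent",
    model="langdb/travel_inspiration_agent",
    instruction=(
        "Suggest destinations and points of interest that match the "
        "traveller's stated preferences."
    ),
)
```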
Key Benefit: The agent code is significantly cleaner. All tool configuration is offloaded to the central platform, allowing you to:
Change search providers without code changes
Add guardrails for travel queries
Deploy updates to tools without redeploying your application
Tool Definitions vs. Virtual MCPs
Original Approach (Tool Implementation)
In the original Google ADK sample, you would need to implement a complex tool to interact with the Google Places API directly:
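As a rough sketch of what that direct integration entails (the endpoint and parameters follow the public Places Text Search API, but treat the wiring as illustrative):

```python
import os
import requests

def places_text_search(query: str) -> list[dict]:
    """Query the Google Places Text Search API directly.

    Key handling, HTTP errors, timeouts, and retries all live in application code.
    """
    api_key = os.environ["GOOGLE_PLACES_API_KEY"]  # key managed by the application
    response = requests.get(
        "https://maps.googleapis.com/maps/api/place/textsearch/json",
        params={"query": query, "key": api_key},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors; retries would go here
    return response.json().get("results", [])
```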
This approach requires:
Managing API keys in your environment
Implementing error handling and retries
Deploying code changes for API updates
Writing custom integration code for each tool
Enhanced Approach (Virtual MCPs)
With the Virtual MCP approach, you offload the entire tool implementation to the central platform. This means:
No tool implementation code needed in your application.
API keys are managed securely in the platform.
You get built-in error handling and retries.
Tools can be updated or switched without code changes.
Configuring Virtual Models and MCPs
This approach separates tool configuration from code, moving it to a web interface where it can be managed and updated without deployments.
Creating Virtual MCP Servers
Virtual MCP servers act as API gateways to external tools and services:
In the LangDB UI, navigate to Projects → MCP Servers.
Click + New Virtual MCP Server and create the necessary MCPs. Each MCP is then connected to a virtual model, which is used by a specific agent:
Inspiration Agent uses a virtual model connected to a Google Maps MCP.
Planning Agent uses a virtual model connected to an Airbnb MCP.
Search Agent uses a virtual model connected to a Tavily Search MCP.
Attaching MCPs to Virtual Models
Virtual models connect your agent code to the right tools automatically:
Navigate to Models → + New Virtual Model.
For the Inspiration Agent:
Name: travel_inspiration_agent
Base Model: anthropic/claude-sonnet-4
Attach the Tavily Search MCP from step 1
Add optional guardrails (e.g., content filters for travel advice)
For the Google Search agent in shared tools:
Name: google_search_agent
Base Model: gemini/gemini-2.5-pro-preview
Attach the Google Places MCP from step 1
Add custom response templates for place recommendations
Key Benefits:
Separation of Concerns: Code handles agent orchestration while LangDB handles tools and models
Dynamic Updates: Change tools without redeploying your application
Security: API keys stored securely in LangDB, not in application code
Monitoring: Track usage patterns and error rates in one place
Run the Agent
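Assuming the project is served with the standard ADK developer UI (the exact command depends on how you have packaged the sample):

```bash
adk web
```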
Navigate to http://localhost:8000, select travel_concierge, and start your travel planning conversation.
Example Conversation with Traced Tool Calls
Here is an example conversation that shows how the inspiration_agent uses a virtual model connected to a Google Maps MCP to provide travel suggestions.
User Input:
Agent Execution Flow:
The root travel_concierge agent receives the request and transfers control to the inspiration_agent.
Tool Call: transfer_to_agent(agent_name="inspiration_agent")
The inspiration_agent is activated. Its virtual model is configured to use the Google Maps MCP.
Tool Call: poi_agent(request="Maldives")
The Google Maps MCP is invoked, which searches for points of interest in the Maldives and returns a structured list of places with details, ratings, and map links.
Final Agent Response:
Full Tracing with LangDB
The true power of the LangDB integration becomes apparent in the comprehensive tracing capabilities. While the original Google ADK sample provides basic conversation logging, LangDB captures every aspect of the agent system's operation:

You can check out the entire conversation here:
In the LangDB trace view, you can see:
The exact flow paths between agents
Every tool call made through virtual MCPs
Complete inputs and outputs of each model invocation
Performance metrics and token usage
Errors and retries (if any occurred)
Conclusion: Benefits of LangDB Integration
By enhancing Google's ADK Travel Concierge sample with LangDB integration, we've achieved several significant improvements:
Architectural Simplification: Cleaner code with less boilerplate by offloading tool logic to LangDB
Dynamic Tool Configuration: Change tools and models without code changes
Enhanced Security: API keys managed centrally in LangDB
Complete Observability: End-to-end tracing of all agent interactions
Improved Development Workflow: Update agent capabilities without redeployment
This approach demonstrates how LangDB's AI gateway can enhance agent frameworks by providing a layer of abstraction that simplifies development while adding powerful capabilities.
References