Building a Travel Agent with the OpenAI Agents SDK
Integrate OpenAI Agents SDK with LangDB for multi-agent travel workflows. Configure guardrails, virtual MCP search, and model routing for reliable outputs.
This guide illustrates how to build a multi-agent travel query workflow using the OpenAI Agents SDK, augmented by LangDB for guardrails, virtual MCP servers (tool integration), and model routing.
OpenAI introduced the Agents SDK, a lightweight, Python-first toolkit for building agentic AI apps. It’s built around three primitives:
Agents: LLMs paired with tools and instructions to complete tasks autonomously.
Handoffs: Let agents delegate tasks to other agents.
Guardrails: Validate inputs/outputs to keep workflows safe and reliable.
Overview
We will create a 4-agent pipeline:
Query Router Agent: Routes user queries to the appropriate specialist agent.
Booking Specialist: Manages booking-related requests.
Travel Recommendation Specialist: Provides destination recommendations with web search support.
Reply Agent: Formats the final output for the user.
Installation
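Install the Agents SDK and the LangDB Python client. The `[openai]` extra on `pylangdb` is assumed to pull in the OpenAI-Agents tracing integration:

```shell
# Install the OpenAI Agents SDK and the LangDB tracing client
pip install openai-agents "pylangdb[openai]"
```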
Environment Variables
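A typical setup exports the LangDB credentials before running the script. The variable names below (`LANGDB_API_KEY`, `LANGDB_PROJECT_ID`) are assumed placeholders; use the values from your LangDB project settings:

```shell
export LANGDB_API_KEY="<your_langdb_api_key>"
export LANGDB_PROJECT_ID="<your_langdb_project_id>"
```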
Code Walkthrough
The snippets below break down how to configure the OpenAI Agents SDK with LangDB for end-to-end tracing and custom model routing.
Initialize LangDB Tracing
Initialize pylangdb tracing before anything else, so that all subsequent SDK operations are captured.
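A minimal sketch, assuming pylangdb exposes its OpenAI-Agents tracing hook as `pylangdb.openai.init`:

```python
# Initialize LangDB tracing BEFORE creating any agents or clients,
# so every subsequent Agents SDK operation is captured in the trace.
from pylangdb.openai import init

init()
```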
Configure the OpenAI Client & Model Provider
Next, configure the AsyncOpenAI client to send all requests through the LangDB gateway. We then create a CustomModelProvider to ensure the Agents SDK uses this client for all model calls.
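The configuration might look like the sketch below. `ModelProvider` and `OpenAIChatCompletionsModel` come from the Agents SDK; the gateway base URL and the project-ID header are illustrative assumptions, so check your LangDB project page for the exact values:

```python
import os

from agents import Model, ModelProvider, OpenAIChatCompletionsModel
from openai import AsyncOpenAI

# Route every model call through the LangDB gateway (URL is illustrative).
client = AsyncOpenAI(
    base_url="https://api.us-east-1.langdb.ai/v1",
    api_key=os.environ["LANGDB_API_KEY"],
    default_headers={"x-project-id": os.environ["LANGDB_PROJECT_ID"]},
)


class CustomModelProvider(ModelProvider):
    """Resolve every model name against the LangDB-backed client."""

    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(model=model_name, openai_client=client)
```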
Define the Agents
Now, define the specialist agents and the router agent that orchestrates them. The model parameter can be any model available in LangDB, including the virtual models we configure in the next section.
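A sketch of the four agents. The instructions are abbreviated, and the virtual-model identifiers (`langdb/travel-recommender`, `langdb/query-router`, `langdb/reply-agent`) are placeholders for the ones you create in the next section:

```python
from agents import Agent

booking_agent = Agent(
    name="Booking Specialist",
    instructions="Handle flight and hotel booking requests.",
    model="gpt-4o",
)

travel_agent = Agent(
    name="Travel Recommendation Specialist",
    instructions="Recommend destinations, using web search when needed.",
    model="langdb/travel-recommender",  # virtual model with an MCP search tool attached
)

reply_agent = Agent(
    name="Reply Agent",
    instructions="Format the final answer for the user.",
    model="langdb/reply-agent",  # virtual model with a Language Validator guardrail
)

router_agent = Agent(
    name="Query Router Agent",
    instructions="Route each travel query to the appropriate specialist.",
    model="langdb/query-router",  # virtual model with input guardrails attached
    handoffs=[booking_agent, travel_agent, reply_agent],
)
```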
Run the Workflow
Finally, use the Runner to execute the workflow. We inject our CustomModelProvider and a group_id into the RunConfig to ensure all steps are routed through LangDB and linked in the same trace.
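Putting it together: `RunConfig` accepts both a `model_provider` and a `group_id`, which the tracing layer uses to link all spans of the run. The sample query string is illustrative:

```python
import asyncio
import uuid

from agents import RunConfig, Runner


async def main() -> None:
    run_config = RunConfig(
        model_provider=CustomModelProvider(),  # defined in the previous step
        group_id=str(uuid.uuid4()),            # links all steps into one trace
    )
    result = await Runner.run(
        router_agent,
        "Suggest a week-long itinerary for Japan in spring.",
        run_config=run_config,
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```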
Configuring MCPs, Guardrails, and Models
To empower agents with tools like web search or to enforce specific behaviors with guardrails, you use LangDB Virtual Models. This allows you to attach functionality directly to a model identifier without changing your agent code.
In the LangDB UI, navigate to Models → + New Virtual Model.
Create virtual models for your agents (e.g., travel-recommender, query-router).
Attach tools and guardrails as needed:
For the travel_recommendation_agent: attach an MCP Server (like Tavily Search) to give it live web search capabilities.
For the query_router_agent: attach guardrails to validate incoming requests. For example:
Topic Adherence: ensure the query is travel-related.
OpenAI Moderation: block harmful or disallowed content.
Minimum Word Count: reject overly short or vague queries.
For the reply_agent: attach a Language Validator guardrail to ensure the final output is in the expected language.
Use the virtual model's identifier (e.g., langdb/travel-recommender) as the model string in your Agent definition.

Full Trace
After setting up the virtual models and running a sample query, we get the following trace:

You can check out the entire trace here: