LangDB AI Gateway offers plug-and-play observability when you work with AI agents.
LangDB automatically visualizes how agents interact, providing a clear view of workflows, hierarchies, and usage patterns.
LangDB supports three main concepts:
- Run: A complete end-to-end interaction between agents, grouped for easy tracking.
- Thread: Aggregates multiple Runs into a single thread for a unified chat experience.
- Parent Trace ID: Optionally links different traces to build hierarchical relationships and visualize dependencies.
Example
from openai import OpenAI
from uuid import uuid4

client = OpenAI(
    base_url="https://api.us-east-1.langdb.ai",  # LangDB API base URL
    api_key="xxxxx",  # Replace with your LangDB token
)

response = client.chat.completions.create(
    model="gpt-4o",  # Use the model
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    extra_headers={
        "x-project-id": "xxxxx",       # Your LangDB project ID
        "x-thread-id": str(uuid4()),   # Groups runs into one thread
        "x-run-id": str(uuid4()),      # Identifies this run
    },
)
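Because a Thread simply aggregates Runs, you can keep one x-thread-id constant across calls and mint a fresh x-run-id for each end-to-end interaction. A minimal sketch, reusing the client from the example above; the ask helper and the prompts are illustrative, not part of the LangDB API:

# Reuse one thread ID so LangDB groups several runs into a single conversation.
thread_id = str(uuid4())

def ask(question: str) -> str:
    # Each call is its own Run but shares the Thread for a unified chat view.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
        extra_headers={
            "x-project-id": "xxxxx",
            "x-thread-id": thread_id,    # same thread for every run
            "x-run-id": str(uuid4()),    # new run per interaction
        },
    )
    return response.choices[0].message.content

print(ask("Summarize our deployment options."))
print(ask("Which of those is the cheapest?"))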
What You Can Do with LangDB
- Monitor Interactions: Track how agents communicate and collaborate within your workflows.
- Optimize System Performance: Detect inefficiencies or bottlenecks and refine your workflows for better outcomes.
- Control Costs: Analyze token usage and associated costs to optimize expenditure across workflows (see the sketch after this list).
- Debug Seamlessly: Quickly pinpoint and resolve issues with trace visualizations.
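LangDB's dashboard aggregates token usage and cost per run and thread; for quick local inspection you can also read the OpenAI-compatible usage field on each response. A minimal sketch, assuming the response object from the example above; the per-1K-token prices are placeholders, not LangDB or OpenAI rates:

# Inspect token counts from the standard chat-completions usage object.
usage = response.usage
print(f"prompt tokens:     {usage.prompt_tokens}")
print(f"completion tokens: {usage.completion_tokens}")
print(f"total tokens:      {usage.total_tokens}")

# Rough cost estimate with illustrative placeholder prices (USD per 1K tokens).
PROMPT_PRICE_PER_1K = 0.0025
COMPLETION_PRICE_PER_1K = 0.01
estimated_cost = (
    usage.prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
    + usage.completion_tokens / 1000 * COMPLETION_PRICE_PER_1K
)
print(f"estimated cost: ${estimated_cost:.6f}")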
Check out the full Multi-Agent Tracing Example here.