Tracing

Track every model call, agent handoff, and tool execution for faster debugging and optimization.


LangDB Gateway provides detailed tracing to monitor, debug, and optimize LLM workflows.
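Requests through the gateway can carry tracing metadata so that related calls group into one thread and run in the trace view. Below is a minimal sketch that builds an OpenAI-compatible chat payload with tracing headers attached; the header names (`x-thread-id`, `x-run-id`) and the exact grouping behavior are illustrative assumptions here, not a confirmed API surface.

```python
# Sketch: building an OpenAI-compatible chat request with tracing
# metadata attached as HTTP headers. The header names below
# (x-thread-id, x-run-id) are illustrative assumptions.
import json
import uuid


def build_traced_request(model, messages, thread_id=None, run_id=None):
    """Return (headers, payload) for a traced chat completion call."""
    headers = {
        "Content-Type": "application/json",
        # Hypothetical tracing headers: reusing the same thread_id across
        # calls would group them into a single thread in the dashboard.
        "x-thread-id": thread_id or str(uuid.uuid4()),
        "x-run-id": run_id or str(uuid.uuid4()),
    }
    payload = {"model": model, "messages": messages}
    return headers, payload


headers, payload = build_traced_request(
    "gpt-4o-mini",
    [{"role": "user", "content": "Plan a 3-day trip to Lisbon."}],
    thread_id="thread-123",
)
print(json.dumps(payload))
```

Any HTTP client can then POST this payload to the gateway's chat completions endpoint with the headers attached.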

Below is an example of a trace visualization from the dashboard, showcasing a detailed breakdown of the request stages:

In this example trace, you’ll find:

  • Overview Metrics

    • Cost: Total spend for this request (e.g. $0.034).

    • Tokens: Input (5,774) vs. output (1,395).

    • Duration: Total end-to-end latency (29.52 s).

  • Timeline Breakdown: A parallel-track timeline showing each step, from moderation and relevance scoring to model inference and the final reply.

  • Model Invocations: Every call to gpt-4o-mini, gpt-4o, etc., is plotted with precise start times and durations.

  • Agent Hand-offs: Transitions between your agents (e.g. search → booking → reply) are highlighted with custom labels like transfer_to_reply_agent.

  • Tool Integrations: External tools (e.g. booking_tool, travel_tool, python_repl_tool) appear inline with their execution times, so you can spot slow or failed runs immediately.

  • Guardrails: Rules like Min Word Count and Travel Relevance enforce domain-specific constraints and appear in the trace.
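The overview metrics lend themselves to quick back-of-the-envelope efficiency checks. The sketch below derives cost per 1K tokens and output throughput from the example numbers shown above; these are the illustrative values from this trace, not data fetched from any API.

```python
# Sketch: deriving per-request efficiency numbers from the trace
# overview metrics shown above (example values from this page).
cost_usd = 0.034          # total spend for the request
input_tokens = 5_774
output_tokens = 1_395
duration_s = 29.52        # total end-to-end latency

total_tokens = input_tokens + output_tokens
cost_per_1k_tokens = cost_usd / total_tokens * 1_000
output_tokens_per_s = output_tokens / duration_s

print(f"total tokens:       {total_tokens}")
print(f"cost per 1K tokens: ${cost_per_1k_tokens:.4f}")
print(f"output throughput:  {output_tokens_per_s:.1f} tok/s")
```

Comparing these derived numbers across traces makes it easier to spot which requests drive cost or suffer from low throughput.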

With this level of visibility you can quickly pinpoint bottlenecks, understand cost drivers, and ensure your multi-agent pipelines run smoothly.

A full end-to-end multi-agent workflow traced on LangDB