LangDB Gateway provides advanced tracing and observability tools that empower developers to monitor, debug, and optimize their LLM workflows effectively. With detailed logging of each request, response, and tool invocation, you gain actionable insights into the behaviour and performance of your AI models.
Tracing
Tracing enables you to visualize the flow of requests, identify bottlenecks, and ensure reliable execution of tasks, all in real time.
Trace Example
Below is an example of a trace visualization from the dashboard, showcasing a detailed breakdown of the request stages:
- API Stream: Total time for the request (7.11 sec).
- Model Call: Time spent interacting with the model (6.89 sec).
- Tool Usage: Duration of the specific tool call (1.52 sec).
This breakdown helps identify where time is spent during the execution of a request, allowing for targeted optimizations.
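As a minimal sketch of how such a breakdown can be analyzed programmatically, the snippet below models the stages above as spans and flags the dominant one. The `Span` structure and field names are assumptions for illustration, not LangDB's actual trace schema; adapt them to the trace payload your dashboard or API exposes.

```python
# Sketch: analyzing a trace breakdown like the example above.
# NOTE: the Span structure below is a hypothetical stand-in, not
# LangDB Gateway's actual trace format.
from dataclasses import dataclass


@dataclass
class Span:
    name: str
    duration_sec: float


# Durations taken from the example trace above.
spans = [
    Span("API Stream", 7.11),  # total time for the request
    Span("Model Call", 6.89),  # time spent interacting with the model
    Span("Tool Usage", 1.52),  # specific tool call duration
]

# The API Stream span covers the whole request; the other spans are stages.
total = next(s for s in spans if s.name == "API Stream")
stages = [s for s in spans if s is not total]

# The longest stage is the optimization target.
bottleneck = max(stages, key=lambda s: s.duration_sec)
overhead = total.duration_sec - bottleneck.duration_sec

print(f"Bottleneck stage: {bottleneck.name} ({bottleneck.duration_sec:.2f} sec)")
print(f"Time outside {bottleneck.name}: {overhead:.2f} sec")
```

In this example the model call dominates the request, so optimization effort (prompt size, model choice, streaming settings) would pay off more there than in the tool call.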
By analyzing individual stages such as API stream time, model interaction, and tool invocation durations, you can pinpoint where latency accumulates and direct optimization effort accordingly.