Getting Started
LangDB simplifies working with multiple Large Language Models (LLMs) through a single API. It excels at analytics, usage monitoring, and evaluation, giving developers insights into model performance, usage stats, and costs. This guide covers installation, setup, and key functionalities.
Installation
To install the LangDB Python client, run:
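A minimal install command, assuming the client is published on PyPI as `pylangdb` (verify the exact package name against the LangDB docs or PyPI listing):

```bash
pip install pylangdb
```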
Initialize LangDB Client
Initialize the client with your API key and project ID:
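A minimal sketch of initialization. The class name `LangDb` and the `api_key` / `project_id` keyword arguments are assumptions based on this guide, so check the client reference for the exact signature:

```python
from pylangdb import LangDb  # assumed import path

# Initialize the client with credentials from your LangDB project settings.
client = LangDb(api_key="YOUR_API_KEY", project_id="YOUR_PROJECT_ID")
```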
Making a Chat Completion Request
You can generate a response using the `completion` method:
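A sketch of a chat completion call. The guide names the `completion` method; the parameter names below (`model`, `messages`, `temperature`, `max_tokens`) and the example model identifier are illustrative assumptions:

```python
# Send a chat completion request through LangDB's unified API.
# Parameter names are assumptions based on this guide.
response = client.completion(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize LangDB in one sentence."},
    ],
    temperature=0.7,
    max_tokens=200,
)
print(response)
```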
Retrieve Messages from a Thread
You can fetch messages from a specific thread using its `thread_id`:
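A sketch of retrieving thread messages. The `thread_id` argument comes from this guide; the method name `get_messages` is an assumption:

```python
# Fetch all messages recorded for a given thread.
messages = client.get_messages(thread_id="YOUR_THREAD_ID")

for message in messages:
    print(message)
```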
Get Thread Usage
Retrieve cost and token usage details for a thread:
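A sketch of fetching usage for a single thread; the method name `get_usage` is an assumption based on this guide:

```python
# Retrieve aggregated cost and token usage for a thread.
usage = client.get_usage(thread_id="YOUR_THREAD_ID")
print(usage)  # e.g. total cost and input/output token counts
```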
Get Analytics
You can retrieve analytics for specific model tags:
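A sketch of an analytics query. The method name `get_analytics` and the comma-separated `tags` argument are assumptions; the tag values shown are placeholders:

```python
# Retrieve analytics filtered by model tags.
analytics = client.get_analytics(tags="gpt-4o-mini,claude-3-haiku")
print(analytics)
```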
Alternatively, you can convert analytics data into a Pandas DataFrame for easier analysis:
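A sketch of the DataFrame variant, assuming a helper such as `get_analytics_dataframe` exists and that pandas is installed:

```python
# Convert analytics data into a Pandas DataFrame for easier analysis.
df = client.get_analytics_dataframe(tags="gpt-4o-mini,claude-3-haiku")
print(df.head())
```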
Evaluate Multiple Threads
To generate an evaluation DataFrame containing message and cost information for multiple threads:
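A sketch of building the evaluation DataFrame for several threads; the method name `create_evaluation_df` is an assumption based on this guide, and the thread IDs are placeholders:

```python
# Build an evaluation DataFrame combining message and cost data
# for multiple threads.
thread_ids = ["THREAD_ID_1", "THREAD_ID_2"]
eval_df = client.create_evaluation_df(thread_ids=thread_ids)
print(eval_df.head())
```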
List Available Models
To list all models supported by LangDB:
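A sketch of listing models; the method name `list_models` is an assumption:

```python
# List all models available through LangDB.
models = client.list_models()
for model in models:
    print(model)
```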
Usage in Evaluation
LangDB provides built-in evaluation capabilities, allowing developers to assess model performance, response accuracy, and cost efficiency. By analyzing messages, token usage, and analytics data, teams can fine-tune their models for better results.
Check out the Evaluation section for more information.