Getting Started
Use LangDB’s Python SDK to generate completions, monitor API usage, retrieve analytics, and evaluate LLM workflows efficiently.
LangDB simplifies working with multiple Large Language Models (LLMs) through a single API. It excels at analytics, usage monitoring, and evaluation, giving developers insights into model performance, usage stats, and costs. This guide covers installation, setup, and key functionalities.
Installation
To install the LangDB Python client, run:
```bash
pip install "pylangdb[client]"
```
Initialize LangDB Client
Initialize the client with your API key and project ID:
```python
from pylangdb.client import LangDb

client = LangDb(
    api_key="your_api_key",
    project_id="your_project_id"
)
```
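Hard-coding credentials is fine for quick experiments, but a common pattern is to read them from environment variables instead. A minimal sketch, assuming the variable names `LANGDB_API_KEY` and `LANGDB_PROJECT_ID` (a naming convention chosen here for illustration, not something the SDK requires):

```python
import os

def load_langdb_credentials() -> tuple[str, str]:
    """Read LangDB credentials from the environment.

    The variable names below are an assumed convention, not an SDK requirement.
    """
    api_key = os.environ.get("LANGDB_API_KEY", "")
    project_id = os.environ.get("LANGDB_PROJECT_ID", "")
    if not api_key or not project_id:
        raise RuntimeError("Set LANGDB_API_KEY and LANGDB_PROJECT_ID first.")
    return api_key, project_id

# Usage, once the variables are exported in your shell:
# api_key, project_id = load_langdb_credentials()
# client = LangDb(api_key=api_key, project_id=project_id)
```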
Making a Chat Completion Request
You can generate a response using the completion method:
```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say hello!"}
]

response = client.completion(
    model="gemini-1.5-pro-latest",
    messages=messages,
    temperature=0.7,
    max_tokens=100
)
print(response["content"])  # Print the AI response
```
Retrieve Messages from a Thread
You can fetch messages from a specific thread using its thread_id:
```python
thread_id = response["thread_id"]
messages = client.get_messages(thread_id)

for message in messages:
    print(f"Type: {message.type}")
    print(f"Content: {message.content}")
    if message.tool_calls:
        for tool_call in message.tool_calls:
            print(f"Tool: {tool_call.function.name}")
```
Get Thread Usage
Retrieve cost and token usage details for a thread:
```python
usage = client.get_usage(thread_id)

print(f"Total Cost: ${usage.total_cost:.4f}")
print(f"Input Tokens: {usage.total_input_tokens}")
print(f"Output Tokens: {usage.total_output_tokens}")
```
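These usage fields lend themselves to simple derived metrics. A small sketch computing a blended cost per 1,000 tokens; the numeric values are made up for illustration, and in practice you would pass `usage.total_cost`, `usage.total_input_tokens`, and `usage.total_output_tokens`:

```python
def cost_per_1k_tokens(total_cost: float, input_tokens: int, output_tokens: int) -> float:
    """Blended cost per 1,000 tokens across input and output."""
    total_tokens = input_tokens + output_tokens
    if total_tokens == 0:
        return 0.0
    return total_cost / total_tokens * 1000

# Example with made-up numbers.
print(f"${cost_per_1k_tokens(0.0042, 1200, 800):.4f} per 1K tokens")
```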
Get Analytics
You can retrieve analytics for specific model tags:
```python
analytics_data = client.get_analytics(tags="gpt-4,gemini")
print(analytics_data)
```
Alternatively, you can convert analytics data into a Pandas DataFrame for easier analysis:
```python
import pandas as pd

df = client.get_analytics_dataframe(tags="gpt-4,gemini")
print(df.head())
```
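Once the analytics are in a DataFrame, ordinary pandas operations apply. A sketch of a per-model cost summary; the frame below is a synthetic stand-in, and the column names ("model", "cost", "total_tokens") are assumptions for illustration, so check `df.columns` against the real schema first:

```python
import pandas as pd

# Synthetic stand-in for get_analytics_dataframe output.
df = pd.DataFrame({
    "model": ["gpt-4", "gpt-4", "gemini"],
    "cost": [0.03, 0.05, 0.01],
    "total_tokens": [900, 1500, 400],
})

# Aggregate cost and tokens per model, most expensive first.
summary = (
    df.groupby("model")
      .agg(total_cost=("cost", "sum"), total_tokens=("total_tokens", "sum"))
      .sort_values("total_cost", ascending=False)
)
print(summary)
```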
Evaluate Multiple Threads
To generate an evaluation DataFrame containing message and cost information for multiple threads:
```python
df = client.create_evaluation_df(thread_ids=["thread1", "thread2"])
print(df)
```
List Available Models
To list all models supported by LangDB:
```python
models = client.list_models()
print(models)
```
Usage in Evaluation
LangDB provides built-in evaluation capabilities, allowing developers to assess model performance, response accuracy, and cost efficiency. By analyzing messages, token usage, and analytics data, teams can refine their prompts and model choices for better results.
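As a concrete sketch of cost-based evaluation, the snippet below flags threads whose spend exceeds a budget. The per-thread costs are synthetic stand-ins for values you would collect via `client.get_usage`, and the budget threshold is an arbitrary example:

```python
# Synthetic (thread_id, total_cost) pairs standing in for get_usage results.
thread_costs = {
    "thread1": 0.012,
    "thread2": 0.047,
    "thread3": 0.003,
}

BUDGET_PER_THREAD = 0.02  # illustrative threshold, in dollars

# Collect the threads that exceed the budget.
over_budget = [tid for tid, cost in thread_costs.items() if cost > BUDGET_PER_THREAD]
print(f"Threads over budget: {over_budget}")
```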