API Reference

API Endpoints for LangDB

Create chat completion

POST /v1/chat/completions
Authorizations
Header parameters
X-Project-Id · string · Required

LangDB project ID

Body
model · string · Required

ID of the model to use. This can be either a specific model ID or a virtual model identifier.

Example: gpt-4o
temperature · number · max: 2 · Optional

Sampling temperature.

Example: 0.8
Responses
200 · OK · application/json

Example request

POST /v1/chat/completions HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Content-Type: application/json
Accept: */*
Content-Length: 599

{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "Write a haiku about recursion in programming."
    }
  ],
  "temperature": 0.8,
  "max_tokens": 1000,
  "top_p": 0.9,
  "frequency_penalty": 0.1,
  "presence_penalty": 0.2,
  "stream": false,
  "response_format": "json_object",
  "mcp_servers": [
    {
      "name": "websearch",
      "type": "in-memory"
    }
  ],
  "router": {
    "name": "kg_random_router",
    "type": "script",
    "script": "const route = ({ request, headers, models, metrics }) => { return {model: 'test'};};"
  },
  "extra": {
    "guards": [
      "word_count_validator_bd4bdnun",
      "toxicity_detection_4yj4cdvu"
    ],
    "user": {
      "id": "7",
      "name": "mrunmay",
      "tags": [
        "coding",
        "software"
      ]
    }
  }
}
Example response (200 OK)

{
  "id": "text",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 1,
      "message": {
        "role": "assistant",
        "content": "text"
      },
      "logprobs": {
        "content": [
          {
            "token": "text",
            "logprob": 1
          }
        ],
        "refusal": [
          {
            "token": "text",
            "logprob": 1
          }
        ]
      }
    }
  ],
  "created": 1,
  "model": "text",
  "service_tier": "scale",
  "system_fingerprint": "text",
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 1,
    "total_tokens": 1,
    "prompt_tokens_details": {
      "cached_tokens": 1
    },
    "completion_tokens_details": {
      "reasoning_tokens": 1,
      "accepted_prediction_tokens": 1,
      "rejected_prediction_tokens": 1
    }
  }
}
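
For reference, the request above can be reproduced with a minimal Python sketch using the third-party requests library; the host, bearer token, and project ID are the placeholders from the example and must be replaced with your own values.

import requests

# Placeholders from the example above; substitute your own token and project ID.
BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
    "Content-Type": "application/json",
}

payload = {
    "model": "gpt-4o",
    "messages": [
        {"role": "user", "content": "Write a haiku about recursion in programming."}
    ],
    "temperature": 0.8,
    "max_tokens": 1000,
}

response = requests.post(f"{BASE_URL}/v1/chat/completions", headers=HEADERS, json=payload)
response.raise_for_status()

# The assistant's reply sits in the first choice's message content.
print(response.json()["choices"][0]["message"]["content"])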

Create embeddings

POST /v1/embeddings

Creates an embedding vector representing the input text. The input may be a single string or an array of strings.

Authorizations
Body
model · string · Required

ID of the model to use for generating embeddings.

Example: text-embedding-ada-002
input · one of · Required
string · Optional

The text to embed.

or
string[] · Optional

Array of text strings to embed.

encoding_format · string · enum · Optional

The format to return the embeddings in.

Default: float
dimensions · integer · min: 1 · max: 1536 · Optional

The number of dimensions the resulting embeddings should have.

Example: 1536
Responses
200 · Successful response with embeddings · application/json

Example request

POST /v1/embeddings HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
Content-Type: application/json
Accept: */*
Content-Length: 136

{
  "input": "The food was delicious and the waiter was kind.",
  "model": "text-embedding-ada-002",
  "encoding_format": "float",
  "dimensions": 1536
}
Example response (200 OK)

{
  "data": [
    {
      "embedding": [
        1
      ],
      "index": 1
    }
  ],
  "model": "text",
  "usage": {
    "prompt_tokens": 1,
    "total_tokens": 1
  }
}
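
A similar sketch for the embeddings call, again using the requests library with placeholder credentials; the payload mirrors the example request above.

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "Content-Type": "application/json",
}

payload = {
    "input": "The food was delicious and the waiter was kind.",
    "model": "text-embedding-ada-002",
    "encoding_format": "float",
    "dimensions": 1536,
}

response = requests.post(f"{BASE_URL}/v1/embeddings", headers=HEADERS, json=payload)
response.raise_for_status()

# Each entry in "data" holds the embedding vector for the corresponding input.
vector = response.json()["data"][0]["embedding"]
print(len(vector))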

Retrieve a list of threads

POST /threads
Authorizations
Header parameters
X-Project-Id · string · Required

LangDB project ID

Body
limit · integer · min: 1 · Required · Example: 10
offset · integer · Required · Example: 100
Responses
200 · A list of threads with pagination info · application/json

Example request

POST /threads HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Content-Type: application/json
Accept: */*
Content-Length: 25

{
  "limit": 10,
  "offset": 100
}
Example response (200 OK)

{
  "data": [
    {
      "id": "123e4567-e89b-12d3-a456-426614174000",
      "created_at": "2025-06-30T20:32:40.453Z",
      "updated_at": "2025-06-30T20:32:40.453Z",
      "model_name": "text",
      "project_id": "text",
      "score": 1,
      "title": "text",
      "user_id": "text"
    }
  ],
  "pagination": {
    "limit": 10,
    "offset": 100,
    "total": 10
  }
}
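
The sketch below pages through all threads with requests, assuming the limit/offset scheme and the pagination.total field shown in the sample response; credentials are placeholders.

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
    "Content-Type": "application/json",
}

# Walk through all threads ten at a time using limit/offset pagination.
limit, offset = 10, 0
while True:
    response = requests.post(
        f"{BASE_URL}/threads",
        headers=HEADERS,
        json={"limit": limit, "offset": offset},
    )
    response.raise_for_status()
    body = response.json()
    for thread in body["data"]:
        print(thread["id"], thread.get("title"))
    offset += limit
    if offset >= body["pagination"]["total"]:
        break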

Retrieve messages for a specific thread

GET /threads/{thread_id}/messages
Authorizations
Path parameters
thread_id · string · uuid · Required

The ID of the thread to retrieve messages from

Header parameters
X-Project-Id · string · Required

LangDB project ID

Responses
200 · A list of messages for the given thread · application/json

Example request

GET /threads/{thread_id}/messages HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Accept: */*
Example response (200 OK)

[
  {
    "model_name": "gpt-4o-mini",
    "thread_id": "123e4567-e89b-12d3-a456-426614174000",
    "user_id": "langdb",
    "content_type": "Text",
    "content": "text",
    "content_array": [
      "text"
    ],
    "type": "system",
    "tool_call_id": "123e4567-e89b-12d3-a456-426614174000",
    "tool_calls": "text",
    "created_at": "2025-01-29 10:25:00.736000",
    "id": "123e4567-e89b-12d3-a456-426614174000"
  }
]
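
Retrieving messages is a plain GET with the thread ID in the path. The sketch below uses requests with a hypothetical thread ID and placeholder credentials, and iterates the JSON array shown above.

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
}

thread_id = "123e4567-e89b-12d3-a456-426614174000"  # hypothetical thread ID

response = requests.get(f"{BASE_URL}/threads/{thread_id}/messages", headers=HEADERS)
response.raise_for_status()

# The endpoint returns a flat JSON array of message objects for the thread.
for message in response.json():
    print(message["type"], message["content"])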

Retrieve the total cost for a specific thread

GET /threads/{thread_id}/cost
Authorizations
Path parameters
thread_id · string · uuid · Required

The ID of the thread for which to retrieve cost information

Header parameters
X-Project-Id · string · Required

LangDB project ID

Responses
200 · The total cost and token usage for the specified thread · application/json

Example request

GET /threads/{thread_id}/cost HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Accept: */*
Example response (200 OK)

{
  "total_cost": 0.022226999999999997,
  "total_output_tokens": 171,
  "total_input_tokens": 6725
}
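
The cost endpoint follows the same pattern; the sketch below fetches and prints the totals using the response shape shown above (thread ID and credentials are placeholders).

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
}

thread_id = "123e4567-e89b-12d3-a456-426614174000"  # hypothetical thread ID

response = requests.get(f"{BASE_URL}/threads/{thread_id}/cost", headers=HEADERS)
response.raise_for_status()
cost = response.json()

print(f"{cost['total_input_tokens']} input tokens, "
      f"{cost['total_output_tokens']} output tokens, "
      f"total cost {cost['total_cost']:.4f}")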

Fetch analytics data

POST /analytics
Authorizations
Header parameters
X-Project-Id · string · Required

LangDB project ID

Body
start_time_us · integer · int64 · Optional

Start time in microseconds.

Example: 1693062345678
end_time_us · integer · int64 · Optional

End time in microseconds.

Example: 1693082345678
Responses
200 · Successful response · application/json

Example request

POST /analytics HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Content-Type: application/json
Accept: */*
Content-Length: 59

{
  "start_time_us": 1693062345678,
  "end_time_us": 1693082345678
}
Example response (200 OK)

{
  "timeseries": [
    {
      "hour": "2025-02-20 18:00:00",
      "total_cost": 12.34,
      "total_requests": 1000,
      "avg_duration": 250.5,
      "duration": 245.7,
      "duration_p99": 750.2,
      "duration_p95": 500.1,
      "duration_p90": 400.8,
      "duration_p50": 200.3,
      "total_duration": 1,
      "total_input_tokens": 1,
      "total_output_tokens": 1,
      "error_rate": 1,
      "error_request_count": 1,
      "avg_ttft": 1,
      "ttft": 1,
      "ttft_p99": 1,
      "ttft_p95": 1,
      "ttft_p90": 1,
      "ttft_p50": 1,
      "tps": 1,
      "tps_p99": 1,
      "tps_p95": 1,
      "tps_p90": 1,
      "tps_p50": 1,
      "tpot": 0.85,
      "tpot_p99": 1.5,
      "tpot_p95": 1.2,
      "tpot_p90": 1,
      "tpot_p50": 0.75,
      "tag_tuple": [
        "text"
      ]
    }
  ],
  "start_time_us": 1,
  "end_time_us": 1
}
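
The sketch below queries the last 24 hours by computing the microsecond bounds client-side with Python's time module; host and credentials are placeholders, and the response is iterated per hourly bucket as shown above.

import time

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
    "Content-Type": "application/json",
}

# Both bounds are expressed in microseconds; here we ask for the last 24 hours.
now_us = int(time.time() * 1_000_000)
day_us = 24 * 60 * 60 * 1_000_000

response = requests.post(
    f"{BASE_URL}/analytics",
    headers=HEADERS,
    json={"start_time_us": now_us - day_us, "end_time_us": now_us},
)
response.raise_for_status()

for bucket in response.json()["timeseries"]:
    print(bucket["hour"], bucket["total_requests"], bucket["total_cost"])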

Fetch analytics summary

POST /analytics/summary
Authorizations
Header parameters
X-Project-Id · string · Required

LangDB project ID

Body
start_time_us · integer · int64 · Optional · Example: 1693062345678
end_time_us · integer · int64 · Optional · Example: 1693082345678
groupBy · string[] · Required · Example: ["provider"]
Responses
200 · Successful response · application/json

Example request

POST /analytics/summary HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Content-Type: application/json
Accept: */*
Content-Length: 82

{
  "start_time_us": 1693062345678,
  "end_time_us": 1693082345678,
  "groupBy": [
    "provider"
  ]
}
Example response (200 OK)

{
  "summary": [
    {
      "tag_tuple": [
        "openai",
        "gpt-4"
      ],
      "total_cost": 156.78,
      "total_requests": 5000,
      "total_duration": 1250000,
      "avg_duration": 250,
      "duration": 245.5,
      "duration_p99": 750,
      "duration_p95": 500,
      "duration_p90": 400,
      "duration_p50": 200,
      "total_input_tokens": 100000,
      "total_output_tokens": 50000,
      "avg_ttft": 100,
      "ttft": 98.5,
      "ttft_p99": 300,
      "ttft_p95": 200,
      "ttft_p90": 150,
      "ttft_p50": 80,
      "tps": 10.5,
      "tps_p99": 20,
      "tps_p95": 15,
      "tps_p90": 12,
      "tps_p50": 8,
      "tpot": 0.85,
      "tpot_p99": 1.5,
      "tpot_p95": 1.2,
      "tpot_p90": 1,
      "tpot_p50": 0.75,
      "error_rate": 1,
      "error_request_count": 1
    }
  ],
  "start_time_us": 1,
  "end_time_us": 1
}
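
A grouped summary works the same way; the sketch below reuses the example timestamps, groups by provider, and reads tag_tuple from each summary row. Credentials are placeholders.

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
    "Content-Type": "application/json",
}

payload = {
    "start_time_us": 1693062345678,  # values reused from the request example above
    "end_time_us": 1693082345678,
    "groupBy": ["provider"],
}

response = requests.post(f"{BASE_URL}/analytics/summary", headers=HEADERS, json=payload)
response.raise_for_status()

# tag_tuple carries the values of the groupBy dimensions for each summary row.
for row in response.json()["summary"]:
    print(row["tag_tuple"], row["total_requests"], row["total_cost"])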

Get total usage

POST /usage/total
Authorizations
Header parameters
X-Project-Id · string · Required

LangDB project ID

Body
start_time_us · integer · int64 · Optional · Example: 1693062345678
end_time_us · integer · int64 · Optional
Responses
200 · OK · application/json

Example request

POST /usage/total HTTP/1.1
Host: api.us-east-1.langdb.ai
Authorization: Bearer YOUR_SECRET_TOKEN
X-Project-Id: text
Content-Type: application/json
Accept: */*
Content-Length: 47

{
  "start_time_us": 1693062345678,
  "end_time_us": 1
}
Example response (200 OK)

{
  "models": [
    {
      "provider": "openai",
      "model_name": "gpt-4o",
      "total_input_tokens": 3196182,
      "total_output_tokens": 74096,
      "total_cost": 10.4776979999,
      "cost_per_input_token": 3,
      "cost_per_output_token": 12
    }
  ],
  "total": {
    "total_input_tokens": 4181386,
    "total_output_tokens": 206547,
    "total_cost": 11.8904386859
  },
  "period_start": 1737504000000000,
  "period_end": 1740120949421000
}
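
For total usage, both time bounds are optional; the sketch below sends an empty body (assumed to return the default reporting period) and prints per-model and overall costs from the response shape shown above. Credentials are placeholders.

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"
HEADERS = {
    "Authorization": "Bearer YOUR_SECRET_TOKEN",
    "X-Project-Id": "YOUR_PROJECT_ID",
    "Content-Type": "application/json",
}

# Both time bounds are optional; omitting them is assumed to return the default period.
response = requests.post(f"{BASE_URL}/usage/total", headers=HEADERS, json={})
response.raise_for_status()
usage = response.json()

for model in usage["models"]:
    print(model["provider"], model["model_name"], model["total_cost"])
print("overall cost:", usage["total"]["total_cost"])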

Retrieve pricing information

GET /pricing

Returns the pricing details for LangDB services.

Responses
200 · Successful retrieval of pricing information · application/json

Example request

GET /pricing HTTP/1.1
Host: api.us-east-1.langdb.ai
Accept: */*
Example response (200 OK)

{
  "model": "gpt-3.5-turbo-0125",
  "provider": "openai",
  "price": {
    "per_input_token": 0.5,
    "per_output_token": 1.5,
    "valid_from": null
  },
  "input_formats": [
    "text"
  ],
  "output_formats": [
    "text"
  ],
  "capabilities": [
    "tools"
  ],
  "type": "completions",
  "limits": {
    "max_context_size": 16385
  }
}
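
The pricing endpoint is a simple GET, shown without an Authorization header in the example above. The sketch below mirrors that and parses the single-object sample response; the live endpoint may return one such entry per model.

import requests

# The sample request carries no Authorization header, so no token is sent here.
response = requests.get("https://api.us-east-1.langdb.ai/pricing")
response.raise_for_status()

pricing = response.json()
print(pricing["model"],
      pricing["price"]["per_input_token"],
      pricing["price"]["per_output_token"])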

List models

GET /models
Responses
200 · OK · application/json

Example request

GET /models HTTP/1.1
Host: api.us-east-1.langdb.ai
Accept: */*
Example response (200 OK)

{
  "object": "list",
  "data": [
    {
      "id": "o1-mini",
      "object": "model",
      "created": 1686935002,
      "owned_by": "openai"
    }
  ]
}
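
Listing models is likewise a single GET; the sketch below walks the data array of the list envelope shown above.

import requests

response = requests.get("https://api.us-east-1.langdb.ai/models")
response.raise_for_status()

# The response uses the list envelope shown above: {"object": "list", "data": [...]}.
for model in response.json()["data"]:
    print(model["id"], model["owned_by"])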

Create a new model

POST /admin/models

Register and configure a new LLM under your LangDB project.

Authorizations
Header parameters
X-Admin-Key · string · Required

LangDB Admin Key

Body
model_name · string · Required · Example: my-model
description · string · Required · Example: A custom completions model for text and image inputs
provider_info_id · string · uuid · Required · Example: e2e9129b-6661-4eeb-80a2-0c86964974c9
project_id · string · Required · Example: 55f4a12b-74c8-4294-8e4b-537f13fc3861
public · boolean · Optional · Example: false
request_response_mapping · string · Optional · Example: openai-compatible
model_type · string · Required · Example: completions
input_token_price · number · float | nullable · Optional · Example: 0.00001
output_token_price · number · float | nullable · Optional · Example: 0.00003
context_size · integer | nullable · Optional · Example: 128000
capabilities · string[] · Optional · Example: ["tools"]
input_types · string[] · Optional · Example: ["text","image"]
output_types · string[] · Optional · Example: ["text","image"]
tags · string[] · Optional
mp_price · number · float | nullable · Optional
owner_name · string · Required · Example: openai
priority · integer · Required · Example: 0
model_name_in_provider · string · Optional · Example: my-model-v1.2
parameters · object · Optional

Additional configuration parameters

Example: {"top_k":{"default":0,"description":"Limits the token sampling to only the top K tokens.","min":0,"required":false,"step":1,"type":"int"},"top_p":{"default":1,"description":"Nucleus sampling alternative.","max":1,"min":0,"required":false,"step":0.05,"type":"float"}}
Responses
200 · Created · application/json

Example request

POST /admin/models HTTP/1.1
Host: api.xxx.langdb.ai
Authorization: Bearer JWT
X-Admin-Key: text
Content-Type: application/json
Accept: */*
Content-Length: 884

{
  "model_name": "my-model",
  "description": "A custom completions model for text and image inputs",
  "provider_info_id": "e2e9129b-6661-4eeb-80a2-0c86964974c9",
  "project_id": "55f4a12b-74c8-4294-8e4b-537f13fc3861",
  "public": false,
  "request_response_mapping": "openai-compatible",
  "model_type": "completions",
  "input_token_price": 0.00001,
  "output_token_price": 0.00003,
  "context_size": 128000,
  "capabilities": [
    "tools"
  ],
  "input_types": [
    "text",
    "image"
  ],
  "output_types": [
    "text",
    "image"
  ],
  "tags": [],
  "type_prices": {
    "text_generation": 0.00002
  },
  "mp_price": null,
  "owner_name": "openai",
  "priority": 0,
  "model_name_in_provider": "my-model-v1.2",
  "parameters": {
    "top_k": {
      "default": 0,
      "description": "Limits the token sampling to only the top K tokens.",
      "min": 0,
      "required": false,
      "step": 1,
      "type": "int"
    },
    "top_p": {
      "default": 1,
      "description": "Nucleus sampling alternative.",
      "max": 1,
      "min": 0,
      "required": false,
      "step": 0.05,
      "type": "float"
    }
  }
}
Example response (200 Created)

{
  "id": "55f4a12b-74c8-4294-8e4b-537f13fc3861",
  "model_name": "my-model",
  "description": "A custom completions model for text and image inputs",
  "provider_info_id": "e2e9129b-6661-4eeb-80a2-0c86964974c9",
  "model_type": "completions",
  "input_token_price": "0.00001",
  "output_token_price": "0.00003",
  "context_size": 128000,
  "capabilities": [
    "tools"
  ],
  "input_types": [
    "text",
    "image"
  ],
  "output_types": [
    "text",
    "image"
  ],
  "tags": [],
  "type_prices": null,
  "mp_price": null,
  "model_name_in_provider": "my-model-v1.2",
  "owner_name": "openai",
  "priority": 0,
  "parameters": {
    "top_k": {
      "default": 0,
      "description": "Limits the token sampling to only the top K tokens.",
      "min": 0,
      "required": false,
      "step": 1,
      "type": "int"
    },
    "top_p": {
      "default": 1,
      "description": "An alternative to sampling with temperature.",
      "max": 1,
      "min": 0,
      "required": false,
      "step": 0.05,
      "type": "float"
    }
  }
}
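
The registration above can also be issued from Python; the sketch below sends the required fields plus a few optional ones from the example via the requests library. The region host, JWT, and admin key are placeholders (the sample request elides the region as api.xxx.langdb.ai).

import requests

BASE_URL = "https://api.us-east-1.langdb.ai"  # placeholder region host; the sample shows api.xxx.langdb.ai
HEADERS = {
    "Authorization": "Bearer YOUR_JWT",
    "X-Admin-Key": "YOUR_ADMIN_KEY",
    "Content-Type": "application/json",
}

payload = {
    "model_name": "my-model",
    "description": "A custom completions model for text and image inputs",
    "provider_info_id": "e2e9129b-6661-4eeb-80a2-0c86964974c9",
    "project_id": "55f4a12b-74c8-4294-8e4b-537f13fc3861",
    "model_type": "completions",
    "owner_name": "openai",
    "priority": 0,
    "input_token_price": 0.00001,
    "output_token_price": 0.00003,
    "context_size": 128000,
    "capabilities": ["tools"],
    "input_types": ["text", "image"],
    "output_types": ["text", "image"],
}

response = requests.post(f"{BASE_URL}/admin/models", headers=HEADERS, json=payload)
response.raise_for_status()

# The created model's ID comes back in the response body.
print(response.json()["id"])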
