Using Parameters

Configure temperature, max_tokens, logit_bias, and more with LangDB AI Gateway. Test easily via API, UI, or Playground.

LangDB AI Gateway supports all standard LLM parameters, including temperature, max_tokens, stop sequences, logit_bias, and more.

API Usage

Python

from openai import OpenAI

# Point the standard OpenAI client at the LangDB AI Gateway
# (URL and key below are placeholders; use your own gateway endpoint and key)
client = OpenAI(
    base_url="https://your-langdb-gateway-url/v1",
    api_key="YOUR_LANGDB_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",  # change model as needed
    messages=[
        {"role": "user", "content": "What are the earnings of Apple in 2022?"},
    ],
    temperature=0.7,  # temperature parameter
    max_tokens=150,   # max_tokens parameter
    stop=["\n\n"],    # stop sequences parameter
    stream=True,      # stream parameter
)
Node.js

import OpenAI from 'openai';

// Standard OpenAI client pointed at the LangDB AI Gateway (URL and key are placeholders)
const client = new OpenAI({ baseURL: 'https://your-langdb-gateway-url/v1', apiKey: 'YOUR_LANGDB_API_KEY' });
const messages = [{ role: 'user', content: 'What are the earnings of Apple in 2022?' }];

const response = await client.chat.completions.create({
  model: 'gpt-4o-mini',          // change model as needed
  messages,                      // messages defined above
  temperature: 0.7,              // temperature parameter
  max_tokens: 150,               // max_tokens parameter
  logit_bias: { '50256': -100 }, // logit_bias parameter: strongly penalize token 50256
  stream: true,                  // stream parameter
});
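Both snippets set stream=True, so the response arrives as a stream of chunks rather than a single completion object. A minimal sketch of consuming the stream with the Python client above (the printing loop is illustrative, not LangDB-specific):

# Print tokens as they arrive; each chunk carries a small content delta.
for chunk in response:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)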

UI

You can also use the UI to test various parameters and generate a ready-to-use code snippet.

Playground

Use the Playground to tweak parameters in real time via the Virtual Model config and send test requests instantly.

[Screenshot: Trying different parameters for chat completions through the LangDB Playground]

Samples

Explore ready-made code snippets complete with preconfigured parameters: copy, paste, and customize them to fit your needs.

[Screenshot: Trying different parameters for chat completions through LangDB Samples]
