Guardrails
Enforce safety, compliance, and quality with LangDB guardrails—moderate content, validate responses, and detect security risks.

Guardrail Behaviour
Example 1: Input Rejected by Guard
Example 2: Output Rejected by Guard
Limitations
Guardrail Templates
Guardrail
Description
Toxicity Detection (content-toxicity)
content-toxicity)Parameter
Type
Description
Defaults
JSON Schema Validator (validation-json-schema)
validation-json-schema)Parameter
Type
Description
Defaults
Competitor Mention Check (content-competitor-mentions)
content-competitor-mentions)Parameter
Type
Description
Defaults
PII Detection (security-pii-detection)
security-pii-detection)Parameter
Type
Description
Defaults
Prompt Injection Detection (security-prompt-injection)
security-prompt-injection)Parameter
Type
Description
Defaults
Company Policy Compliance (compliance-company-policy)
compliance-company-policy)Parameter
Type
Description
Defaults
Regex Pattern Validator (validation-regex-pattern)
validation-regex-pattern)Parameter
Type
Description
Defaults
Word Count Validator (validation-word-count)
validation-word-count)Parameter
Type
Description
Defaults
Sentiment Analysis (content-sentiment-analysis)
content-sentiment-analysis)Parameter
Type
Description
Defaults
Language Validator (content-language-validation)
content-language-validation)Parameter
Type
Description
Defaults
Topic Adherence (content-topic-adherence)
content-topic-adherence)Parameter
Type
Description
Defaults
Factual Accuracy (content-factual-accuracy)
content-factual-accuracy)Parameter
Type
Description
Defaults
Last updated
Was this helpful?