8 docs tagged with "guardrails"

Bad Actor Ruleset

Bad Actor is designed to prevent bad actors from interacting with your LLM applications through the detection of jailbreak and injection attacks.

Cost Ruleset

Cost is designed to monitor LLM costs. This ruleset doesn't currently have an associated score; rather, it enables character and

Customer Experience Ruleset

Customer Experience is designed to prevent various issues that impact the application experience by detecting sentiment, toxicity, refusals, and any PII that a user attempts to share.

Guardrails API

As part of the WhyLabs Secure solution, the Guardrails API is a RESTful API that allows you to interact with the WhyLabs Guardrails service. This powerful tool enables organizations to implement robust safety measures and control mechanisms for their AI and ML systems, particularly for Large Language Models (LLMs).
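Since the Guardrails API is RESTful, a client typically composes an authenticated JSON request per evaluation. As an illustration only — the endpoint URL, header name, and payload fields below are assumptions, not the documented API contract — such a request might be assembled like this:

```python
import json

def build_guardrails_request(prompt: str, dataset_id: str, api_key: str) -> dict:
    """Compose a hypothetical guardrails evaluation request.

    The URL, auth header, and payload field names here are illustrative
    assumptions; consult the Guardrails API reference for the actual contract.
    """
    return {
        "method": "POST",
        "url": "https://example.invalid/evaluate",  # hypothetical endpoint
        "headers": {
            "X-API-Key": api_key,  # hypothetical auth header
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "prompt": prompt,          # text to screen before it reaches the LLM
            "datasetId": dataset_id,   # hypothetical project identifier
        }),
    }

# Build (but do not send) a sample request.
request = build_guardrails_request("Hello!", "model-1", "secret")
```

The sketch separates request construction from transport, so the same payload could be sent with any HTTP client; the actual endpoint paths and authentication scheme come from the Guardrails API documentation.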

Misuse Ruleset

Misuse is designed to detect the LLM being used in a way that wasn't intended through the detection of various topics and PII. For example,

Truthfulness Ruleset

Truthfulness is designed to detect falsehoods (by opting into hallucination detection) or off-topic responses from the LLM. For example, you can detect

WhyLabs Secure Policy

WhyLabs Secure Policy is the central place where you define the rules and actions that apply to your LLM applications. These constraints
