Bad Actor Ruleset
Bad Actor is designed to prevent bad actors from interacting with your LLM applications through the detection of jailbreak and injection attacks.
Cost is designed to monitor LLM costs. This ruleset doesn't have an associated score at the moment; instead, it enables character and token counts for prompts and responses.
Customer Experience is designed to prevent issues that degrade the application experience by detecting sentiment, toxicity, refusals, and any PII a user attempts to share.
Overview
As part of the WhyLabs Secure solution, the Guardrails API is a RESTful API that allows you to interact with the WhyLabs Guardrails service. This powerful tool enables organizations to implement robust safety measures and control mechanisms for their AI and ML systems, particularly for Large Language Models (LLMs).
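Since the Guardrails API is RESTful, an application typically sends the prompt (and optionally the LLM's response) as a JSON body for evaluation. The sketch below only assembles such a body; the endpoint URL, field names, and payload shape are illustrative assumptions, not the documented WhyLabs Guardrails schema.

```python
import json

# Placeholder URL -- substitute your actual Guardrails endpoint.
GUARDRAILS_ENDPOINT = "https://guardrails.example.com/evaluate"

def build_evaluation_request(prompt, response=None):
    """Assemble a JSON-serializable body for a guardrail evaluation call.

    The "prompt"/"response" field names are assumptions for illustration.
    """
    body = {"prompt": prompt}
    if response is not None:
        body["response"] = response
    return body

payload = build_evaluation_request("How do I reset my password?")
print(json.dumps(payload))
```

In practice you would POST this body to the endpoint with your HTTP client of choice and inspect the returned ruleset scores before forwarding the prompt to the LLM.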
Misuse is designed to detect the LLM being used in a way that wasn't intended, through the detection of various topics and PII. For example, a customer-support assistant can be flagged when asked for medical, legal, or financial advice.
Truthfulness is designed to detect falsehoods (by opting into hallucination detection) or off-topic responses from the LLM. For example, you can detect responses that stray from your application's intended topics.
WhyLabs Secure Policy is the central place where you define the rules and actions that apply to your LLM applications. These constraints determine which rulesets are evaluated and what action is taken when a rule is violated.
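The idea of rules plus actions can be pictured as thresholds over per-ruleset risk scores. A minimal sketch, assuming numeric scores where higher means riskier, and hypothetical "block"/"flag" actions (the threshold values and action names below are illustrative, not part of the documented policy format):

```python
# Hypothetical policy: each ruleset gets a risk threshold and an action.
POLICY = {
    "bad_actor": {"threshold": 50, "action": "block"},
    "misuse": {"threshold": 50, "action": "flag"},
    "customer_experience": {"threshold": 70, "action": "flag"},
}

def apply_policy(scores, policy=POLICY):
    """Return the most severe action triggered by the given ruleset scores.

    "block" outranks "flag", which outranks "pass" (no rule triggered).
    """
    severity = {"pass": 0, "flag": 1, "block": 2}
    action = "pass"
    for ruleset, score in scores.items():
        rule = policy.get(ruleset)
        if rule and score >= rule["threshold"]:
            if severity[rule["action"]] > severity[action]:
                action = rule["action"]
    return action

print(apply_policy({"bad_actor": 80, "misuse": 10}))  # prints "block"
```

Centralizing these thresholds in one policy object means the application code only needs to honor the returned action, while the rules themselves can be tuned without redeploying the application.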