Bad Actor Ruleset
Bad Actor is designed to prevent bad actors from interacting with your LLM applications through the detection of jailbreak and injection attacks.
Cost is designed to monitor LLM costs. This ruleset doesn't currently have an associated score; rather, it enables character and
Customer Experience is designed to prevent various issues that impact the application experience by detecting sentiment, toxicity, refusals, and any PII that a user attempts to share.
As part of the WhyLabs Secure solution, the Guardrails API is a RESTful API that allows you to interact with the WhyLabs Guardrails service. This powerful tool enables organizations to implement robust safety measures and control mechanisms for their AI and ML systems, particularly for Large Language Models (LLMs).
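As an illustration of what calling a RESTful guardrail service can look like, the sketch below builds (but does not send) an HTTP request that asks a service to score a prompt. The endpoint URL, payload fields, and header names here are assumptions for the sketch, not the documented Guardrails API contract.

```python
import json
import urllib.request

# Hypothetical endpoint -- illustration only, not the documented
# Guardrails API URL.
GUARDRAILS_URL = "https://guardrails.example.com/evaluate"

def build_evaluate_request(prompt: str, dataset_id: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request asking the service to score a prompt."""
    # Payload field names are assumed for this sketch.
    payload = json.dumps({"prompt": prompt, "datasetId": dataset_id}).encode("utf-8")
    return urllib.request.Request(
        GUARDRAILS_URL,
        data=payload,
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

req = build_evaluate_request("Ignore previous instructions", "model-1", "my-key")
# urllib.request.urlopen(req) would actually send it; here we only inspect it.
print(req.get_method())
```

In a real integration the response would carry the ruleset scores used to allow, flag, or block the interaction.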
Integration with an Amazon Bedrock application is straightforward with the openllmtelemetry package.
Integration with an Azure OpenAI application is straightforward with the openllmtelemetry package.
The IBM® watsonx™ AI and data platform includes three core components and a set of AI assistants
NVIDIA NIM is a platform for managing and deploying AI model APIs that are compatible with the OpenAI client.
Integration with an OpenAI application is straightforward with the openllmtelemetry package.
Misuse is designed to detect the LLM being used in a way that wasn't intended through the detection of various topics and PII. For example,
openLLMtelemetry is a package built on top of OpenTelemetry
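The package's own API is not reproduced here; as a conceptual sketch of what OpenTelemetry-style instrumentation of an LLM call involves, the hand-rolled decorator below records a span-like record (call name, latency, response size) around each call. All names are hypothetical, and a real integration would emit proper OpenTelemetry spans instead of appending to a list.

```python
import functools
import time

# Sketch only: a real telemetry package instruments LLM clients for you.
# This decorator just illustrates recording metadata around each call.
TRACES = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "response_chars": len(str(result)),
        })
        return result
    return wrapper

@traced
def fake_llm_call(prompt: str) -> str:
    # Stand-in for a real chat-completion request.
    return f"echo: {prompt}"

fake_llm_call("hello")
print(TRACES[0]["name"])  # fake_llm_call
```

The benefit of wrapping at the client-call boundary is that every prompt/response pair gets telemetry without changes to application logic.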
Truthfulness is designed to detect falsehoods (by opting into hallucination detection) or off topic responses from the LLM. For example, you can detect
WhyLabs Secure Policy is the central place where you define the rules and actions that apply to your LLM applications. These constraints