Secure against OWASP Top 10 risks

OWASP Top 10 risks for LLM applications

The rapidly evolving field of LLM applications introduces new kinds of vulnerabilities. The OWASP Foundation, a non-profit organization dedicated to democratizing and standardizing software security, has summarized these emerging threats into ten risk categories.

The list below describes how WhyLabs can help address each of the OWASP Top 10 LLM security risks. Each entry pairs the risk description, which follows OWASP's own summaries, with the corresponding WhyLabs solution.

LLM01: Prompt Injection
Risk description: Attackers manipulate a large language model (LLM) through crafted inputs, causing unintended actions by the LLM. Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.
WhyLabs solution: WhyLabs offers prompt injection and jailbreak detection out of the box under the "Bad actors" policy rule set, enabling real-time blocking of prompts deemed malicious.
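
As a rough illustration of how such a guardrail can sit in front of an LLM call, the sketch below scores an incoming prompt with the open-source LangKit library. The `prompt.injection` metric name and the 0.5 threshold are assumptions for illustration, not documented defaults; verify them against your LangKit version.

```python
# Minimal sketch: score a prompt for injection likelihood before it
# reaches the LLM. Assumes LangKit's `injections` module registers a
# similarity-based `prompt.injection` metric (an assumption to verify);
# the threshold is illustrative.
from langkit import injections, extract  # pip install langkit

INJECTION_THRESHOLD = 0.5  # illustrative cut-off, tune per application

def guard_prompt(prompt: str) -> str:
    metrics = extract({"prompt": prompt})
    score = metrics.get("prompt.injection", 0.0)
    if score > INJECTION_THRESHOLD:
        raise ValueError(f"Prompt blocked: injection score {score:.2f}")
    return prompt
```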

LLM02: Insecure Output Handling
Risk description: This vulnerability occurs when an LLM output is accepted without scrutiny, exposing backend systems. Misuse may lead to severe consequences like XSS, CSRF, SSRF, privilege escalation, or remote code execution.
WhyLabs solution: WhyLabs extracts an array of security-focused metrics from the LLM output: it can identify various PII classes, undesirable topics (e.g. legal), and attempts to return unsafe HTML or code, and it also supports custom metrics. Based on these metrics, insecure responses can be blocked from reaching the end user, which is easily configurable with the "Misuse" policy rule set. Additionally, the guardrails applied to the prompt can prevent the LLM from falling for adversarial attacks and exposing insecure information.
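
In application code, this kind of gating can be approximated by extracting metrics from the response and withholding it when a check fails. The sketch below uses LangKit's regexes module; the `response.has_patterns` metric name is an assumption to verify locally.

```python
# Minimal sketch: gate an LLM response on a pattern-matching metric
# before returning it. `response.has_patterns` is assumed to report
# which sensitive pattern group (e.g. email, SSN) was matched.
from langkit import regexes, extract  # pip install langkit

def guard_response(response: str) -> str:
    metrics = extract({"response": response})
    if metrics.get("response.has_patterns"):
        return "[response withheld: potentially sensitive content]"
    return response
```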

LLM03: Training Data Poisoning
Risk description: This occurs when LLM training data is tampered with, introducing vulnerabilities or biases that compromise security, effectiveness, or ethical behavior. Sources include Common Crawl, WebText, OpenWebText, and books.
WhyLabs solution: When access to the LLM training data is available, it can also be profiled with WhyLabs and inspected for potential issues like low sentiment, inaccuracy, and injections.
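
For example, a training corpus loaded as a pandas DataFrame can be profiled with the open-source whylogs library and uploaded to WhyLabs for inspection. The file and column contents are illustrative, and writing to WhyLabs assumes the usual API-key environment variables are configured.

```python
# Minimal sketch: profile a training dataset with whylogs.
import pandas as pd
import whylogs as why  # pip install whylogs

df = pd.read_csv("training_data.csv")  # illustrative path and columns
results = why.log(df)                  # build a statistical profile
# Upload for inspection; assumes WHYLABS_API_KEY etc. are set.
results.writer("whylabs").write()
```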

LLM04: Model Denial of Service
Risk description: Attackers cause resource-heavy operations on LLMs, leading to service degradation or high costs. The vulnerability is magnified by the resource-intensive nature of LLMs and the unpredictability of user inputs.
WhyLabs solution: WhyLabs offers guardrails against potential LLM exploitation and the high costs incurred by prohibitively long prompts and responses via the "Cost" policy rule set.
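
Client-side, the same idea reduces to rejecting oversized inputs before they incur inference costs. The character budget below is purely illustrative, since the actual limits in the "Cost" rule set are configurable.

```python
# Minimal sketch: reject prohibitively long prompts before they are
# sent to the LLM. The character budget is illustrative.
MAX_PROMPT_CHARS = 8_000

def enforce_cost_limit(prompt: str) -> str:
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"Prompt rejected: {len(prompt)} chars exceeds {MAX_PROMPT_CHARS}"
        )
    return prompt
```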

LLM05: Supply Chain Vulnerabilities
Risk description: The LLM application lifecycle can be compromised by vulnerable components or services, leading to security attacks. Using third-party datasets, pre-trained models, and plugins can add vulnerabilities.
WhyLabs solution: WhyLabs can help detect supply chain vulnerabilities by scanning third-party datasets for issues before they are incorporated into the LLM solution, evaluating the performance of pre-trained models, and monitoring plugins through traces and custom metrics.

LLM06: Sensitive Information Disclosure
Risk description: LLMs may inadvertently reveal confidential data in their responses, leading to unauthorized data access, privacy violations, and security breaches. It is crucial to implement data sanitization and strict user policies to mitigate this.
WhyLabs solution: WhyLabs scans LLM responses for various PII types, such as email addresses, phone numbers, and Social Security numbers, and can either block such responses from reaching the end user or automatically remove the insecure data via custom callbacks.
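
The automatic-removal path can be pictured as a custom callback that redacts matched PII before the response leaves the system. The regular expressions below are illustrative and deliberately not exhaustive.

```python
# Minimal sketch: redact common PII patterns from a response via a
# custom callback. Patterns are illustrative, not production-grade.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```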

LLM07: Insecure Plugin Design
Risk description: LLM plugins can have insecure inputs and insufficient access control. This lack of application control makes them easier to exploit and can result in consequences like remote code execution.
WhyLabs solution: LLM plugins can be monitored by introducing custom metrics dedicated to the signals they emit. Additionally, traces can capture the actions of agents at each step, enabling debugging and the identification of access control pitfalls.

LLM08: Excessive Agency
Risk description: LLM-based systems may undertake actions leading to unintended consequences. The issue arises from excessive functionality, permissions, or autonomy granted to the LLM-based systems.
WhyLabs solution: Actions performed by LLM agents, as well as the permissions granted to them, should be monitored via traces and/or custom metrics to detect excessive agency and related issues.

LLM09: Overreliance
Risk description: Systems or people overly dependent on LLMs without oversight may face misinformation, miscommunication, legal issues, and security vulnerabilities due to incorrect or inappropriate content generated by LLMs.
WhyLabs solution: WhyLabs helps prevent overreliance on the LLM by applying guardrails that enable automated interventions: blocking, callbacks, redirecting the prompt to a human agent, cleaning PII data, and other custom actions that mitigate these risks.

LLM10: Model Theft
Risk description: This involves unauthorized access, copying, or exfiltration of proprietary LLM models. The impact includes economic losses, compromised competitive advantage, and potential access to sensitive information.
WhyLabs solution: WhyLabs protects users from model theft by detecting hacking attempts, such as remote code execution, injection attempts aimed at extracting model weights, and private information extraction. WhyLabs also monitors the LLM output for any potential leakage of the model's internal information.