
LLM Integrations

We have created a dedicated LLM version of the whylogs container to integrate your existing LLM applications with WhyLabs. The whylogs container is a low-code solution that relies on API calls to asynchronously profile and monitor your data with WhyLabs. To learn more about how to use the whylogs container, refer to the docs.

Configuration

There are two ways to configure the LLM whylogs container: with environment variables or with a Python configuration file. The minimal variables needed to deploy the container are:

# Your WhyLabs org id
WHYLABS_ORG_ID=org-0
# An api key from the org above
WHYLABS_API_KEY=xxxxxxxxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# One of these two must be set
# Sets the container password to `password`. See the auth section for details
CONTAINER_PASSWORD=password
# If you don't care about password-protecting the container, you can set this to True.
DISABLE_CONTAINER_PASSWORD=True

# A comma-separated string of WhyLabs' dataset IDs for LLM use-cases. Required to use the LLM proxy on the container.
LLM_DATASET_IDS="model-13, model-16"

To learn more about other container configuration options, refer to this section.
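For client code that talks to more than one model, you may want to keep the dataset IDs you send on each request consistent with the comma-separated LLM_DATASET_IDS value the container was started with. The helper below is a minimal sketch for illustration only; the function name and the environment lookup are assumptions, not part of the container:

import os

def allowed_llm_dataset_ids() -> list[str]:
    # Parse the comma-separated LLM_DATASET_IDS value, e.g. "model-13, model-16".
    raw = os.environ.get("LLM_DATASET_IDS", "")
    return [dataset_id.strip() for dataset_id in raw.split(",") if dataset_id.strip()]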

LLM - OpenAI Proxy endpoint

The /v1/chat/completions endpoint on the container serves as a proxy for making requests to OpenAI's Completions API. It takes a request prompt and forwards it to the OpenAI API. Additionally, it transforms the user prompt message and the generated response from OpenAI into whylogs profiles using langkit and uploads these profiles to WhyLabs automatically. By using this proxy, your LLM application is automatically monitored by WhyLabs while maintaining a similar input and output experience for the API client.

This endpoint is only available in our dedicated LLM image, so you will need to use a different tag for it:

docker run --env-file local.env whylabs/whylogs:py-llm-latest

Or you can also build your own image using our dedicated LLM Dockerfile:

docker build -t llm-whylogs-container -f Dockerfile.llm .
docker run -it --net=host --env-file local.env llm-whylogs-container

Usage

The OpenAI proxy endpoint is designed to change OpenAI's Completions request structure as little as possible. It only requires one extra request parameter, whylabs_dataset_id, which ties a specific prompt to an existing WhyLabs dataset ID. You will need to provide it both in the LLM_DATASET_IDS environment variable and on each request, so that the container can log multiple OpenAI models at once through the same endpoint.

import os

import requests

# Forward the chat completion request through the container's OpenAI proxy.
resp = requests.post(
    url="http://localhost:8000/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        # The proxy forwards your OpenAI API key to the OpenAI API.
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    },
    # Ties this request to one of the dataset IDs set in LLM_DATASET_IDS.
    params={"whylabs_dataset_id": "model-X"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Say this is a test!"}
        ],
    },
)

This will return:

{
  "id": "chatcmpl-{ID}",
  "object": "chat.completion",
  "created": {timestamp},
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "This is a test!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 5,
    "total_tokens": 18
  }
}

The prompt and response are also asynchronously profiled and pushed to WhyLabs on a pre-defined cadence of 5 minutes. The LLM dataset is pre-configured to be daily, so all prompts and responses are merged into the daily profile.
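If you prefer the official openai Python SDK (v1+) over raw HTTP calls, you may be able to point it at the proxy by overriding its base URL and passing whylabs_dataset_id as an extra query parameter. This is a minimal sketch under that assumption; only the raw endpoint above is documented for the proxy, so treat the SDK flow as untested:

import os

from openai import OpenAI

# Point the SDK at the container's proxy instead of api.openai.com.
client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],  # forwarded to OpenAI by the proxy
    base_url="http://localhost:8000/v1",
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test!"}],
    # Ties this call to one of the dataset IDs configured in LLM_DATASET_IDS.
    extra_query={"whylabs_dataset_id": "model-X"},
)
print(resp.choices[0].message.content)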

LLM - Validations

Coming soon! 👀

Troubleshooting

If you need help setting up the container, reach out to us on Slack or via email. See the GitHub repo for submitting issues and feature requests.
