
OpenAI

The simplest way to get started logging prompts and responses with LangKit is as follows. This is useful for getting things working and for seeing which LLM metrics you might want to include in profiling. For complete documentation, see the GitHub repo.

import openai
import whylogs as why
from langkit import llm_metrics

schema = llm_metrics.init()

openai.api_key = "xxx"

prompt = "Who won the world series in 2020?"
openai_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt},
    ],
)

response = openai_response["choices"][0]["message"]["content"]
profile = why.log({"prompt": prompt, "response": response}, schema=schema)

print(profile.view().to_pandas())  # See what's inside the profile
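The `to_pandas()` view is a DataFrame indexed by logged column name, with one row per metric-bearing column, so you can slice it with ordinary pandas operations. A minimal sketch of pulling out just the prompt-side metrics, using a mock DataFrame in place of a real profile (the metric names and the `distribution/mean` column here are illustrative assumptions, not guaranteed LangKit output):

```python
import pandas as pd

# Mock of profile.view().to_pandas(): the real view is indexed by column
# name, one row per logged column. Names below are illustrative only.
mock_view = pd.DataFrame(
    {"distribution/mean": [0.02, 0.45, 0.01, 0.38]},
    index=[
        "prompt.toxicity",
        "prompt.sentiment_nltk",
        "response.toxicity",
        "response.sentiment_nltk",
    ],
)

# Select only the prompt-side metrics by filtering on the index prefix
prompt_metrics = mock_view[mock_view.index.str.startswith("prompt.")]
print(prompt_metrics)
```

The same index-prefix filter works on a real profile view, since prompt and response columns are distinguished by their name prefixes.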

For logging to WhyLabs continuously from an application or service, you'll want to use the following configuration.

import os
import openai
import whylogs as why
from whylogs.api.writer.whylabs import WhyLabsWriter
from langkit import llm_metrics

os.environ["WHYLABS_DEFAULT_ORG_ID"] = "YOUR-ORG-ID"
os.environ["WHYLABS_API_KEY"] = "YOUR-API-KEY"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "YOUR-MODEL-ID"

schema = llm_metrics.init()

openai.api_key = "xxx"

# Upload profiles to WhyLabs every 15 minutes
logger = why.logger(mode="rolling", interval=15, when="M", schema=schema)
logger.append_writer(WhyLabsWriter())  # WhyLabs credentials are read from env

prompt = "Who won the world series in 2020?"
openai_response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": prompt},
    ],
)

response = openai_response["choices"][0]["message"]["content"]
logger.log({"prompt": prompt, "response": response})
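The rolling logger buffers data between uploads, so in a long-running service you should close it on shutdown to flush the last partial profile (whylogs loggers expose a `close()` method for this). A minimal sketch of the shutdown hook, using a stub in place of the real whylogs logger so it runs standalone:

```python
import atexit


class RollingLoggerStub:
    """Stand-in for the whylogs rolling logger; close() flushes pending data."""

    def __init__(self):
        self.closed = False

    def close(self):
        # In whylogs, close() writes any partially filled profile before exit.
        self.closed = True


logger = RollingLoggerStub()
# Register cleanup so the final partial profile isn't lost on shutdown.
atexit.register(logger.close)
```

With the real logger, `atexit.register(logger.close)` (or an equivalent signal handler in your service framework) plays the same role.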