
Integration: IBM watsonx

The IBM® watsonx™ AI and data platform includes three core components and a set of AI assistants designed to help you scale and accelerate the adoption of generative AI. Integrating with IBM watsonx is straightforward with the openllmtelemetry package.

First, you need to set a few environment variables. This can be done via your container setup or in code, as in the sketch below.
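The variable names in this sketch are assumptions inferred from the interactive prompts shown below; check your version of openllmtelemetry for the exact names it reads.

import os

# Assumed variable names; confirm against your openllmtelemetry version.
os.environ["WHYLABS_API_KEY"] = "<your-whylabs-api-key>"
os.environ["GUARDRAILS_ENDPOINT"] = "https://<guardrails-endpoint>"  # leave unset to skip guardrails
os.environ["GUARDRAILS_API_KEY"] = "<your-guardrails-api-key>"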

import openllmtelemetry

# Instruments supported LLM clients; prompts interactively for any
# settings that are not already configured.
openllmtelemetry.instrument()

Output

Set WhyLabs API Key: ··········
Using WhyLabs API key with ID: D66fSX6Znf
Set GuardRails endpoint (leave blank to skip guardrail): https://<guardrails-endpoint>
Set GuardRails API Key: ··········
Using GuardRails API key with prefix: wjYDn8
Do you want to save these settings to a configuration file? [y/n]: y

Once this is done, all of your watsonx interactions will be traced automatically. If you have rulesets enabled for blocking in your WhyLabs Secure policy, the library will block requests accordingly.

from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.foundation_models.utils.enums import ModelTypes

# watsonx.ai endpoint and API key for your instance
my_credentials = {
    "url": "<ENDPOINT>",
    "apikey": "<API-KEY>",
}
model_id = ModelTypes.LLAMA_2_13B_CHAT
gen_parms = {}  # default generation parameters
project_id = "<YOUR-PROJECT-ID>"
space_id = None
verify = False

model = ModelInference(
    model_id=model_id,
    credentials=my_credentials,
    params=gen_parms,
    project_id=project_id,
    space_id=space_id,
    verify=verify,
)

gen_parms_override = None
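From here, a generation call goes through the instrumented client, so it is traced (and, if configured, guarded) automatically. A minimal sketch, assuming the standard ModelInference.generate API and an illustrative prompt:

# Illustrative prompt; any generation request is traced automatically.
prompt_txt = "What is AI observability?"
generated_response = model.generate(prompt=prompt_txt, params=gen_parms_override)
print(generated_response)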