
Integration: NVIDIA NIM

NVIDIA NIM is a platform for managing and deploying AI model APIs that are compatible with the OpenAI client. Integrating an NVIDIA NIM application is straightforward with the openllmtelemetry package.

First, you need to set a few environment variables. This can be done in your container setup or in code.
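For example, the credentials can be supplied in code before instrumenting. The variable names and values below are placeholders, not guaranteed exact; check your WhyLabs account and the openllmtelemetry documentation for the keys your deployment expects.

```python
import os

# Placeholder values -- substitute the credentials from your WhyLabs
# account. Variable names are assumptions; verify them against the
# openllmtelemetry documentation for your version.
os.environ["WHYLABS_API_KEY"] = "YOUR_WHYLABS_API_KEY"
os.environ["WHYLABS_DEFAULT_ORG_ID"] = "YOUR_ORG_ID"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "YOUR_MODEL_ID"
```

Setting these before calling `openllmtelemetry.instrument()` ensures the library can pick them up at import time.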

import openllmtelemetry

# Patch supported LLM clients so subsequent calls are traced automatically
openllmtelemetry.instrument()

Once this is done, all of your LLM interactions will be automatically traced. If you have rulesets enabled for blocking in your WhyLabs Secure policy, the library will block requests accordingly.

from openai import OpenAI

# Point the OpenAI client at the NVIDIA NIM endpoint
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="GET_YOUR_API_KEY_FROM_NVIDIA",
)


completion = client.chat.completions.create(
    model="meta/llama3-70b-instruct",
    messages=[{"role": "user", "content": "Show me how to build a bike"}],
    temperature=0.5,
    top_p=1,
    max_tokens=1024,
)
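The response follows the standard OpenAI chat-completion shape, so the generated text is read from the first choice. Because a live call requires an NVIDIA API key, the sketch below uses a hand-built dictionary stand-in with the same structure; on the real `completion` object the same fields are accessed as attributes (`completion.choices[0].message.content`).

```python
# Stand-in for a live chat-completion response (dictionary form).
completion_dict = {
    "choices": [
        {"message": {"role": "assistant", "content": "1. Pick a frame..."}}
    ]
}

# Extract the assistant's reply from the first choice
reply = completion_dict["choices"][0]["message"]["content"]
print(reply)
```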