Integration: Amazon Bedrock

Integrating an Amazon Bedrock application is straightforward with the openllmtelemetry package.

First, you need to set a few environment variables. This can be done via your container setup or via code.
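As a minimal sketch of the in-code approach, you can export the variables before instrumenting. The variable names below are assumptions based on common WhyLabs conventions; confirm the exact names and values in your WhyLabs account settings.

import os

# Assumed WhyLabs variable names; verify against your WhyLabs dashboard
os.environ["WHYLABS_API_KEY"] = "<your-whylabs-api-key>"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "<your-dataset-id>"
# Assumed names for WhyLabs Secure; only needed if you use blocking rulesets
os.environ["WHYLABS_GUARD_ENDPOINT"] = "<your-guard-endpoint>"
os.environ["WHYLABS_GUARD_API_KEY"] = "<your-guard-api-key>"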

import openllmtelemetry

# Instruments supported LLM clients, including boto3 Bedrock clients, for tracing
openllmtelemetry.instrument()

Once this is done, all of your Amazon Bedrock interactions will be automatically traced. If you have rulesets enabled for blocking in your WhyLabs Secure policy, the library will also block requests accordingly.

import boto3
import json

# Create the connections to Bedrock. The 'bedrock' client is the control plane;
# the 'bedrock-runtime' client is the one used to invoke models.
bedrock = boto3.client(
    service_name='bedrock',
    region_name='us-west-2',
)

bedrock_runtime = boto3.client(
    service_name='bedrock-runtime',
    region_name='us-west-2',
)

# Define prompt and model parameters
prompt_data = """Write me a poem about apples"""

# textGenerationConfig holds inference parameters for Amazon Titan models.
# It is left unused here because the model invoked below is an AI21 model.
text_gen_config = {
    "maxTokenCount": 512,
    "stopSequences": [],
    "temperature": 0,
    "topP": 0.9
}

body = json.dumps({
    # Amazon Titan models take "inputText" and "textGenerationConfig";
    # the AI21 Jurassic-2 model used below expects "prompt" instead.
    # "inputText": prompt_data,
    "prompt": prompt_data,
    # "textGenerationConfig": text_gen_config
})

model_id = 'ai21.j2-ultra-v1'
accept = 'application/json'
content_type = 'application/json'

# Invoke the model through the instrumented runtime client
response = bedrock_runtime.invoke_model(
    body=body,
    modelId=model_id,
    accept=accept,
    contentType=content_type
)
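The response body is a streaming object that can be read and parsed as JSON. As a sketch, assuming the AI21 Jurassic-2 response shape (a "completions" list with the generated text under data.text):

response_body = json.loads(response['body'].read())

# Assumes the AI21 Jurassic-2 response shape; other model families
# (e.g. Amazon Titan) return a different structure
print(response_body['completions'][0]['data']['text'])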
