Integrating Azure Content Safety with API Management for Azure OpenAI Endpoints
In today’s digital landscape, ensuring the safety and integrity of AI-generated content is paramount. Azure Content Safety, combined with Azure API Management, provides a robust solution for managing and securing Azure OpenAI endpoints. This blog will guide you through the integration process, focusing on text analysis and prompt shields.
What is Azure Content Safety?
Azure AI Content Safety provides analysis of user-generated and AI-generated content. The currently available APIs include:
Prompt Shields: scans user text and document text for input attacks against LLMs
Groundedness Detection: verifies whether the responses generated by LLMs are grounded in the source material provided
Protected material text detection: checks for the presence of copyrighted material in AI-generated responses
Analyze Text/Image: identifies and scores content severity across the sexual, hate, violence, and self-harm categories
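For example, a minimal Analyze Text call sends the text plus the categories to evaluate and receives back a severity score per category. The request below is illustrative (the endpoint placeholder and payload values are ours, not from a specific deployment):

POST https://<your-resource>.cognitiveservices.azure.com/contentsafety/text:analyze?api-version=2023-10-01
{
  "text": "Text to evaluate",
  "categories": ["Hate", "Sexual", "SelfHarm", "Violence"],
  "outputType": "EightSeverityLevels"
}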
Why Integrate Azure Content Safety?
Azure Content Safety offers advanced algorithms to detect and mitigate harmful content in both user prompts and AI-generated outputs. By integrating this with Azure API Management, you can:
Enhance Security: Protect your applications from harmful content.
Ensure Compliance: Adhere to regulatory standards and guidelines.
Improve User Experience: Provide a safer and more reliable service to your users.
Onboard Azure Content Safety API to Azure API Management
Like any other API, the Azure Content Safety API can be onboarded to Azure API Management by importing its latest OpenAPI specification. API Management can then authenticate to the Content Safety API using a Managed Identity and communicate with it privately over Private Endpoints.
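As a sketch, the inbound policy of the onboarded Content Safety API can acquire a token with the APIM instance's managed identity (this assumes the identity has been granted an appropriate Cognitive Services role on the Content Safety resource):

<inbound>
    <base />
    <!-- Authenticate to Content Safety with the APIM managed identity;
         the policy places the acquired token in the Authorization header -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com" />
</inbound>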
Onboard Azure OpenAI to Azure API Management
Onboarding Azure OpenAI (AOAI) to API Management brings many benefits that have been discussed extensively; I have a blog and a GitHub repo that cover this in detail.
Integrate Content Safety with Azure OpenAI APIs in API Management
AI Gateway Labs is an amazing repository exploring various gateway patterns through a series of labs. We have included two Content Safety scenarios as labs to demonstrate this integration.
The pattern behind this integration is to use the send-request policy in APIM to invoke the appropriate Content Safety API, and forward the request to OpenAI only if the content is deemed safe.
The snippet below concatenates all the prompts in the incoming OpenAI request and checks whether the Prompt Shields API detects an attack.
<send-request mode="new" response-variable-name="safetyResponse">
    <set-url>@("https://" + context.Request.Headers.GetValueOrDefault("Host") + "/contentsafety/text:shieldPrompt?api-version=2024-02-15-preview")</set-url>
    <set-method>POST</set-method>
    <set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
        <value>@(context.Variables.GetValueOrDefault<string>("SubscriptionKey"))</value>
    </set-header>
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body>@{
        string[] documents = new string[] {};
        // Gather the content of every message in the incoming OpenAI request
        string[] messages = context.Request.Body.As<JObject>(preserveContent: true)["messages"].Select(m => m.Value<string>("content")).ToArray();
        JObject obj = new JObject();
        // Concatenate all messages into a single userPrompt for Prompt Shields
        JProperty userProperty = new JProperty("userPrompt", string.Concat(messages));
        JProperty documentsProperty = new JProperty("documents", new JArray(documents));
        obj.Add(userProperty);
        obj.Add(documentsProperty);
        return obj.ToString();
    }</set-body>
</send-request>
<choose>
    <when condition="@(((IResponse)context.Variables["safetyResponse"]).StatusCode == 200)">
        <choose>
            <when condition="@((bool)((IResponse)context.Variables["safetyResponse"]).Body.As<JObject>()["userPromptAnalysis"]["attackDetected"] == true)">
                <!-- Return 400 if an attack is detected -->
                <return-response>
                    <set-status code="400" reason="Bad Request" />
                    <set-body>@{
                        var errorResponse = new
                        {
                            error = new
                            {
                                message = "The prompt was identified as an attack by the Azure AI Content Safety service."
                            }
                        };
                        return JsonConvert.SerializeObject(errorResponse);
                    }</set-body>
                </return-response>
            </when>
        </choose>
    </when>
    <otherwise>
        <return-response>
            <set-status code="500" reason="Internal Server Error" />
        </return-response>
    </otherwise>
</choose>
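For reference, a successful shieldPrompt response that the policy inspects has this general shape (values illustrative):

{
  "userPromptAnalysis": { "attackDetected": false },
  "documentsAnalysis": []
}

If attackDetected is true, the policy short-circuits with a 400 before the request ever reaches Azure OpenAI; any non-200 response from Content Safety is surfaced as a 500.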
The snippet below concatenates all the prompts in the incoming OpenAI request and checks whether the content stays within the allowed severity thresholds for the hate, sexual, self-harm, and violence categories.
<send-request mode="new" response-variable-name="safetyResponse">
    <set-url>@("https://" + context.Request.Headers.GetValueOrDefault("Host") + "/contentsafety/text:analyze?api-version=2023-10-01")</set-url>
    <set-method>POST</set-method>
    <set-header name="Ocp-Apim-Subscription-Key" exists-action="override">
        <value>@(context.Variables.GetValueOrDefault<string>("SubscriptionKey"))</value>
    </set-header>
    <set-header name="Content-Type" exists-action="override">
        <value>application/json</value>
    </set-header>
    <set-body>@{
        string[] categories = new string[] {"Hate","Sexual","SelfHarm","Violence"};
        JObject obj = new JObject();
        JProperty textProperty = new JProperty("text", string.Concat(context.Request.Body.As<JObject>(preserveContent: true)["messages"].Select(m => m.Value<string>("content")).ToArray()));
        JProperty categoriesProperty = new JProperty("categories", new JArray(categories));
        JProperty outputTypeProperty = new JProperty("outputType", "EightSeverityLevels");
        obj.Add(textProperty);
        obj.Add(categoriesProperty);
        obj.Add(outputTypeProperty);
        return obj.ToString();
    }</set-body>
</send-request>
<choose>
    <when condition="@(((IResponse)context.Variables["safetyResponse"]).StatusCode == 200)">
        <set-variable name="thresholdExceededCategory" value="@{
            var thresholdExceededCategory = "";
            // Define the allowed severity threshold for each category
            Dictionary<string, int> categoryThresholds = new Dictionary<string, int>()
            {
                { "Hate", 0 },
                { "Sexual", 0 },
                { "SelfHarm", 0 },
                { "Violence", 0 }
            };
            foreach (var category in categoryThresholds)
            {
                var categoryAnalysis = ((JArray)((IResponse)context.Variables["safetyResponse"]).Body.As<JObject>(preserveContent: true)["categoriesAnalysis"]).FirstOrDefault(c => (string)c["category"] == category.Key);
                if (categoryAnalysis != null && (int)categoryAnalysis["severity"] > category.Value)
                {
                    // Threshold exceeded for this category
                    thresholdExceededCategory = category.Key;
                    break;
                }
            }
            return thresholdExceededCategory;
        }" />
        <choose>
            <when condition="@((string)context.Variables["thresholdExceededCategory"] != "")">
                <return-response>
                    <set-status code="400" reason="Bad Request" />
                    <set-body>@{
                        var errorResponse = new
                        {
                            error = new
                            {
                                message = "The content was filtered by the Azure AI Content Safety service for the category: " + (string)context.Variables["thresholdExceededCategory"]
                            }
                        };
                        return JsonConvert.SerializeObject(errorResponse);
                    }</set-body>
                </return-response>
            </when>
        </choose>
    </when>
    <otherwise>
        <return-response>
            <set-status code="500" reason="Internal Server Error" />
        </return-response>
    </otherwise>
</choose>
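The text:analyze response that feeds the threshold check above has this general shape (values illustrative):

{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "Sexual", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 },
    { "category": "Violence", "severity": 2 }
  ]
}

With every threshold in categoryThresholds set to 0, any non-zero severity blocks the request; raise the per-category values to relax the filter.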
Conclusion
Integrating Azure Content Safety with API Management for Azure OpenAI endpoints is a powerful way to enhance the security and reliability of your AI applications. By following these steps, you can ensure that your AI-generated content is safe, compliant, and user-friendly.
For more detailed information, refer to the Azure Content Safety documentation and the Azure API Management documentation.