Tag Archives: microsoft
Azure DevOps blog closing -> moving to DevBlogs
Hello! As part of consolidation efforts, we will soon be closing this Azure DevOps blog on Tech Community. We appreciate your continued readership and interest in this topic.
For Azure DevOps blog posts (including the last 10 posted here), please go here: Azure DevOps Blog (microsoft.com)
Creating Intelligent Apps on App Service with .NET
You can use Azure App Service with popular AI frameworks like LangChain and Semantic Kernel connected to OpenAI to create intelligent apps. In the following tutorial, we'll add an Azure OpenAI service to a .NET 8 Blazor web application using Semantic Kernel.
Prerequisites
An Azure OpenAI resource or an OpenAI account.
A .NET 8 Blazor Web App. Create the application with a template here, or use the CLI sketch below.
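If you prefer the command line, a minimal sketch for scaffolding the app with the .NET 8 SDK is shown here; the project name is just a placeholder:
# Create a .NET 8 Blazor Web App with interactive server rendering
dotnet new blazor --interactivity Server -n IntelligentAppDemo
cd IntelligentAppDemo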
Setup Blazor web app
For this Blazor web application, we'll build on the Blazor template and create a new Razor page that can send requests to an Azure OpenAI or OpenAI service using Semantic Kernel and receive responses.
Right-click on the Pages folder found under the Components folder and add a new item named OpenAI.razor.
Add the following code to the OpenAI.razor file and click Save:
@page "/openai"
@rendermode InteractiveServer
<PageTitle>Open AI</PageTitle>
<h3>Open AI Query</h3>
<input placeholder="Input query" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@code {
public string? newQuery;
public string? serverResponse;
}
Next, we’ll need to add the new page to the navigation so we can navigate to the service.
Go to the NavMenu.razor file under the Layout folder and add the following div inside the nav class. Then click Save:
<div class="nav-item px-3">
<NavLink class="nav-link" href="openai">
<span class="bi bi-list-nested-nav-menu" aria-hidden="true"></span> Open AI
</NavLink>
</div>
After the Navigation is updated, we can start preparing to build the OpenAI client to handle our requests.
API Keys and Endpoints
To make calls to OpenAI from your client, you first need to grab the key and endpoint values from Azure OpenAI or OpenAI and add them as secrets for use in your application. Retrieve and save the values for later use.
For Azure OpenAI, see this documentation to retrieve the key and endpoint values. For our application, you will need the following values:
deploymentName
endpoint
apiKey
modelId
For OpenAI, see this documentation to retrieve the API keys. For our application, you will need the following values:
apiKey
modelId
Since we'll be deploying to App Service, we can secure these secrets in Azure Key Vault. Follow the Quickstart to set up your Key Vault and add the secrets you saved earlier.
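As a reference sketch, you can also create the vault and add secrets with the Azure CLI; the vault, resource group, secret names, and region below are placeholders:
# Create the Key Vault (names and region are examples)
az keyvault create --name myvault --resource-group myResourceGroup --location eastus
# Store a secret, for example the Azure OpenAI API key
az keyvault secret set --vault-name myvault --name AzureOpenAIApiKey --value "<your-api-key>"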
Next, we can use Key Vault references as app settings in our App Service resource to reference in our application. Follow the instructions in the documentation to grant your app access to your Key Vault and to set up Key Vault references.
Then, go to the portal Environment Variables blade in your resource and add the following app settings:
For Azure OpenAI, use the following:
DEPLOYMENT_NAME = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
ENDPOINT = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
For OpenAI, use the following:
OPENAI_API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
OPENAI_MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
Once your app settings are saved, you can bring them into the code by injecting IConfiguration and referencing the app settings. Add the following code to your OpenAI.razor file:
@inject Microsoft.Extensions.Configuration.IConfiguration _config
@code {
    private async Task SemanticKernelClient()
    {
        // Azure OpenAI
        string deploymentName = _config["DEPLOYMENT_NAME"];
        string endpoint = _config["ENDPOINT"];
        string apiKey = _config["API_KEY"];
        string modelId = _config["MODEL_ID"];
        // OpenAI
        string OpenAIModelId = _config["OPENAI_MODEL_ID"];
        string OpenAIApiKey = _config["OPENAI_API_KEY"];
    }
}
Semantic Kernel
Semantic Kernel is an open-source SDK that enables you to easily develop AI agents to work with your existing code. You can use Semantic Kernel with Azure OpenAI and OpenAI models.
To create the OpenAI client, we’ll first start by installing Semantic Kernel.
To install Semantic Kernel, open the NuGet Package Manager in Visual Studio and install the Microsoft.SemanticKernel package. For NuGet Package Manager instructions, see here. For CLI instructions, see here.
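For example, from the project directory, the CLI install is a single command:
# Add the Semantic Kernel NuGet package to the project
dotnet add package Microsoft.SemanticKernel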
Once the Semantic Kernel package is installed, you can now initialize the kernel.
Initialize the Kernel
To initialize the Kernel, add the following code to the OpenAI.razor file.
@using Microsoft.SemanticKernel
@code {
    private async Task SemanticKernelClient()
    {
        var builder = Kernel.CreateBuilder();
        var kernel = builder.Build();
    }
}
Here we add the using directive and create the Kernel inside a method that we can call when we send a request to the service.
Add your AI service
Once the Kernel is initialized, we can add our chosen AI service to the kernel. Here we will define our model and pass in our key and endpoint information to be consumed by the chosen model.
For Azure OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
    deploymentName: deploymentName,
    endpoint: endpoint,
    apiKey: apiKey,
    modelId: modelId
);
var kernel = builder.Build();
For OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion(
    modelId: OpenAIModelId,
    apiKey: OpenAIApiKey
);
var kernel = builder.Build();
Configure prompt and create Semantic function
Now that our chosen OpenAI service client is created with the correct keys, we can add a function to handle the prompt. With Semantic Kernel, you can handle prompts by using semantic functions, which turn the prompt and the prompt configuration settings into a function the Kernel can execute. Learn more about configuring prompts here.
First, we'll create a variable that will hold the user's prompt. Then we'll add a function with execution settings to handle and configure the prompt. Add the following code to the OpenAI.razor file:
@using Microsoft.SemanticKernel.Connectors.OpenAI
private async Task SemanticKernelClient()
{
    var builder = Kernel.CreateBuilder();
    builder.Services.AddAzureOpenAIChatCompletion(
        deploymentName: deploymentName,
        endpoint: endpoint,
        apiKey: apiKey,
        modelId: modelId
    );
    var kernel = builder.Build();
    var prompt = @"{{$input}} " + newQuery;
    var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
}
Lastly, we’ll need to invoke the function and return the response. Add the following to the OpenAI.razor file:
private async Task SemanticKernelClient()
{
    var builder = Kernel.CreateBuilder();
    builder.Services.AddAzureOpenAIChatCompletion(
        deploymentName: deploymentName,
        endpoint: endpoint,
        apiKey: apiKey,
        modelId: modelId
    );
    var kernel = builder.Build();
    var prompt = @"{{$input}} " + newQuery;
    var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
    var result = await kernel.InvokeAsync(summarize);
    serverResponse = result.ToString();
}
Here is the example in its completed form. In this example, use either the Azure OpenAI chat completion service or the OpenAI chat completion service, not both.
@page "/openai"
@rendermode InteractiveServer
@inject Microsoft.Extensions.Configuration.IConfiguration _config
<PageTitle>OpenAI</PageTitle>
<h3>OpenAI input query: </h3>
<input class="col-sm-4" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@using Microsoft.SemanticKernel
@using Microsoft.SemanticKernel.Connectors.OpenAI
@code {
private string? newQuery;
private string? serverResponse;
private async Task SemanticKernelClient()
{
    // Azure OpenAI
    string deploymentName = _config["DEPLOYMENT_NAME"];
    string endpoint = _config["ENDPOINT"];
    string apiKey = _config["API_KEY"];
    string modelId = _config["MODEL_ID"];
    // OpenAI
    // string OpenAIModelId = _config["OPENAI_MODEL_ID"];
    // string OpenAIApiKey = _config["OPENAI_API_KEY"];
    // Semantic Kernel client
    var builder = Kernel.CreateBuilder();
    // Azure OpenAI
    builder.Services.AddAzureOpenAIChatCompletion(
        deploymentName: deploymentName,
        endpoint: endpoint,
        apiKey: apiKey,
        modelId: modelId
    );
    // OpenAI
    // builder.Services.AddOpenAIChatCompletion(
    //     modelId: OpenAIModelId,
    //     apiKey: OpenAIApiKey
    // );
    var kernel = builder.Build();
    var prompt = @"{{$input}} " + newQuery;
    var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
    var result = await kernel.InvokeAsync(summarize);
    serverResponse = result.ToString();
}
}
Now save the application and follow the next steps to deploy it to App Service. If you would like to test it locally first, you can swap out the config values with the literal string values from your OpenAI service. For example: string modelId = "gpt-4-turbo";
Deploy to App Service
If you have followed the steps above, you are ready to deploy to App Service. If you run into any issues, remember that you need to have granted your app access to your Key Vault and added the app settings with Key Vault references as the values. App Service resolves the app settings in your application that match what you've added in the portal.
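As one hedged example, you can publish and deploy from the CLI with az webapp up; the app name, resource group, and runtime below are placeholders and may differ for your setup:
# Publish and deploy the current project to App Service
az webapp up --name my-intelligent-app --resource-group myResourceGroup --runtime "DOTNETCORE:8.0"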
Authentication
Although optional, it is highly recommended that you also add authentication to your web app when using an Azure OpenAI or OpenAI service. This adds a layer of security with no additional code. Learn how to enable authentication for your web app here.
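As a minimal sketch, assuming the classic App Service auth CLI commands, built-in authentication can be enabled like this (names are placeholders):
# Require Microsoft Entra ID login for all requests to the app
az webapp auth update --name my-intelligent-app --resource-group myResourceGroup --enabled true --action LoginWithAzureActiveDirectory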
Once deployed, browse to the web app and navigate to the Open AI tab. Enter a query and you should see a populated response from the server. The tutorial is now complete, and you know how to use OpenAI services to create intelligent applications.
MDTI Earns Impactful Trio of ISO Certificates
We are excited to announce that Microsoft Defender Threat Intelligence (MDTI) has achieved ISO 27001, ISO 27017, and ISO 27018 certifications. ISO, the International Organization for Standardization, develops market-relevant international standards that support innovation and provide solutions to global challenges, including information security requirements for establishing, implementing, and improving an Information Security Management System (ISMS).
These certificates underscore the MDTI team's continuous commitment to protecting customer information and adhering to the strictest security and privacy standards.
Certificate meaning and importance
ISO 27001: This certification demonstrates that MDTI's ISMS complies with industry best practices, providing a structured approach to managing information security risk.
ISO 27017: This certification is based on a worldwide standard that provides guidance on securing information in the cloud. It demonstrates that we have put strong controls and countermeasures in place to keep our customers' data safe in the cloud.
ISO 27018: This certification sets out common objectives, controls, and guidelines for protecting personally identifiable information (PII) processed in public clouds, consistent with the privacy principles outlined in ISO 29100. It shows that we are committed to respecting our customers' privacy rights and protecting their personal data in the cloud.
What are the advantages of these certifications for our customers?
Enhanced Security and Privacy Assurance: Our customers can be confident that the most sophisticated and comprehensive security and privacy standards on the market are in place to protect their data. We meet and exceed the requirements of these certifications, so customer information stays secure against emerging threats.
Reduced Risk and Liability Exposure: Through our certified ISMS and Privacy Information Management System (PIMS), customers can significantly reduce their exposure to potential data breaches, legal actions, regulatory fines, and reputational damage. Our proven structures boost resilience against cybercrime and reduce the risk of lawsuits.
Streamlined Compliance and Competitive Edge: Our certifications help customers meet the rigorous regulatory and contractual requirements of their industry or market. Accreditation against globally recognized international standards signals that an organization is serious about data security, improving its reputation and opening opportunities to partner with other businesses that value privacy protection.
How do I get started with MDTI?
To learn more about MDTI, explore its features and benefits, and see how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote.
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI's OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI's new flagship model, Mistral Large, to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS), which offers API-based access and token-based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground, or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety: first, the model has built-in support for a "safe prompt" parameter, and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI's most advanced Large Language Model (LLM), available first on Azure and the Mistral AI platform. Thanks to its state-of-the-art reasoning and knowledge capabilities, it can be used for a full range of language-based tasks. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked into the model, with an additional safety layer via the safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let's review samples for some popular clients; a raw curl sketch follows the list.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on Mistral AI's platform, which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral Python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started and offers consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and a VS Code extension to build LLM apps, with support for authoring, orchestration, evaluation, and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key and call it from the Python tool in prompt flow.
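To make the request shape concrete, here is a hedged curl sketch of a chat completions call; the endpoint host is a placeholder for the URL shown on your deployment page, and the authoritative schema is the API reference linked above:
# Placeholder endpoint and key; body follows the chat completions schema
curl -X POST "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions" \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "What is the capital of France?"}], "max_tokens": 100}'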
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach: instructing the model through the system prompt, and an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction, as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. This external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive an error if the prompt was classified as harmful, or the response will be partially or completely truncated with an appropriate message if the generated output was classified as harmful. Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing for the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and Json output?
The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
The Mistral Large API comes with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn't suffice.
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects or Azure Machine Learning workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can manually add the API and key as a connection in prompt flow. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistral Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.
How to use ACSS Inventory Checks to review your SAP workload
Organizations using SAP for business-critical operations understand the complexity and significance of managing the system. Adjusting the configuration of SAP workloads for performance and reliability is important. Microsoft's unmatched expertise in this field stems from helping numerous customers migrate their SAP applications to the Microsoft cloud.
We've created tools and frameworks in the Azure Center for SAP Solutions (ACSS) to support our customers and partners running SAP workloads on the Microsoft Cloud. These tools improve customers' visibility at both the platform and SAP system levels.
In this blog post, we're highlighting Azure Inventory Checks for SAP, a health check capability that aims to give our customers and partners a holistic view of the quality of their SAP deployment at the subscription level. It is based on Azure workbooks and integrated within the Azure portal.
Inventory Checks for SAP provide insights into your SAP workload through the following aggregated views:
Overview – High-level counts of deployed resources, such as virtual machines.
Virtual Machines – Compute List, SKUs, Extensions, Disk Configuration and more
Storage – Backup and Storage Account configuration.
Network – Information about Private Endpoints, DNS Zones and Network Monitoring.
Orphaned Resources – Helping with Cost Optimization by surfacing unused resources such as unattached disks.
Configuration Checks – Creating awareness of resources that do not align with Azure best practices.
Azure NetApp Files – Capacity Pool, Volume usage.
SAP System detail (With VIS registration) – SAP System Type, Instance number, Kernel release, Patch version, Region of deployment.
Monitoring – Compute Key Metrics and Resource Health
Here’s what some of our customers have to say about Azure Inventory Checks for SAP:
“Inventory checks of ModusLink’s SAP Azure and non-SAP Azure environments provides us reporting which enables us to effectively manage assets/resource utilization and make appropriate configuration changes. We have visibility to identify orphaned resources, configuration drifts from best practices which in turn helps us optimize the environment and cost.”
Gita Karle, Director Cloud and emerging Technologies & Alejandro Flores, Director IT Infrastructure
“Inventory Checks for SAP is a brilliant and easy-to-use tool that immediately provides insights into areas to improve the quality of the SAP workload. My team have been using it on all our mission-critical (SfMC) SAP customers, and it’s a clear value-add for Microsoft customers & partners.”
Paul Enns, Senior Cloud Solution Architect-Engineering
Overview
To provide customers with a rich experience, we continuously expand these health checks, and over time they are also integrated back into the main workbook within ACSS Quality Insights.
| # | User Options | Description | Azure Portal Integration | Onboarding Required | Virtual Instance for SAP | Targeted SAP Views | Resource Group View | Updates |
|---|---|---|---|---|---|---|---|---|
| 1 | ACSS Integrated Quality Insights | ACSS Product | Integrated into Quality Insights view. | Yes | Yes | Complete | No | Monthly |
| 2 | Inventory Checks for SAP | Azure Monitor* | Integrated into Azure Monitor view. | No | Yes | Complete | Yes | Monthly |
| 3 | Open Source – Community Edition | Standalone | Standalone Azure Workbook. | No | Yes | Partial | Yes | Monthly |
There are three user options, based on customer/partner preference, for how to access the information. Irrespective of the user option chosen, customers are advised to register their SAP systems in ACSS to access the other management capabilities and receive the appropriate support from Microsoft for those capabilities.
User Option #1 – ACSS Integrated Quality Insights
With ACSS, customers can either deploy new SAP systems or register their existing ones to use Quality Insights, a key feature of the ACSS product. Quality Insights lets customers compare their Azure resources and operating system settings with best practices. It uses Azure Resource Graph queries to get information from the Azure subscription about the configuration and health of the Azure resources that are important for the SAP workload.
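To make this concrete, here is a minimal sketch of the kind of Azure Resource Graph query involved, using the azure-mgmt-resourcegraph Python SDK. The KQL shown is an illustrative example of ours, not the product’s actual query, and the subscription ID is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

# List VMs with their SKU, similar in spirit to the Virtual Machines view.
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "Resources "
        "| where type == 'microsoft.compute/virtualmachines' "
        "| project name, location, vmSize = tostring(properties.hardwareProfile.vmSize)"
    ),
)
for row in client.resources(request).data:
    print(row)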
Pre-requisites
The SAP System needs to be onboarded to ACSS via a registration process for brownfield (existing) deployments. Refer to this link for more information – Register Existing SAP system to ACSS
RBAC (Role Based Access Control) to the Azure Subscription where SAP systems are deployed.
How to access the Quality Insights for SAP Workbook from Azure Center for SAP solutions
Sign in to the Azure portal.
Search for and select Azure Center for SAP solutions in the Azure portal search bar.
On the Azure Center for SAP solutions page’s sidebar menu, select Quality Insights.
User option #2 – Access Inventory Checks for SAP via Azure Monitor
The standalone version of Azure Inventory Checks for SAP is accessible as a workbook within Azure Monitor. With this standalone workbook, customers can tailor their experience by customizing views and selections using Azure Resource Graph Queries according to their preferences.
Pre-requisites
The SAP System needs to be onboarded to ACSS via a registration process for brownfield (existing) deployments. Refer to this link for more information – Register Existing SAP system to ACSS. Recommended for VIS specific view.
RBAC (Role Based Access Control) / minimum Read-Only to the Azure Subscription where SAP systems are deployed.
How to access the Inventory Checks for SAP Workbook from Azure Monitor
Option #1
Browse to the shortcut https://aka.ms/ACESInventoryCheckSAP.
Option #2
Sign in to the Azure portal.
Browse to Monitor –> Insights Hub –> Workloads –> Inventory Checks for SAP Workbook.
Once the workbook opens, it provides selection criteria by <Subscription>, <Resource_Group>, or <VIS>.
It is recommended to filter by the Subscription and Resource_Filter where the SAP workload is deployed.
Resource_Filter = ResourceGroup
Resource_Filter = VIS
Each tab also provides an option to download the data as a CSV file.
User Option #3 – Community Edition – Standalone
We also provide our customers a standalone version of Inventory Checks for SAP that can be used for Resource Group / VIS (ACSS – Virtual Instance for SAP) views and is continuously updated.
Pre-requisites
The SAP System needs to be onboarded to ACSS via a registration process for brownfield (existing) deployments. Refer to this link for more information – Register Existing SAP system to ACSS. Recommended for VIS specific view.
Users must have at least READ-ONLY permission on the subscription or Resource Group they wish to assess.
Write access to an Azure resource group where the workbook is persisted.
How to import workbook template from GitHub into Azure Monitor
The Azure Inventory Checks are also available on GitHub as open source for collaboration. To use the GitHub template of ACSS Inventory Checks for SAP, please follow the steps on GitHub.
The ACSS Inventory Checks offer an invaluable capability, providing deep insights into your workloads. Through aggregated views, they offer a holistic approach, integrating operating system, workload, and Azure platform metrics. This ensures clarity in running and maintaining your SAP system on Azure.
Authors
The authors are members of the Azure engineering team for SAP workloads at Microsoft, part of the Customer Solutions & Incubation team. The team is responsible for helping customers get the most out of their cloud investment by identifying and unblocking technical issues and incubating products that connect SAP workload deployments with best practices on Azure.
Microsoft Tech Community – Latest Blogs –Read More
Best Practices for Upgrading Azure WAF Ruleset
This blog is written in collaboration with @davidfrazee.
Introduction:
In today’s digital landscape, web applications are the lifeblood of businesses. They enable seamless communication, transactions, and interactions with customers. However, this increased reliance on web apps also makes them prime targets for cyberattacks. To safeguard your applications and protect sensitive data, implementing a robust Web Application Firewall (WAF) is essential.
What is a Web Application Firewall (WAF)?
A WAF acts as a protective barrier between your web applications and potential threats. It analyses incoming HTTP/S traffic, detects malicious requests, and blocks them before they reach your application servers. By doing so, it prevents common vulnerabilities and attacks without requiring modifications to your application code.
Azure Web Application Firewall (Azure WAF):
Azure WAF, integrated with Azure Application Gateway or Azure Front Door, provides a powerful solution for securing your web apps. Let’s explore why you should consider using Azure WAF:
Protection Against Common Exploits and Vulnerabilities:
Azure WAF actively safeguards your applications against well-known attack vectors like SQL injection and cross-site scripting.
It leverages the Core Rule Set (CRS) from the Open Web Application Security Project (OWASP) to stay ahead of emerging threats. It also uses MSTIC (Microsoft Threat Intelligence Center) rules, written in partnership with the Microsoft threat intelligence team, to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
Easy Configuration and Central Management:
Create custom WAF policies tailored to different sites behind the same Application Gateway/Front Door instance.
Manage and configure settings centrally using WAF policies.
Monitoring and Real-Time Alerts:
Monitor attacks with real-time WAF logs integrated into Azure Monitor.
Easily track WAF alerts and analyse trends.
IP Reputation Ruleset:
Protect your applications from malicious bots utilizing the Azure WAF Bot Manager Ruleset.
Defend against Distributed Denial of Service (DDoS) attacks.
Upgrading WAF Rulesets
Keeping your WAF rulesets up to date is critical for several reasons:
Expanded Coverage:
New rulesets include additional protections for emerging vulnerabilities.
Stay ahead of attackers by having the latest defenses in place.
Reduced False Positives:
Updated rulesets improve accuracy, minimizing false positives.
Ensure legitimate traffic isn’t blocked unnecessarily.
Staying Ahead of Threats:
Regular updates ensure your WAF defends against the latest attack vectors.
Cyber threats evolve rapidly, and your defenses must keep pace.
Best Practices for Upgrading Azure WAF Ruleset
Consider a situation where you are currently using Core Rule Set (CRS) version 3.2 for your Azure Web Application Firewall (WAF). You have made several customizations to the WAF configuration, including disabling specific rule IDs, adjusting rule actions from Anomaly score/Log to Block, and applying exclusions.
Now, if you decide to upgrade to Default Rule Set (DRS) version 2.1, it’s important to be aware that all your previous customizations to the managed rulesets will be reset if you upgrade through the portal directly. However, rest assured that any Custom Rules, Global Exclusions and Policy settings you’ve defined will remain unaffected during this transition.
To make sure that you do not lose any custom configurations for your managed rulesets, follow these best practices using a template-based approach:
1. Document Your Current WAF Configuration:
Export the template capturing existing WAF settings, including disabled rules and exclusions. Save this template as CRS_3.2
2. Prepare a New Template:
Clone the old Template and rename it to DRS_2.1 for the upgraded version.
3. Test in a Non-Production Environment:
Switch to the new ruleset using the portal Assign method in a non-production environment.
Temporarily disable any custom rules used in tuning.
Verify whether exclusions are still necessary by sending traffic through this non-production WAF setup.
4. Reassign Exclusions and Customizations:
Apply exclusions and customizations using the template modification method below.
Modify the following parameters in the template saved as DRS_2.1:
i. Ruleset Type
ii. Ruleset Version
iii. Rule Group Name (Rule Group and ID information can be found here)
Deploy this template in your environment; this will upgrade the policy from CRS_3.2 to DRS_2.1 with all the rule overrides and exclusions intact.
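A minimal scripted version of that edit might look like the following, assuming the exported template follows the usual Application Gateway WAF policy shape. The rule group name mapping shown is illustrative only – confirm each group name against the Rule Group and ID documentation linked above.

import json

# Load the template exported earlier as CRS_3.2 and rewrite it as DRS_2.1.
with open("CRS_3.2.json") as f:
    template = json.load(f)

# Illustrative mapping only: CRS and DRS rule group names differ.
GROUP_NAME_MAP = {
    "REQUEST-942-APPLICATION-ATTACK-SQLI": "SQLI",
    "REQUEST-941-APPLICATION-ATTACK-XSS": "XSS",
}

for resource in template.get("resources", []):
    if not resource["type"].endswith("WebApplicationFirewallPolicies"):
        continue
    for rule_set in resource["properties"]["managedRules"]["managedRuleSets"]:
        rule_set["ruleSetType"] = "Microsoft_DefaultRuleSet"  # i. Ruleset Type
        rule_set["ruleSetVersion"] = "2.1"                    # ii. Ruleset Version
        for override in rule_set.get("ruleGroupOverrides", []):
            # iii. Rule Group Name
            override["ruleGroupName"] = GROUP_NAME_MAP.get(
                override["ruleGroupName"], override["ruleGroupName"]
            )

with open("DRS_2.1.json", "w") as f:
    json.dump(template, f, indent=2)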
5. Run Tests:
Send traffic and validate that exclusions and customizations still apply as expected.
Note:
If the exclusions are set at the Global level, they will not be affected by the upgrade, so no changes are needed for Global exclusions.
If at any point you want to revert to the old ruleset, simply redeploy the initially saved template CRS_3.2 and all the changes will be reverted to the previous state.
While following the template-based upgrade process above, it is important to verify that every Rule ID with a custom modification in the existing ruleset is still present in the new ruleset. This needs to be checked before the upgrade using the information here.
Conclusion:
In summary, Azure WAF provides robust protection, easy management, and real-time monitoring for your web applications. Upgrade your rulesets regularly to stay secure in an ever-evolving threat landscape. Remember, a proactive defense is the key to keeping your applications safe and your users confident.
Microsoft Tech Community – Latest Blogs –Read More
Azure Permissions 101: How to manage Azure access effectively
While onboarding to Azure, customers often ask what permissions they need to assign to their IT Ops teams or to partners. I’ve also seen customers get confused when we ask for an Azure AD permission for some task: they say they’ve already provided Owner access on the Azure subscription, so why is an Azure AD permission required, and how are the two related? So I thought of writing this blog to explain the different permission domains you encounter when you use Azure.
We will talk about these RBAC domains:
Classic Roles
Azure RBAC Roles
Azure AD Roles
EA RBAC
MCA RBAC
Reserved Instance RBAC
Classic Roles
So let us talk about the Classic roles first. When I used to work in the Azure Classic portal, there were fewer roles: mostly Account Administrator, Co-Administrator, and Service Administrator. The person who created the subscription became the Service Administrator, and if that person wanted to share admin privileges, they assigned the Co-Administrator role to the other person.
When you go to the Subscription -> IAM blade, you’ll still see this. I have seen customers trying to provide Owner access use this Add co-administrator button instead. Now you know the difference: this is not meant for providing someone access to ARM resources.
Azure RBAC
Let us talk about ARM RBAC now. When we moved from classic to Azure RBAC, we got more fine-grained access control. Each service has a role, e.g. Virtual Machine Contributor for managing VMs, Network Contributor for managing networks, and so on. The users are stored in Azure AD itself, but the permissions are maintained at the subscription, resource group, management group, or resource level.
Each RBAC role has Actions, which basically tell the role what it can perform.
The Actions are part of the control plane, which gives you access to manage the service and its settings or configurations. We also have data plane actions, which provide access to the actual data. Take Azure Blob Storage as an example: with the Reader role you can see the resource itself but not the actual data in blob storage when you authenticate via Azure AD. To see the actual data, you can have the Storage Blob Data Contributor role assigned to your ID. Similarly, there are other services that expose data actions, e.g. Azure Key Vault and Service Bus.
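As a quick illustration, this sketch uses the azure-mgmt-authorization Python SDK to print the control plane Actions and data plane DataActions of the Storage Blob Data Contributor role; the subscription ID is a placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
roles = client.role_definitions.list(scope, filter="roleName eq 'Storage Blob Data Contributor'")
for role in roles:
    for perm in role.permissions:
        print("Actions (control plane):", perm.actions)
        print("DataActions (data plane):", perm.data_actions)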
Where these RBAC roles can be assigned – at the resource, resource group, or management group level – is another discussion, which I will cover in a future blog post.
Azure AD Roles
This is used when you deal with Azure AD itself or with services whose roles are stored in Azure AD, like SharePoint, Exchange, or Dynamics 365. Dealing with Azure AD roles might be required in multiple instances, for example when using services that create service principals in the backend, like app registrations; Azure Migrate, Site Recovery, etc. would require Azure AD permissions to be assigned to your ID.
This RBAC domain is separate from Azure RBAC; it is stored in Azure AD itself and managed centrally from the Roles and administrators blade.
The person who created the tenant gets the Global Administrator role, and from there we have fine-grained access based on the other roles.
Though Azure AD roles are different from the Azure RBAC roles we assign to subscriptions, a Global Administrator can elevate themselves and get access to all the subscriptions in the tenant through a toggle.
Once you enable this toggle, you get the User Access Administrator role at the root scope, under which all the management groups are created. So eventually you can access all the subscriptions.
This is a rare and exceptional procedure that requires consultation with your internal team and a clear justification for its activation.
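For completeness, the toggle corresponds to the documented elevate access ARM operation. A hedged sketch of that call is below – treat it strictly as a break-glass action, per the caution above.

import requests
from azure.identity import DefaultAzureCredential

# Caller must be a Global Administrator; consult your internal team before running.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.post(
    "https://management.azure.com/providers/Microsoft.Authorization/elevateAccess",
    params={"api-version": "2016-07-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # grants the caller User Access Administrator at root scope ("/")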
EA RBAC
If you are an enterprise customer and have signed an EA agreement with Microsoft, then in order to create subscriptions and manage billing you need to log on to the EA portal, which has now moved into the Azure portal. Hence we have a set of six RBAC roles, which can be used from the Cost Management + Billing section in the Azure portal.
Enterprise administrator
EA purchaser
Department administrator
Account owner
Service administrator
Notification contact
Which permission is assigned at which level of the hierarchy is explained in the image below, copied from the Microsoft Learn documentation mentioned below.
Below is a sample screenshot of what you see when you open the Cost Management + Billing portal. Here you will see accounts, departments, and subscriptions.
MCA RBAC
If you have purchased an MCA, then you get a hierarchy at which permissions can be assigned. Top-level permissions are assigned at the billing scope and then at the billing profile level.
Billing account owner and Billing profile owner are the most common roles you will use. More roles are mentioned in the article below, which you can go through.
Reserved Instance RBAC
A common request I get from customers: “I have Contributor/Owner access to the subscription, but I still do not see the Reserved Instance purchased by my colleague.” A few years back, the person who purchased a reservation was the one who granted others access, by going to the individual reservation. This is still possible, but now you can also get access to all reservations in the tenant.
A reservation purchased by an admin can be seen and managed by that admin, by an EA admin, or by a person with the Reservation Administrator role.
You can do this via PowerShell too; check this document for more information.
More information regarding who has access to RIs is mentioned in the article below.
https://learn.microsoft.com/en-us/azure/cost-management-billing/reservations/view-reservations
Happy Learning!
You can find more such article at www.azuredoctor.com
Microsoft Tech Community – Latest Blogs –Read More
What’s new in Microsoft Bookings: Updated homepage
Introducing the updated Microsoft Bookings homepage
Microsoft Bookings is a powerful tool that helps you schedule and manage appointments with your customers, clients, or colleagues. Whether you need to book time to meet people or set aside time for different meetings, Bookings makes it easy and convenient for both you and your attendees.
Today, we would like to announce an update that will elevate the way you interact with Bookings, making it more intuitive, visually appealing, and seamlessly integrated into your workflow. Read on to find out what’s new.
This is primarily an experience update on the home page of Bookings and does not change any existing functionality.
Exploring the Homepage
With our latest update, the homepage becomes your centralized hub for all things Bookings. You can now access both personal and shared bookings from a single surface. Whether you’re managing personal appointments or coordinating with your team, everything you need is right on your homepage. You can easily switch between your shared and personal bookings workflows, view and edit your availability, and manage your appointments. You can create new meeting types (personal bookings) or booking pages (shared bookings), either for yourself or for your team, with minimal clicks.
Explore Personal Bookings
Personal bookings is how you manage your own appointment timeslots. It allows you to easily configure and share your availability with your customers, clients, or colleagues, so you can be in charge of your own time and avoid the back and forth of scheduling. You can also set aside time for specific activities by creating meeting types. Once you create a personal booking page, you can share a link with anyone, who can then see your availability and easily schedule a time when you are free and that is convenient for them.
What are Meeting Types?
Meeting types are the different kinds of appointments that you offer, such as a 30-minute consultation, a 15-minute check-in, or a 60-minute coaching session. You can customize each meeting type with a name, description, duration, location, and availability. You can also add buffer time and confirmation emails to enhance your booking experience. The overall workflow of personal bookings remains the same.
If you are new to Personal Bookings, here is what we have for you:
Use the quick start templates.
One of the easiest ways to get started with personal bookings is to use the quick-start templates. Find pre-created templates on your personal booking page that let you start booking appointments right away. You can use them as they are, or you can edit them to suit your needs. You can also create your own meeting types from scratch, or duplicate and modify the existing ones.
Get Started Guide
Our Get Started Guide walks you through the process in three simple steps. From previewing the customer experience to customizing your meeting types and sharing your booking page, we provide the tools and resources you need to succeed. To see it, just click on “Get Started” on the right side of your home page.
Explore Shared Bookings
Shared bookings are booking pages that you create and manage for your team. They allow you to invite your team members so that your customers can book time with you and your team.
Enhanced New User Onboarding
For new users embarking on their Bookings journey, we’ve introduced an exciting onboarding process designed to familiarize them with the product. From interactive coach marks to engaging tutorials, we make learning enjoyable and straightforward. Say hello to a smooth onboarding experience that sets you up for success from day one.
The latest update for Microsoft Bookings brings a fresh perspective to your appointment management. With an enhanced UI and a more intuitive experience, we’re committed to empowering you to do your best work. Whether you’re a seasoned Bookings user or just getting started, there’s something for everyone in this exciting update.
We hope this update makes your appointment management easier. Please leave your feedback and thoughts in the comments – we love hearing from you.
Microsoft Tech Community – Latest Blogs –Read More
Prompt users for reauthentication on sensitive apps and high-risk actions with Conditional Access
Howdy folks!
Today I’m thrilled to announce support for additional capabilities now available for Conditional Access reauthentication policy scenarios. Reauthentication policy lets you require users to interactively provide their credentials again – typically before accessing critical applications and taking sensitive actions. Combined with the Conditional Access session control for sign-in frequency, you can require reauthentication for users and sign-ins with risk, or for Intune enrollment. With today’s public preview, you can now require reauthentication on any resource protected by Conditional Access.
To tell you more about this capability, I’ve invited Inbar Cizer Kobrinsky, Principal Product Manager at Microsoft Entra, to talk about the scenarios and configuration.
Thanks, and please let us know your thoughts!
Alex Weinert
—
Hi everyone!
I’m excited to tell you more about the public preview capabilities that we’re adding to the “Sign-in frequency – every time” session control in Conditional Access.
Single sign-on (SSO) using modern authentication is like a secret sauce for productivity and security. It improves productivity because users can access applications smoothly without signing in to each one with their credentials. SSO also enhances security because it reduces the risks of credential reuse and gives you a common point for control and logging for your Zero Trust deployments.
However, there are situations where you may want the user’s input, such as interactive authentication, before accessing a resource. One of these situations is token theft. A token theft attack occurs when threat actors compromise and replay tokens issued to a user, even if that user has satisfied multifactor authentication (MFA). Because authentication requirements are met, the threat actor is granted access to organizational resources by using the stolen token. Risk-based reauthentication policies can help lower the risk that results from a token theft, requiring threat actors to compromise a fresh token to regain system access.
Now, let’s look at some additional examples of these situations, where you may want to prompt the user for reauthentication:
Accessing high-risk resources, such as connecting to a VPN.
Activating a privileged role in Privileged Identity Management (PIM).
Performing an action within an application, such as changing personal information in an HR application.
Critical actions such as Intune enrollment or updating credentials.
Risky sign-ins, as called out above, help reduce and mitigate the risk of token theft.
With the latest update, you now can create policies requiring interactive reauthentication for any application or authentication context protected by Conditional Access.
In the example below, we’ve created a reauthentication policy for super sensitive actions, such as wire transfer. We’re using authentication context, which the developer has integrated into the application. Before a user can transfer any amount from the application, they must satisfy the Conditional Access policy that targets this authentication context.
The Conditional Access policy for this authentication context requires phishing-resistant authentication (using authentication strength) and “Sign-in frequency – every time”. The next time a user attempts to make a wire transfer, they will be required to reauthenticate with a phishing-resistant MFA.
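If you manage policies as code, a policy like this could be created through the Microsoft Graph conditional access API. The sketch below is illustrative: the access token and authentication context ID are placeholders, and the authentication strength ID is assumed to be the built-in phishing-resistant MFA strength – verify it in your tenant.

import requests

GRAPH_TOKEN = "<token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder
policy = {
    "displayName": "Require reauth for wire transfers",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeUsers": ["All"]},
        # Placeholder authentication context ID integrated by the app developer.
        "applications": {"includeAuthenticationContextClassReferences": ["c1"]},
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": [],
        # Assumed built-in "Phishing-resistant MFA" strength ID; verify in your tenant.
        "authenticationStrength": {"id": "00000000-0000-0000-0000-000000000004"},
    },
    "sessionControls": {
        "signInFrequency": {
            "isEnabled": True,
            "frequencyInterval": "everyTime",
            "authenticationType": "primaryAndSecondaryAuthentication",
        }
    },
}
resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    json=policy,
)
resp.raise_for_status()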
To learn more, check out our documentation about Sign-in frequency – every time.
Let us know what you think!
Inbar
Learn more about Microsoft Entra:
See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
Join the conversation on the Microsoft Entra discussion space and Twitter
Learn more about Microsoft Security
Microsoft Tech Community – Latest Blogs –Read More
Improved hybrid meeting experience in Outlook
Outlook now has an improved hybrid meeting experience that gives you more options for how to manage and organize your meetings. This improved experience adds an option to mark an event as an in-person event; with this, meeting organizers can request in-person attendance and attendees can RSVP to confirm whether they can indeed participate in person or can only make it virtually.
You will start seeing this feature first in the new Outlook for Windows and the web experience in late March 2024.
Organize a meeting and request in-person attendance
Many meetings are set up in a hybrid environment but sometimes it might make sense to ask people to attend in person. When organizing a meeting, the organizer can mark the meeting as in-person by selecting the In-Person event toggle next to the location field.
Although the organizer is marking the event as “in-person”, they can also add a Teams meeting for those people who are not able to make it into the office and still want to participate.
Respond (RSVP) to a meeting invite with in-person request
When someone receives a meeting invite marked as an in-person event, they will now see additional options when RSVPing “Yes” – instead of just one “Yes” option, there are now three. Choose “Yes, in-person” if you plan to attend the meeting in person as requested by the organizer, “Yes, virtually” if you would like to attend but cannot make it in person, or select “Yes” – with no attendance mode information – if you prefer to confirm participation without disclosing how you will attend.
Track in-person responses
As attendees respond to an in-person request, the organizer can track responses in the tracking pane, which displays each person’s attendance preference along with their response.
If you want to follow the status of this feature, you can keep track of it in the Microsoft 365 roadmap.
We hope this feature will improve your hybrid meeting experience, make managing your calendar easier, and help with your time management tasks.
Cheers!
Microsoft Tech Community – Latest Blogs –Read More
Answers in Viva: 2024 Update
We built Answers in Viva to help address one of the biggest challenges of the modern hybrid workplace: connecting to the right information, person, or conversation you need to do your work, quickly and smoothly. We aimed to reduce interruptions at work by automating the discovery of repeated questions and answers, allowing employees to discover the knowledge they need in the flow of work.
Answers in Viva launched in February 2023 and quickly garnered interest from customers facing these very problems. Over the past year we have worked to further enhance Answers, integrating Answers deeply into Viva Engage communities and surfacing content from Answers within other areas of Microsoft 365 and (coming soon) Copilot. We are excited for what the future holds and want to share what’s new and what’s coming next for Answers in Viva.
Answers in Viva Engage communities
One of the most significant changes is bringing Answers into Viva Engage communities. We are excited to share that this capability is now available to all Viva suite and Employee Communications and Communities (C&C) licensed users. Through this integration, the robust capabilities of Answers in Viva – including AI-driven related questions – are now an integral part of the Viva Engage community experience. This means every community member can leverage the collective intelligence of their organization to find answers, connect with experts, share insights, and solve problems collaboratively. Starting this month, we will also index existing community questions into Answers.
We have seen that Answers in communities provides an immediate answer to nearly 25% of questions being asked by directing the user to an existing similar question and answer. The resulting reduction in duplicate questions allows community admins and SMEs to devote their time to more meaningful efforts while still ensuring that community members get the information they’re looking for. The ‘Best Answer’ feature automatically highlights the correct answer without needing to pull in additional people.
Answers in Viva Engage can be used within both public and private communities. This allows customers to separate sensitive information saved in Answers without having to visit each private community to get information. They can access all Answers content that they are authorized to see within the Answers tab in Viva Engage. In the future, Answers content will also be available through search (see below).
Answers returning top related questions while within a community.
Answers in the Flow of Work
Copilot
We aim to meet users in the flow of their work. To that end, Answers will be a companion app to Copilot, enriching AI with timely, employee sourced information, verified and validated by subject matter experts.
The content from Answers – both the Answers tab in Viva Engage and communities – will become part of Copilot’s knowledge base. This means Answers will contribute to the ever-evolving information in Copilot. And what’s more, should users wish to confirm the accuracy of the content provided by Copilot, Answers will power the ability to verify Copilot responses with a real person.
Search
Starting early March 2024, Answers content will appear in search locations across M365. Answers content already appears in the Viva Engage search, but now Answers content will also appear in office.com, sharepoint.com and Bing at Work searches. If users do not find the information they are looking for in their search, they will be prompted to try posting their question into Answers.
Answers in Microsoft Search
Email Digests
Answers users also receive the Answers digest, which collects relevant questions recipients may be able to answer into one convenient, interactive email.
Topics & Answers
Topics help organize information across communities and enable questions to be asked outside of communities on the Answers tab. While Microsoft is retiring Viva Topics in February 2025, core topic experiences in Answers and Viva Engage, such as creating, adding or following topics will remain in place.
Importing Content into Answers
A common request from customers was to make it simpler to add content to Answers. The Answers Intelligent Importer creates Q&A pairs automatically from documents, boosting knowledge management productivity and efficiency. The importer lets you upload a text file, Word doc, or PDF. The system scans the document and creates question and answer pairs, which the user can then check, modify, and publish to either Answers or directly to a community. Answers Intelligent Importer is currently in private preview.
We are still in the initial stages of our journey, but we also have plans to enable customers to link knowledge from different systems into Answers. Watch for future announcements of the new Answers MS Graph APIs and connectors to third party apps.
Connecting to Knowledge and Subject Matter Experts (SMEs)
Answers makes it easy to connect to knowledge and get your questions answered by subject matter experts. By following Viva Engage topics, SMEs can see personalized feeds in the Answers tab in Viva Engage of questions relevant to their expertise: “Questions for me”.
In the future, we are looking at ways to make it even easier for SMEs to help their colleagues. These include ways to identify SMEs in a community and recognize their contributions. We are also looking at ways to distinguish Official Answers from more crowdsourced answers and conversations.
Introduction of Code Snippets in Answers Posts and Replies
At the end of February, users will be able to insert code snippets in posts and replies in Viva Engage, including Answers, a boon for technical communities. This feature enhances clarity in code sharing and discussions, catering to the needs of software and IT professionals in your organization.
Code snippets available now in Viva Engage
Answers Licensing
Answers is available to all Microsoft Viva suite and Employee Communications and Communities (C&C) customers. Admins can also request a trial to try out these capabilities.
Webinar
Join us to hear more in our webinar on March 20th. Sign up for one of the two sessions:
March 20, 8am PST / 5pm CET – https://aka.ms/VivaEngage/Answers/Webinar/2024EMEA
March 20, 4pm PST / March 21 APAC – https://aka.ms/VivaEngage/Answers/Webinar/2024APAC
Microsoft Tech Community – Latest Blogs –Read More
SQL Server enabled by Azure Arc, now assists in selecting the best Azure SQL target
Cloud migration is the process of moving workloads from on-premises or legacy systems to the cloud. It can be a complex and lengthy process, depending on the size and scope of the project. Cloud migration typically involves four steps: discovery, assessment, planning, and migration. Azure Arc simplifies migration by automating complex tasks for the user and allowing them to use Azure Arc-enabled services to start modernizing operations and optimize operational costs from day one of the migration journey.
In our mission to streamline the Azure SQL migrations, SQL Server enabled by Arc now features the “Migration assessment“. This feature helps you assess your SQL Server readiness for Azure SQL.
Streamlined discovery and migration readiness assessments: Given that the SQL Server already has the Arc agent installed, the discovery of the SQL Server and its technical readiness assessment are done automatically and continuously. The migration readiness reports are available in the Azure portal without additional configuration.
Azure SQL readiness assessment: Evaluate and measure the readiness of SQL Servers for migration to Azure SQL. This process:
Discovers and assesses the SQL Server instance and databases
Pinpoints SQL Server workloads that are ready for migration
Identifies potential compatibility issues with the target environment
Assesses migration risks
Provides recommendations to mitigate these risks
Azure SQL size recommendations: Provides best-fit recommendations, including the service tier and right-sizing based on performance history.
The SQL Server migration assessment is a free and continuous service that runs weekly for all SQL Server editions.
The readiness report identifies and helps resolve any migration issues or warnings for the SQL Server. It shows the exact objects and issues that need attention and provides clear instructions on fixing them for a smooth migration.
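Because Arc-enabled SQL Server instances are ordinary ARM resources, you can also enumerate the discovered estate yourself. The sketch below uses an Azure Resource Graph query from Python; the projected property names are assumptions to verify against your environment.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id>"],  # placeholder
    query=(
        "Resources "
        "| where type == 'microsoft.azurearcdata/sqlserverinstances' "
        "| project name, location, edition = properties.edition, version = properties.version"
    ),
)
for row in client.resources(request).data:
    print(row)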
This video will show you how the SQL Server enabled by Arc can modernize your database management operations and reduce operational costs from the first day of the migration journey. You will also learn how the Arc-enabled SQL Servers provide an integrated experience for the migration process.
Call for action:
To get access to the SQL Server migration assessment, all you need to do is Arc-enable your SQL Server.
Select the optimal Azure SQL target using Migration assessment (preview) – SQL Server enabled by Azure Arc
Reach out to your Microsoft account team and review the next steps based on the results from the migration assessment. We are here to support you with the actual migration projects.
Microsoft Tech Community – Latest Blogs –Read More
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS) that offers API-based access and token based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety – first, the model has built-in support for a “safe prompt” parameter and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), first available on Azure and the Mistral AI platform. It can be used on a full range of language-based task thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked in the model, with additional safety layer with safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI ‘s platform which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started and offers consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and VS code extension to build LLM apps with support for authoring, orchestration, evaluation and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key, and use the SDK of your choice Python tool in prompt flow.
Develop with integrated content safety
Mistral AI APIs on Azure come with two layered safety approach – instructing the model through the system prompt and an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction as documented here. Additionally, the Azure AI content safety system that consists of an ensemble of classification models screens for specific types of harmful content. The external system is designed to be effective against adversarial prompts attacks such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive either an error if the prompt was classified as harmful or the response will be partially or completely truncated with an appropriate message when the output generated is classified as harmful. Make sure you account for these scenarios where the content returned by the APIs is filtered when building your applications.
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completions tokens. You can review the pricing on the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and Json output?
The Mistral Large model can do function calling and generate Json output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
Mistral Large API comes with 200k tokens per minute and 1k requests per minute limit. Reach out to Azure customer support if this doesn’t suffice.
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects to Azure Machine Learning workspaces in East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in project or workspaces in other regions, you can use the API and key as a connection to prompt flow manually. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistal Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.
Microsoft Tech Community – Latest Blogs –Read More
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS) that offers API-based access and token based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety – first, the model has built-in support for a “safe prompt” parameter and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), first available on Azure and the Mistral AI platform. It can be used on a full range of language-based task thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked in the model, with additional safety layer with safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI ‘s platform which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started and offers consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and VS code extension to build LLM apps with support for authoring, orchestration, evaluation and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key, and use the SDK of your choice Python tool in prompt flow.
Develop with integrated content safety
Mistral AI APIs on Azure come with two layered safety approach – instructing the model through the system prompt and an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction as documented here. Additionally, the Azure AI content safety system that consists of an ensemble of classification models screens for specific types of harmful content. The external system is designed to be effective against adversarial prompts attacks such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive either an error if the prompt was classified as harmful or the response will be partially or completely truncated with an appropriate message when the output generated is classified as harmful. Make sure you account for these scenarios where the content returned by the APIs is filtered when building your applications.
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completions tokens. You can review the pricing on the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and Json output?
The Mistral Large model can do function calling and generate Json output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
Mistral Large API comes with 200k tokens per minute and 1k requests per minute limit. Reach out to Azure customer support if this doesn’t suffice.
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects to Azure Machine Learning workspaces in East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in project or workspaces in other regions, you can use the API and key as a connection to prompt flow manually. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistal Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.
Microsoft Tech Community – Latest Blogs –Read More
Mistal Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS) that offers API-based access and token based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety – first, the model has built-in support for a “safe prompt” parameter and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), first available on Azure and the Mistral AI platform. It can be used on a full range of language-based task thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked in the model, with additional safety layer with safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI ‘s platform which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started with and offers a consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and a VS Code extension to build LLM apps with support for authoring, orchestration, evaluation and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key and call the SDK of your choice from the Python tool in prompt flow.
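As referenced in the list above, here is a minimal PowerShell sketch of calling the REST API directly. The endpoint URL and key are placeholders you copy from your deployment page, and the request body assumes the chat completions schema described in the API reference linked above:
# Placeholder values - copy the real endpoint URL and key from your deployment page.
$endpoint = "https://<your-deployment>.<region>.inference.ai.azure.com/v1/chat/completions"
$apiKey   = "<your-api-key>"
# Build a chat completions request body (schema per the API reference linked above).
$body = @{
    messages   = @(@{ role = "user"; content = "Summarize Models-as-a-Service in one sentence." })
    max_tokens = 200
} | ConvertTo-Json -Depth 5
# Send the request and print the completion text.
$response = Invoke-RestMethod -Uri $endpoint -Method Post -Body $body -ContentType 'application/json' -Headers @{ Authorization = "Bearer $apiKey" }
$response.choices[0].message.content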
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach: the model is instructed through the system prompt, and an additional content filtering system screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. This external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive an error if the prompt was classified as harmful, or a partially or completely truncated response with an appropriate message if the generated output was classified as harmful. Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
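As a hedged sketch of what accounting for filtered content can look like, reusing the placeholder endpoint, key, and body from the earlier example (the exact error payload and finish reason values may differ; check the API reference):
try {
    $response = Invoke-RestMethod -Uri $endpoint -Method Post -Body $body -ContentType 'application/json' -Headers @{ Authorization = "Bearer $apiKey" }
    # A truncated completion is indicated via the finish reason (exact value may vary).
    if ($response.choices[0].finish_reason -ne "stop") {
        Write-Warning "The completion was cut short, possibly because the output was filtered."
    }
    $response.choices[0].message.content
}
catch {
    # A prompt classified as harmful is rejected with an HTTP error before generation.
    Write-Warning "Request failed, possibly due to content filtering: $($_.Exception.Message)"
}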
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing on the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and Json output?
The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
The Mistral Large API comes with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in Azure AI Studio projects or Azure Machine Learning workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can manually add the API and key as a connection in prompt flow. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistral Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to the AI Studio model catalog to get started.
Windows Update Compliance Reporting FAQ
Hi, Jonas here!
Or as we say in the north of Germany: “Moin Moin!”
I wrote a frequently asked questions document about the report solution I created some years ago, mainly because I still receive questions about the solution and how to deal with update-related errors. If you haven’t seen my ConfigMgr report solution yet, here is the link: “Mastering Configuration Manager Patch Compliance Reporting”
Hopefully the Q&A list will help others to reach 100% patch compliance.
If that’s even possible 😉
First things first
If you spend a lot of time dealing with update-related issues or processes for client operating systems, have a look at “Windows Autopatch” HERE and let Microsoft deal with that.
For server operating systems, on-premises or in the cloud, have a look at “Azure Update Manager” and “A Staged Patching Solution with Azure Update Manager“.
The section: “Some key facts and prerequisites” of the blog mentioned earlier covers the basics of the report solution and should answer some questions already.
Everything else is hopefully covered by the list below.
So, let’s jump in…
The OSType field does not contain any data. What should I do?
This can mean two things.
No ConfigMgr client is installed. In that case, other fields like the client version also contain no data. Install the client, or run the report against a collection that does not contain the system to exclude it from the report.
The system has not sent any hardware inventory data. Check the ConfigMgr client functionality.
Some fields contain a value of 999. What does that mean?
There is no data for the system found in the ConfigMgr database when a value of 999 is shown.
“Days Since Last Online” with a value of 999 typically means that the system had no contact with ConfigMgr at all. Which either means the system has no ConfigMgr client installed or the client cannot contact any ConfigMgr management point.
“Days Since Last AADSLogon” with a value of 999 means there is no data from AD System or Group Discovery in the ConfigMgr database for the system.
“Days Since Last Boot” with a value of 999 means there is no hardware inventory data from WMI class win32_operatingsystem in the ConfigMgr database for the system.
“Month Since Last Update Install” with a value of 999 means there is no hardware inventory data from WMI class win32_quickfixengineering in the ConfigMgr database for the system.
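If you want to verify locally that a device actually returns data for those inventory classes, a quick PowerShell check looks like this (a minimal sketch; ConfigMgr collects the same WMI classes via hardware inventory):
# Data source for "Days Since Last Boot"
Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object CSName, LastBootUpTime
# Data source for "Month Since Last Update Install"
Get-CimInstance -ClassName Win32_QuickFixEngineering | Sort-Object InstalledOn -Descending | Select-Object -First 5 HotFixID, InstalledOn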
What does a WSUS scan error stand for?
Before a system is able to install updates, it needs to scan against the WSUS server to be able to report missing updates. If that scan fails, an error is shown in the report.
In that case, other update information might also be missing, and such an error should be fixed before any other update-related analysis.
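To re-trigger the scan on an affected system after fixing an error, you can manually start the Software Updates Scan Cycle of the ConfigMgr client. A minimal sketch, assuming an elevated PowerShell session (schedule ID 113 is the Software Updates Scan Cycle):
# Trigger the ConfigMgr Software Updates Scan Cycle; progress is logged in
# C:\Windows\CCM\Logs\WUAHandler.log and ScanAgent.log.
Invoke-CimMethod -Namespace 'root\ccm' -ClassName SMS_Client -MethodName TriggerSchedule -Arguments @{ sScheduleID = '{00000000-0000-0000-0000-000000000113}' }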
I found WSUS scan errors with a comment about a WSUS GPO
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process fails if a group policy tries to do the same with a different WSUS server name.
Remove the GPO for those systems to resolve the error.
I found WSUS scan errors mentioning the registry.pol file
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process results in a policy entry in: “C:\Windows\System32\GroupPolicy\Machine\Registry.pol”
If the file cannot be accessed the WSUS entry cannot be set and the process fails.
Delete the file in that case and run “gpupdate /force” as an administrator.
IMPORTANT: Local group policy settings made manually or via ConfigMgr task sequence need to be set again if the file has been deleted.
NOTE: To avoid any other policy problems (with Defender settings for example) it is best to re-install the ConfigMgr client after the file has been re-created.
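The steps above can also be scripted. A minimal sketch, assuming an elevated PowerShell session and the default Registry.pol location:
# Delete the machine Registry.pol file and force a policy refresh (elevated session required).
# Remember: manually set local policy settings must be re-applied afterwards.
Remove-Item -Path 'C:\Windows\System32\GroupPolicy\Machine\Registry.pol' -Force
gpupdate /force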
I found WSUS scan errors mentioning a proxy problem, how can I fix that?
This typically happens when a system proxy is set and the WSUS agent tries to connect to the WSUS server via that proxy and fails.
You can do the following:
Open a command prompt as admin and run “netsh winhttp show proxy”
If a proxy is present, either remove the proxy with: “netsh winhttp reset proxy”
Or add either the WSUS server FQDN or just the domain to the bypass list.
Example: netsh winhttp set proxy proxy-server="proxy.domain.local:8080" bypass-list="<local>;wsus.domain.local;*.domain.local"
Use either “wsus.domain.local” or “*.domain.local” in case your WSUS server is part of domain.local.
In some cases the proxy is set for the local SYSTEM account:
Open “regedit” as administrator
Open: [HKEY_USERS\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
Set “ProxyEnable” to “0” to disable the use of a proxy for the SYSTEM account
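The same change can be scripted. A minimal sketch, assuming an elevated PowerShell session (S-1-5-18 is the well-known SID of the local SYSTEM account):
# Map the HKEY_USERS hive and disable the proxy for the SYSTEM account.
New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS -ErrorAction SilentlyContinue | Out-Null
Set-ItemProperty -Path 'HKU:\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Internet Settings' -Name ProxyEnable -Value 0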
When should I restart systems with a pending reboot?
As soon as possible. A reboot might be pending from a source other than software update installation, such as a normal application or server role installation, and might prevent security update installations.
Some systems have all updates installed, but some deployments still show as non-compliant (column: “Deployments Non Compliant”). What can I do?
This can happen if older update deployments exist and have no compliance changes over a longer period of time. Systems in that state are typically shown as “unknown” in the ConfigMgr console under “Deployments”.
Do one of the following to resolve that:
Remove older updates from an update group in case they are no longer needed
Remove the deployment completely
Delete the deployment and create a new one.
All actions will result in a re-evaluation of the deployment.
Column “Update Collections” does not show any entries.
The system is not a member of a collection with update deployments applied and is therefore not able to install updates. Make sure the system is part of an update deployment collection.
What is the difference between “Missing Updates All” and “Missing Updates Approved”?
“Missing Updates All” are ALL updates missing for a system whether deployed or not.
“Missing Updates Approved” are just the updates which are approved, deployed, or assigned (depending on the term you use) to a system and still missing. “Missing Updates Approved” should be zero at the end of your patch cycle, while “Missing Updates All” can always have a value other than zero.
Some systems are shown without any WSUS scan or install error, but still have updates missing. What can I do to fix that?
There can be multiple reasons for that.
Make sure the system is part of a collection with update deployments first
Check the update deployment start and deadline times: the system sees an update once the start time has passed and is forced to install it once the deadline has passed.
This is visible in the report “Software Updates Compliance – Per device deployments”, which can be opened individually or by clicking on the number in the column “Deployments Non Compliant” in any of the list views of the report solution.
The earliest deadline for a specific update and device is visible in the report “Software Updates Compliance – Per device” or by clicking on the number in the column “Missing Updates Approved”.
Make sure the system either has no maintenance window at all or a maintenance window which fits the start and deadline time.
Make sure a maintenance window is at least 2h long to be able to install updates in it
Also, check the timeout configured for deployed updates on each update in the ConfigMgr console.
For example, if an update has a timeout of two hours configured and the maintenance window is set to two hours, installation of the update will NOT be triggered.
Check the restart notification client settings. This is especially important in server environments, where a logged-on user might not see a restart warning and therefore might not act on it. The restart time will be added to the overall timeout of each update and could exceed the overall allowed installation time of a maintenance window.
Check the available space on drive C:. Too little space can cause all sorts of problems.
Start “cleanmgr.exe” as admin and delete unused files.
If nothing else worked: Reboot the system and trigger updates manually
If nothing else worked: Re-install the ConfigMgr client
If nothing else worked: Follow the document: “Troubleshoot issues with WSUS client agents”
Some systems are shown as noncompliant, but all updates are installed. What can I do to fix that?
This can either be a reporting delay or a problem with update compliance state messages.
If the update installation just finished, wait at least 45 to 60 minutes. This is because the default state message send interval is set to 15 minutes and the report result is typically cached for 30 minutes.
If the update installation time is in the past, this could be due to missing state messages.
In that case, run the following PowerShell line locally on the affected machines to re-send update compliance state messages:
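# Re-sends all update compliance state messages to the management point; run in an elevated PowerShell session.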
(New-Object -ComObject “Microsoft.CCM.UpdatesStore”).RefreshServerComplianceState
Installation errors indicate an issue with the CBS store. How can this be fixed?
If the CBS store is marked corrupt, no security updates can be installed and the store needs to be fixed.
The following articles describe the process in more detail:
HERE and HERE
The CBS log is located under: “C:\Windows\Logs\CBS\CBS.log”.
The large log file size sometimes causes issues when parsing the file for the correct log entry.
In addition to that, older logfiles are stored as CAB-files and can also be quite large in size.
The following script can be used to parse even very large files and related CAB-files for store corruption entries.
Get-CBSLogState.ps1
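If you just need a quick manual check instead of the full script, something like the following can surface corruption markers in the current log (a sketch; the search patterns are examples, not an exhaustive list, and CAB-archived logs are not covered):
# Scan the CBS log for common store corruption markers (patterns are examples only).
Select-String -Path 'C:\Windows\Logs\CBS\CBS.log' -Pattern 'corrupt', 'Checking System Update Readiness' | Select-Object -Last 20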
Are there any additional resources related to update compliance issues?
Yes, the following articles can help further troubleshoot update related issues:
Troubleshoot software update management
Troubleshoot software update synchronization
Troubleshoot software update scan failures
Troubleshoot software update deployments
Deprecation of Microsoft Support Diagnostic Tool (MSDT) and MSDT Troubleshooters – Microsoft Support
What can I do to increase my update compliance percentage?
This is a non-exhaustive list of actions that can help to positively impact software update compliance:
As mentioned before, do not leave a system in a pending reboot state for too long.
As mentioned before, make sure there is always enough space left on drive C: (~2GB+ for monthly security updates, ~10GB+ for feature updates)
Start “cleanmgr.exe” as admin and delete unused files, for example.
Make sure a system has enough uptime to be able to download and install security updates.
If a system has limited bandwidth available it might need to stay online/active a while longer than other systems with more bandwidth available
You also might need to consider power settings for systems running on battery
What is a realistic update compliance percentage?
While the aim is to get to 100% fully patched systems, this goal can be quite hard to reach in some situations. Some of the reasons behind poor patch compliance are technical issues like the ones mentioned above under “What can I do to increase my update compliance percentage?”. Other factors include the device delivery process: if you put ready-to-go systems on a shelf for a longer period, those devices will decrease the overall patch compliance percentage.
To reach a high compliance percentage, know your workforce and know your update processes.
Reduce the blind spot and make sure each actively used system does not fall out of management due to errors, misconfigurations, or simply bad monitoring. Keep those devices on the shelf in mind and exclude them from compliance reporting for the duration of inactivity.
That’s it!
I hope you enjoyed reading this blog post. Stay safe!
Jonas Ohmsen