Azure DevOps blog closing -> moving to DevBlogs
Hello! As part of consolidation efforts, we will soon be closing this Azure DevOps blog on Tech Community. We appreciate your continued readership and interest in this topic.
For Azure DevOps blog posts (including the last 10 posted here), please go here: Azure DevOps Blog (microsoft.com)
Microsoft Tech Community – Latest Blogs –Read More
Creating Intelligent Apps on App Service with .NET
You can use Azure App Service to work with popular AI frameworks like LangChain and Semantic Kernel connected to OpenAI for creating intelligent apps. In the following tutorial we will be adding an Azure OpenAI service using Semantic Kernel to a .NET 8 Blazor web application.
Prerequisites
An Azure OpenAI resource or an OpenAI account.
A .NET 8 Blazor Web App. Create the application with a template here.
Setup Blazor web app
For this Blazor web application, we’ll be building off the Blazor template and creating a new razor page that can send and receive requests to an Azure OpenAI OR OpenAI service using Semantic Kernel.
Right-click the Pages folder under the Components folder and add a new item named OpenAI.razor.
Add the following code to the OpenAI.razor file and click Save:
@page "/openai"
@rendermode InteractiveServer
<PageTitle>Open AI</PageTitle>
<h3>Open AI Query</h3>
<input placeholder="Input query" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@code {
public string? newQuery;
public string? serverResponse;
}
Next, we’ll need to add the new page to the navigation so we can navigate to the service.
Go to the NavMenu.razor file under the Layout folder and add the following div within the nav class, then click Save:
<div class="nav-item px-3">
<NavLink class="nav-link" href="openai">
<span class="bi bi-list-nested-nav-menu" aria-hidden="true"></span> Open AI
</NavLink>
</div>
After the Navigation is updated, we can start preparing to build the OpenAI client to handle our requests.
API Keys and Endpoints
In order to make calls to OpenAI with your client, you will need to first grab the Keys and Endpoint values from Azure OpenAI or OpenAI and add them as secrets for use in your application. Retrieve and save the values for later use.
For Azure OpenAI, see this documentation to retrieve the key and endpoint values. For our application, you will need the following values:
deploymentName
endpoint
apiKey
modelId
For OpenAI, see this documentation to retrieve the api keys. For our application, you will need the following values:
apiKey
modelId
Since we’ll be deploying to App Service, we can secure these secrets in Azure Key Vault for protection. Follow the Quickstart to set up your Key Vault and add the secrets you saved earlier.
Next, we can use Key Vault references as app settings in our App Service resource and reference them in our application. Follow the instructions in the documentation to grant your app access to your Key Vault and to set up Key Vault references.
Then, go to the portal Environment Variables blade in your resource and add the following app settings:
For Azure OpenAI, use the following:
DEPLOYMENT_NAME = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
ENDPOINT = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
API_KEY = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
MODEL_ID = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
For OpenAI, use the following:
OPENAI_API_KEY = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
OPENAI_MODEL_ID = @Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
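The Key Vault reference format is the same for every setting; only the SecretUri changes. As a rough illustration, here is a small Python sketch that composes reference strings of that shape. The vault name and the secret-name convention are placeholders, not values prescribed by this tutorial:

```python
# Sketch: build App Service Key Vault reference strings for each app setting.
# "myvault" and the derived secret names are hypothetical placeholders.

def key_vault_reference(vault_name: str, secret_name: str) -> str:
    """Format an app setting value as an App Service Key Vault reference."""
    secret_uri = f"https://{vault_name}.vault.azure.net/secrets/{secret_name}/"
    return f"@Microsoft.KeyVault(SecretUri={secret_uri})"

# One reference per app setting used by the Azure OpenAI variant of the tutorial.
app_settings = {
    name: key_vault_reference("myvault", name.lower().replace("_", "-"))
    for name in ("DEPLOYMENT_NAME", "ENDPOINT", "API_KEY", "MODEL_ID")
}

if __name__ == "__main__":
    for name, value in app_settings.items():
        print(f"{name} = {value}")
```

Each generated value can be pasted directly into the Environment Variables blade; App Service resolves the reference to the secret's current value at runtime.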
Once your app settings are saved, you can bring them into the code by injecting IConfiguration and referencing the app settings. Add the following code to your OpenAI.razor file:
@inject Microsoft.Extensions.Configuration.IConfiguration _config
@code {
private async Task SemanticKernelClient()
{
string deploymentName = _config["DEPLOYMENT_NAME"];
string endpoint = _config["ENDPOINT"];
string apiKey = _config["API_KEY"];
string modelId = _config["MODEL_ID"];
// OpenAI
string OpenAIModelId = _config["OPENAI_MODEL_ID"];
string OpenAIApiKey = _config["OPENAI_API_KEY"];
}
}
Semantic Kernel
Semantic Kernel is an open-source SDK that enables you to easily develop AI agents to work with your existing code. You can use Semantic Kernel with Azure OpenAI and OpenAI models.
To create the OpenAI client, we’ll first start by installing Semantic Kernel.
To install Semantic Kernel, browse the NuGet package manager in Visual Studio and install the Microsoft.SemanticKernel package. For NuGet Package Manager instructions, see here. For CLI instructions, see here.
Once the Semantic Kernel package is installed, you can now initialize the kernel.
Initialize the Kernel
To initialize the Kernel, add the following code to the OpenAI.razor file.
@using Microsoft.SemanticKernel
@code {
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
var kernel = builder.Build();
}
}
Here we are adding the using statement and creating the Kernel in a method that we can use when we send the request to the service.
Add your AI service
Once the Kernel is initialized, we can add our chosen AI service to the kernel. Here we will define our model and pass in our key and endpoint information to be consumed by the chosen model.
For Azure OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
For OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion(
modelId: OpenAIModelId,
apiKey: OpenAIApiKey
);
var kernel = builder.Build();
Configure prompt and create Semantic function
Now that our chosen OpenAI service client is created with the correct keys, we can add a function to handle the prompt. With Semantic Kernel, you handle prompts with semantic functions, which turn the prompt and its configuration settings into a function the Kernel can execute. Learn more about configuring prompts here.
First, we’ll create a variable to hold the user’s prompt. Then we’ll add a function with execution settings to handle and configure the prompt. Add the following code to the OpenAI.razor file:
@using Microsoft.SemanticKernel.Connectors.OpenAI
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
}
Lastly, we’ll need to invoke the function and return the response. Add the following to the OpenAI.razor file:
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
var result = await kernel.InvokeAsync(summarize);
serverResponse = result.ToString();
}
Here is the example in its completed form. In this example, use either the Azure OpenAI chat completion service or the OpenAI chat completion service, not both.
@page "/openai"
@rendermode InteractiveServer
@inject Microsoft.Extensions.Configuration.IConfiguration _config
@using Microsoft.SemanticKernel
@using Microsoft.SemanticKernel.Connectors.OpenAI
<PageTitle>OpenAI</PageTitle>
<h3>OpenAI input query: </h3>
<input class="col-sm-4" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<br />
<h4>Server response:</h4> <p>@serverResponse</p>
@code {
private string? newQuery;
private string? serverResponse;
private async Task SemanticKernelClient()
{
// Azure OpenAI
string deploymentName = _config["DEPLOYMENT_NAME"];
string endpoint = _config["ENDPOINT"];
string apiKey = _config["API_KEY"];
string modelId = _config["MODEL_ID"];
// OpenAI
// string OpenAIModelId = _config["OPENAI_MODEL_ID"];
// string OpenAIApiKey = _config["OPENAI_API_KEY"];
// Semantic Kernel client
var builder = Kernel.CreateBuilder();
// Azure OpenAI
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
// OpenAI
// builder.Services.AddOpenAIChatCompletion(
// modelId: OpenAIModelId,
// apiKey: OpenAIApiKey
// );
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
var result = await kernel.InvokeAsync(summarize);
serverResponse = result.ToString();
}
}
Now save the application and follow the next steps to deploy it to App Service. If you would like to test it locally first, you can swap out the config values with the literal string values of your OpenAI service. For example: string modelId = "gpt-4-turbo";
Deploy to App Service
If you have followed the steps above, you are ready to deploy to App Service. If you run into any issues, remember that you need to have granted your app access to your Key Vault and added the app settings with Key Vault references as their values. App Service will resolve the app settings in your application that match what you’ve added in the portal.
Authentication
Although optional, it is highly recommended that you also add authentication to your web app when using an Azure OpenAI or OpenAI service. This can add a level of security with no additional code. Learn how to enable authentication for your web app here.
Once deployed, browse to the web app and navigate to the Open AI tab. Enter a query and you should see a populated response from the server. The tutorial is now complete: you now know how to use OpenAI services to create intelligent applications.
MDTI Earns Impactful Trio of ISO Certificates
We are excited to announce that Microsoft Defender Threat Intelligence (MDTI) has achieved ISO 27001, ISO 27017, and ISO 27018 certifications. The ISO, the International Organization for Standardization, develops market-relevant international standards that support innovation and provide solutions to global challenges, including information security requirements around establishing, implementing, and improving an Information Security Management System (ISMS).
These certificates underscore the MDTI team’s continued commitment to protecting customer information and adhering to the strictest security and privacy standards.
Certificate meaning and importance
ISO 27001: This certification demonstrates that MDTI’s ISMS complies with industry best practices, providing a structured approach to information security risk management.
ISO 27017: This certificate is a worldwide standard that provides guidance on securing information in the cloud. It demonstrates that we have put in place strong controls and countermeasures to ensure our customers’ data is safe when stored in the cloud.
ISO 27018: This certificate sets out common objectives, controls and guidelines for protecting personally identifiable information (PII) processed in public clouds consistent with the privacy principles outlined in ISO 29100. This is confirmed by our ISO 27018 certification, which shows that we are committed to respecting our customers’ privacy rights and protecting their personal data through cloud computing.
What are the advantages of these certifications for our customers?
Enhanced Safety and Privacy Assurance: Our customers can be confident that the most sophisticated and exhaustive security and privacy standards offered in the market are in place to protect their data, keeping their information secure from emerging threats.
Reduced Risk and Liability Exposure: Our certified ISMS and Privacy Information Management System (PIMS) help customers significantly reduce their exposure to data breaches, legal actions, regulatory fines, and reputational risk. They can rely on our proven structures to boost resilience against cybercrime and reduce the risk of lawsuits.
Streamlined Compliance and Competitive Edge: Our certifications help customers meet the rigorous regulatory and contractual requirements of their industry or market. Accreditation against international standards signals that an organization is serious about data security, improving its reputation and opening opportunities to partner with other businesses that value privacy protection.
How do I get started with MDTI?
If you are interested in learning more about MDTI, how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and the features and benefits it offers, please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote.
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS) that offers API-based access and token based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety – first, the model has built-in support for a “safe prompt” parameter and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), available first on Azure and the Mistral AI platform. It can be used on a full range of language-based tasks thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked into the model, with an additional safety layer via the safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI platform, which allows you to use any of the Mistral AI platform clients with Azure APIs. Sample notebook for the Mistral Python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started and offers consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and a VS Code extension to build LLM apps, with support for authoring, orchestration, evaluation, and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key and use it from the Python tool in prompt flow with the SDK of your choice.
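For a dependency-free sketch of what the clients above do under the hood, the following Python outline composes a chat-completions request against a MaaS endpoint. The endpoint URL, key, and exact payload fields are assumptions based on the API reference linked above; verify them against your deployment page before use:

```python
import json
import urllib.request

# Hypothetical endpoint and key; replace with the values from your deployment page.
ENDPOINT = "https://your-deployment.eastus2.inference.ai.azure.com"
API_KEY = "your-api-key"

def build_chat_request(prompt: str, max_tokens: int = 100, safe_prompt: bool = True) -> dict:
    """Compose a chat-completions request body (field names assumed from the API reference)."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "safe_prompt": safe_prompt,  # Mistral's built-in guardrail switch
    }

def ask(prompt: str) -> str:
    """POST the request and return the first completion's text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{ENDPOINT}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the benefits of Models-as-a-Service in one sentence."))
```

In practice, the linked notebooks (Mistral client, LangChain, LiteLLM) wrap exactly this request/response cycle with richer ergonomics.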
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach – instructing the model through the system prompt, plus an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction, as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. The external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive either an error (if the prompt was classified as harmful) or a partially or completely truncated response with an appropriate message (if the generated output was classified as harmful). Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
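A defensive pattern for the filtered-output scenario described above might look like the sketch below. The finish_reason value and the response shape are illustrative assumptions, not the documented schema; check the actual responses your deployment returns:

```python
# Sketch of defensive handling for content-filtered completions. The exact
# response shape and finish_reason values are assumptions for illustration.

def extract_completion(response: dict) -> str:
    """Return the completion text, flagging output cut off by the content filter."""
    choice = response.get("choices", [{}])[0]
    finish_reason = choice.get("finish_reason")
    text = choice.get("message", {}).get("content", "")
    if finish_reason == "content_filter":
        # Output was truncated by the content filtering system.
        return text + "\n[response truncated by content filter]"
    return text

# Simulated responses for illustration.
ok = {"choices": [{"message": {"content": "Hello!"}, "finish_reason": "stop"}]}
filtered = {"choices": [{"message": {"content": "Partial answer"}, "finish_reason": "content_filter"}]}

print(extract_completion(ok))        # → Hello!
print(extract_completion(filtered))  # flags the truncation
```

The point is simply that application code should branch on the filter outcome rather than assume every response contains a complete answer.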
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing for the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and JSON output?
The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
The Mistral Large API comes with limits of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
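Hitting these limits typically surfaces as HTTP 429 responses, and client-side exponential backoff is the usual mitigation. Here is a minimal Python sketch; RateLimitError is a stand-in for whatever exception your HTTP client raises on a 429:

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for the 429 error raised by your HTTP client of choice."""

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Invoke `call`, retrying on RateLimitError with capped exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries:
                raise  # give up after the final retry
            delay = min(base_delay * 2 ** attempt, 30.0) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: a fake call that is rate limited twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"
```

Wrapping each API call as `with_backoff(lambda: ask(prompt))` smooths over transient throttling without hammering the endpoint.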
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects or Azure Machine Learning workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can manually add the API and key as a connection in prompt flow. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistral Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.
Windows Update Compliance Reporting FAQ
Hi, Jonas here!
Or as we say in the north of Germany: “Moin Moin!”
I wrote a frequently asked questions document about the report solution I created some years ago, mainly because I still receive questions about it and about how to deal with update-related errors. If you haven’t seen my ConfigMgr report solution yet, here is the link: “Mastering Configuration Manager Patch Compliance Reporting”
Hopefully the Q&A list will help others to reach 100% patch compliance.
If that’s even possible 😉
First things first
If you spend a lot of time dealing with update related issues or processes for client operating systems, have a look at “Windows Autopatch” HERE and let Microsoft deal with that.
For server operating systems, on-premises or in the cloud, have a look at “Azure Update Manager” and “A Staged Patching Solution with Azure Update Manager“.
The section: “Some key facts and prerequisites” of the blog mentioned earlier covers the basics of the report solution and should answer some questions already.
Everything else is hopefully covered by the list below.
So, let’s jump in…
The OSType field does not contain any data. What should I do?
This can mean two things.
No ConfigMgr client is installed. In that case, other fields like the client version also contain no data. Install the client, or run the report against a collection that does not contain the system, to exclude it from the report.
The system has not sent any hardware inventory data. Check the ConfigMgr client functionality.
Some fields contain a value of 999. What does that mean?
There is no data for the system found in the ConfigMgr database when a value of 999 is shown.
“Days Since Last Online” with a value of 999 typically means that the system had no contact with ConfigMgr at all. Which either means the system has no ConfigMgr client installed or the client cannot contact any ConfigMgr management point.
“Days Since Last AADSLogon” with a value of 999 means there is no data from AD System or Group Discovery in the ConfigMgr database for the system.
“Days Since Last Boot” with a value of 999 means there is no hardware inventory data from WMI class win32_operatingsystem in the ConfigMgr database for the system.
“Month Since Last Update Install” with a value of 999 means there is no hardware inventory data from WMI class win32_quickfixengineering in the ConfigMgr database for the system.
What does a WSUS scan error mean?
Before a system can install updates, it needs to scan against the WSUS server to be able to report missing updates. If that scan fails, an error will be shown in the report.
In that case, other update information might also be missing, and such an error should be fixed before any other update-related analysis.
I found WSUS scan errors with a comment about a WSUS GPO
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process fails if a group policy tries to do the same with a different WSUS server name.
Remove the GPO for those systems to resolve the error.
I found WSUS scan errors mentioning the registry.pol file
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process results in a policy entry in: “C:\Windows\System32\GroupPolicy\Machine\Registry.pol”
If the file cannot be accessed the WSUS entry cannot be set and the process fails.
Delete the file in that case and run “gpupdate /force” as an administrator.
IMPORTANT: Local group policy settings made manually or via ConfigMgr task sequence need to be set again if the file has been deleted.
NOTE: To avoid any other policy problems (with Defender settings for example) it is best to re-install the ConfigMgr client after the file has been re-created.
I found WSUS scan errors mentioning a proxy problem, how can I fix that?
This typically happens when a system proxy is set and the WSUS agent tries to connect to the WSUS server via that proxy and fails.
You can do the following:
Open a command prompt as admin and run “netsh winhttp show proxy”
If a proxy is present, either remove the proxy with: “netsh winhttp reset proxy”
Or, add either the WSUS server FQDN or just the domain to the bypass list.
Example: netsh winhttp set proxy proxy-server=”proxy.domain.local:8080″ bypass-list=”<local>;wsus.domain.local;*.domain.local”
Use either “wsus.domain.local” or “*.domain.local” in case your WSUS server is part of domain.local.
In some cases the proxy is set for the local SYSTEM account
Open: “regedit” as administrator
Open: [HKEY_USERS\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections]
Set “ProxyEnable” to “0” to disable the use of a proxy for the system account
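The registry steps above can also be scripted. A hedged sketch — it follows the key named in the steps above and only changes the value if that key exists:

```powershell
# Sketch of the manual steps above: disable the proxy for the local SYSTEM
# account (SID S-1-5-18). Run from an elevated PowerShell session on the client.
$key = 'HKU:\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections'
if ($env:OS -eq 'Windows_NT') {
    # Map HKEY_USERS as a PowerShell drive so the SYSTEM hive is reachable
    if (-not (Get-PSDrive -Name HKU -ErrorAction SilentlyContinue)) {
        New-PSDrive -Name HKU -PSProvider Registry -Root HKEY_USERS | Out-Null
    }
    if (Test-Path $key) {
        # 0 = do not use a proxy for the SYSTEM account
        Set-ItemProperty -Path $key -Name ProxyEnable -Value 0
    }
}
```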
When should I restart systems with a pending reboot?
As soon as possible. A pending reboot might originate from a source other than software update installation, such as an application or server role installation, and might prevent security updates from installing.
Some systems have all updates installed, but some deployments still show as non-compliant (column: “Deployments Non Compliant”). What can I do?
This can happen if older update deployments exist and have no compliance changes over a longer period of time. Systems in that state are typically shown as “unknown” in the ConfigMgr console under “Deployments”.
Do one of the following to resolve that:
Remove older updates from an update group in case they are no longer needed
Remove the deployment completely
Delete the deployment and create a new one.
All actions will result in a re-evaluation of the deployment.
Column “Update Collections” does not show any entries.
The system is not a member of a collection with update deployments applied and is therefore not able to install updates. Make sure the system is part of an update deployment collection.
What is the difference between “Missing Updates All” and “Missing Updates Approved”?
“Missing Updates All” are ALL updates missing for a system whether deployed or not.
“Missing Updates Approved” are just the updates which are approved, deployed, or assigned (depending on the term you use) to a system and still missing. “Missing Updates Approved” should be zero at the end of your patch cycle, while “Missing Updates All” can always have a value other than zero.
Some systems are shown without any WSUS scan or install error but still have updates missing. What can I do to fix that?
There can be multiple reasons for that.
Make sure the system is part of a collection with update deployments first
Check the update deployment start and deadline times: the system only sees the update once the start time has passed, and is only forced to install it once the deadline has passed.
This is visible in the report: “Software Updates Compliance – Per device deployments”, which can be opened individually or by clicking on the number in column: “Deployments Non Compliant” in any of the list views of the report solution.
The earliest deadline for a specific update and device is visible in the report: “Software Updates Compliance – Per device” or by clicking on the number in column: “Missing Updates Approved”.
Make sure the system either has no maintenance window at all or a maintenance window which fits the start and deadline time.
Make sure a maintenance window is at least 2h long to be able to install updates in it
Also, check the timeout configured for deployed updates on each update in the ConfigMgr console.
For example, if an update has a timeout of two hours configured and the maintenance window is set to two hours, installation of the update will NOT be triggered.
Check the restart notification client settings. This is especially important in server environments, where a logged-on user might not see a restart warning and therefore might not act on it. The restart time is added to the overall timeout of each update and could exceed the overall allowed installation time of a maintenance window.
Check the available space on drive C:. Too little space can cause all sorts of problems.
Start “cleanmgr.exe” as admin and delete unused files.
If nothing else worked: Reboot the system and trigger updates manually
If nothing else worked: Re-install the ConfigMgr client
If nothing else worked: Follow the document: “Troubleshoot issues with WSUS client agents”
Some systems are shown as non-compliant even though all updates are installed. What can I do to fix that?
This can either be a reporting delay or a problem with update compliance state messages.
If the update installation just finished, wait at least 45 to 60 minutes. This is because the default state message send interval is set to 15 minutes and the report result is typically cached for 30 minutes.
If the update installation time is in the past, this could be due to missing state messages.
In that case, run the following PowerShell line locally on the affected machines to re-send update compliance state messages
(New-Object -ComObject “Microsoft.CCM.UpdatesStore”).RefreshServerComplianceState
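If many machines are affected, the same line can be fanned out with PowerShell remoting. A sketch — it assumes WinRM remoting is enabled, admin rights on the targets, and the input file name is a made-up example:

```powershell
# Re-send update compliance state messages on many clients at once.
# Assumes WinRM remoting is enabled and you have admin rights on the targets;
# 'affected-machines.txt' (one hostname per line) is a made-up example file.
$resend = {
    (New-Object -ComObject 'Microsoft.CCM.UpdatesStore').RefreshServerComplianceState()
}
if (Test-Path '.\affected-machines.txt') {
    $computers = Get-Content -Path '.\affected-machines.txt'
    Invoke-Command -ComputerName $computers -ScriptBlock $resend -ErrorAction Continue
}
```

`-ErrorAction Continue` keeps the loop going if individual machines are offline.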
Installation errors indicate an issue with the CBS store. How can this be fixed?
If the CBS store is marked corrupt, no security updates can be installed and the store needs to be fixed.
The following articles describe the process in more detail:
HERE and HERE
The CBS log is located under: “C:\Windows\Logs\CBS\CBS.log”.
The large log file size sometimes causes issues when parsing the file for the correct log entry.
In addition to that, older logfiles are stored as CAB-files and can also be quite large in size.
The following script can be used to parse even very large files and related CAB-files for store corruption entries.
Get-CBSLogState.ps1
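The linked script is the complete solution. As a minimal illustration of the approach — the corruption markers below are assumptions drawn from common CBS log entries; the real script covers more cases, including the archived CAB files:

```powershell
# Minimal sketch: stream a (potentially very large) CBS.log and surface lines
# that hint at component store corruption. The marker strings are illustrative;
# the full Get-CBSLogState.ps1 script covers more cases, including CAB archives.
function Test-CbsLogForCorruption {
    param([string]$Path = "$env:windir\Logs\CBS\CBS.log")
    $markers = 'STORE_CORRUPT', 'CSI Payload Corrupt', 'Total Detected Corruption'
    $regex = ($markers | ForEach-Object { [regex]::Escape($_) }) -join '|'
    # Select-String reads the file line by line, so even very large logs
    # do not have to be loaded into memory as a whole
    Select-String -Path $Path -Pattern $regex
}
```

Usage: `Test-CbsLogForCorruption | Select-Object -First 5`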
Are there any additional resources related to update compliance issues?
Yes, the following articles can help further troubleshoot update related issues:
Troubleshoot software update management
Troubleshoot software update synchronization
Troubleshoot software update scan failures
Troubleshoot software update deployments
Deprecation of Microsoft Support Diagnostic Tool (MSDT) and MSDT Troubleshooters – Microsoft Support
What can I do to increase my update compliance percentage?
This is a non-exhaustive list of actions which can help to positively impact software update compliance:
As mentioned before, do not leave a system too long in a pending reboot state.
As mentioned before, make sure to always have enough space left on disk C: (~2GB+ for monthly security updates, ~10GB+ for feature updates)
Start “cleanmgr.exe” as admin and delete unused files for example.
Make sure a system has enough uptime to be able to download and install security updates.
If a system has limited bandwidth available, it might need to stay online/active a while longer than other systems with more bandwidth available.
You also might need to consider power settings for systems running on battery.
What is a realistic update compliance percentage?
While the aim is to get to 100% fully patched systems, this goal can be quite hard to reach in some situations. Some of the reasons behind bad patch compliance are technical issues like the ones mentioned above under: “What can I do to increase my update compliance percentage?”. But other factors include the device delivery process, for example. If you put ready-to-go systems on a shelf for a longer period, those devices will decrease the overall patch compliance percentage.
To reach a high compliance percentage, know your workforce and know your update processes.
Reduce the blind spot and make sure each actively used system does not fall out of management due to errors, misconfigurations, or simply bad monitoring. Keep those devices on the shelf in mind and exclude them from compliance reporting for the duration of inactivity.
That’s it!
I hope you enjoyed reading this blog post. Stay safe!
Jonas Ohmsen
Relational Data Synchronization between environments
There are business and/or technical cases where relational data should be duplicated to another environment. Since the demands of those cases are not the same, there are multiple technical solutions to achieve the goal.
In this article, I will discuss the various solutions according to different business needs, with a deep dive into one family of solutions – sync solutions based on the database engine (DB engine). The content is Azure oriented, but the same concepts are true for other clouds as well.
I would expect that anyone who needs to sync relational data between environments can find a good guideline here.
General synchronization demands
Let us start with the typical demands:
| Scenario | Latency | Typical solution family |
| --- | --- | --- |
| Data Warehouse | Hours to days | ETL |
| Data mart | Minutes to hours | DB engine Sync |
| Highly utilized DB | Seconds to minutes | DB engine Full or Sync |
| High availability | Seconds | DB engine Full |
| Disaster Recovery | Seconds to minutes | DB engine Full |
| Network separation | Varies | Varies |
DB engine Sync is the focus of this article. See below.
Here is a high-level description of those solution families:
ETL (Extract, Transform, Load):
Used for populating data warehouses or data marts from production systems
Usually, the schema on the target is more reporting friendly (star schema) than the production system
The data in the target can be in delay (usually hours) compared to the source
The source and the target can be utilizing different technologies
Tools in the market: Azure Data Factory, Informatica, Ascend
DB engine full:
Built-in replica mechanism to have another copy of the full database
With or without the ability to have one or more replicas that can be utilized as a read replica
Based on high availability, log shipping, backup & restore or storage-based solutions
Used for HA/DR and/or read-scale operations
Minimal latency (seconds)
Same technology
Read only on the target
DB engine sync
Tools in scope: SQL Data sync, Fabric Mirroring, Replication
Those tools support partial copy of the database
See more in the next chapter
Each option has its own pros and cons and sometimes you might use more than one solution in the same project.
In the rest of this article, I will focus on the DB engine sync solutions family usage.
More information:
ETL – Extract, transform, and load
Read only Replica: Azure SQL, PostgreSQL, MySQL
DB engine Sync Solutions Family
The need:
I cannot overstate the importance of choosing a synchronization solution based on your specific business needs. This is the reason that multiple solutions exist – to be able to support your specific need with a good-enough solution.
A sync process is responsible for syncing data between environments – to be more exact, between a source and one or more targets. The different solutions have various kinds of characteristics.
Here are typical characteristics that you might be interested in:
Various kinds of technology
Different schema
Updates on both sides (conflict might happen)
Latency between the two copies
Maintenance efforts, skills required
The level of provider/user responsibility for the sync including re-sync probability, tools and efforts
I chose three key technologies (replication, SQL data sync, Fabric Mirroring) to discuss. The discussion is based on multiple discussions with my customers.
Replication:
Very mature technology which is supported by the majority of the relational database products
Low latency – usually seconds
Multiple flavors – transactional, merge, snapshot
Different table structure in the source and target are possible with limitations but add complexity
Multiple subscribers per source are supported
Monitoring is your responsibility and in case of failure, deep knowledge is needed to avoid reinitializing
For SQL server, you have a built-in replication monitor tool. For other databases you should check.
The monitor is not doing correction actions. Failing to track the replication status might cause a non-updated target environment
Replication of the data to a database of another provider might be possible usually with limitations. You will need a third-party tool to implement such a solution. For SQL Server Heterogeneous Database Replication is deprecated.
Azure SQL database cannot be a publisher
You must have a good DBA with specific replication knowledge to maintain the system
Typical scenarios for replication:
Filtering (only part of the rows and/or columns should be replicated)
Low latency needs
Cross security boundaries with SQL authentication (see in the security section)
Cross database technologies (SQL Server → Oracle)
More information:
Replication: Azure SQL MI, Azure SQL DB, PostgreSQL, MySQL
SQL Data Sync for Azure:
SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple databases, both on-premises and in the cloud, but only SQL Server based.
Azure SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time
The source and target must have the exact same schema
Multiple subscribers are supported
Typical scenarios for SQL Data Sync:
Considerable number of tables to be replicated
Managed by Azure experts (limited database knowledge needed)
SaaS solution preferred
Azure SQL database source
Bi-directional synchronization
More information:
Data Sync: Overview, Best Practices
Azure SQL Data Sync | Tips and Tricks
Mirroring in Microsoft Fabric (private preview):
The target for the synced data is stored in delta lake table format – no need for a relational database
The primary business scenario is reporting on the target
The schema cannot be changed on the target
Azure Cosmos DB, Azure SQL DB and Snowflake customers will be able to use Mirroring to mirror their data in OneLake and unlock all the capabilities of Fabric Warehouse, Direct Lake Mode, Notebooks and much more.
SQL Server, Azure PostgreSQL, Azure MySQL, Mongo DB and other databases and data warehouses will be coming in CY24.
Typical scenarios for Mirroring with Microsoft Fabric:
The target is reporting only that might integrate data from multiple sources
The cost associated with maintaining another relational engine for reporting is high. This aspect is even more significant for ISVs that are managing different environments for each customer (tenant)
Azure SQL or IaaS environment
Replacing an ETL system with no code solution
Part of your OneLake data architecture
More information:
Mirroring: Announcement, Copilot, Cosmos DB
Other aspects:
For the completeness of this article, here is a brief discussion of other aspects of the solutions that you should be aware of:
Identity and Security:
In all solutions, integrated security is the best option (replication authentication and replication security, SQL Data Sync, Mirroring).
For replication, you might use SQL authentication. For Azure SQL managed instance it is necessary.
Cost:
None of the solutions has a direct cost, except for the services utilized for the source and target and possible cross-data-center network bandwidth.
Bi-directional and conflict resolution:
The only Azure-native solution with bi-directional support is SQL Data Sync.
Transactional replication – bi-directional (peer to peer) is rare but has multiple options. Last write wins is the automatic way as defined here.
Note:
Peer to peer is not supported by Azure SQL database offerings
Merge replication has more options but not on Azure SQL database offerings – see here
SQL Data Sync – Hub wins or Member wins (see here)
Mirroring – one direction only , so, not applicable
Scalability and performance:
In all solutions, you can expect reasonable pressure on the source (publisher).
SQL Data Sync adds triggers to the source database, while replication uses a log reader (less pressure).
Monitoring and sync status:
For Replication – you have replication monitor and the tablediff utility
For SQL data Sync and Fabric mirroring – Monitoring Azure SQL Data Sync using OMS Log Analytics or Azure SQL Data Sync Health Checker
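For an ad-hoc look at sync group state from PowerShell, the Az.Sql module exposes sync groups directly. A sketch — the resource names are placeholders, and the selected property names are assumptions based on the sync group model, so verify them against your module version:

```powershell
# Sketch: list the sync groups of a hub database and their current state.
# Requires the Az.Sql module and a signed-in session (Connect-AzAccount);
# all resource names below are placeholders.
if (Get-Module -ListAvailable -Name Az.Sql) {
    Get-AzSqlSyncGroup -ResourceGroupName 'my-rg' -ServerName 'my-sql-server' -DatabaseName 'hub-db' |
        Select-Object SyncGroupName, SyncState, LastSyncTime
}
```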
Real-time vs. Batch Synchronization:
All the solutions are well suited to real-time and short transactions. However, batch workloads will work as well, with more pressure on the SQL Server log.
For Data Sync, empty tables provide the best performance at initialization time. If the target table is empty, Data Sync uses bulk insert to load the data. Otherwise, Data Sync does a row-by-row comparison and insertion to check for conflicts. If performance is not a concern, however, you can set up sync between tables that already contain data.
More information:
Empty tables provide the best performance
Choosing a DB engine Sync solution
Here is a short list of criteria that might help you choose a solution:
SQL Data Sync
The best solution for Azure SQL DB
Portal/script managed
Target should be from the SQL server family
Replication
The only solution for Azure SQL Managed Instance
Customizable (filtering, schema changes)
Deep database knowledge required
Fabric mirroring
The solution when the destination can be (or is preferred to be) in delta lake table format
Supports multiple sources (Azure SQL, Cosmos, Snowflake, more to come)
Portal/script managed
More information:
Compare SQL Data Sync with Transactional Replication
Conclusion
In the realm of data management, the need to synchronize relational data across environments arises from diverse business and technical requirements. This article has delved into the various solutions available, with a particular focus on database engine-based synchronization in the Azure ecosystem.
From the high-level demands of scenarios such as Data Warehouse, Data mart, High Utilized DB, High Availability, Disaster Recovery, to the intricacies of choosing between ETL, DB engine full, and DB engine sync solutions, we’ve explored the landscape of options available.
In the family of DB engine sync solutions, we’ve highlighted the importance of aligning your choice with specific business needs. Replication, a mature technology, offers low latency and supports various scenarios, though it requires vigilant monitoring. SQL Data Sync provides bi-directional synchronization for a considerable number of tables, managed by Azure professionals, while Microsoft Fabric’s Mirroring offers a unique approach for reporting scenarios.
Considerations such as identity and security, cost implications, conflict resolution, scalability, and monitoring have been discussed to provide a holistic view. Whether you prioritize low latency, transactional consistency, or ease of management, choosing the right solution is paramount.
As you navigate the complexities of relational data synchronization, keep in mind the nuances of each solution and the unique demands of your project. Whether opting for a well-established solution like Replication or embracing innovative approaches like Mirroring with Microsoft Fabric, make an informed decision based on your specific use case.
In conclusion, successful data synchronization is not a one-size-fits-all endeavor. By understanding the characteristics, advantages, and limitations of each solution, you empower yourself to make informed decisions that align with the dynamics of your data ecosystem. Explore further, stay updated on evolving technologies, and tailor your approach to meet the ever-evolving demands of your business.
You should remember that the technology world in general and in the cloud area in particular are constantly changing. The dynamic nature of data management and the importance of staying abreast of evolving technologies only emphasize that the reader should explore emerging solutions and best practices.
Microsoft and Industry Leaders Enable RAN and Platform Programmability with Project Janus
Barcelona – February 26, 2024. Today at MWC 2024, Microsoft announced Project Janus, along with leaders across the telecommunications industry. Project Janus uses telco-grade cloud infrastructure compatible with O-RAN standards to draw on fine-grained telemetry from the radio access network (RAN), the edge cloud infrastructure, and other sources of data. This enables a communication service provider (CSP) to gain detailed monitoring and fast closed loop control of their RAN network. Janus has support and participation from CSPs such as Deutsche Telekom and Vodafone; RAN and infrastructure providers CapGemini, Mavenir, and Intel Corporation; and RIC vendors and software innovators Juniper Networks, Aira Technologies, Amdocs, and Cohere Technologies.
“We know how vital the performance, security, and automation of the network is for CSPs, and going forward, more accurately optimizing complex networks,” said Yousef Khalidi, Corporate Vice President, Azure for Operators at Microsoft. “That’s why we’re excited to debut Project Janus alongside leading partners and supporters as an O-RAN compatible extension that makes RAN and platform even more programmable and optimized.”
Project Janus helps CSPs optimize RAN performance through visibility, analytics, AI, and closed loop control. To meet this objective, Microsoft and industry collaborators built a set of capabilities including RAN instrumentation tools that:
leverage the existing E2 O-RAN interface
update its service models to communicate with components of a CSP’s RAN and SMO architecture including the Distributed Unit (DU), Centralized Unit (CU), and RAN Intelligent Controller (RIC).
RAN, RIC, and xApp and rApp vendors are able to develop and use instrumentation tools to capture RAN data dynamically, and also combine them with platform data from cloud-based platforms hosting the RAN workloads.
This architecture enables several new use cases, such as precise analytics for anomaly detection and root cause analysis, interference detection, and optimizing other RAN performance metrics. The framework also enables new applications, such as fast vRAN power saving, failover, and live migration.
Project Janus will be available for everyone to include in their platform and network functions and will be supported natively by Microsoft’s Azure Operator Nexus platform.
To see specific use case examples, visit the “Unlock Operator Value with Programmable RAN & Platform” pod in the Microsoft booth at Mobile World Congress 2024 at 3H30 in Hall 3 during February 26-29, 2024, and check out www.microsoft.com/research/project/programmable-ran-platform/videos. Also read the Mavenir, Microsoft and Intel Team for Real-Time Layer 1 vRAN Control white paper.
Telecommunications leaders are sharing support for the collaborative initiative:
Deutsche Telekom – “This initiative shows great promise to increase the pace of innovation and unlock new value through dynamic, customizable RAN data and analytics that can work within an O-RAN compliant framework. We look forward to seeing the participation by even more companies and developers in this burgeoning ecosystem.” – Petr Ledl, Vice President of Network Trials and Integration Lab and Chief Architect of Access Disaggregation program at Deutsche Telekom.
Vodafone – “The dynamic service models enabled by Project Janus are fully aligned with the vision of Open RAN in supporting the scale deployment of software-defined RAN. Access to the correct data at the right time and intelligent algorithms based on AI/ML capabilities will introduce significant performance and capacity benefits for all existing cellular networks and enable real autonomous ones.” – Francisco Martín Pignatelli, Head of Open RAN at Vodafone.
Hear from Microsoft Collaborators:
CapGemini – “CapGemini in collaboration with Microsoft has successfully demonstrated implementation of several use cases such as anomaly detection, energy savings and interference detection using Janus. These efforts have also demonstrated the benefits of being able to combine and reason over dynamic data from RAN, incremental to the predefined data types already available today, with dynamic data from the O-Cloud platform using Janus dynamic service models such as resolving key integration issues between RAN and platform as well as offering the power of leveraging AI/ML applications by developers to more precisely target areas of improvement for the RAN network.” – Rajat Kapoor, Vice President and Head of Software Frameworks at Capgemini.
Mavenir – “Improving RAN visibility and real-time control is essential to a CSP’s network performance and security, and it is Mavenir’s goal to support our customers with state-of-the-art observability. Data from our O-RAN-compliant DU/CU can be easily extracted dynamically and made available within our product management tools for tuning the operation of the Mavenir RAN. We demonstrated an advanced on-site debugging tool and customizable interference detection solution with Janus, which highlighted the flexibility of Janus to solve problems in real-time and improve system performance. With Janus, data from our Open RAN compliant DU can also be made available to an ecosystem of O-RAN focused application developers to provide insights and recommendations to the CSP to address and improve their network performance.” – Bejoy Pankajakshan, EVP, Chief Technology & Strategic Officer at Mavenir.
Intel Corporation – “With Intel FlexRAN reference architecture, Intel has been at the forefront of enabling the industry with virtualized, Open RAN to drive performance, flexibility and innovations, including AI. Microsoft’s Janus builds on FlexRAN’s software programmability to expose new data streams and application capabilities to the next generation of xApp developers, accelerating the adoption of AI in RAN networks to provide even more value to service providers”- Cristina Rodriguez, Vice President and General Manager of Wireless Access Network Division at Intel.
Juniper Networks – “Using the existing E2 O-RAN interface, Janus introduces the capability to bring more timely and customized RAN telemetry to Juniper Near-Real Time RIC. From this, we can enable xApp developers to use the incremental data to more precisely target areas of improvement for the performance and optimization of a RAN network.” – Constantine Polychronopoulos, Group VP of 5G and Telco Cloud at Juniper Networks.
Aira Technologies – “Our mission at Aira as an AI Defined Networking company is to enable the fully autonomous cellular RAN and our application of ML to wireless baseband processing is an industry first. Aira has showcased the use of Janus to collect and forward dynamic RAN data into our near-real time xApp where we apply leading-edge machine learning to drive better channel estimation and prediction to help maximize downlink throughput and range. We look forward to demonstrating, with Microsoft and the growing O-RAN ecosystem, even more innovation built on disaggregated and programmable networks.” – Anand Chandrasekher, Co-Founder and CEO at Aira Technologies.
Amdocs – “As a leading service provider and member of the ARI-5G Consortium, Amdocs is a key proponent of Open RAN and dedicated enabler of RAN intelligence and optimization and we do this today by offering among other things, Amdocs’ xApps such as the massive MIMO xApp. With Janus we look forward to leveraging dynamic service models with our network applications to further accelerate RAN performance and programmability for our CSP customers.” – Oleg Volpin, Division President Europe, Telefonica Global and Network Offering Division at Amdocs.
Cohere Technologies – “Cohere along with key operators and vendors is driving Multi-G ecosystem to enable co-existence of 4G, 5G and 6G and helping operators to do spectrum management in a seamless way. Janus’s dynamic infrastructure helps realize Multi-G’s dynamic infrastructure requirements and helps this vision.” – Prem Sankar Gopannan, Vice President of Product Architecture and Software Engineering.