Tag Archives: microsoft
Authorization_RequestDenied
Hi team,
We are currently trying to get a user (email address removed for privacy reasons) through the https://graph.microsoft.com/v1.0/users/{email address removed for privacy reasons} endpoint.
We get the below error: (Forbidden:Authorization_RequestDenied) Insufficient privileges to complete the operation. Date: 2024-02-28T15:26:23. Request Id: ce9dca50-7dba-44d4-be95-e5a4d14aada0. Client Request Id: ce9dca50-7dba-44d4-be95-e5a4d14aada0.
We have granted the access policy to this user and we do have the correct scopes. Could you look at the logs and let us know what might be the issue here?
Thanks,
Vakul
Azure Virtual Network now supports updates without subnet property
Azure APIs support the HTTP methods PUT, GET, and DELETE for the CRUD (Create/Retrieve/Update/Delete) operations on your resources. The PUT operation is used for both Create and Update. For an existing deployment, a PUT that includes the existing resources preserves them and adds any new resources supplied in the JSON. If any existing resources are omitted from the JSON in the PUT operation, those resources are removed from the Azure deployment.
Based on customer support cases and feedback, we observed that this behavior causes problems for customers when updating existing deployments. Subnets in a virtual network are a prime example: any update to the virtual network, or addition of a resource to it (e.g. adding a routing table), requires you to supply the entire virtual network configuration, including all of the subnets. To make this easier, we have changed the PUT API behavior for virtual network updates so that you can skip the subnet specification in a PUT call without deleting the existing subnets. This capability is now available in a Limited Preview in all the EUAP regions, US West Central and US North with API version 2023-09-01.
Previous behavior
The existing behavior has been to expect a subnet property in the PUT virtual network call. If the subnet property isn’t included, the existing subnets are deleted, which is often not the intention.
New PUT VNet behavior
Assuming your existing configuration is as follows:
"subnets": [
  {
    "name": "SubnetA",
    "properties": {…}
  },
  {
    "name": "SubnetB",
    "properties": {…}
  },
  {
    "name": "SubnetC",
    "properties": {…}
  },
  {
    "name": "SubnetD",
    "properties": {…}
  }
]
The updated behavior is as follows:
If a PUT virtual network call doesn’t include a subnet property, no changes are made to the existing set of subnets.
If the subnet property is explicitly marked as empty, we will treat this as a request to delete all the existing subnets. For example:
"subnets": []
OR
"subnets": null
If a subnet property is supplied with specific values as follows:
"subnets": [
  {
    "name": "SubnetA",
    "properties": {…}
  },
  {
    "name": "Subnet-B",
    "properties": {…}
  },
  {
    "name": "Subnet-X",
    "properties": {…}
  }
]
In this case, the following changes are made to the virtual network:
SubnetA is unchanged, assuming the supplied configuration is the same as the existing one.
SubnetB, SubnetC and SubnetD are deleted.
Two new subnets Subnet-B and Subnet-X are created with the new configuration.
When the subnet property is supplied, this behavior remains unchanged from what Azure has today.
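To make the first case concrete, here is a minimal sketch of a PUT request that omits the subnets property under the new behavior. This is illustrative only: the subscription, resource group, virtual network name, address space, and token are placeholders, and token acquisition (for example via Azure.Identity) is left out.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholder resource identifiers -- substitute your own subscription, resource group, and VNet names.
string url = "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
           + "/providers/Microsoft.Network/virtualNetworks/<vnet-name>?api-version=2023-09-01";

// Note: no "subnets" property in the body. With the preview behavior, existing subnets are left unchanged.
string body = """
{
  "location": "westcentralus",
  "properties": {
    "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] }
  }
}
""";

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "<access-token>"); // acquire a real token before running
var response = await client.PutAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(response.StatusCode);
Under the preview behavior described above, a subsequent GET on the same URL should still return all four existing subnets.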
Next Steps
Test the new behavior in the regions listed above and share your feedback.
The ABCs of ADX: Learning the Basics of Azure Data Explorer | Data Exposed: MVP Edition
You may have heard of Azure Data Explorer – but do you know what it does? Do we know the best ways to use it (and the ways we shouldn’t use it)? Do we know some things that it does better than anything else in the Microsoft data platform? Join us for a walkthrough of what Azure Data Explorer is, what it isn’t, and how to leverage it to offer your customers, colleagues, and users another tool in their data toolbox.
Resources:
Learning path: https://learn.microsoft.com/en-us/training/paths/data-analysis-data-explorer-kusto-query-language/
Help Cluster with built-in test data: https://dataexplorer.azure.com/clusters/help
View/share our latest episodes on Microsoft Learn and YouTube!
Microsoft Teams Phone empowers frontline workers with smart and reliable communication
Teams Phone is a cloud calling solution that equips your entire workforce with flexible, reliable, and smart calling capabilities, all within Microsoft Teams. Earlier this month, we introduced a new Teams Phone for Frontline Workers offer¹ that enables frontline workers to securely communicate with customers, colleagues, or suppliers in Teams.
Teams Phone keeps frontline workers mobile and connected with dedicated numbers and devices, making it a versatile solution for employees in various industries and job functions. For instance, a retail store associate can easily respond to customer inquiries on product information, or nurses can directly connect with their patients from anywhere, across devices. With Teams Phone, you can:
Route calls to the right person at the right time with auto-attendants, call queues, and call delegation.
Communicate securely with patients with electronic health record application integration, call recording, and transcription.
Create meaningful customer engagements with CRM system integration, consultative transfers, and call park.
Set frontline teams up quickly with shared calling, allowing groups of users to make and receive calls with a shared phone number and calling plan.
Simplify communication with common area phones in shared spaces
In today’s fast-paced work environment, effective communication is essential for seamless operations. However, not all frontline workers need a dedicated phone number to perform their tasks. In some scenarios, they may only need to make or receive occasional calls on behalf of a department. Common area phones cater to this need and unlock easy to use calling capabilities for frontline workers. With common area phones, frontline workers can make and receive calls through a shared mobile Android device, or a desk phone assigned to their team or department.
Common area phones in shared spaces have several use cases. A shared device can help retail store associates who are managing incoming calls in the curbside pick-up department, or receptionists in a clinic who are managing appointment scheduling requests. With common area phones, you can:
Route incoming calls efficiently, easily, and exactly where you need them with auto-attendants, call queues, call transfer, shared line appearance, and call park.
Relay important information between teams in real time with Walkie Talkie in Teams as well as hotline phones programmed to dial one number.
How to Get Started
Teams Phone for individual users
Get the Teams Phone for Frontline Workers license¹, available as an add-on to Microsoft 365 F1 and F3.
Use Teams Phone from any device where you’re logged into the Teams app. See the full list of certified devices here.
Learn more about how to set up Teams Phone.
Common area phones
Get the Teams Shared Device license.
Learn more about how to set up desk phones as common area phones or how to set up an Android mobile phone as a common area phone.
¹ Microsoft Teams Phone Standard for Frontline Workers ($4 user/month) will be available as an add-on to Microsoft 365 F1 ($2.25 user/month) and F3 ($8 user/month). Listed pricing may vary due to currency, country, and regional variant factors. Contact your Microsoft sales representative to learn more.
Enhanced Performance in Additional Regions: Azure Synapse Analytics Spark Boosted by up to 77%
We are committed to continually advancing the capabilities of Azure Synapse Analytics Spark, and are pleased to announce substantial improvements that could increase Spark performance by as much as 77%.
Performance Metrics
Our internal testing, utilizing the 1TB TPC-H industry standard benchmark, indicates performance gains of up to 77%. It’s important to note that individual workloads may vary, but the enhancements are designed to benefit all Azure Synapse Analytics Spark users.
Technological Foundations
This performance uptick is attributable to our transition to the latest Azure v5 Virtual Machines. These VMs bring improved CPU performance, increased SSD throughput, and elevated remote storage IOPS.
Regional Availability
We have implemented these performance improvements in the following regions (bold indicates a newly added region):
Australia East
Australia Southeast
Canada Central
Canada East
Central India
Germany West Central
Japan West
Korea Central
Poland Central
South Africa North
South India
Sweden Central
Switzerland North
Switzerland West
UAE North
UK South
UK West
West Central US
Additionally, all Microsoft Fabric regions, with the exception of Qatar Central, are already operating with these enhanced performance capabilities.
Future Rollout
The global rollout of these improvements is an ongoing process and expected to take several quarters to complete. We will provide updates as additional regions are upgraded. Customers in updated regions will automatically benefit from the performance enhancements at no additional cost.
Next Steps for Users
No action is required on your part to benefit from these improvements. Once your region receives the upgrade, you may notice reduced job completion times. If cost-efficiency is a priority, you may opt to decrease node size or the number of nodes while maintaining improved performance levels.
Learn more: Optimizing Spark performance, Apache Spark pool configurations, and Spark compute for Data Engineering and Data Science – Microsoft Fabric
Stop Worrying and Love the Outage, Vol II: DCs, custom ports, and Firewalls/ACLs
Hello, it’s Chris Cartwright from the Directory Services support team again. This is the second entry in a series where I try to provide the IT community with some tools and verbiage that will hopefully save you and your business many hours, dollars, and frustrations. Here we’re going to focus on some major direct and/or indirect changes to Active Directory that tend to be pushed onto AD administrators. I want to arm you with the knowledge required for those conversations, and hopefully some successful deflection. After all, isn’t it better to learn the hard lessons from others, if you can?
Periodically, we get cases for replication failures, many times involving lingering objects. Almost without fail, the cause is one of the following reasons:
DNS
SACL/DACL size
Network communication issues
Network communications issues almost always come down to a “blinky box” in the middle that is doing something it shouldn’t be, whether due to defective hardware/software, misconfiguration, or the ever-present misguided configuration. Today, we’re going to focus on the third, a misguided configuration. That is to say, the things that your compliancy section has said must be done, that have little to no security benefit, but can easily result in a multi-hour, multi-day, or even (yes) multi-week outage. To be fair, the portents of an outage should be readily apparent with any monitoring in place. However, sometimes reporting agents are not installed, fail to function properly, or are misconfigured, or the events themselves are missed (alert fatigue). So, one of the things to do when compliancy starts talking about locking down DC communications is to ask them…
What is the problem you are trying to solve?
Have you been asked to isolate DCs? Create a lag site? Make sure that X DCs can only replicate with Y DCs?
The primary effect of doing any of this is alert fatigue for replication errors, which is a path to outage later. Additionally, if you have “Bridge all site links” enabled, you are giving the KCC the wrong information to create site links.
Don’t permanently isolate DCs
Don’t create lag sites
Do make sure you have backups in place
Do make sure KCC has the correct information, and then let it be unless your network topology changes.
Do make sure all DCs in the forest can reach all other DCs in the forest (if your networks are fully routable)
Have you been asked to configure DCs to restrict the RPC ports they can use?
Every AD administrator should be familiar with the firewall ports and how RPC works. By default, DCs will register on these port ranges and listen for traffic. The RPC Endpoint Mapper keeps track of these ports and tells incoming requests where to go. One thing that RPC Endpoint Mapper will not do is keep track of firewall or ACL changes that were made.
Again, what is the security benefit here? It is one thing to control DC communications outbound from the perimeter. It is another thing to suggest that X port is more secure than Y port, especially when we’re talking about ports upon which DCs are listening. If your compliancy team is worried about rogue applications listening on DCs, you have bigger problems… like rogue applications existing on your DCs, presumably put there by rogue agents who now have control over your entire Active Directory.
The primary effect of “locking down” a DC in this way is not to improve security, but to mandate the creation or modification of some document with fine print, “Don’t forget to add these registry keys to avoid an outage”, that will inevitably be lost during turnover. Furthermore, going too far can lead to port exhaustion, another type of outage.
Don’t restrict AD/Netlogon to static ports without exhaustively discussing the risks involved, and heavily documenting it.
Don’t restrict the RPC dynamic range without exhaustively discussing the risks involved, and heavily documenting it.
Do restrict inbound/outbound perimeter traffic to your DCs.
“Hey, you said multi-day or multi-week outages. It’s not that hard to fix replication!”
It is true that once you’ve found the network issue preventing replication, it is usually an easy fix. However, if the “easy” fix is to rehost all your global catalog partitions with tens of thousands of objects on 90+ DCs, requiring manual administrative intervention and a specific sequence of commands because your environment is filled with lingering objects, you’re going to be busy for a while.
Wrapping it up
As the venerable Aaron Margosis said, “…if you stick with the Windows defaults wherever possible or industry-standard configurations such as the Microsoft Windows security guidance or the USGCB, and use proven enterprise management technologies instead of creating and maintaining your own, you will increase flexibility, reduce costs, and be better able to focus on your organization’s real mission.”
Security is critical in this day and age, but so is understanding the implications and reasons beyond some check box on an audit document. Monitoring is also critical, but of little use if polluted with noise. Remember who will be mainlining caffeine all night to get operations back online when the lingering objects start rolling in, because it will not be the people that click the “scan” button once a month…
References
Creating a Site Link Bridge Design | Microsoft Learn
You Are Not Smarter Than The KCC | Microsoft Learn
Configure firewall for AD domain and trusts – Windows Server | Microsoft Learn
RPC over IT/Pro – Microsoft Community Hub
Remote Procedure Call (RPC) dynamic port work with firewalls – Windows Server | Microsoft Learn
Restrict Active Directory RPC traffic to a specific port – Windows Server | Microsoft Learn
10 Immutable Laws of Security | Microsoft Learn
Sticking with Well-Known and Proven Solutions | Microsoft Learn
Chris “Was it really worth it” Cartwright
Azure DevOps blog closing -> moving to DevBlogs
Hello! We will be closing this Azure DevOps blog soon on Tech Community as part of consolidation efforts. We appreciate your continued readership and interest in this topic.
For Azure DevOps blog posts (including the last 10 posted here), please go here: Azure DevOps Blog (microsoft.com)
Creating Intelligent Apps on App Service with .NET
You can use Azure App Service to work with popular AI frameworks like LangChain and Semantic Kernel connected to OpenAI for creating intelligent apps. In the following tutorial we will be adding an Azure OpenAI service using Semantic Kernel to a .NET 8 Blazor web application.
Prerequisites
An Azure OpenAI resource or an OpenAI account.
A .NET 8 Blazor Web App. Create the application with a template here.
Setup Blazor web app
For this Blazor web application, we’ll be building off the Blazor template and creating a new razor page that can send and receive requests to an Azure OpenAI OR OpenAI service using Semantic Kernel.
Right-click the Pages folder (found under the Components folder) and add a new item named OpenAI.razor.
Add the following code to the OpenAI.razor file and click Save:
@page "/openai"
@rendermode InteractiveServer

<PageTitle>Open AI</PageTitle>
<h3>Open AI Query</h3>
<input placeholder="Input query" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<h4>Server response:</h4> <p>@serverResponse</p>

@code {
    public string? newQuery;
    public string? serverResponse;
}
Next, we’ll need to add the new page to the navigation so we can navigate to the service.
Go to the NavMenu.razor file under the Layout folder and add the following div in the nav class. Click Save
<div class="nav-item px-3">
    <NavLink class="nav-link" href="openai">
        <span class="bi bi-list-nested-nav-menu" aria-hidden="true"></span> Open AI
    </NavLink>
</div>
After the Navigation is updated, we can start preparing to build the OpenAI client to handle our requests.
API Keys and Endpoints
In order to make calls to OpenAI with your client, you will need to first grab the Keys and Endpoint values from Azure OpenAI or OpenAI and add them as secrets for use in your application. Retrieve and save the values for later use.
For Azure OpenAI, see this documentation to retrieve the key and endpoint values. For our application, you will need the following values:
deploymentName
endpoint
apiKey
modelId
For OpenAI, see this documentation to retrieve the api keys. For our application, you will need the following values:
apiKey
modelId
Since we’ll be deploying to App Service, we can secure these secrets in Azure Key Vault. Follow the Quickstart to set up your Key Vault and add the secrets you saved earlier.
Next, we can use Key Vault references as app settings in our App Service resource to reference in our application. Follow the instructions in the documentation to grant your app access to your Key Vault and to set up Key Vault references.
Then, go to the portal Environment Variables blade in your resource and add the following app settings:
For Azure OpenAI, use the following:
DEPLOYMENT_NAME = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
ENDPOINT = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
For OpenAI, use the following:
OPENAI_API_KEY = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
OPENAI_MODEL_ID = @microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/mysecret/)
Once your app settings are saved, you can bring them into the code by injecting IConfiguration and referencing the app settings. Add the following code to your OpenAI.razor file:
@inject Microsoft.Extensions.Configuration.IConfiguration _config

@code {
    private async Task SemanticKernelClient()
    {
        string deploymentName = _config["DEPLOYMENT_NAME"];
        string endpoint = _config["ENDPOINT"];
        string apiKey = _config["API_KEY"];
        string modelId = _config["MODEL_ID"];
        // OpenAI
        string OpenAIModelId = _config["OPENAI_MODEL_ID"];
        string OpenAIApiKey = _config["OPENAI_API_KEY"];
    }
}
Semantic Kernel
Semantic Kernel is an open-source SDK that enables you to easily develop AI agents to work with your existing code. You can use Semantic Kernel with Azure OpenAI and OpenAI models.
To create the OpenAI client, we’ll first start by installing Semantic Kernel.
To install Semantic Kernel, browse the NuGet package manager in Visual Studio and install the Microsoft.SemanticKernel package. For NuGet Package Manager instructions, see here. For CLI instructions, see here.
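If you prefer the command line over the NuGet Package Manager UI, the equivalent .NET CLI command, run from the project directory, is:
dotnet add package Microsoft.SemanticKernel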
Once the Semantic Kernel package is installed, you can now initialize the kernel.
Initialize the Kernel
To initialize the Kernel, add the following code to the OpenAI.razor file.
@using Microsoft.SemanticKernel

@code {
    private async Task SemanticKernelClient()
    {
        var builder = Kernel.CreateBuilder();
        var kernel = builder.Build();
    }
}
Here we are adding the using statement and creating the Kernel in a method that we can use when we send the request to the service.
Add your AI service
Once the Kernel is initialized, we can add our chosen AI service to the kernel. Here we will define our model and pass in our key and endpoint information to be consumed by the chosen model.
For Azure OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
For OpenAI, use the following code:
var builder = Kernel.CreateBuilder();
builder.Services.AddOpenAIChatCompletion(
modelId: OpenAIModelId,
apiKey: OpenAIApiKey
);
var kernel = builder.Build();
Configure prompt and create Semantic function
Now that our chosen OpenAI service client is created with the correct keys, we can add a function to handle the prompt. With Semantic Kernel you can handle prompts through semantic functions, which turn the prompt and the prompt configuration settings into a function the Kernel can execute. Learn more on configuring prompts here.
First, we’ll create a variable that will hold the user’s prompt. Then add a function with execution settings to handle and configure the prompt. Add the following code to the OpenAI.razor file:
@using Microsoft.SemanticKernel.Connectors.OpenAI
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
}
Lastly, we’ll need to invoke the function and return the response. Add the following to the OpenAI.razor file:
private async Task SemanticKernelClient()
{
var builder = Kernel.CreateBuilder();
builder.Services.AddAzureOpenAIChatCompletion(
deploymentName: deploymentName,
endpoint: endpoint,
apiKey: apiKey,
modelId: modelId
);
var kernel = builder.Build();
var prompt = @"{{$input}} " + newQuery;
var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
var result = await kernel.InvokeAsync(summarize);
serverResponse = result.ToString();
}
Here is the example in its completed form. In this example, use the Azure OpenAI chat completion service OR the OpenAI chat completion service, not both.
@page "/openai"
@rendermode InteractiveServer
@using Microsoft.SemanticKernel
@using Microsoft.SemanticKernel.Connectors.OpenAI
@inject Microsoft.Extensions.Configuration.IConfiguration _config

<PageTitle>OpenAI</PageTitle>
<h3>OpenAI input query: </h3>
<input class="col-sm-4" @bind="newQuery" />
<button class="btn btn-primary" @onclick="SemanticKernelClient">Send Request</button>
<br />
<br />
<h4>Server response:</h4> <p>@serverResponse</p>

@code {
    private string? newQuery;
    private string? serverResponse;

    private async Task SemanticKernelClient()
    {
        // Azure OpenAI
        string deploymentName = _config["DEPLOYMENT_NAME"];
        string endpoint = _config["ENDPOINT"];
        string apiKey = _config["API_KEY"];
        string modelId = _config["MODEL_ID"];

        // OpenAI
        // string OpenAIModelId = _config["OPENAI_MODEL_ID"];
        // string OpenAIApiKey = _config["OPENAI_API_KEY"];

        // Semantic Kernel client
        var builder = Kernel.CreateBuilder();

        // Azure OpenAI
        builder.Services.AddAzureOpenAIChatCompletion(
            deploymentName: deploymentName,
            endpoint: endpoint,
            apiKey: apiKey,
            modelId: modelId
        );

        // OpenAI
        // builder.Services.AddOpenAIChatCompletion(
        //     modelId: OpenAIModelId,
        //     apiKey: OpenAIApiKey
        // );

        var kernel = builder.Build();

        var prompt = @"{{$input}} " + newQuery;
        var summarize = kernel.CreateFunctionFromPrompt(prompt, executionSettings: new OpenAIPromptExecutionSettings { MaxTokens = 100, Temperature = 0.2 });
        var result = await kernel.InvokeAsync(summarize);
        serverResponse = result.ToString();
    }
}
Now save the application and follow the next steps to deploy it to App Service. If you would like to test it locally first at this step, you can swap out the config values with the literal string values of your OpenAI service. For example: string modelId = "gpt-4-turbo";
Deploy to App Service
If you have followed the steps above, you are ready to deploy to App Service. If you run into any issues, remember that you need to have done the following: granted your app access to your Key Vault, and added the app settings with Key Vault references as the values. App Service will resolve the app settings in your application that match what you’ve added in the portal.
Authentication
Although optional, it is highly recommended that you also add authentication to your web app when using an Azure OpenAI or OpenAI service. This can add a level of security with no additional code. Learn how to enable authentication for your web app here.
Once deployed, browse to the web app and navigate to the Open AI tab. Enter a query to the service and you should see a populated response from the server. The tutorial is now complete, and you know how to use OpenAI services to create intelligent applications.
MDTI Earns Impactful Trio of ISO Certificates
We are excited to announce that Microsoft Defender Threat Intelligence (MDTI) has achieved ISO 27001, ISO 27017 and ISO 27018 certifications. ISO, the International Organization for Standardization, develops market-relevant international standards that support innovation and provide solutions to global challenges, including information security requirements around establishing, implementing, and improving an Information Security Management System (ISMS).
These certificates emphasize the MDTI team’s continuous commitment to protecting customer information and following the strictest security and privacy standards.
Certificate meaning and importance
ISO 27001: This certification demonstrates that MDTI’s ISMS complies with industry best practices, providing a structured approach to risk management pertaining to information security.
ISO 27017: This certificate is a worldwide standard that provides guidance on securing information in the cloud. It demonstrates that we have put in place strong controls and countermeasures to ensure our customers’ data is safe when stored in the cloud.
ISO 27018: This certificate sets out common objectives, controls and guidelines for protecting personally identifiable information (PII) processed in public clouds consistent with the privacy principles outlined in ISO 29100. This is confirmed by our ISO 27018 certification, which shows that we are committed to respecting our customers’ privacy rights and protecting their personal data through cloud computing.
What are the advantages of these certifications for our customers?
Enhanced Safety and Privacy Assurance: Our customers can be confident that the most sophisticated and exhaustive security and privacy standards offered in the market are in place to protect their data. We have ensured we exceed these certifications; therefore, their information is secure from emerging threats.
Reduced Risk and Liability Exposure: Through our certified ISMS and Privacy Information Management System (PIMS), customers can significantly reduce their liability exposure from potential data breaches, legal actions, regulatory fines, or reputational risks. They can rely on our proven structures to boost resistance against cybercrime and reduce the risk of lawsuits.
Streamlined Compliance and Competitive Edge: Our certifications make it easier for customers to meet the rigorous regulatory and contractual requirements of their industry or market. Globally accredited international standards signal that an organization takes data security seriously, which improves its reputation and opens up options for partnering with other businesses that value safeguarding privacy.
What are the steps to begin with MDTI?
If you are interested in learning more about MDTI, how it can help you unmask and neutralize modern adversaries and cyberthreats such as ransomware, and the features and benefits it offers, please visit the MDTI product web page.
Also, be sure to contact our sales team to request a demo or a quote.
Mistral Large, Mistral AI’s flagship LLM, debuts on Azure AI Models-as-a-Service
Microsoft is partnering with Mistral AI to bring its Large Language Models (LLMs) to Azure. Mistral AI’s OSS models, Mixtral-8x7B and Mistral-7B, were added to the Azure AI model catalog last December. We are excited to announce the addition of Mistral AI’s new flagship model, Mistral Large, to the Mistral AI collection of models in the Azure AI model catalog today. The Mistral Large model will be available through Models-as-a-Service (MaaS), which offers API-based access and token-based billing for LLMs, making it easier to build Generative AI apps. Developers can provision an API endpoint in a matter of seconds and try out the model in the Azure AI Studio playground or use it with popular LLM app development tools like Azure AI prompt flow and LangChain. The APIs support two layers of safety – first, the model has built-in support for a “safe prompt” parameter, and second, Azure AI content safety filters are enabled to screen for harmful content generated by the model, helping developers build safe and trustworthy applications.
The Mistral Large model
Mistral Large is Mistral AI’s most advanced Large Language Model (LLM), first available on Azure and the Mistral AI platform. It can be used on a full range of language-based tasks thanks to its state-of-the-art reasoning and knowledge capabilities. Key attributes:
Specialized in RAG: Crucial information is not lost in the middle of long context windows. Supports up to 32K tokens.
Strong in coding: Code generation, review and comments with support for all mainstream coding languages.
Multi-lingual by design: Best-in-class performance in French, German, Spanish, and Italian – in addition to English. Dozens of other languages are supported.
Responsible AI: Efficient guardrails baked into the model, with an additional safety layer available through the safe prompt option.
Benchmarks
You can read more about the model and review evaluation results on Mistral AI’s blog: https://mistral.ai/news/mistral-large. The Benchmarks hub in Azure offers a standardized set of evaluation metrics for popular models including Mistral’s OSS models and Mistral Large.
Using Mistral Large on Azure AI
Let’s take care of the prerequisites first:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Create an Azure AI Studio hub and project. Make sure you pick East US 2 or France Central as the Azure region for the hub.
Next, you need to create a deployment to obtain the inference API and key:
Open the Mistral Large model card in the model catalog: https://aka.ms/aistudio/landing/mistral-large
Click on Deploy and pick the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
You should land on the deployment page that shows you the API and key in less than a minute. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral.
You can use the API and key with various clients. Review the API schema if you are looking to integrate the REST API with your own client: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/deploy-models-mistral#reference-for-mistral-large-deployed-as-a-service. Let’s review samples for some popular clients; a minimal REST sketch follows the list.
Basic CLI with curl and Python web request sample: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/webrequests.ipynb
Mistral clients: Azure APIs for Mistral Large are compatible with the API schema offered on the Mistral AI platform, which allows you to use any of the Mistral AI platform clients with the Azure APIs. Sample notebook for the Mistral Python client: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/mistralai.ipynb
LangChain: API compatibility also enables you to use the Mistral AI’s Python and JavaScript LangChain integrations. Sample LangChain notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/langchain.ipynb
LiteLLM: LiteLLM is easy to get started and offers consistent input/output format across many LLMs. Sample LiteLLM notebook: https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
Prompt flow: Prompt flow offers a web experience in Azure AI Studio and a VS Code extension to build LLM apps, with support for authoring, orchestration, evaluation, and deployment. Learn more: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow. Out-of-the-box support for Mistral AI APIs on Azure is coming soon, but you can create a custom connection using the API and key, and call the API from the Python tool in prompt flow with the SDK of your choice.
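In addition to the clients above, and because the Azure endpoint is schema-compatible with the Mistral AI platform, you can also call the REST API directly from any language. The following C# sketch is illustrative only: the endpoint URL, key, and model name are placeholders taken from your deployment page, and the chat-completions route and payload shape are assumed to match the Mistral platform schema referenced in the API documentation above.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholder values -- copy the real endpoint URL and key from the deployment page in Azure AI Studio.
string endpoint = "<your-endpoint>/v1/chat/completions";
string apiKey = "<your-api-key>";

// "safe_prompt" asks the service to prefix the guardrail instruction described in the safety section below.
string body = """
{
  "model": "mistral-large",
  "messages": [ { "role": "user", "content": "Summarize the benefits of Models-as-a-Service in one sentence." } ],
  "safe_prompt": true
}
""";

using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
var response = await client.PostAsync(endpoint, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(await response.Content.ReadAsStringAsync());
Check the HTTP status code as well; an error or a truncated completion can indicate that the content filtering described in the next section was triggered.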
Develop with integrated content safety
Mistral AI APIs on Azure come with a two-layered safety approach – instructing the model through the system prompt, and an additional content filtering system that screens prompts and completions for harmful content. Using the safe_prompt parameter prefixes the system prompt with a guardrail instruction as documented here. Additionally, the Azure AI content safety system, which consists of an ensemble of classification models, screens for specific types of harmful content. The external system is designed to be effective against adversarial prompt attacks, such as prompts that ask the model to ignore previous instructions. When the content filtering system detects harmful content, you will receive an error if the prompt was classified as harmful, or the response will be partially or completely truncated with an appropriate message if the generated output was classified as harmful. Make sure you account for these scenarios, where the content returned by the APIs is filtered, when building your applications.
FAQs
What does it cost to use Mistral Large on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing for the Mistral Large offer in the Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/000-000.mistral-ai-large-offer
Do I need GPU capacity in my Azure subscription to use Mistral Large?
No. Unlike the Mistral AI OSS models that deploy to VMs with GPUs using Online Endpoints, the Mistral Large model is offered as an API. Mistral Large is a premium model whose weights are not available, so you cannot deploy it to a VM yourself.
This blog talks about the Mistral Large experience in Azure AI Studio. Is Mistral Large available in Azure Machine Learning Studio?
Yes, Mistral Large is available in the Model Catalog in both Azure AI Studio and Azure Machine Learning Studio.
Does Mistral Large on Azure support function calling and JSON output?
The Mistral Large model can do function calling and generate JSON output, but support for those features will roll out soon on the Azure platform.
Mistral Large is listed on the Azure Marketplace. Can I purchase and use Mistral Large directly from Azure Marketplace?
Azure Marketplace enables the purchase and billing of Mistral Large, but the purchase experience can only be accessed through the model catalog. Attempting to purchase Mistral Large from the Marketplace will redirect you to Azure AI Studio.
Given that Mistral Large is billed through the Azure Marketplace, does it retire my Azure consumption commitment (aka MACC)?
Yes, Mistral Large is an “Azure benefit eligible” Marketplace offer, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
Is my inference data shared with Mistral AI?
No, Microsoft does not share the content of any inference request or response data with Mistral AI.
Are there rate limits for the Mistral Large API on Azure?
The Mistral Large API comes with a limit of 200k tokens per minute and 1k requests per minute. Reach out to Azure customer support if this doesn’t suffice.
Are Mistral Large Azure APIs region specific?
Mistral Large API endpoints can be created in AI Studio projects or Azure Machine Learning workspaces in the East US 2 or France Central Azure regions. If you want to use Mistral Large in prompt flow in projects or workspaces in other regions, you can use the API and key as a connection to prompt flow manually. Essentially, you can use the API from any Azure region once you create it in East US 2 or France Central.
Can I fine-tune Mistral Large?
Not yet, stay tuned…
Supercharge your AI apps with Mistral Large today. Head over to AI Studio model catalog to get started.
Windows Update Compliance Reporting FAQ
Hi, Jonas here!
Or as we say in the north of Germany: “Moin Moin!”
I wrote a frequently asked questions document about the report solution I created some years ago, mainly because I still receive questions about the report solution and how to deal with update-related errors. If you haven’t seen my ConfigMgr report solution yet, here is the link: “Mastering Configuration Manager Patch Compliance Reporting”
Hopefully the Q&A list will help others to reach 100% patch compliance.
If that’s even possible 😉
First things first
If you spend a lot of time dealing with update related issues or processes for client operating systems, have a look at “Windows Autopatch” HERE and let Microsoft deal with that.
For server operating systems, on-premises or in the cloud have a look at “Azure Update Manager” and “A Staged Patching Solution with Azure Update Manager“.
The section: “Some key facts and prerequisites” of the blog mentioned earlier covers the basics of the report solution and should answer some questions already.
Everything else is hopefully covered by the list below.
So, lets jump in…
The OSType field does not contain any data. What should I do?
This can mean two things.
No ConfigMgr client is installed. In that case other fields like client version also contain no data. Install the client or use another collection without the system in it to run the report and therefore exclude the system from the report.
The system has not sent any hardware inventory data. Check the ConfigMgr client functionality.
Some fields contain a value of 999. What does that mean?
There is no data for the system found in the ConfigMgr database when a value of 999 is shown.
“Days Since Last Online” with a value of 999 typically means that the system had no contact with ConfigMgr at all. Which either means the system has no ConfigMgr client installed or the client cannot contact any ConfigMgr management point.
“Days Since Last AADSLogon” with a value of 999 means there is no data from AD System or Group Discovery in the ConfigMgr database for the system.
“Days Since Last Boot” with a value of 999 means there is no hardware inventory data from WMI class win32_operatingsystem in the ConfigMgr database for the system.
“Month Since Last Update Install” with a value of 999 means there is no hardware inventory data from WMI class win32_quickfixengineering in the ConfigMgr database for the system.
What does WSUS Scan error stand for?
Before a system is able to install updates, it needs to scan against the WSUS server to be able to report missing updates. If that scan fails, an error will be shown in the report.
In that case, other update information might also be missing, and such an error should be fixed before any other update-related analysis.
I found WSUS scan errors with a comment about a WSUS GPO
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process fails if a group policy tries to do the same with a different WSUS server name.
Remove the GPO for those systems to resolve the error.
I found WSUS scan errors mentioning the registry.pol file
The ConfigMgr client tries to set a local policy to point the WSUS client to the WSUS server of the ConfigMgr infrastructure. That process results in a policy entry in: “C:\Windows\system32\grouppolicy\Machine\Registry.pol”
If the file cannot be accessed the WSUS entry cannot be set and the process fails.
Delete the file in that case and run “gpupdate /force” as an administrator.
IMPORTANT: Local group policy settings made manually or via ConfigMgr task sequence need to be set again if the file has been deleted.
NOTE: To avoid any other policy problems (with Defender settings for example) it is best to re-install the ConfigMgr client after the file has been re-created.
I found WSUS scan errors mentioning a proxy problem, how can I fix that?
This typically happens when a system proxy is set and the WSUS agent tries to connect to the WSUS server via that proxy and fails.
You can do the following:
Open a command prompt as admin and run “netsh winhttp show proxy”
If a proxy is present, either remove the proxy with: “netsh winhttp reset proxy”
Or, add either the WSUS server FQDN or just the domain to the bypass list.
Example: netsh winhttp set proxy proxy-server="proxy.domain.local:8080" bypass-list="<local>;wsus.domain.local;*.domain.local"
Use either “wsus.domain.local” or “*.domain.local” in the bypass list, in case your WSUS server is part of domain.local.
In some cases the proxy is set for the local SYSTEM account
Open: “regedit” as administrator
Open: [HKEY_USERS\S-1-5-18\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections]
Set “ProxyEnable” to “0” to disable the use of a proxy for the system account
When should I restart systems with a pending reboot?
As soon as possible. A pending reboot might be pending from another source than just software update installations like a normal application or server role installation and might prevent security update installations.
Some systems have all updates installed, but some deployments still show as non-compliant (Column: “Deployments Non Compliant”) What can I do?
This can happen if older update deployments exist and have no compliance changes over a longer period of time. Systems in that state are typically shown as “unknown” in the ConfigMgr console under “Deployments”.
Do one of the following to resolve that:
Remove older updates from an update group in case they are no longer needed
Remove the deployment completely
Delete the deployment and create a new one.
All actions will result in a re-evaluation of the deployment.
Column “Update Collections” does not show any entries.
The system is not a member of a collection with update deployments applied and is therefore not able to install updates. Make sure the system is part of an update deployment collection.
What is the difference between “Missing Updates All” and “Missing Updates Approved”?
“Missing Updates All” are ALL updates missing for a system whether deployed or not.
“Missing Updates Approved” are just the updates which are approved, deployed, or assigned (depending on the term you use) to a system and still missing. “Missing Updates Approved” should be zero at the end of your patch cycle, while “Missing Updates All” can always have a value other than zero.
Some systems are shown without any WSUS scan or install error, but have still updates missing. What can I do to fix that?
There can be multiple reasons for that.
Make sure the system is part of a collection with update deployments first
Check the update deployment start and deadline times, and whether the system already sees the deployment (time is past the start time) and is forced to install the update (time is past the deadline time).
This is visible in the report: “Software Updates Compliance – Per device deployments”, which can be opened individually or by clicking on the number in the column “Deployments Non Compliant” in any of the list views of the report solution.
The earliest deadline for a specific update and device is visible in the report: “Software Updates Compliance – Per device”, or by clicking on the number in the column “Missing Updates Approved”.
Make sure the system either has no maintenance window at all or a maintenance window which fits the start and deadline time.
Make sure a maintenance window is at least 2h long to be able to install updates in it
Also, check the timeout configured for deployed updates on each update in the ConfigMgr console.
For example, if an update has a timeout of two hours configured and the maintenance window is set to two hours, installation of the update will NOT be triggered.
Check the restart notification client settings. This is especially important in server environments where a logged on user might not see a restart warning and therefore might not act on it. The restart time will be added to the overall timeout of each update and could exceed the overall allowed installation time of a maintenance window
Check the available space on drive C:. Too little space can cause all sorts of problems.
Start “cleanmgr.exe” as admin and delete unused files.
If nothing else worked: Reboot the system and trigger updates manually (see the sketch after this list).
If nothing else worked: Re-install the ConfigMgr client
If nothing else worked: Follow the document: “Troubleshoot issues with WSUS client agents”
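To script the manual trigger mentioned above, a small sketch using the well-known ConfigMgr client schedule IDs for the Software Updates Scan Cycle and the Software Updates Deployment Evaluation Cycle could look like this; run it locally as administrator.
# Trigger the Software Updates Scan Cycle
Invoke-WmiMethod -Namespace "root\ccm" -Class SMS_Client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000113}"
# Trigger the Software Updates Deployment Evaluation Cycle
Invoke-WmiMethod -Namespace "root\ccm" -Class SMS_Client -Name TriggerSchedule -ArgumentList "{00000000-0000-0000-0000-000000000108}"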
Some systems are shown as non-compliant, but all updates are installed. What can I do to fix that?
This can either be a reporting delay or a problem with update compliance state messages.
If the update installation just finished, wait at least 45 to 60 minutes. This is because the default state message send interval is set to 15 minutes and the report result is typically cached for 30 minutes.
If the update installation time is in the past, this could be due to missing state messages.
In that case, run the following PowerShell line locally on the affected machines to re-send update compliance state messages:
(New-Object -ComObject "Microsoft.CCM.UpdatesStore").RefreshServerComplianceState()
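If you need to run this on many machines at once, the following sketch pushes the same line out via PowerShell remoting (assuming WinRM is enabled; the computer names are placeholders):
# Placeholder list of affected devices
$computers = "SRV01", "SRV02", "SRV03"
Invoke-Command -ComputerName $computers -ScriptBlock {
    # Re-send the update compliance state messages to the management point
    (New-Object -ComObject "Microsoft.CCM.UpdatesStore").RefreshServerComplianceState()
}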
Installation errors indicate an issue with the CBS store. How can this be fixed?
If the CBS store is marked corrupt, no security updates can be installed and the store needs to be repaired.
The following articles describe the process in more detail:
HERE and HERE
The CBS log is located under: “C:\Windows\Logs\CBS\CBS.log”.
The large log file size sometimes causes issues when parsing the file for the correct log entry.
In addition to that, older logfiles are stored as CAB-files and can also be quite large in size.
The following script can be used to parse even very large files and related CAB-files for store corruption entries.
Get-CBSLogState.ps1
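The script above is the complete solution for large logs and CAB archives. As a simplified illustration of the idea, a quick check of the current log for common corruption indicators could look like this (the search patterns are examples, not an exhaustive list):
# Search the current CBS log for common corruption indicators
$cbsLog = "C:\Windows\Logs\CBS\CBS.log"
Select-String -Path $cbsLog -Pattern "corrupt", "0x80073712" | Select-Object -Last 20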
Are there any additional resources related to update compliance issues?
Yes, the following articles can help further troubleshoot update related issues:
Troubleshoot software update management
Troubleshoot software update synchronization
Troubleshoot software update scan failures
Troubleshoot software update deployments
Deprecation of Microsoft Support Diagnostic Tool (MSDT) and MSDT Troubleshooters – Microsoft Support
What can I do to increase my update compliance percentage?
This is a non-exhaustive list of actions that can help improve software update compliance:
As mentioned before, do not leave a system in a pending-reboot state for too long.
As mentioned before, make sure there is always enough space left on drive C: (~2GB+ for monthly security updates, ~10GB+ for feature updates). See the sketch after this list.
For example, start “cleanmgr.exe” as admin and delete unused files.
Make sure a system has enough uptime to be able to download and install security updates.
If a system has limited bandwidth available, it might need to stay online/active a while longer than other systems with more bandwidth available.
You might also need to consider power settings for systems running on battery.
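As a small sketch for the disk-space point above (the threshold is the rough value mentioned and can be adjusted):
# Warn if drive C: has less than ~2 GB free (rough threshold for monthly security updates)
$minFreeGB = 2
$freeGB = [math]::Round((Get-PSDrive -Name C).Free / 1GB, 1)
if ($freeGB -lt $minFreeGB) {
    Write-Warning "Only $freeGB GB free on C: - clean up before the next update cycle."
} else {
    Write-Output "C: has $freeGB GB free."
}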
What is a realistic update compliance percentage?
While the aim is to get to 100% fully patched systems, this goal can be quite hard to reach in some situations. Some of the reasons behind poor patch compliance are technical issues like the ones mentioned above under “What can I do to increase my update compliance percentage?”. Other factors include the device delivery process: if you put ready-to-go systems on a shelf for a longer period, those devices will lower the overall patch compliance percentage.
To reach a high compliance percentage, know your workforce and know your update processes.
Reduce the blind spot and make sure each actively used system does not fall out of management due to errors, misconfigurations, or simply bad monitoring. Keep those devices on the shelf in mind and exclude them from compliance reporting for the duration of inactivity.
That’s it!
I hope you enjoyed reading this blog post. Stay safe!
Jonas Ohmsen
Microsoft Tech Community – Latest Blogs –Read More
Relational Data Synchronization between environments
There are business and/or technical cases in which relational data should be duplicated to another environment. Since the demands of those cases differ, there are multiple technical solutions for achieving the goal.
In this article, I will discuss the various solutions according to different business needs, with a deep dive into one family of solutions – sync solutions based on the database engine (DB engine). The content is Azure-oriented, but the same concepts apply to other clouds as well.
Anyone who needs to sync relational data between environments should find a good guideline here.
General synchronization demands
Let us start with the typical demands:
Scenario | Latency | Typical solution family
Data Warehouse | Hours to a day | ETL
Data mart | Minutes to hours | DB engine Sync
Highly utilized DB | Seconds to minutes | DB engine Full or Sync
High availability | Seconds | DB engine Full
Disaster Recovery | Seconds to minutes | DB engine Full
Network separation | Varies | Varies
DB engine Sync is the focus of this article. See below.
Here is a high-level description of those solution families:
ETL (Extract,Transform,Load):
Used for populating data warehouses or data marts from production systems
Usually, the schema on the target is more reporting friendly (star schema) than the production system
The data in the target can be in delay (usually hours) compared to the source
The source and the target can be utilizing different technologies
Tools in the market: Azure Data Factory, Informatica, Ascend
DB engine full:
Built-in replica mechanism to have another copy of the full database
With or without the ability to have one or more replicas that can be utilized as a read replica
Based on high availability, log shipping, backup & restore or storage-based solutions
Used for HA/DR and or read scale operation
Minimal latency (seconds)
Same technology
Read only on the target
DB engine sync
Tools in scope: SQL Data sync, Fabric Mirroring, Replication
Those tools support partial copy of the database
See more in the next chapter
Each option has its own pros and cons and sometimes you might use more than one solution in the same project.
In the rest of this article, I will focus on the DB engine sync solutions family usage.
More information:
ETL – Extract, transform, and load
Read only Replica: Azure SQL, PostgreSQL, MySQL
DB engine Sync Solutions Family
The need:
I cannot overstate the importance of choosing a synchronization solution based on your specific business needs. This is the reason multiple solutions exist – so that your specific need can be met with a good-enough solution.
A sync process is responsible for syncing data between environments – to be more exact, between a source and one or more targets. The different solutions have different characteristics.
Here are typical characteristics that you might be interested in:
Various kinds of technology
Different schema
Updates on both sides (conflict might happen)
Latency between the two copies
Maintenance efforts, skills required
The level of provider/user responsibility for the sync including re-sync probability, tools and efforts
I chose three key technologies to discuss (replication, SQL Data Sync, and Fabric Mirroring), based on many conversations with my customers.
Replication:
Very mature technology which is supported by the majority of the relational database products
Low latency – usually seconds
Multiple flavors – transactional, merge, snapshot
A different table structure in the source and target is possible with limitations, but adds complexity
Multiple subscribers per source are supported
Monitoring is your responsibility and in case of failure, deep knowledge is needed to avoid reinitializing
For SQL server, you have a built-in replication monitor tool. For other databases you should check.
The monitor does not take corrective actions. Failing to track the replication status might leave the target environment outdated
Replicating data to a database from another provider might be possible, usually with limitations, and you will need a third-party tool to implement such a solution. For SQL Server, Heterogeneous Database Replication is deprecated.
Azure SQL database cannot be a publisher
You must have a good DBA with specific replication knowledge to maintain the system
Typical scenarios for replication:
Filtering (only part of the rows and/or columns should be replicated)
Low latency needs
Cross security boundaries with SQL authentication (see in the security section)
Cross-database technologies (SQL Server → Oracle)
More information:
Replication: Azure SQL MI, Azure SQL DB, PostgreSQL, MySQL
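For readers who want to see what a replication setup looks like in practice, here is a rough PowerShell sketch (using the SqlServer module to call the replication stored procedures) for a SQL Server or Azure SQL Managed Instance publisher. It is an illustration only: the Distributor must already be configured, the server and object names are placeholders, and the exact stored procedure parameters should be verified against the documentation.
# Requires the SqlServer module; names below are placeholders
Import-Module SqlServer
$publisher = "publisher.contoso.com"   # hypothetical publisher instance
$pubDb     = "SalesDb"                 # hypothetical published database
# Enable the database for publishing (a Distributor must already be configured)
Invoke-Sqlcmd -ServerInstance $publisher -Database "master" -Query "EXEC sp_replicationdboption @dbname = N'SalesDb', @optname = N'publish', @value = N'true';"
# Create a transactional publication and publish one table as an article
Invoke-Sqlcmd -ServerInstance $publisher -Database $pubDb -Query "EXEC sp_addpublication @publication = N'SalesPub', @status = N'active'; EXEC sp_addarticle @publication = N'SalesPub', @article = N'Orders', @source_owner = N'dbo', @source_object = N'Orders';"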
SQL Data Sync for Azure:
SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple databases, both on-premises and in the cloud, but only SQL Server based.
Azure SQL Data Sync does not support Azure SQL Managed Instance or Azure Synapse Analytics at this time
Source and target must have exactly the same schema
Multiple subscribers are supported
Typical scenarios for SQL Data Sync:
Considerable number of tables to be replicated
Managed by Azure experts (limited database knowledge needed)
SaaS solution preferred
Azure SQL database source
Bi-directional synchronization
More information:
Data Sync: Overview, Best Practices
Azure SQL Data Sync | Tips and Tricks
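For readers who prefer scripting over the portal, a hedged sketch of creating a sync group and adding a member with the Az.Sql PowerShell module might look like the following; the resource names are placeholders and the parameter set should be verified against the current module documentation.
# Requires the Az.Sql module and an authenticated session (Connect-AzAccount)
Import-Module Az.Sql
$rg     = "rg-datasync"        # hypothetical resource group
$hubSrv = "hub-sqlserver"      # hypothetical hub logical server
$hubDb  = "HubDb"              # hypothetical hub database
$cred   = Get-Credential       # SQL credential for the hub database
# Create a sync group on the hub database; conflicts are resolved with "hub wins"
New-AzSqlSyncGroup -ResourceGroupName $rg -ServerName $hubSrv -DatabaseName $hubDb `
    -Name "SalesSyncGroup" -ConflictResolutionPolicy "HubWin" -IntervalInSeconds 300 `
    -DatabaseCredential $cred -SyncDatabaseResourceGroupName $rg `
    -SyncDatabaseServerName $hubSrv -SyncDatabaseName "SyncMetadataDb"
# Add an Azure SQL Database member that syncs in both directions
New-AzSqlSyncMember -ResourceGroupName $rg -ServerName $hubSrv -DatabaseName $hubDb `
    -SyncGroupName "SalesSyncGroup" -Name "MemberDb" -SyncDirection "Bidirectional" `
    -MemberDatabaseType "AzureSqlDatabase" -MemberServerName "member-sqlserver" `
    -MemberDatabaseName "MemberDb" -MemberDatabaseCredential (Get-Credential)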
Mirroring in Microsoft Fabric (private preview):
The target for the synced data is stored in delta lake table format – no need for a relational database
The primary business scenario is reporting on the target
The schema cannot be changed on the target
Azure Cosmos DB, Azure SQL DB and Snowflake customers will be able to use Mirroring to mirror their data in OneLake and unlock all the capabilities of Fabric Warehouse, Direct Lake Mode, Notebooks and much more.
SQL Server, Azure PostgreSQL, Azure MySQL, Mongo DB and other databases and data warehouses will be coming in CY24.
Typical scenarios for Mirroring with Microsoft Fabric:
The target is for reporting only and might integrate data from multiple sources
The cost associated with maintaining another relational engine for reporting is high. This aspect is even more significant for ISVs that are managing different environments for each customer (tenant)
Azure SQL or IaaS environment
Replacing an ETL system with no code solution
Part of your OneLake data architecture
More information:
Mirroring: Announcement, Copilot, Cosmos DB
Other aspects:
For the completeness of this article, here is a brief discussion of other aspects of the solutions that you should be aware of:
Identity and Security:
For all solutions, an integrated authentication approach is the best option (replication authentication and replication security, SQL Data Sync, Mirroring).
For replication, you might use SQL authentication; for Azure SQL Managed Instance it is required.
Cost:
None of the solutions has a direct cost beyond the services used for the source and target and any cross-datacenter network bandwidth consumed.
Bi-directional and conflict resolution:
Among the Azure-native solutions, only SQL Data Sync supports this.
Transactional replication – bi-directional (peer-to-peer) is rare but has multiple options. Last write wins is the automatic way, as defined here.
Note:
Peer to peer is not supported by Azure SQL database offerings
Merge replication has more options but not on Azure SQL database offerings – see here
SQL Data Sync – Hub wins or Member wins (see here)
Mirroring – one direction only , so, not applicable
Scalability and performance:
In all solutions, you can expect reasonable pressure on the source (publisher).
SQL Data Sync adds triggers to the source database, while replication uses a log reader (less pressure).
Monitoring and sync status:
For Replication – you have replication monitor and the tablediff utility
For SQL data Sync and Fabric mirroring – Monitoring Azure SQL Data Sync using OMS Log Analytics or Azure SQL Data Sync Health Checker
Real-time vs. Batch Synchronization:
All the solutions are well suited to real-time and short transactions. However, batch workloads will also work, with more pressure on the SQL Server log.
For Data Sync, empty tables provide the best performance at initialization time. If the target table is empty, Data Sync uses bulk insert to load the data. Otherwise, Data Sync does a row-by-row comparison and insertion to check for conflicts. If performance is not a concern, however, you can set up sync between tables that already contain data.
More information:
Empty tables provide the best performance
Choosing a DB engine Sync solution
Here is a short list of criteria that might help you choose a solution:
SQL Data Sync
The best solution for Azure SQL DB
Portal/script managed
Target should be from the SQL server family
Replication
The only solution for Azure SQL Managed Instance
Customizable (filtering, schema changes)
Deep database knowledge required
Fabric mirroring
The solution when the destination can be, or is preferred to be, in delta lake table format
Supports multiple sources (Azure SQL, Cosmos DB, Snowflake, more to come)
Portal/script managed
More information:
Compare SQL Data Sync with Transactional Replication
Conclusion
In the realm of data management, the need to synchronize relational data across environments arises from diverse business and technical requirements. This article has delved into the various solutions available, with a particular focus on database engine-based synchronization in the Azure ecosystem.
From the high-level demands of scenarios such as Data Warehouse, Data mart, High Utilized DB, High Availability, Disaster Recovery, to the intricacies of choosing between ETL, DB engine full, and DB engine sync solutions, we’ve explored the landscape of options available.
In the family of DB engine sync solutions, we’ve highlighted the importance of aligning your choice with specific business needs. Replication, a mature technology, offers low latency and supports various scenarios, though it requires vigilant monitoring. SQL Data Sync provides bi-directional synchronization for a considerable number of tables, managed by Azure professionals, while Microsoft Fabric’s Mirroring offers a unique approach for reporting scenarios.
Considerations such as identity and security, cost implications, conflict resolution, scalability, and monitoring have been discussed to provide a holistic view. Whether you prioritize low latency, transactional consistency, or ease of management, choosing the right solution is paramount.
As you navigate the complexities of relational data synchronization, keep in mind the nuances of each solution and the unique demands of your project. Whether opting for a well-established solution like Replication or embracing innovative approaches like Mirroring with Microsoft Fabric, make an informed decision based on your specific use case.
In conclusion, successful data synchronization is not a one-size-fits-all endeavor. By understanding the characteristics, advantages, and limitations of each solution, you empower yourself to make informed decisions that align with the dynamics of your data ecosystem. Explore further, stay updated on evolving technologies, and tailor your approach to meet the ever-evolving demands of your business.
Remember that the technology world in general, and the cloud in particular, is constantly changing. The dynamic nature of data management and the importance of staying abreast of evolving technologies mean the reader should keep exploring emerging solutions and best practices.
Microsoft Tech Community – Latest Blogs –Read More
Microsoft and Industry Leaders Enable RAN and Platform Programmability with Project Janus
Barcelona – February 26, 2024. Today at MWC 2024, Microsoft announced Project Janus, along with leaders across the telecommunications industry. Project Janus uses telco-grade cloud infrastructure compatible with O-RAN standards to draw on fine-grained telemetry from the radio access network (RAN), the edge cloud infrastructure, and other sources of data. This enables a communication service provider (CSP) to gain detailed monitoring and fast closed-loop control of its RAN network. Janus has support and participation from CSPs such as Deutsche Telekom and Vodafone; RAN and infrastructure providers CapGemini, Mavenir, and Intel Corporation; and RIC vendors and software innovators Juniper Networks, Aira Technologies, Amdocs, and Cohere Technologies.
“We know how vital the performance, security, and automation of the network is for CSPs, and going forward, more accurately optimizing complex networks,” said Yousef Khalidi, Corporate Vice President, Azure for Operators at Microsoft. “That’s why we’re excited to debut Project Janus alongside leading partners and supporters as an O-RAN compatible extension that makes RAN and platform even more programmable and optimized.”
Project Janus helps CSPs optimize RAN performance through visibility, analytics, AI, and closed loop control. To meet this objective, Microsoft and industry collaborators built a set of capabilities including RAN instrumentation tools that:
leverage the existing E2 O-RAN interface
update its service models to communicate with components of a CSP’s RAN and SMO architecture including the Distributed Unit (DU), Centralized Unit (CU), and RAN Intelligent Controller (RIC).
RAN, RIC, and xApp and rApp vendors are able to develop and use instrumentation tools to capture RAN data dynamically, and also combine them with platform data from cloud-based platforms hosting the RAN workloads.
This architecture enables several new use cases, such as precise analytics for anomaly detection and root cause analysis, interference detection, and optimizing other RAN performance metrics. The framework also enables new applications, such as fast vRAN power saving, failover, and live migration.
Project Janus will be available for everyone to include in their platform and network functions and will be supported natively by Microsoft’s Azure Operator Nexus platform.
To see specific use case examples, visit the “Unlock Operator Value with Programmable RAN & Platform” pod in the Microsoft booth at Mobile World Congress 2024 at 3H30 in Hall 3 during February 26-29, 2024 and check out www.microsoft.com/research/project/programmable-ran-platform/videos. Also read the Mavenir, Microsoft and Intel Team for Real-Time Layer 1 vRAN Control white paper.
Telecommunications leaders are sharing support for the collaborative initiative:
Deutsche Telekom – “This initiative shows great promise to increase the pace of innovation and unlock new value through dynamic, customizable RAN data and analytics that can work within an O-RAN compliant framework. We look forward to seeing the participation by even more companies and developers in this burgeoning ecosystem.” – Petr Ledl, Vice President of Network Trials and Integration Lab and Chief Architect of Access Disaggregation program at Deutsche Telekom.
Vodafone – “The dynamic service models enabled by Project Janus are fully aligned with the vision of Open RAN in supporting the scale deployment of software-defined RAN. Access to the correct data at the right time and intelligent algorithms based on AI/ML capabilities will introduce significant performance and capacity benefits for all existing cellular networks and enable real autonomous ones.” – Francisco Martín Pignatelli, Head of Open RAN at Vodafone.
Hear from Microsoft Collaborators:
CapGemini – “CapGemini in collaboration with Microsoft has successfully demonstrated implementation of several use cases such as anomaly detection, energy savings and interference detection using Janus. These efforts have also demonstrated the benefits of being able to combine and reason over dynamic data from RAN, incremental to the predefined data types already available today, with dynamic data from the O-Cloud platform using Janus dynamic service models such as resolving key integration issues between RAN and platform as well as offering the power of leveraging AI/ML applications by developers to more precisely target areas of improvement for the RAN network.” – Rajat Kapoor, Vice President and Head of Software Frameworks at Capgemini.
Mavenir – “Improving RAN visibility and real-time control is essential to a CSP’s network performance and security, and it is Mavenir’s goal to support our customers with state-of-the-art observability. Data from our O-RAN-compliant DU/CU can be easily extracted dynamically and made available within our product management tools for tuning the operation of the Mavenir RAN. We demonstrated an advanced on-site debugging tool and customizable interference detection solution with Janus, which highlighted the flexibility of Janus to solve problems in real-time and improve system performance. With Janus, data from our Open RAN compliant DU can also be made available to an ecosystem of O-RAN focused application developers to provide insights and recommendations to the CSP to address and improve their network performance.” – Bejoy Pankajakshan, EVP, Chief Technology & Strategic Officer at Mavenir.
Intel Corporation – “With Intel FlexRAN reference architecture, Intel has been at the forefront of enabling the industry with virtualized, Open RAN to drive performance, flexibility and innovations, including AI. Microsoft’s Janus builds on FlexRAN’s software programmability to expose new data streams and application capabilities to the next generation of xApp developers, accelerating the adoption of AI in RAN networks to provide even more value to service providers”- Cristina Rodriguez, Vice President and General Manager of Wireless Access Network Division at Intel.
Juniper Networks – “Using the existing E2 O-RAN interface, Janus introduces the capability to bring more timely and customized RAN telemetry to Juniper Near-Real Time RIC. From this, we can enable xApp developers to use the incremental data to more precisely target areas of improvement for the performance and optimization of a RAN network.” – Constantine Polychronopoulos, Group VP of 5G and Telco Cloud at Juniper Networks.
Aira Technologies – “Our mission at Aira as an AI Defined Networking company is to enable the fully autonomous cellular RAN and our application of ML to wireless baseband processing is an industry first. Aira has showcased the use of Janus to collect and forward dynamic RAN data into our near-real time xApp where we apply leading-edge machine learning to drive better channel estimation and prediction to help maximize downlink throughput and range. We look forward to demonstrating, with Microsoft and the growing O-RAN ecosystem, even more innovation built on disaggregated and programmable networks.” – Anand Chandrasekher, Co-Founder and CEO at Aira Technologies.
Amdocs – “As a leading service provider and member of the ARI-5G Consortium, Amdocs is a key proponent of Open RAN and dedicated enabler of RAN intelligence and optimization and we do this today by offering among other things, Amdocs’ xApps such as the massive MIMO xApp. With Janus we look forward to leveraging dynamic service models with our network applications to further accelerate RAN performance and programmability for our CSP customers.” – Oleg Volpin, Division President Europe, Telefonica Global and Network Offering Division at Amdocs.
Cohere Technologies – “Cohere along with key operators and vendors is driving Multi-G ecosystem to enable co-existence of 4G, 5G and 6G and helping operators to do spectrum management in a seamless way. Janus’s dynamic infrastructure helps realize Multi-G’s dynamic infrastructure requirements and helps this vision.” – Prem Sankar Gopannan, Vice President of Product Architecture and Software Engineering.
Microsoft Tech Community – Latest Blogs –Read More