Month: September 2024
Generative AI with Microsoft Fabric
Microsoft Fabric seamlessly integrates with generative AI to enhance data-driven decision-making across your organization. It unifies data management and analysis, allowing for real-time insights and actions.
With Real Time Intelligence, keeping grounding data for large language models (LLMs) up-to-date is simplified. This ensures that generative AI responses are based on the most current information, enhancing the relevance and accuracy of outputs. Microsoft Fabric also infuses generative AI experiences throughout its platform, with tools like Copilot in Fabric and Azure AI Studio enabling easy connection of unified data to sophisticated AI models.
Check out GenAI experiences with Microsoft Fabric.
Classify and protect schematized data with Microsoft Purview.
Connect data from OneLake to Azure AI Studio.
Watch our video here:
QUICK LINKS:
00:00 — Unify data with Microsoft Fabric
00:35 — Unified data storage & real-time analysis
01:08 — Security with Microsoft Purview
01:25 — Real-Time Intelligence
02:05 — Integration with Azure AI Studio
Link References
This is Part 3 of 3 in our series on leveraging generative AI. Watch our playlist at https://aka.ms/GenAIwithAzureDBs
Unfamiliar with Microsoft Mechanics?
As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Video Transcript:
-If you want to bring custom Gen AI experiences to your app so that users can interact with them using natural language, the better the quality and recency of the data used to ground responses, the more relevant and accurate the generated outcome.
-The challenge, of course, is that your data may be sitting across multiple clouds, in your own data center and also on the edge. Here’s where the complete analytics platform Microsoft Fabric helps you to unify data wherever it lives at unlimited scale, without you having to move it.
-It incorporates a logical multi-cloud data lake, OneLake, for unified data storage and access and separately provides a real-time hub optimized for event-based streaming data, where change data capture feeds can be streamed from multiple cloud sources for analysis in real time without the need to pull your data. Then with your data unified, data professionals can work together in a collaborative workspace to ingest and transform it, analyze it, and also endorse it as they build quality data sets.
-And when used with Microsoft Purview, this can be achieved with an additional layer of security, where you can classify and protect your schematized data, with protections flowing as everyone, from your engineers and data analysts to your business users, works with data in the Fabric workspace. Keeping grounding data for your LLMs up to date is also made easier by being able to act on it with Real-Time Intelligence.
-For example, you might have a product recommendation engine on an e-commerce site and using Real Time Intelligence, you can create granular conditions to listen for changes in your data, like new stock coming in, and update data pipelines feeding the grounding data for your large language models.
-So now, whereas before the gen AI may not have had the latest inventory data available to it to ground responses, with Real Time Intelligence, generated responses can benefit from the most real-time, up-to-date information so you don’t lose out on sales. And as you work with your data, gen AI experiences are infused throughout Fabric. In fact, Copilot in Fabric experiences are available for all Microsoft Fabric workloads to assist you as you work.
-And once your data set is complete, connecting it from Microsoft Fabric to ground large language models in your gen AI apps is made easy with Azure AI Studio, where you can bring in data from OneLake seamlessly and choose from some of the most sophisticated large language models hosted in Azure to build custom AI experiences on your data, all of which is only made possible when you unify your data and act on it with Microsoft Fabric.
Microsoft Tech Community – Latest Blogs –Read More
Mseries announcements – GA of Mv3 High Memory and details on Mv3 Very High Memory virtual machines
Mv3 High Memory General Availability
Executing on our plan to have our third version of M-series (Mv3) powered by 4th generation Intel® Xeon® processors (Sapphire Rapids) across the board, we’re excited to announce that Mv3 High Memory (HM) virtual machines (VMs) are now generally available. These next-generation M-series High Memory VMs give customers faster insights, more uptime, lower total cost of ownership and improved price-performance for their most demanding workloads. Mv3 HM VMs are supported for RISE with SAP customers as well. With the release of this Mv3 sub-family and the sub-family that offers around 32TB of memory, Microsoft is the only public cloud provider offering HANA-certified VMs from around 1TB to around 32TB of memory, all powered by 4th generation Intel® Xeon® processors (Sapphire Rapids).
Key features on the new Mv3 HM VMs
The Mv3 HM VMs can scale for workloads from 6TB to 16TB.
Mv3 delivers up to 40% higher throughput than the previous-generation Mv2 High Memory (HM) VMs, enabling significantly faster SAP HANA data load times for SAP OLAP workloads and significantly higher performance per core for SAP OLTP workloads.
Powered by Azure Boost, Mv3 HM provides up to 2x more throughput to Azure premium SSD storage and up to 25% improvement in network throughput over Mv2, with more deterministic performance.
Designed from the ground up for increased resilience against failures in memory, disks, and networking based on intelligence from past generations.
Available in both disk and diskless offerings allowing customers the flexibility to choose the option that best meets their workload needs.
During our private preview, several customers such as SwissRe unlocked gains from the new VM sizes. In their own words:
“Mv3 High Memory VM results are promising – in average we see a 30% increase in the performance without any big adjustment.”
SwissRe
Msv3 High Memory series (NVMe)
| Size | vCPU | Memory in GiB | Max data disks | Max uncached Premium SSD throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_M416s_6_v3 | 416 | 5,696 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M416s_8_v3 | 416 | 7,600 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M624s_12_v3 | 624 | 11,400 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832s_12_v3 | 832 | 11,400 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832s_16_v3 | 832 | 15,200 | 64 | 130,000/8,000 | 260,000/8,000 | 8 | 40,000 |
Msv3 High Memory series (SCSI)
| Size | vCPU | Memory in GiB | Max data disks | Max uncached Premium SSD throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_M416s_6_v3 | 416 | 5,696 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M416s_8_v3 | 416 | 7,600 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M624s_12_v3 | 624 | 11,400 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832s_12_v3 | 832 | 11,400 | 64 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832s_16_v3 | 832 | 15,200 | 64 | 130,000/8,000 | 130,000/8,000 | 8 | 40,000 |
Mdsv3 High Memory series (NVMe)
| Size | vCPU | Memory in GiB | Temp storage (SSD) GiB | Max data disks | Max cached* and temp storage throughput: IOPS/MBps | Max uncached Premium SSD throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_M416ds_6_v3 | 416 | 5,696 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M416ds_8_v3 | 416 | 7,600 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M624ds_12_v3 | 624 | 11,400 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832ds_12_v3 | 832 | 11,400 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832ds_16_v3 | 832 | 15,200 | 400 | 64 | 250,000/1,600 | 130,000/8,000 | 260,000/8,000 | 8 | 40,000 |
Mdsv3 High Memory series (SCSI)
| Size | vCPU | Memory in GiB | Temp storage (SSD) GiB | Max data disks | Max cached* and temp storage throughput: IOPS/MBps | Max uncached Premium SSD throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Max network bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_M416ds_6_v3 | 416 | 5,696 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M416ds_8_v3 | 416 | 7,600 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M624ds_12_v3 | 624 | 11,400 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832ds_12_v3 | 832 | 11,400 | 400 | 64 | 250,000/1,600 | 130,000/4,000 | 130,000/4,000 | 8 | 40,000 |
| Standard_M832ds_16_v3 | 832 | 15,200 | 400 | 64 | 250,000/1,600 | 130,000/8,000 | 130,000/8,000 | 8 | 40,000 |
*Read IOPS are optimized for sequential reads.
Regional Availability and Pricing
The VMs are now available in West Europe, North Europe, East US, and West US 2. For pricing details, please take a look here for Windows and Linux.
Additional resources:
SAP Certification for Mv3 on Azure
Details on Mv3 Very High Memory Virtual Machines
We are thrilled to unveil the latest and largest additions to our Mv3-Series, Standard_M896ixds_32_v3 and Standard_M1792ixds_32_v3 VM SKUs. These new VM SKUs are the result of a close collaboration between Microsoft, SAP, experienced hardware partners, and our valued customers.
Key features on the new Mv3 VHM VMs
Unmatched Memory Capacity: With close to 32TB of memory, both the Standard_M896ixds_32_v3 and Standard_M1792ixds_32_v3 VMs are ideal for supporting very large in-memory databases and workloads.
High CPU Power: Featuring 896 cores in the Standard_M896ixds_32_v3 VM and 1792 vCPUs** in the Standard_M1792ixds_32_v3 VM, these VMs are designed to handle high-end S/4HANA workloads, providing more CPU power than other public cloud offerings.
Enhanced Network and Storage Bandwidth: Both VM types provide the highest network and storage bandwidth available in Azure for a full node VM, including up to 200-Gbps network bandwidth with Azure Boost.
Optimal Performance for SAP HANA: Certified for SAP HANA, these VMs adhere to the SAP prescribed socket-to-memory ratio, ensuring optimal performance for in-memory analytics and relational database servers.
| Size | vCPU or cores | Memory in GiB | SAP HANA Workload Type |
| --- | --- | --- | --- |
| Standard_M896ixds_32_v3 | 896 | 30,400 | OLTP (S/4HANA) / OLAP Scaleup |
| Standard_M1792ixds_32_v3 | 1792** | 30,400 | OLAP Scaleup |
**Hyperthreaded vCPUs
Azure Extended Zones: Optimizing Performance, Compliance, and Accessibility
Azure Extended Zones are designed to bring the power of Azure closer to end users in specific metropolitan areas or jurisdictions, catering to organizations that require low latency and stringent data residency controls. This innovative solution supports a variety of use cases, including real-time media editing, financial services, healthcare, and any industry where data localization and rapid response times are critical.
Key Benefits and Features:
Low Latency and High Performance:
Reduced Latency: Azure Extended Zones enable applications requiring rapid response times to operate with minimal latency. This is particularly beneficial for sectors such as media, where real-time processing is crucial. By locating resources closer to the end-users, Extended Zones ensure faster data access and lower latency, leading to improved performance and user experience.
Enhanced User Experience: Applications that depend on quick response times, like gaming or real-time analytics, benefit significantly from Azure Extended Zones’ ability to reduce the delay in data transmission.
Data Residency and Compliance:
Geographical Data Control: These zones allow organizations to keep their data within specific geographical boundaries, aligning with local privacy laws, regulatory requirements, and compliance standards. This is particularly crucial for industries such as finance, healthcare, and government, where data sovereignty is a major concern.
Regulatory Compliance: By ensuring that data stays within a defined region, Azure Extended Zones help organizations meet stringent data residency requirements, such as those mandated by GDPR in Europe or other regional data protection laws.
Service Availability and Integration:
Supported Azure Services: Azure Extended Zones support the following Azure services:

| Service category | Available services |
| --- | --- |
| Compute | Azure virtual machines (general purpose: A, B, D, E, and F series; GPU NVadsA10 v5 series); Virtual Machine Scale Sets; Azure Kubernetes Service |
| Networking | Azure Private Link; Standard public IP; Virtual networks; Virtual network peering; ExpressRoute; Azure Standard Load Balancer; DDoS (Standard protection) |
| Storage | Azure managed disks; Azure Premium Page Blobs; Azure Premium Block Blobs; Azure Premium Files; Azure Data Lake Storage Gen2 (Hierarchical Namespace and Flat Namespace); Change Feed; Blob features: SFTP, NFS |
| BCDR | Azure Site Recovery; Azure Backup |
These services can be deployed and managed within Extended Zones, providing businesses with the flexibility to run complex workloads close to their customers.
Reference Architecture:
Existing Azure customers can integrate Extended Zones into their current setups with minimal disruption. The service is designed to complement Azure’s global infrastructure, making it easy to expand into new regions or jurisdictions as shown in the following diagram.
Requesting Access and Workload Deployment:
Requesting Access to Azure Extended Zones
To register for an Azure Extended Zone, follow these steps:
Select a Subscription: Choose the Azure subscription you want to register for an Extended Zone.
List Available Zones: Use the Get-AzEdgeZonesExtendedZone cmdlet in Azure PowerShell to list all available Extended Zones.
Register a Zone: Use Register-AzEdgeZonesExtendedZone -Name 'zonename' to register for a specific zone (e.g., Los Angeles).
Check Registration Status: Confirm the registration state with Get-AzEdgeZonesExtendedZone -Name 'zonename'. The zone becomes usable once its state is "Registered."
Workload Deployment: Once access is granted, users can deploy available Azure services within Azure Extended Zones using Azure Portal or CLI.
Use Cases and Industry Applications:
Media and Entertainment: Azure Extended Zones enable low-latency streaming and real-time media processing, making them ideal for content creation and distribution.
Financial Services: With stringent data residency and low-latency requirements, financial institutions can benefit from keeping data within local jurisdictions while ensuring fast transaction processing.
Healthcare: Extended Zones provide healthcare organizations with the ability to store and process patient data locally, ensuring compliance with health data regulations and improving response times for critical applications.
FAQs and Common Queries:
How does Azure Extended Zones differ from traditional Azure regions? Azure Extended Zones are designed to serve specific metropolitan areas or jurisdictions, focusing on low latency and data residency. Unlike traditional Azure regions that cover broader geographical areas, Extended Zones offer a more localized solution.
Can I use existing Azure services within Extended Zones? Yes, many Azure services, including virtual machines, Kubernetes, storage, and networking, are available within Extended Zones. This allows for seamless integration with your existing Azure infrastructure.
What are the limitations of Azure Extended Zones? While Extended Zones offer numerous benefits, they are currently available only in preview and may have limited service availability depending on the region. Additionally, not all Azure services may be supported within Extended Zones, so it’s important to verify compatibility based on your specific needs.
How can I request access to Azure Extended Zones? Access can be requested through the Azure portal by submitting a request form. The process involves providing details about your intended use case and the specific region where you need the service. Microsoft will review the request and grant access based on availability and alignment with the service’s objectives.
For more details and to request access, visit the Azure Extended Zones Overview, FAQ, and Request Access pages.
Please note: Azure Extended Zones are currently in preview. For legal terms applicable to Azure features in beta or preview, refer to Supplemental Terms of Use for Microsoft Azure Previews.
Evaluate Fine-tuned Phi-3 / 3.5 Models in Azure AI Studio Focusing on Microsoft’s Responsible AI
This blog series has several versions, each covering different aspects and techniques. Check out the following resources:
Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide
Detailed instructions for fine-tuning and integrating custom Phi-3 models with Prompt flow using a code-first approach.
Available on: MS Tech Community, Phi-3 CookBook on GitHub
Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio
Detailed instructions for fine-tuning and integrating custom Phi-3 models with Prompt flow in Azure AI / ML Studio using a low-code approach.
Available on: MS Tech Community, Phi-3 CookBook on GitHub
Evaluate Fine-tuned Phi-3 / Phi-3.5 Models in Azure AI Studio Focusing on Microsoft’s Responsible AI
Detailed instructions for evaluating the Phi-3 / Phi-3.5 model in Azure AI Studio using a low-code approach.
Available on: MS Tech Community
How can you evaluate the safety and performance of a fine-tuned Phi-3 / Phi-3.5 model in Azure AI Studio?
Fine-tuning a model can sometimes lead to unintended or undesired responses. To ensure that the model remains safe and effective, it’s important to evaluate it. This evaluation helps to assess the model’s potential to generate harmful content and its ability to produce accurate, relevant, and coherent responses. In this tutorial, you will learn how to evaluate the safety and performance of a fine-tuned Phi-3 / Phi-3.5 model integrated with Prompt flow in Azure AI Studio.
Here is Azure AI Studio’s evaluation process.
The code-first approach tutorial includes tips on using the Phi-3.5 model in its “Fine-tune the Phi-3 model” section.
The low-code approach tutorial currently supports only the Phi-3 model. It will be updated to include Phi-3.5 fine-tuning as soon as it is supported in Azure AI / ML Studio.
The evaluation process in Azure AI Studio is identical for both Phi-3 and Phi-3.5, so the title of this tutorial covers both models.
For more detailed information and to explore additional resources about Phi-3 and Phi-3.5, please visit the Phi-3 CookBook.
Prerequisites
Python
Azure subscription
Visual Studio Code
Fine-tuned Phi-3 / Phi-3.5 model
Table of Contents
Series 1: Introduction to Azure AI Studio’s Prompt flow evaluation
Introduction to safety evaluation
Introduction to performance evaluation
Series 2: Evaluating the Phi-3 / Phi-3.5 model in Azure AI Studio
Before you begin
Deploy Azure OpenAI to evaluate the Phi-3 / Phi-3.5 model
Evaluate the fine-tuned Phi-3 / Phi-3.5 model using Azure AI Studio’s Prompt flow evaluation
Series 1: Introduction to Azure AI Studio’s Prompt flow evaluation
Introduction to safety evaluation
To ensure that your AI model is ethical and safe, it’s crucial to evaluate it against Microsoft’s Responsible AI Principles. In Azure AI Studio, safety evaluations allow you to assess your model’s vulnerability to jailbreak attacks and its potential to generate harmful content, which is directly aligned with these principles.
Microsoft’s Responsible AI Principles
Before beginning the technical steps, it’s essential to understand Microsoft’s Responsible AI Principles, an ethical framework that guides the responsible design, development, deployment, and operation of AI systems, ensuring that AI technologies are built in a way that is fair, transparent, and inclusive. These principles are the foundation for evaluating the safety of AI models.
Microsoft’s Responsible AI Principles include:
Fairness and Inclusiveness: AI systems should treat everyone fairly and avoid affecting similarly situated groups of people in different ways. For example, when AI systems provide guidance on medical treatment, loan applications, or employment, they should make the same recommendations to everyone who has similar symptoms, financial circumstances, or professional qualifications.
Reliability and Safety: To build trust, it’s critical that AI systems operate reliably, safely, and consistently. These systems should be able to operate as they were originally designed, respond safely to unanticipated conditions, and resist harmful manipulation. How they behave and the variety of conditions they can handle reflect the range of situations and circumstances that developers anticipated during design and testing.
Transparency: When AI systems help inform decisions that have tremendous impacts on people’s lives, it’s critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy. A company might use an AI system to determine the most qualified candidates to hire.
Privacy and Security: As AI becomes more prevalent, protecting privacy and securing personal and business information are becoming more important and complex. With AI, privacy and data security require close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people.
Accountability: The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren’t the final authority on any decision that affects people’s lives. They can also ensure that humans maintain meaningful control over otherwise highly autonomous AI systems.
To learn more about Microsoft’s Responsible AI Principles, visit What is Responsible AI?
Safety metrics
In this tutorial, you will evaluate the safety of the fine-tuned Phi-3 / Phi-3.5 model using Azure AI Studio’s safety metrics. These metrics help you assess the model’s potential to generate harmful content and its vulnerability to jailbreak attacks. The safety metrics include:
Self-harm-related Content: Evaluates whether the model has a tendency to produce self-harm related content.
Hateful and Unfair Content: Evaluates whether the model has a tendency to produce hateful or unfair content.
Violent Content: Evaluates whether the model has a tendency to produce violent content.
Sexual Content: Evaluates whether the model has a tendency to produce inappropriate sexual content.
Evaluating these aspects ensures that the AI model does not produce harmful or offensive content, aligning it with societal values and regulatory standards.
Introduction to performance evaluation
To ensure that your AI model is performing as expected, it’s important to evaluate its performance against performance metrics. In Azure AI Studio, performance evaluations allow you to evaluate your model’s effectiveness in generating accurate, relevant, and coherent responses.
Image Source: Evaluation of generative AI applications
Performance metrics
In this tutorial, you will evaluate the performance of the fine-tuned Phi-3 / Phi-3.5 model using Azure AI Studio’s performance metrics. These metrics help you assess the model’s effectiveness in generating accurate, relevant, and coherent responses. The performance metrics include:
Groundedness: Evaluates how well the generated answers align with the information from the input source.
Relevance: Evaluates the pertinence of generated responses to the given questions.
Coherence: Evaluates how smoothly the generated text flows, reads naturally, and resembles human-like language.
Fluency: Evaluates the language proficiency of the generated text.
GPT Similarity: Compares the generated response with the ground truth for similarity.
F1 Score: Calculates the ratio of shared words between the generated response and the source data.
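Of these metrics, the F1 score is the easiest to reason about: it is the harmonic mean of precision and recall over the words shared between the generated response and the ground truth. The sketch below is illustrative only; it uses a plain lowercase whitespace split and is not Azure AI Studio’s exact implementation.

```python
from collections import Counter

def f1_score(generated: str, ground_truth: str) -> float:
    """Word-overlap F1 between a generated response and the ground truth.

    Illustrative only: tokenization here is a simple lowercase whitespace
    split, not the tokenizer Azure AI Studio actually uses.
    """
    gen_tokens = generated.lower().split()
    truth_tokens = ground_truth.lower().split()
    # Words shared between the two texts, counted with multiplicity
    common = Counter(gen_tokens) & Counter(truth_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(gen_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(f1_score("Bill Gates and Paul Allen founded Microsoft",
               "Microsoft was founded by Bill Gates and Paul Allen"))  # ≈ 0.875
```

Here all seven generated words appear in the nine-word ground truth, giving precision 1.0 and recall 7/9, hence F1 ≈ 0.875.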
Series 2: Evaluating the Phi-3 / Phi-3.5 model in Azure AI Studio
Before you begin
This tutorial is a follow up to the previous blog posts, “Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide” and “Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio.” In these posts, we walked through the process of fine-tuning a Phi-3 / Phi-3.5 model in Azure AI Studio and integrating it with Prompt flow.
In this tutorial, you will deploy an Azure OpenAI model as an evaluator in Azure AI Studio and use it to evaluate your fine-tuned Phi-3 / Phi-3.5 model.
Before you begin this tutorial, make sure you have the following prerequisites, as described in the previous tutorials:
A prepared dataset to evaluate the fine-tuned Phi-3 / Phi-3.5 model.
A Phi-3 / Phi-3.5 model that has been fine-tuned and deployed to Azure Machine Learning.
A Prompt flow integrated with your fine-tuned Phi-3 / Phi-3.5 model in Azure AI Studio.
You will use the test_data.jsonl file, located in the data folder from the ULTRACHAT_200k dataset downloaded in the previous blog posts, as the dataset to evaluate the fine-tuned Phi-3 / Phi-3.5 model.
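Before running the evaluation, it can help to sanity-check that the file parses as JSONL (one JSON object per line). Here is a minimal, illustrative loader; it assumes only the JSONL structure, not any particular field names inside each record.

```python
import json

def load_jsonl(path: str) -> list:
    """Load a JSONL file: one JSON object per line, blank lines skipped."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                records.append(json.loads(line))
    return records

# Example usage (path assumes the dataset layout from the previous posts):
# records = load_jsonl("data/test_data.jsonl")
# print(len(records), list(records[0].keys()))
```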
Integrate the custom Phi-3 / Phi-3.5 model with Prompt flow in Azure AI Studio (code-first approach)
If you followed the low-code approach described in “Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio“, you can skip this exercise and proceed to the next one. However, if you followed the code-first approach described in “Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide” to fine-tune and deploy your Phi-3 / Phi-3.5 model, the process of connecting your model to Prompt flow is slightly different. You will learn this process in this exercise.
To proceed, you need to integrate your fine-tuned Phi-3 / Phi-3.5 model into Prompt flow in Azure AI Studio.
Create Azure AI Studio Hub
You need to create a Hub before creating the Project. A Hub acts like a Resource Group, allowing you to organize and manage multiple Projects within Azure AI Studio.
Sign in to Azure AI Studio.
Select All hubs from the left side tab.
Select + New hub from the navigation menu.
Perform the following tasks:
Enter Hub name. It must be a unique value.
Select your Azure Subscription.
Select the Resource group to use (create a new one if needed).
Select the Location you’d like to use.
Select the Connect Azure AI Services to use (create a new one if needed).
For Connect Azure AI Search, select Skip connecting.
Select Next.
Create Azure AI Studio Project
In the Hub that you created, select All projects from the left side tab.
Select + New project from the navigation menu.
Enter Project name. It must be a unique value.
Select Create a project.
Add a custom connection for the fine-tuned Phi-3 / Phi-3.5 model
To integrate your custom Phi-3 / Phi-3.5 model with Prompt flow, you need to save the model’s endpoint and key in a custom connection. This setup ensures access to your custom Phi-3 / Phi-3.5 model in Prompt flow.
Set the API key and endpoint URI of the fine-tuned Phi-3 / Phi-3.5 model
Visit Azure ML Studio.
Navigate to the Azure Machine Learning workspace that you created.
Select Endpoints from the left side tab.
Select the endpoint that you created.
Select Consume from the navigation menu.
Copy your REST endpoint and Primary key.
Add the Custom Connection
Visit Azure AI Studio.
Navigate to the Azure AI Studio project that you created.
In the Project that you created, select Settings from the left side tab.
Select + New connection.
Select Custom keys from the navigation menu.
Perform the following tasks:
Select + Add key value pairs.
For the key name, enter endpoint and paste the endpoint you copied from Azure ML Studio into the value field.
Select + Add key value pairs again.
For the key name, enter key and paste the key you copied from Azure ML Studio into the value field.
After adding the keys, select is secret to prevent the key from being exposed.
Select Add connection.
Create Prompt flow
You have added a custom connection in Azure AI Studio. Now, let’s create a Prompt flow using the following steps. Then, you will connect this Prompt flow to the custom connection to use the fine-tuned model within the Prompt flow.
Navigate to the Azure AI Studio project that you created.
Select Prompt flow from the left side tab.
Select + Create from the navigation menu.
Select Chat flow from the navigation menu.
Enter Folder name to use.
Select Create.
Set up Prompt flow to chat with your custom Phi-3 / Phi-3.5 model
You need to integrate the fine-tuned Phi-3 / Phi-3.5 model into a Prompt flow. However, the existing Prompt flow provided is not designed for this purpose. Therefore, you must redesign the Prompt flow to enable the integration of the custom model.
In the Prompt flow, perform the following tasks to rebuild the existing flow:
Select Raw file mode.
Delete all existing code in the flow.dag.yml file.
Add the following code to flow.dag.yml:

```yaml
inputs:
  input_data:
    type: string
    default: "Who founded Microsoft?"

outputs:
  answer:
    type: string
    reference: ${integrate_with_promptflow.output}

nodes:
- name: integrate_with_promptflow
  type: python
  source:
    type: code
    path: integrate_with_promptflow.py
  inputs:
    input_data: ${inputs.input_data}
```
Select Save.
Add the following code to integrate_with_promptflow.py to use the custom Phi-3 / Phi-3.5 model in Prompt flow:

```python
import logging

import requests
from promptflow import tool
from promptflow.connections import CustomConnection

# Logging setup
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.DEBUG
)
logger = logging.getLogger(__name__)


def query_phi3_model(input_data: str, connection: CustomConnection) -> str:
    """
    Send a request to the Phi-3 / Phi-3.5 model endpoint with the given
    input data using the Custom Connection.
    """
    # "endpoint" and "key" are the key names saved in the Custom Connection
    endpoint_url = connection.endpoint
    api_key = connection.key
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "input_data": [input_data],
        "params": {
            "temperature": 0.7,
            "max_new_tokens": 128,
            "do_sample": True,
            "return_full_text": True
        }
    }
    try:
        response = requests.post(endpoint_url, json=data, headers=headers)
        response.raise_for_status()
        # Log the full JSON response
        logger.debug(f"Full JSON response: {response.json()}")
        result = response.json()["output"]
        logger.info("Successfully received response from Azure ML Endpoint.")
        return result
    except requests.exceptions.RequestException as e:
        logger.error(f"Error querying Azure ML Endpoint: {e}")
        raise


@tool
def my_python_tool(input_data: str, connection: CustomConnection) -> str:
    """
    Tool function to process input data and query the Phi-3 / Phi-3.5 model.
    """
    return query_phi3_model(input_data, connection)
```
For more detailed information on using Prompt flow in Azure AI Studio, you can refer to Prompt flow in Azure AI Studio.
Select Chat input, Chat output to enable chat with your model.
Now you are ready to chat with your custom Phi-3 / Phi-3.5 model. In the next exercise, you will learn how to start Prompt flow and use it to chat with your fine-tuned Phi-3 / Phi-3.5 model.
The rebuilt flow should look like the image below:
Start Prompt flow
Select Start compute sessions to start Prompt flow.
Select Validate and parse input to renew parameters.
Set the connection Value to the custom connection you created. For example, connection.
Chat with your custom Phi-3 / Phi-3.5 model
Select Chat.
Here’s an example of the results: Now you can chat with your custom Phi-3 / Phi-3.5 model. It is recommended to ask questions based on the data used for fine-tuning.
Deploy Azure OpenAI to evaluate the Phi-3 / Phi-3.5 model
To evaluate the Phi-3 / Phi-3.5 model in Azure AI Studio, you need to deploy an Azure OpenAI model. This model will be used to evaluate the performance of the Phi-3 / Phi-3.5 model.
Deploy Azure OpenAI
Sign in to Azure AI Studio.
Navigate to the Azure AI Studio project that you created.
In the Project that you created, select Deployments from the left side tab.
Select + Deploy model from the navigation menu.
Select Deploy base model.
Select the Azure OpenAI model you'd like to use. For example, gpt-4o.
Select Confirm.
Evaluate the fine-tuned Phi-3 / Phi-3.5 model using Azure AI Studio’s Prompt flow evaluation
Start a new evaluation
Visit Azure AI Studio.
Navigate to the Azure AI Studio project that you created.
In the Project that you created, select Evaluation from the left side tab.
Select + New evaluation from the navigation menu.
Select Prompt flow evaluation.
Perform the following tasks:
Enter the evaluation name. It must be a unique value.
Select Question and answer without context as the task type, because the ULTRACHAT_200k dataset used in this tutorial does not contain context.
Select the prompt flow you’d like to evaluate.
Select Next.
Perform the following tasks:
Select Add your dataset to upload the dataset. For example, you can upload the test dataset file, such as test_data.jsonl, which is included when you download the ULTRACHAT_200k dataset.
Select the appropriate Dataset column that matches your dataset. For example, if you are using the ULTRACHAT_200k dataset, select ${data.prompt} as the dataset column.
Select Next.
Perform the following tasks to configure the performance and quality metrics:
Select the performance and quality metrics you’d like to use.
Select the Azure OpenAI model that you created for evaluation. For example, select gpt-4o.
Perform the following tasks to configure the risk and safety metrics:
Select the risk and safety metrics you’d like to use.
Select the threshold to calculate the defect rate you’d like to use. For example, select Medium.
For question, set the Data source to ${data.prompt}.
For answer, set the Data source to ${run.outputs.answer}.
For ground_truth, set the Data source to ${data.message}.
Select Next.
Select Submit to start the evaluation.
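Expressed as data, the question/answer/ground_truth bindings configured above map evaluation inputs to dataset columns and flow outputs. This is a sketch for clarity; the values shown are the ones selected in this walkthrough for the ULTRACHAT_200k dataset, so adjust them for your own data.

```python
# Data-source bindings for the evaluation run. "${data.*}" references a column
# in the uploaded dataset; "${run.outputs.*}" references a Prompt flow output.
column_mapping = {
    "question": "${data.prompt}",
    "answer": "${run.outputs.answer}",
    "ground_truth": "${data.message}",
}

# Sanity check: every binding points at either the dataset or the run outputs.
for binding in column_mapping.values():
    assert binding.startswith("${data.") or binding.startswith("${run.outputs.")
```

If a binding references a column that does not exist in the uploaded dataset, the evaluation rows for that metric will come back empty, so it is worth double-checking these names before submitting.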
The evaluation will take some time to complete. You can monitor the progress in the Evaluation tab.
Review the Evaluation Results
The results presented below are intended to illustrate the evaluation process. In this tutorial, we have used a model fine-tuned on a relatively small dataset, which may lead to sub-optimal results. Actual results may vary significantly depending on the size, quality, and diversity of the dataset used, as well as the specific configuration of the model.
Once the evaluation is complete, you can review the results for both performance and safety metrics.
Performance and quality metrics:
Evaluate the model's effectiveness in generating coherent, fluent, and relevant responses.
Risk and safety metrics:
Ensure that the model’s outputs are safe and align with Responsible AI Principles, avoiding any harmful or offensive content.
You can scroll down to view Detailed metrics result.
By evaluating your custom Phi-3 / Phi-3.5 model against both performance and safety metrics, you can confirm that the model is not only effective, but also adheres to responsible AI practices, making it ready for real-world deployment.
Congratulations!
You’ve completed this tutorial
You have successfully evaluated the fine-tuned Phi-3 model integrated with Prompt flow in Azure AI Studio. This is an important step in ensuring that your AI models not only perform well, but also adhere to Microsoft’s Responsible AI principles to help you build trustworthy and reliable AI applications.
Clean Up Azure Resources
Clean up your Azure resources to avoid additional charges to your account. Go to the Azure portal and delete the following resources:
The Azure Machine Learning resource.
The Azure Machine Learning model endpoint.
The Azure AI Studio Project resource.
The Azure AI Studio Prompt flow resource.
Next Steps
Documentation
microsoft/Phi-3CookBook
Assess AI systems by using the Responsible AI dashboard
Evaluation and monitoring metrics for generative AI
Azure AI Studio documentation
Prompt flow documentation
Training Content
Introduction to Microsoft’s Responsible AI Approach
Introduction to Azure AI Studio
Reference
microsoft/Phi-3CookBook
What is Responsible AI?
Announcing new tools in Azure AI to help you build more secure and trustworthy generative AI applications
Evaluation of generative AI applications
Microsoft Tech Community – Latest Blogs –Read More
Bring Your Organizational Data to Azure AI Services with Microsoft Graph
Using AI to connect your business data with the AI applications you rely on isn’t just a nice-to-have—it’s essential in the current landscape.
By linking data from platforms like Microsoft 365 into AI-driven apps, you can simplify tasks, reduce the need to switch between apps, and boost productivity.
This blog will walk you through how to easily connect your business data to Azure (and an extension of that could be integrating it with the OpenAI services) using Microsoft Graph, showing you just how powerful and straightforward these tools can be.
Why Integrate Your Data?
Imagine you’re deep in a project and need to find a specific document, email, or chat from Microsoft Teams. Normally, you’d have to jump between Outlook, OneDrive, and Teams, disrupting your workflow and wasting time. This is where integrating your business data into your applications becomes incredibly useful.
By using Microsoft Graph and Azure OpenAI services, you can pull all this information directly into your app, keeping everything in one place. This not only saves time but also helps you stay focused on your work. Whether you need to find files, emails, or chat histories, integrating these tools can simplify your day and keep you on track.
Core Use Cases for Microsoft Graph Enhanced by Generative AI
Microsoft Graph is versatile, and its applications are numerous. Here are some common use cases, now supercharged with generative AI:
Automating Microsoft 365 Workflows with Generative AI
Use Microsoft Graph in combination with generative AI to automate tasks such as:
Email Management: Not only can you automatically sort and respond to emails, but generative AI can draft personalized responses, summarize lengthy email threads, and even predict and prioritize emails that require immediate attention.
File Operations: Beyond managing files in OneDrive and SharePoint, generative AI can assist in creating content, generating summaries of documents, and suggesting relevant files based on the context of your work.
User Management: Automate user provisioning and updates, while generative AI can predict user needs, suggest role changes, and provide insights into user behavior and engagement.
Integrating Microsoft Teams to Enhance Productivity with Generative AI
Microsoft Graph enables deep integrations with Teams, and generative AI takes it a step further by allowing you to:
Create Teams and Channels: Automate the setup of new teams for projects, and use generative AI to suggest optimal team structures, recommend channels based on project requirements, and even draft initial posts to kickstart discussions.
Manage Conversations: Archive or monitor conversations for compliance, while generative AI can analyze conversation trends, detect sentiment, and provide insights into team dynamics and areas for improvement.
Custom Bots: Develop bots that interact with Teams users, enhanced with generative AI to provide more natural and context-aware interactions, answer complex queries, and even assist in decision-making processes.
By leveraging generative AI, Microsoft Graph can not only automate and streamline workflows but also provide intelligent insights and personalized experiences, significantly boosting productivity and efficiency.
Getting Started with Microsoft Graph
Microsoft Graph is a powerful API that lets you connect to various data points in Microsoft 365. With it, you can pull in emails, chats, files, and more into your application. To begin, you’ll need to set up something called an “App Registration” in Microsoft Entra ID (formerly Azure Active Directory). This registration allows your app to access the data securely.
Step 1: Set Up App Registration
Log in to the Azure Portal and navigate to Microsoft Entra ID.
Create a new app registration by giving it a name.
Select the type of accounts that can access this app—whether it’s just within your organization or available to users in other organizations as well.
Configure the Redirect URI if you’re developing a web app. For local development, this might look like http://localhost:3000.
Here’s a basic example of how your app registration might look in code:
{
  "client_id": "YOUR_CLIENT_ID",
  "tenant_id": "YOUR_TENANT_ID",
  "redirect_uri": "http://localhost:3000"
}
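Under the hood, these registration values are eventually exchanged for a Graph access token. As a minimal sketch of the OAuth 2.0 client-credentials grant, the snippet below only constructs the token request from placeholder values (a real app would typically let a library such as MSAL handle this, and would actually POST this body):

```python
from urllib.parse import urlencode

def build_token_request(client_id: str, tenant_id: str, client_secret: str):
    """Build the URL and form body for a client-credentials token request."""
    token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" asks for all application permissions granted to the app.
        "scope": "https://graph.microsoft.com/.default",
        "grant_type": "client_credentials",
    })
    return token_url, body

url, body = build_token_request("YOUR_CLIENT_ID", "YOUR_TENANT_ID", "YOUR_SECRET")
```

The client secret shown here is a placeholder; delegated (user) flows like the MGT provider below use an interactive grant instead of a client secret.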
Now that your app is registered, you can start pulling in data using Microsoft Graph. We’ll be using a library called Microsoft Graph Toolkit (MGT), which makes this process much simpler.
Step 2: Install Microsoft Graph Toolkit
First, install the MGT package:
npm install @microsoft/mgt
In your app, you’ll want to set up a provider that will handle authentication and make it easier to call Microsoft Graph APIs.
Step 3: Set Up Authentication
Create a graphService.js file where you’ll configure the provider:
import { Providers, MsalProvider } from '@microsoft/mgt';

export const initGraph = () => {
  if (!Providers.globalProvider) {
    Providers.globalProvider = new MsalProvider({
      clientId: 'YOUR_CLIENT_ID',
      scopes: ['User.Read', 'Files.Read', 'Mail.Read', 'Chat.Read']
    });
  }
};
This snippet sets up the authentication process using your app registration details.
Once authentication is set up, you can start pulling data like files, emails, and chats into your app. Let’s look at a couple of ways to do this.
Step 4: Fetch and Display Files
You can fetch files related to a specific project or customer. Here’s how you might do that:
import { graph } from '@microsoft/mgt';

const getFiles = async (query) => {
  const response = await graph.api(`/me/drive/search(q='${query}')`)
    .get();
  return response.value;
};

// Example usage:
getFiles('ProjectX').then(files => {
  console.log(files);
});
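For reference, the same search can be issued against the Graph REST endpoint directly. This sketch only builds the request URL that the `getFiles` call resolves to; authentication and the HTTP call itself are omitted:

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def drive_search_url(query: str) -> str:
    """Build the /me/drive/search URL for a file search query."""
    # Single quotes inside the OData string literal are escaped by doubling them.
    escaped = query.replace("'", "''")
    return f"{GRAPH_BASE}/me/drive/search(q='{quote(escaped)}')"

print(drive_search_url("ProjectX"))
```

Sending a GET to this URL with a bearer token in the Authorization header returns the same `value` array of matching drive items that the MGT example logs.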
Step 5: Use MGT Components to Simplify
Instead of writing the above code, you can use MGT’s ready-made components to fetch and display data with minimal code.
<mgt-file-list></mgt-file-list>
This single line of code will automatically pull in and display the user’s files. It’s simple, powerful, and easy to implement.
DAPR and KEDA on ARO (Azure Red Hat OpenShift): step by step
In this article, we will focus on the configuration needed to run DAPR and KEDA on ARO (Azure Red Hat OpenShift).
To that end, I put together a GitHub repository called "App-Plant-Tree" that covers Cloud-Native Architecture concepts combining the following technologies:
Go – Producer/Consumer App
Distributed Application Runtime – DAPR
Kubernetes Event Driven Autoscaling – KEDA
Azure RedHat OpenShift (ARO)
Azure Container Registry (ACR)
Go SDK
Azure CLI
OpenShift CLI
DAPR CLI
Kubectl
Helm CLI
GIT bash
Visual Studio Code
Log in to Azure using the CLI:
Set the variable values according to your environment:
- $Location = ''
- $ResourceGroupName = ''
- $ClusterName = ''
- $ContainerRegistryName = ''
- $ServiceBusNamespace = ''
Select your Azure subscription:
Create the resource group:
Create the virtual network
Create the subnet for the control plane
Create the subnet for the workers
Disable network policy settings for Private Link Service
Create the ARO cluster:
Create the Container Registry:
Connect the Container Registry to ARO:
oc create secret docker-registry acr-secret --docker-server=$ContainerRegistryName.azurecr.io --docker-username=<user name> --docker-password=<your password> --docker-email=unused
oc secrets link default <pull_secret_name> --for=pull
Get the OpenShift console URL
Get the OpenShift credentials:
Validate the connection to the cluster:
Add the Helm repository references:
helm repo update
helm upgrade --install dapr dapr/dapr --namespace dapr-system --create-namespace
helm upgrade --install dapr-dashboard dapr/dapr-dashboard --namespace dapr-system --create-namespace
Check that the pods are running:
Expected response:
DAPR dashboard available at http://localhost:8080
Add the Helm repository references:
helm repo update
helm upgrade --install keda kedacore/keda -n keda-system --create-namespace
helm upgrade --install keda-add-ons-http kedacore/keda-add-ons-http -n keda-system --create-namespace
Verify that the pods are running:
This project includes three example options (choose one):
Azure Service Bus
Redis
RabbitMq
docker build -t "$ContainerRegistryName.azurecr.io/consumer-app:1.0.0" -f cmd/consumer/dockerfile .
docker build -t "$ContainerRegistryName.azurecr.io/producer-app:1.0.0" -f cmd/producer/dockerfile .
docker push "$ContainerRegistryName.azurecr.io/producer-app:1.0.0"
Check that the pods are running:
kubectl logs -f -l app=consumer1 --all-containers=true -n tree
# set up the port for local access
kubectl port-forward pod/producer1 8081:8081 -n tree
# send a POST to the producer application
- POST -> http://localhost:8081/plant
- JSON body: {"numberOfTrees":100}
# check the pod status
kubectl get pod -l app=consumer1 -n tree
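The manual POST above can also be scripted. The sketch below only builds the request object; calling `urlopen(req)` would send it, and assumes the `kubectl port-forward` from the previous step is active:

```python
import json
from urllib.request import Request

# Local endpoint exposed by the port-forward in the previous step.
req = Request(
    "http://localhost:8081/plant",
    data=json.dumps({"numberOfTrees": 100}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urlopen(req) would send it; here we only inspect what would go on the wire.
print(req.get_method(), req.full_url)
```

After sending the request, the consumer pods scale out via KEDA as the queue fills, which is what the `kubectl get pod` check above observes.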
After finishing your tests, the following commands will help you uninstall all the application components and also delete all the Azure resources.
helm uninstall keda -n keda-system
helm uninstall dapr -n dapr-system
Delete all Azure resources:
az acr delete --resource-group $ResourceGroupName --name $ContainerRegistryName
az group delete --name $ResourceGroupName
DAPR KEDA GO Project
DAPR – Pros/Cons
KEDA – Pros/Cons
Remote Desktop Services enrolling for TLS certificate from an Enterprise CA
Hey! Rob Greene again. Been on a roll with all things crypto as of late, and you are not going to be disappointed with this one either!
Background
Many know that Remote Desktop Services uses a self-signed certificate for its TLS connection from the RDS Client to the RDS Server over the TCP 3389 connection by default. However, Remote Desktop Services can be configured to enroll for a certificate against an Enterprise CA, instead of continuing to use those annoying self-signed certificates everywhere.
I know there are other blogs out there that cover setting up the certificate template, and the group policy, but what if I told you most of the blogs that I have seen on this setup are incomplete, inaccurate, and do not explain what is happening with the enrollment and subsequent renewals of the RDS certificate!? I know… Shocker!!!
How this works
The Remote Desktop Service looks for a certificate, in the computer personal store, that has a specific Enhanced Key Usage with the Object Identifier (OID) of 1.3.6.1.4.1.311.54.1.2, which is typically named Remote Desktop Authentication, or Server Authentication. It prefers a certificate with the OID of Remote Desktop Authentication. https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn781533(v=ws.11)
Sidebar:
If you are a pretty regular consumer of the AskDS blog content you know how we love to recommend using one certificate on the server for a specific Enhanced Key Usage (EKU), and make sure that you have all the information required on the certificate so that it works with all applications that need to use the certificate.
This certificate is no different. I would recommend that the certificate that is used ONLY has the EKU for Remote Desktop Authentication and DOES NOT have an EKU of Server Authentication at all. The reason for this is that this certificate should not be controlled / maintained via Autoenrollment/renewal behaviors. This needs to be maintained by the Remote Desktop Configuration service, and you do not want certificates being used by other applications being replaced by a service like this as it will cause an issue in the long run.
There is a group policy setting that can be enabled to configure the Remote Desktop Service to enroll for the specified certificate; it also gives the NT AUTHORITY\NetworkService account permission to the certificate's private key, which is a requirement for this to work.
The interesting thing about this is that you would think that the Remote Desktop Service service would be the service responsible for enrolling for this certificate, however it is the Remote Desktop Configuration (SessionEnv) service that is responsible for initial certificate requests as well as certificate renewals.
It is common to see the RDS Authentication Certificate template configured for autoenrollment; however, this is one of the worst things you can do, and it WILL cause issues with Remote Desktop Services once the certificate renewal timeframe comes around. Autoenrollment will archive the existing certificate, causing RDS to no longer be able to find it; then, when you require TLS on the RDS listener, users will fail to connect to the server. At some point, the Remote Desktop Configuration service will replace the newly issued certificate with a new one, because it maintains the thumbprint of the certificate that RDS should be using in WMI. When it tries to locate the original thumbprint and cannot find it, it will attempt to enroll for a new certificate at the next service start. This is generally when the cases start rolling in to the Windows Directory Services team, because it appears to be a certificate issue even though it is a Remote Desktop Services configuration issue.
What we want to do is first make sure that all the steps are taken to properly configure the environment so that the Remote Desktop Configuration service is able to properly issue certificates.
The Steps
Like everything in IT (information technology), there is a list of steps that need to be completed to get this setup properly.
Configure the certificate template and add it to a Certification Authority to issue the template.
Configure the Group Policy setting.
Configuring the Certificate Template
The first step in the process is to create and configure the certificate template that we want to use:
Log on to a computer that has the Active Directory Certificate Services Tools (part of the Remote Server Administration Tools, RSAT) installed, or log on to a Certification Authority within the environment.
Launch: CertTmpl.msc (Certificate Template MMC)
Find the template named Computer, right click on it and select Duplicate Template.
On the Compatibility tab, select up to Windows Server 2012 R2 for Certification Authority and Certificate recipient. Going above this might cause issues with CEP / CES environments.
On the General tab, we need to give the template a name and validity period.
Type in a good descriptive name in the Template display name field.
If you would like to change the Validity period, you can do that as well.
You should NOT check the box Publish certificate in Active Directory.
NOTE: Make sure to copy the value in the Template name field, as this is the name that you will need to type in the group policy setting. Normally it will be the display name without any spaces in the name, but do not rely on this. Use the value you see during template creation or when looking back at the template later.
6. On the Extensions tab, the Enhanced Key Usage / Application Policies need to be modified.
a. Select Application Policies, and then click on the Edit button.
b. Multi select or select individually Client Authentication and Server Authentication and click the Remove button.
c. Click the Add button, and then click on the New button if you need to create the Application Policy for Remote Desktop Authentication. Otherwise find the Remote Desktop Authentication policy in the list and click the OK button.
d. If you need to create the Remote Desktop Authentication application policy, click the Add button, and then for the Name type in Remote Desktop Authentication, and type in 1.3.6.1.4.1.311.54.1.2 for the Object identifier value, and click the OK button.
e. Verify the newly created Remote Desktop Authentication application policy, and then click the OK button twice.
7. The Remote Desktop Service can use a Key Storage Provider (KSP). So, if you would like to change over from a legacy Cryptographic Service Provider (CSP) to a Key Storage Provider, this can be done on the Cryptography tab.
8. Get the permissions set properly. To do this click on the Security tab.
a. Click the Add button and add any specific computer or computer groups you want to enroll for a certificate.
b. Then Make sure to ONLY select Allow Enroll permission. DO NOT select Autoenroll.
NOTE: Please keep in mind that Domain Controllers DO NOT belong to the Domain Computers group, so if you want all workstations, member servers, and Domain Controllers to enroll for this certificate, you will need the Domain Computers and Enterprise Domain Controllers or Domain Controllers groups added with the security permission of Allow – Enroll.
9. When done making other changes to the template as needed, click the OK button to save the template.
Configure the Group Policy
After working through getting the certificate template created and configured to your liking, the next step in the process is to set up the Group Policy Object properly. The group policy setting that needs to be configured is located at: Computer Configuration\Policies\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Security
With the policy "Server authentication certificate template"
When adding the template name to this group policy it will accept one of two things:
Certificate template name, again this is NOT the certificate template display name.
Certificate templates Object Identifier value. Using this is not common, however some engineers will recommend this over the template name.
If you use the certificate template display name, the Remote Desktop Configuration service (SessionEnv) will successfully enroll for the certificate; however, the next time the policy applies it will enroll for a new certificate again. This causes repeated enrollments and can make a CA very busy.
Troubleshoot issues of certificate issuance
Troubleshooting problems with certificate issuance is usually easy once you have a good understanding of how Remote Desktop Services goes about doing the enrollment, and there are only a few things to check out.
Investigating what Certificate Remote Desktop Service is configured to use.
The first thing to investigate is figuring out what certificate, if any, Remote Desktop Services is currently configured to use. This is done by running a WMI query, either via PowerShell or good ol' WMIC. (Note: WMIC is deprecated and will be removed at a future date.)
PowerShell: Get-WmiObject -Class "Win32_TSGeneralSetting" -Namespace root\cimv2\TerminalServices
WMIC: wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Get SSLCertificateSHA1Hash
We are interested in the SSLCertificateSHA1Hash value that is returned. This will tell us the thumbprint of the certificate it is attempting to load.
Keep in mind that if the Remote Desktop Service is still using the self-signed certificate, it can be found by:
Launch the local computer certificate store (CertLM.msc).
Once the computer store is open, look for the store named: Certificates – Local Computer\Remote Desktop\Certificates.
Double-click the certificate, then click on the Details tab, and find the field named Thumbprint.
Then validate whether this value matches the SSLCertificateSHA1Hash value from the output.
If there is no certificate in the Remote Desktop store, or if the SSLCertificateSHA1Hash value does not match the Thumbprint field of any certificate in the store, then it would be best to visit the Certificates – Local Computer\Personal\Certificates store next. Look for a certificate whose Thumbprint field matches the SSLCertificateSHA1Hash value.
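When doing this comparison by hand, note that the certificate UI often displays thumbprints with spaces and lower-case hex, while WMI stores a bare upper-case string. A small illustrative helper (not part of any Microsoft tooling) makes the match reliable:

```python
def normalize_thumbprint(value: str) -> str:
    """Strip whitespace and upper-case a SHA-1 thumbprint for comparison."""
    return "".join(value.split()).upper()

def hash_matches_cert(ssl_hash: str, cert_thumbprint: str) -> bool:
    """True if SSLCertificateSHA1Hash and a certificate Thumbprint refer to the same cert."""
    return normalize_thumbprint(ssl_hash) == normalize_thumbprint(cert_thumbprint)

# Example: the WMI value vs. the spaced, lower-case form shown in the store UI.
print(hash_matches_cert("AB12CD34EF", "ab 12 cd 34 ef"))
```

The same normalization applies when pasting a thumbprint into CertUtil or into the WMI commands shown later; stray spaces (and the occasional invisible character copied from the details dialog) are a frequent cause of "certificate not found" results.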
Does the Remote Desktop Service have permission to the certificate's private key?
Once the certificate has been tracked down, we then must figure out if the certificate has a private key and if so, does the account running the service have permission to the private key?
If you are using Group Policy to deploy the certificate template information and the computer has permissions to enroll for the certificate, then the permissions in theory should be configured properly for the private key, with NT AUTHORITY\NetworkService granted Allow – Read permissions to the private key.
If you are having this problem, then more than likely the environment is NOT configured to deploy the certificate template via the group policy setting, and it is just relying on computer certificate autoenrollment and a certificate that is valid for Server Authentication. Relying on certificate autoenrollment is not going to configure the correct permissions for the private key and add Network Service account permissions.
To check this, follow these steps:
Launch the local computer certificate store (CertLM.msc).
Once the computer store is open, look for the store named: Certificates – Local Computer\Personal\Certificates.
Right click on the certificate that you are interested in, then select All Tasks, and click on Manage Private Keys.
4. Verify that Network Service account has Allow – Read Permissions. If not, then add it.
a. Click the Add button.
b. In the Select Users or Groups, click the Locations button, and select the local computer in the list.
c. Type in the name "Network Service"
d. Then click the Check Names button, and then click the OK button.
5. If the certificate does not appear to have a private key associated with it in the Local Computer certificate store snap-in, then you may want to run the following CertUtil command to see if you can repair the association: CertUtil -RepairStore My [* / CertThumbprint].
How to change the certificate that Remote Desktop Services is using
If you have determined that Remote Desktop Services is using the wrong certificate, there are a couple of things that we can do to resolve this.
We can delete the certificate from the Computer Personal store and then cycle the Remote Desktop Configuration (SessionEnv) service. This would cause immediate enrollment of a certificate using the certificate template defined in the group policy.
PowerShell:
$RDPSettings = Get-WmiObject -Class "Win32_TSGeneralSetting" -Namespace root\cimv2\TerminalServices -Filter "TerminalName='rdp-tcp'"
CertUtil -DelStore My $RDPSettings.SSLCertificateSHA1Hash
Net Stop SessionEnv
Net Start SessionEnv
2. We could update the Thumbprint value in WMI to reference another certificate's thumbprint.
PowerShell:
$PATH = (Get-WmiObject -Class "Win32_TSGeneralSetting" -Namespace root\cimv2\TerminalServices)
Set-WmiInstance -Path $PATH -Arguments @{SSLCertificateSHA1Hash="CERTIFICATETHUMBPRINT"}
WMIC: wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="CERTIFICATETHUMBPRINT"
Conclusion
The first thing to remember is deploying certificates for Remote Desktop Services is best done by the Group Policy setting and to NOT setup the certificate template for autoenrollment. Setting the template up for autoenrollment will cause certificate issuance problems within the environment from multiple angles.
Unless you modify the certificate templates default Key Permissions setting found on the Request Handling tab, the account running the Remote Desktop Service will not have permission to the private key if the certificate is acquired via autoenrollment. This is not something that we would recommend.
This will cause a scenario where even if the SSLCertificateSHA1Hash value is correct, it will not be able to use the certificate because it will not have permission to use the private key. If you do have the template configured for custom Private Key permissions, you could again still have issues with the WMI SSLCertificateSHA1Hash value not being correct.
The second thing to remember is to configure the group policy setting properly as well as the certificate template. Managing this configuration via group policy ensures a consistent experience for all RDS connections.
I know that a lot of you might have deeper questions about how the Remote Desktop Configuration service does this enrollment process, however, please keep in mind that the Remote Desktop Service is really owned by the Windows User Experience team in CSS, and so us Windows Directory Services engineers may not have that deeper level knowledge. We just get called in when the certificates do not work or fail to get issued. This is how we tend to know so much about the most common misconfigurations for this solution.
Rob “Why are RDS Certificates so complicated” Greene
Office 365 for IT Pros September 2024 Update
Monthly Update #111 for Office 365 for IT Pros eBook
The Office 365 for IT Pros eBook team is delighted to announce that files are available for download for the September 2024 update of:
Office 365 for IT Pros (2025 edition) in PDF and EPUB formats.
Automating Microsoft 365 with PowerShell in PDF and EPUB formats.
Automating Microsoft 365 with PowerShell is available as part of the Office 365 for IT Pros bundle and as a separate product.
Subscribers can download the updated files using the link in the receipt emailed to them after their original purchase or from the library in their Gumroad.com account. We no longer make a Kindle version of the Office 365 for IT Pros eBook available through Amazon. It proved too difficult to release updates to readers through the convoluted Amazon process. The Automating Microsoft 365 with PowerShell book is available through Amazon in Kindle and paperback versions. The paperback is our first attempt at delivering a printed book and the response has been interesting. I guess some folk still like to have text on paper as a reference.
See our change log for information about the changes in the September 2024 update and our FAQ for details about how to download updates.
Changes in the Ecosystem
To ensure that the book content remains current, we spend a lot of time tracking changes within the Microsoft 365 ecosystem. Three issues that are causing people some concern are:
Microsoft plans to require accounts that connect to Azure administrative portals, like the Azure portal, Entra admin center, and Intune admin center, or that use the Azure PowerShell module and CLI, to use multifactor authentication. The requirement comes into force on October 15. In many respects, this is an excellent idea because the only accounts that access these sites are, by definition, administrator accounts, and all administrator accounts should be protected. But some people assume that Microsoft will force all accounts to use MFA, and that's just not correct. More information is available here.
This month Microsoft plans to update Exchange Online with a revised SMTP AUTH Clients submission report to help organizations understand if apps and devices are using SMTP AUTH with basic authentication to submit messages to Exchange. The plan is to remove basic authentication for SMTP AUTH in September 2025, and the signs are that some organizations will struggle with this deadline as they do not know how to upgrade hardware (devices like multifunction printers) or apps to support OAuth. Follow the discussion online and if you have concerns, voice them there. Ian McDonald from the Exchange development group is responding to queries as they arise.
The new Outlook for Windows is generally available, and Microsoft is renaming the older Win32 version to be Outlook (classic). The rename process for the application is starting around now. Microsoft still plans to support Outlook classic until 2029 at the earliest so there’s no cause for immediate concern. The new Outlook is not ready to take over from Outlook classic yet and won’t be for several years. But it is the case that new functionality will increasingly be only available in the new Outlook (and likely OWA), and that’s something to take into consideration as Microsoft 365 tenants plan their client strategy for the coming years.
Other stuff is happening too, all the time, but these are three of the big issues I hear discussed on an ongoing basis.
Discounted Subscriptions
We have traditionally allowed subscribers of prior editions to continue their subscriptions to cover a new edition at discounted rates. The cheapest way to upgrade is always within three weeks of the release of a new edition. After that, we gradually reduce the discount. Our discount period finished today, and there are no longer general discounts available for previous subscribers. Instead, we're reaching out to people who have supported us over several editions to offer targeted discounts. We think this is a fairer approach that rewards people who have helped us and controls the misuse of discount codes.
We know of about 70 cases where people who have never subscribed before have taken out subscriptions to the 2025 edition using codes that we made available to previous subscribers. Sometimes this happens because people pass their subscription to co-workers, and sometimes it's because people just like to share. In any case, our ability to offer discounted subscriptions is compromised when codes are misused, so we're going to be a little more restrictive about how we issue discounts. I don't think anyone's doing anything particularly horrible here, but we'd like to take care of the folks who support us before anyone else gets the chance to use a discount.
On to Update #112
There’s no rest for the wicked, and the Office 365 for IT Pros team is already working (or so they tell me) on update #112, which we anticipate releasing on October 1. No doubt lots will happen between now and then to add to the rich tapestry of life and the joys (!!!) of coping with constant change inside the Microsoft 365 ecosystem.