Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of 3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants in the IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To bridge this divide, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, modern application development platforms, Azure AI services, and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or whichever of the other four cities across the globe is conveniently located near you.
Visit the Microsoft AI Partner Training Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Build an LLM-based application, benchmark models and evaluate output performance with Prompt Flow
Overview
In this article, I’ll be covering some of the capabilities of Prompt Flow, a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt Flow is available through Azure Machine Learning and Azure AI Studio (Preview).
Through Prompt Flow, I will:
Create a NER (Named-Entity Recognition) application;
Test different LLMs (GPT-3.5-Turbo vs. GPT-4-Turbo) through variant capability;
Evaluate output performance using a built-in evaluation method (QnA F1 Score Evaluation)
Note: As Azure AI Studio is in preview currently (February 2024), I’ll leverage Prompt Flow through the Azure Machine Learning Studio. After the preview period, everyone should use Azure AI Studio with Prompt Flow.
Create a NER application
On the Azure portal, go to the Azure Machine Learning service, create a workspace, and launch the studio.
On the Azure Machine Learning Studio, you will find different features such as:
Prompt Flow, a feature that allows you to author flows. Flows are executable workflows that often consist of three parts:
Inputs: Represent data passed into the flow. Inputs can be of different data types, such as strings, integers, or booleans.
Nodes: Represent tools that perform data processing, task execution, or algorithmic operations. Available tools include the LLM tool (enables custom prompt creation utilizing LLMs), the Python tool (allows the execution of custom Python scripts), and the Prompt tool (prepares prompts as strings for complex scenarios or integration with other tools).
Outputs: Represent the data produced by the flow.
Model Catalog, the hub to deploy a wide variety of third-party (Mistral, Meta, Hugging Face, Deci, Nvidia, etc.) open-source models as well as Microsoft-developed foundation models pre-trained for various language, speech, and vision use cases. You can consume some of these models directly through their inference API endpoints, called “Models as a Service” (e.g. Meta and Mistral), or deploy a real-time endpoint on dedicated infrastructure (e.g. GPU) that you manage;
Notebooks, which allow data scientists to create, edit, and run Jupyter notebooks in a secure, cloud-based environment;
Compute, a managed cloud-based workstation for data scientists. Each compute instance has only one owner, although you can share files between multiple compute instances. Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. They can also be used as a compute target for training and inferencing for development and testing purposes.
Now, click on “Prompt Flow” and create a flow by selecting “Standard flow”. You should then have a flow similar to this one:
This flow represents an application with different blocks. Let me go through each block (called node):
Inputs
Takes a topic as input (the prompt)
Joke (LLM node)
Conditions the LLM “to tell good jokes” through a system message. Takes the initial prompt as input.
You need to enable a Connection so this node can interact with an endpoint (e.g. an LLM inference API, a vector index such as Azure AI Search, Azure OpenAI deployed models, Qdrant, etc.). To do that, create the connection in a dedicated pane within Prompt Flow by specifying the provider, the endpoint, and credentials.
Echo (Python node)
A Python script that takes as input the output (completion) of the LLM node and echoes it.
Output
Returns the output of the Python script.
To test your flow, provide an input and click on Run. On the Outputs tab, you can review the outputs:
Now that I have a flow, I want to edit it to become a NER (Named Entity Recognition) flow that leverages an LLM to find entities from a given text content. To do that, I’ll edit the LLM node (previously named “joke”) and the Python node (previously named “echo”).
LLM node
I’ll rename the LLM node to “NER_LLM” with the configuration below.
To perform the NER I’ll use this prompt:
system:
Your task is to find entities of a certain type from the given text content.
If there’re multiple entities, please return them all with comma separated, e.g. “entity1, entity2, entity3”.
You should only return the entity list, nothing else.
If there’s no such entity, please return “None”.
user:
Entity type: {{entity_type}}
Text content: {{text}}
Entities:
Python node
I’ll rename the Python node to “cleansing” with the configuration below.
It runs the following Python code:
from typing import List
from promptflow import tool

@tool
def cleansing(entities_str: str) -> List[str]:
    # Split on commas, then remove leading/trailing spaces, tabs, dots, and quotes
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    entities = [part for part in cleaned_parts if len(part) > 0]
    return entities
Basically, this code snippet takes a comma-separated string of one or more entities, removes extraneous whitespace, tabs, and dots from each element, and returns a list of non-empty, trimmed strings.
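To see the cleansing step in isolation, here is a quick standalone check (the @tool decorator is dropped so the snippet runs without Prompt Flow installed):

```python
from typing import List

def cleansing(entities_str: str) -> List[str]:
    # Same logic as the Python node: split on commas, trim spaces/tabs/dots/quotes
    parts = entities_str.split(",")
    cleaned_parts = [part.strip(" \t.\"") for part in parts]
    return [part for part in cleaned_parts if len(part) > 0]

print(cleansing('Mount Everest, ., "Paris" '))  # ['Mount Everest', 'Paris']
```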
Test the flow
To test the flow, I asked an LLM to provide me with an example: a JSON object containing an “entity_type” and a “text” with one or more entities to extract through the NER process. I used the GPT-4-Turbo model through the Azure OpenAI Playground interface. Here is the example:
{“entity_type”: “location”, “text”: “Mount Everest is the highest peak in the world.”}
Then, I pass those into the inputs node on my flow:
Finally, I can run my flow. It will execute the nodes one after another, from the inputs node to the outputs node:
I can see the result in the Outputs section:
And get more information in the Trace section, such as which API calls happen in which node, processing time, the number of tokens processed on the prompt and completion sides, etc.
Variants
If you want to test different prompts, system messages, or even different models, you can create Variants. A variant refers to a specific version of a tool node with distinct settings, such as another model, a different temperature, a different top_p parameter, another prompt, etc. This way you’re able to perform basic A/B testing.
Let’s say you want to compare results between two of the most used OpenAI models, GPT-3.5-Turbo and GPT-4-Turbo.
To do that, go to the LLM node and click on “Generate variants”. In this example I’ll keep the same prompt, the same temperature, and the same top_p parameter, but I’ll change the LLM to interact with (from GPT-4-Turbo to GPT-3.5-Turbo).
To test multiple variants at the same time, I click on Run and select all my variants (variant_0 refers to GPT-4-Turbo and variant_1 refers to GPT-3.5-Turbo), so that results are aggregated within the same outputs tab:
We can see that we obtain the same results, independently of the LLM used behind the scenes. Let’s be honest: this example isn’t very complex and can easily be handled by a smaller model than GPT-4-Turbo, but let’s keep it simple, as the complexity of the task is not the main purpose of this blog post.
Evaluation
Now that I have a NER flow and have been able to test the application with different LLMs, I’d like to evaluate the output performance. This is where the Evaluate capability of Prompt Flow comes in. The evaluation feature enables you to select built-in evaluation methods or build your own custom evaluation methods.
Here, I’ll use the built-in “QnA F1 Score Evaluation” method. I won’t go deep into the details, but this evaluation method computes the F1 score based on words in the predicted answers and the ground truth.
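As an illustration of the idea (a simplified sketch, not Prompt Flow’s exact implementation), a word-level F1 score between a predicted answer and the ground truth can be computed like this:

```python
from collections import Counter

def qna_f1(prediction: str, ground_truth: str) -> float:
    # Word-level F1: harmonic mean of precision and recall over shared tokens
    pred_tokens = prediction.lower().split()
    truth_tokens = ground_truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(qna_f1("Mount Everest", "Mount Everest"))  # 1.0
```

A perfect match scores 1.0, a partial overlap lands in between, and disjoint answers score 0.0, which is why the metric suits short extractive answers like NER outputs.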
Then, I need data samples to run the flow and evaluate the outputs at a larger scale than a single example. One common use case for generative AI is generating data samples, so I’ll use GPT-4-Turbo to generate 50 examples that will serve to run the flows and evaluate the outputs.
Here is my system message:
Your task is to generate in .jsonl format a data set that will be used to evaluate an LLM-based application.
This application is performing NER (Named Entity Recognition) with 2 inputs: “entity_type” as a string (e.g. “job title”) and “text” as a string (e.g. “The software engineer is working on a new update for the application.”). The desired output is the entity or entities if they’re multiple (e.g. “software engineer”).
Here is my prompt:
Generate 50 samples:
Here are the first five lines of the completion:
{“entity_type”: “person”, “text”: “Elon Musk has announced a new Tesla model.”, “entity”: “Elon Musk”}
{“entity_type”: “organization”, “text”: “Google is planning to launch a new feature in its search engine.”, “entity”: “Google”}
{“entity_type”: “job title”, “text”: “Dr. Susan will take over as the Chief Medical Officer next month.”, “entity”: “Chief Medical Officer”}
{“entity_type”: “location”, “text”: “The Eiffel Tower is one of the most visited places in Paris.”, “entity”: “Eiffel Tower”}
{“entity_type”: “date”, “text”: “The conference is scheduled for June 23rd, 2023.”, “entity”: “June 23rd, 2023”}
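Each line of the .jsonl output is an independent JSON object, so splitting it into flow inputs (“entity_type”, “text”) and ground truth (“entity”) is straightforward; a minimal sketch using two of the generated lines:

```python
import json

samples = """{"entity_type": "person", "text": "Elon Musk has announced a new Tesla model.", "entity": "Elon Musk"}
{"entity_type": "organization", "text": "Google is planning to launch a new feature in its search engine.", "entity": "Google"}"""

# Parse one JSON object per line
rows = [json.loads(line) for line in samples.splitlines()]

# Flow inputs go to the run; the "entity" field is kept aside as ground truth
inputs = [{"entity_type": r["entity_type"], "text": r["text"]} for r in rows]
ground_truth = [r["entity"] for r in rows]
print(ground_truth)  # ['Elon Musk', 'Google']
```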
Once I’m happy with my sample, I select Evaluate in Prompt Flow, where I can edit the run display name, add a description and tags, and select, for each LLM node, the variants I want to evaluate. In my case I select the two variants I created:
Now I need to select a runtime, upload my sample, and map the inputs:
Then I select the evaluation method I want to use (here, the QnA F1 Score Evaluation method). I need to specify the data sources for the ground_truth (from the sample) and for the answer (generated by the LLM within the flow):
Finally, I can click on “Review + Submit”. Behind the scenes, Prompt Flow executes my flow in two separate runs, one with variant_0 and the other with variant_1. Once these runs are completed, it applies the QnA F1 Score Evaluation method to both runs.
We can see the results of the executions on the Runs tab:
The first observation is the duration of each execution:
The execution based on variant_0 (GPT-4-Turbo) took 1 min 14 s to complete;
The execution based on variant_1 (GPT-3.5-Turbo) took 14 s to complete.
One thing to keep in mind in the LLM world is that using a larger model will, most of the time, result in longer inference times.
Now let’s have a look at the evaluations. By selecting both evaluation runs, we can output the results:
We can observe that the flow with the highest F1 score is the one leveraging the GPT-4-Turbo model (F1 score of 0.95), compared to the GPT-3.5-Turbo model (F1 score of 0.89).
Although the larger model delivers better output performance in the evaluation, leveraging the GPT-3.5-Turbo model results in roughly 80% faster inference and is more cost-effective as well.
Inference speed and the token pricing model are among the trade-offs you need to weigh to make sure you choose the right model for your needs.
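Using the run durations above, the speed difference is easy to quantify (a back-of-the-envelope check with the timings from this example):

```python
gpt4_seconds = 74   # variant_0: 1 min 14 s
gpt35_seconds = 14  # variant_1: 14 s

# Relative latency reduction when switching to the smaller model
reduction = (gpt4_seconds - gpt35_seconds) / gpt4_seconds
print(f"{reduction:.0%} faster")  # 81% faster
```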
Conclusion
In conclusion, we covered Prompt Flow within Azure Machine Learning and Azure AI Studio to build and evaluate AI applications powered by Large Language Models (LLMs). This blog post walked through the process of creating a Named-Entity Recognition (NER) application, testing it with different LLMs (GPT-3.5-Turbo and GPT-4-Turbo), and evaluating the output performance using the built-in QnA F1 Score Evaluation method.
We demonstrated the use of variants to perform A/B testing between different models, and a performance evaluation using generated data samples to calculate the F1 score, highlighting the trade-offs between inference speed, model size, and cost-effectiveness.
About the author
Alexandre Levret is a Technology Specialist within Microsoft working with Digital Native customers (Unicorns & Scaleups) in EMEA on AI/ML and GenAI projects.
Managing MDTI Premium licenses in Microsoft Entra Admin Center
This blog details how to assign and manage Defender Threat Intelligence (MDTI) licenses and contains links to helpful content and resources. It is intended for customers who recently purchased the MDTI Premium SKU or a SKU that enables MDTI Premium access for its user base, such as Copilot for Security. Global administrators or identity governance administrators responsible for assigning MDTI user seat assignments will find it particularly useful.
Prerequisites to assigning MDTI premium licenses
Your Microsoft account team should have notified you that your MDTI procurement processing is complete and asked you to view the available licenses within your respective tenant. If your agreement has not been fully processed, you will not be able to view the “Defender Threat Intelligence” licenses.
Instructions to assign MDTI Premium Licenses
As mentioned above, global administrators or identity governance administrators are responsible for assigning MDTI premium licenses to users, and should review the following Microsoft Learn resources for best practices for assigning licenses to users within Microsoft Entra Admin Center:
Tutorial – Manage access to resources in entitlement management – Microsoft Entra ID Governance | Microsoft Learn
Microsoft Entra built-in roles – Microsoft Entra ID | Microsoft Learn
Figure 1 – This is how your “Defender Threat Intelligence” MDTI Premium SKU licenses appear in Microsoft Entra Admin Center.
Troubleshooting MDTI Premium Seat Assignments
Ensure that you have the permissions to assign “Defender Threat Intelligence” licenses. Only global administrators or identity governance administrators have the appropriate permissions to assign user licenses.
Check with your Microsoft Account team that your MDTI Premium SKU agreement has been processed.
If you have completed the troubleshooting steps above and still cannot locate your “Defender Threat Intelligence” licenses in Microsoft Entra Admin Center, please work with your Microsoft account team to engage a CSA or another technical resource for further support.
We Want to Hear from You!
Be sure to join our fast-growing community of security pros and experts to provide product feedback and suggestions. Let us know how MDTI is helping your team stay on top of threats. With an open dialogue, we can create a safer internet together. Also, learn more about how to use MDTI to unmask adversaries and address threats here.
Streamlining Azure Marketplace Deployments
Navigating the complexity of deploying solutions to the Azure Marketplace is a common challenge faced by many of our partners at Microsoft. Recognizing this, our Global Partner Services team has developed a powerful tool to simplify this process: the Commercial Marketplace Offer Deployment Manager, or MODM.
Introducing MODM
MODM is a dedicated, first-party installer designed to streamline the deployment of intricate solutions in the Azure Marketplace. It is especially crafted to support deployments using HashiCorp’s Terraform and Azure Bicep, enhancing the versatility and efficiency of the deployment process.
How MODM Simplifies Deployment
The deployment process with MODM is straightforward, involving two main steps:
Step 1: Create Your Application Package
The initial phase involves packaging your solution into an application package using the Azure CLI Partnercenter Extension. MODM accommodates two types of solutions for packaging:
1. HashiCorp’s Terraform: This popular open-source infrastructure as code tool is now seamlessly supported for Azure Marketplace deployments. Previously, Terraform-based solutions needed conversion to Azure Resource Manager templates, a process that demanded significant development and testing efforts. MODM eliminates this requirement.
2. Azure Bicep: Azure Bicep offers a more readable and concise syntax compared to the JSON of Azure Resource Manager templates. With MODM, converting your Azure Bicep templates to ARM templates is a thing of the past.
Both Terraform and Bicep solutions require minimal prerequisites to be compatible with MODM. Place your solution in a directory with a main entry point file (main.tf for Terraform, main.bicep for Bicep), install the Azure CLI extension for Partnercenter, and execute a single command to create an application package ready for Azure Marketplace.
Simply execute:
az partnercenter marketplace offer package build --id simpleterraform --src $src_dir --package-type AppInstaller
Step 2: Publish Your Application Package
Publishing your application package follows the same protocol as any other Marketplace solution. Utilize the Azure CLI Extension for Partnercenter or the Partnercenter Portal for this purpose.
Post-Deployment: Installing Your Published Package
Installing a marketplace offer deployed with MODM is as straightforward as installing any other managed app. A unique aspect of MODM is the inclusion of a user-friendly front-end experience that allows you to monitor the installation progress and troubleshoot any issues that arise. Detailed documentation and a helpful video tutorial on this process are available for further guidance.
MODM’s Architecture Overview
MODM’s architecture is anchored by the App Installer, a virtual machine that plays a pivotal role in the deployment process. This component takes the packaged app.zip from the Partnercenter CLI command and oversees the installation, managing aspects like retries and machine restarts. A detailed breakdown of MODM’s architecture is available in our GitHub documentation.
Educational Resources and Tutorials
To assist you further, we have prepared video tutorials covering various aspects of using MODM:
Packaging Terraform Solutions
Installing the Published Offer
Source Code Repositories
MODM Installer
Azure CLI Partnercenter Extension
Leverage Secure Multi Party Computation (SMPC) for machine learning inference in rs-fMRI datasets.
@Alberto Santamaria-Pang, @Ivan Tarapov, @Yonas Woldesenbet, @Sam Preston, @Rahul Sharma, @Nishanth Chandran, @Divya Gupta, @Kashish Mittal, and @Ajay Manchepalli.
Machine learning models are useful in analyzing patient data, helping in detecting diseases early, and enabling clinicians in creating personalized treatments. However, using these models in healthcare is challenging because it requires accessing and processing sensitive patient data while ensuring patient privacy and complying with strict regulations.
Traditional encryption methods can only protect data when it is stored or transmitted, not when it is being used for computation. One way to perform computation on encrypted data is to decrypt it inside a trusted region like a secure enclave, which is the approach taken by Microsoft’s Azure Confidential Computing offering. A cryptographic technique also exists that can operate directly on encrypted data without the need for decryption: Secure Multi-party Computation (SMPC). SMPC helps ensure that sensitive healthcare data remains secure while enabling healthcare professionals to perform the computations they need to provide better care for patients.
Traditional encryption vs. SMPC
While both traditional encryption methods and Secure Multi-Party Computation (SMPC) offer similar levels of data security, SMPC has the added capability of allowing computations on encrypted data. For instance, if you want to conduct model inference on an encrypted DICOM image, you can use the encrypted image directly with SMPC. The additional computational overhead of using SMPC depends on the specific function or computation being performed on the encrypted data.
Comparing traditional encryption methods with Secure Multi-Party Computation (SMPC):
Data exposure: with traditional encryption, raw data needs to be decrypted for analysis or use; with SMPC, computation is performed on encrypted data.
Inference speed: with traditional encryption, the encryption and decryption overhead is minimal; with SMPC, joint computation on encrypted data can introduce latency overhead.
Trust assumptions: traditional encryption relies on a trusted third party or secure infrastructure; SMPC offers distributed computation with privacy assurance.
Figure 1 Traditional encryption methods vs. Secure Multi-Party Computation (SMPC).
SMPC transforms healthcare data analysis and ML
SMPC provides a solution that allows multiple parties to work together on their data without revealing any sensitive information. It helps healthcare providers and researchers securely analyze patient data and use ML models while maintaining patient privacy.
Here are some key benefits of SMPC in the healthcare sector:
Privacy preservation. SMPC protects individual patient data during the computation process. Each party only sees their own data, and the others’ data is hidden. This lets healthcare providers and researchers work together and use more data without risking privacy.
Collaborative research. SMPC facilitates collaborative research among healthcare institutions, enabling them to pool their data resources without compromising privacy. Multiple parties can train ML models together on their combined data while keeping patient records and information safe. This helps improve the ML models in healthcare by using more and different data sources and larger samples.
Secure data sharing. SMPC helps enable healthcare providers to more securely share specific information from their datasets with other authorized parties. For example, when studying rare diseases, healthcare organizations may be able to share some patient data points or features while helping preserve their identity and privacy. This controlled sharing mechanism helps enhance research and contributes to the advancement of medical knowledge.
Privacy-preserving ML to improve the security of fMRI data analysis in healthcare
In this blog we explore the application of SMPC to medical image analysis via machine learning techniques for a specific use case of functional Magnetic Resonance Imaging (fMRI) analysis. Applying ML to fMRI data has the potential to revolutionize healthcare by providing insights into brain function and diagnosing neurological disorders. However, the sensitive nature of fMRI data raises significant privacy concerns. To address these challenges, one may employ privacy‑preserving ML techniques, such as data anonymization, secure data encryption, federated learning, and differential privacy, which would allow leveraging the benefits of ML in fMRI analysis while maintaining patient confidentiality and adhering to regulatory requirements.
Before diving into the details of how OnnxBridge (an end-to-end compiler for converting Onnx Models to Secure Cryptographic backends) enables secure machine learning for fMRI data, it is important to understand how fMRI is relevant for neuroscience research. Functional magnetic resonance imaging (fMRI) is a technique that measures brain activity by detecting changes in blood flow. By using fMRI, researchers can identify which brain regions are involved in different cognitive functions, such as memory, language, or emotion. This is known as functional localization. However, fMRI data is often sensitive and confidential, as it can reveal personal information about the participants’ health, preferences, or personality. Therefore, it is essential to protect the privacy and security of fMRI data when performing machine learning analysis on it.
In the rest of this blog post, we cover these topics:
What rs‑fMRI is and how it measures brain activity by detecting changes in blood flow.
How SMPC protects the privacy and security of fMRI data when performing machine learning analysis using EzPC‑OnnxBridge, a crucial part of the EzPC project from Microsoft Research India (MPC-MSRI, 2021).
How to use EzPC‑OnnxBridge for rs‑fMRI to identify brain regions involved in different cognitive functions.
What is rs‑fMRI and how is it used to localize brain networks?
Unlike traditional fMRI, which captures brain activity during specific tasks or stimuli, rs‑fMRI delves into the spontaneous fluctuations of the brain when it is in a state of rest or free thinking. It explores the intricate networks of communication among different brain regions, shedding light on the underlying functional architecture that forms the foundation of our cognition.
The power of rs-fMRI lies in its ability to measure and analyze blood oxygen level-dependent (BOLD) signals. By detecting changes in blood flow and oxygenation, rs-fMRI provides a window into the brain’s dynamic activity during rest. These fluctuations in the BOLD signal, known as resting-state connectivity, are like whispers of communication between various regions of the brain, even when we are not consciously engaged in any cognitive task.
Through advanced computational algorithms and sophisticated statistical analysis, researchers can map and visualize these functional connections within the brain. However, it is important to note that rs-fMRI is not without its challenges and limitations. The interpretation of resting-state connectivity requires careful consideration, as it represents correlations between brain regions rather than direct causality. Moreover, factors such as participant motion, physiological noise, and data pre-processing methods can influence the results and must be rigorously addressed to help ensure data quality and reliability. This is where ML algorithms can help neuro-radiologists efficiently map and visualize brain networks for a range of clinical applications. In this blog, we provide an example of how to use SMPC to automatically identify and localize brain networks using work published in [3].
Figure 2 Visualization of brain networks from 3D dual regression volumes.
How SMPC works using EzPC-OnnxBridge
We begin with an overview of how secure multi-party computation (SMPC) works and then describe how EzPC-OnnxBridge can be used in the application described above. EzPC-OnnxBridge allows using SMPC without any knowledge of cryptography. We will now walk through the steps for using it in this application.
SMPC is a cryptographic primitive introduced in the 1980s [4,5] that helps enable two or more parties who have private data to collaborate (or compute joint functions) on their private/secret data without sharing it in the clear with any entity. This is done through an interactive cryptographic protocol: each party performs computations on its data and exchanges (seemingly random) messages with the other parties iteratively. At the end of such an interaction, the parties learn only the output of the joint function. As an example, if two parties A and B have private inputs a and b and wish to compute the function y = f(a,b), which outputs 1 if a > b and 0 otherwise, they can run an SMPC protocol to compute precisely y and nothing else. SMPC protocols have been extensively studied in the cryptography community over the last four decades, with the latest research, such as the EzPC technology [6,7,8,9], making SMPC practical for large-scale ML models. In the application of secure machine learning for fMRI data, we have two parties: one that holds the machine learning model and one that holds an input data point for inference. For the first party, the weights of the ML model are private, while for the second party, the input data point is private. In typical applications, including ours, the model architecture is public and known to both parties.
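To build intuition for how parties can compute on data they never see in the clear, here is a toy additive secret-sharing sketch. This illustrates only the underlying idea of SMPC, not the actual EzPC protocols, which use far more sophisticated techniques (e.g. for comparisons and neural-network layers):

```python
import random

P = 2**31 - 1  # public modulus agreed on by both parties

def share(secret: int):
    # Split a secret into two additive shares; each share alone looks random
    r = random.randrange(P)
    return r, (secret - r) % P

# Party A holds a, Party B holds b; neither reveals its raw value
a, b = 42, 17
a1, a2 = share(a)  # A keeps a1 and sends a2 to B
b1, b2 = share(b)  # B keeps b1 and sends b2 to A

# Each party adds the shares it holds, locally and independently
s1 = (a1 + b2) % P  # computed by A
s2 = (a2 + b1) % P  # computed by B

# Only when the result shares are combined does the joint output appear
print((s1 + s2) % P)  # 59, i.e. a + b, while a and b stayed hidden
```

Neither party ever sees the other's input, since a single share is indistinguishable from a random number, yet together they compute a + b. Real SMPC protocols extend this idea to multiplications, comparisons, and entire ML models.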
1. Identify sensitive data
We first identify the data involved in a single inferencing between two parties:
Machine Learning Model (Model Weights + Model Architecture).
Input data for inference.
Image by author using [2].
In the above, the secret (or private) data of the two parties are:
Model weights (obtained by training a publicly available model architecture on private data), belonging to one party.
Input data, belonging to the other party.
Image by author using [2].
Typically, model architectures are openly available and do not hold any proprietary data of any of the parties.
2. Strip ML model of weights
Now that we know what the secret data involved in an inference are, the next step is to strip the ML model of its model weights so that the model architecture can be shared. This is shown in the figure below.
Image by author using [2].
The above step helps us confirm that the secret data is in no way involved in generating the crypto protocols, and gives us full control over our data, which we supply only at the time of secure inference.
In the above image, we can see the mlp.onnx model before and after its secret data (i.e., the weights and biases of all layers) is stripped out and represented as an input value. This means the model architecture does not contain any secret data and expects it at runtime.
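As a sketch of what "stripping" means, the following Python separates a model's public architecture from its secret weights. This uses a hypothetical in-memory model representation with made-up layer shapes, not the actual ONNX/OnnxBridge API, which operates on ONNX graph files.

```python
# Hypothetical in-memory model: architecture (public) + weights (secret).
model = {
    "layers": [
        {"op": "Gemm", "in": 400, "out": 120,
         "weights": [[0.1] * 400] * 120, "bias": [0.0] * 120},
        {"op": "Relu"},
        {"op": "Gemm", "in": 120, "out": 17,
         "weights": [[0.2] * 120] * 17, "bias": [0.0] * 17},
    ]
}

def strip_weights(model):
    """Separate the public architecture from the secret weights."""
    arch, weights = {"layers": []}, {}
    for i, layer in enumerate(model["layers"]):
        public = {k: v for k, v in layer.items() if k not in ("weights", "bias")}
        # The architecture only records that weights are expected at runtime.
        if "weights" in layer:
            public["weights"] = "runtime_input"
            weights[f"layer_{i}"] = {"weights": layer["weights"],
                                     "bias": layer["bias"]}
        arch["layers"].append(public)
    return arch, weights

arch, weights = strip_weights(model)
# 'arch' can be shared freely; 'weights' stays with the model owner.
```

The stripped architecture is safe to hand to the other party and to the protocol compiler, because everything secret now lives in a separate artifact that never leaves the model owner's machine until secure inference begins.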
3. Generate SMPC protocols from architecture
Once we have the model architecture without weights, we need to convert it into cryptographically secure protocols that run on the secret data and produce the same output as inference performed without any cryptography involved. This is done through EzPC-OnnxBridge and is depicted below.
Image by author using [2].
4. Secure inference on private data
Finally, we run the generated crypto protocols for each of the two parties involved. These protocols take the secret data as input and exchange encrypted (masked) bits of data, with strong mathematical guarantees that nothing communicated reveals any information about the secret data.
At the end of the computation, the output is revealed to the specified party or parties involved.
Using EzPC OnnxBridge for rs-fMRI
EzPC offers an inference-app that serves as a front end for SMPC operations. This application presents users with a graphical user interface (GUI) through which they can upload images and obtain results securely. Next, we'll walk through the steps required to get the app running.
Internally, the application utilizes OnnxBridge, an end-to-end compiler, to convert ONNX files to SMPC cryptographic protocols. The compiler helps remove confidential data from models before converting them to SMPC protocols. Thus, EzPC provides a user-friendly interface that facilitates more secure compilation and execution of machine learning models.
Let’s take a look at the practical implementation of OnnxBridge to conduct secure inference using the mlp.onnx model specifically designed for rs‑fMRI (resting‑state functional magnetic resonance imaging) images.
The setup steps from the EzPC GitHub repo will get the inference-app running. The steps are executed in the following order:
1. Install dependencies for:
Cryptographic backend
OnnxBridge compiler
2. Set up the server (model owner and model processing).
Extracts the MLP model from the JHU GitHub repository.
Strips the model of its weights and saves them in a file.
Loads the stripped model architecture.
Generates the secure backend code for the model architecture.
Shares the stripped model architecture with the dealer/client.
3. Set up the dealer.
Compiles the model architecture received from the server.
Computes and shares pre-generated randomness with the server/client to drastically reduce communication and speed up inference.
Note: No secret data is involved in generating this randomness.
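The dealer's role can be illustrated with a classic Beaver-triple multiplication, sketched in plain Python below. This is a toy illustration of dealer-based preprocessing, not EzPC's actual backend: before the online phase, the dealer distributes shares of a random triple a·b = c (no secret data needed), and the parties later use it to multiply their secret-shared values while exchanging only masked values.

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(v):
    """Split v into two additive shares mod P."""
    s0 = random.randrange(P)
    return s0, (v - s0) % P

def beaver_mul(x_sh, y_sh, triple_sh):
    """Multiply secret-shared x and y using a dealer-provided Beaver triple."""
    (a0, a1), (b0, b1), (c0, c1) = triple_sh
    (x0, x1), (y0, y1) = x_sh, y_sh
    # The parties open only the masked values d = x - a and e = y - b;
    # these reveal nothing about x, y because a, b are uniformly random.
    d = (x0 - a0 + x1 - a1) % P
    e = (y0 - b0 + y1 - b1) % P
    # Each party computes its share of z = x * y locally.
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# Dealer phase: generate and share a random triple with a * b = c.
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
triple = (share(a), share(b), share(c))

# Online phase: parties hold shares of their secrets x and y.
x, y = 123456, 789
z0, z1 = beaver_mul(share(x), share(y), triple)
print((z0 + z1) % P == (x * y) % P)  # True
```

Because all the expensive randomness is produced before the parties' secrets enter the picture, this preprocessing can be precomputed and shipped in bulk, which is why the dealer step reduces online communication so drastically.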
4. Set up the client (acting as the image owner).
Compiles the model architecture received from the server.
5. Set up the inference app.
Encrypts the input image and sends it to the client VM, which starts inference. See the screenshots below.
Step 1: Upload the image.
Step 2: Receive encryption from dealer.
Step 3: Encrypt the image.
Step 4: Start secure inference.
With the above, we can see how EzPC provides a simple interface, backed by efficient cryptographic backends, that lets us run SMPC end to end without ever exposing the secret data.
References
MPC-MSRI. (2021). EzPC: Easy Secure Multi-party Computation. GitHub. Retrieved from https://github.com/MPC-MSRI/EzPC.
AmmarPL. (2021). fMRI Classification JHU. GitHub. Retrieved from https://github.com/AmmarPL/fMRI-Classification-JHU.
Empower Medical Innovations: Intel Accelerates PadChest & fMRI Models on Microsoft Azure* Machine Learning. https://www.intel.com/content/www/us/en/developer/articles/technical/intel-accelerates-padchest-fmri-models-on-azure-ml.html
Dsouza, Trevor. Machine Learning Icon, distributed under CC BY 3.0.
Ghate, S., Santamaria-Pang, A., Tarapov, I., Sair, H., Jones, C. (2022). Deep Labeling of fMRI Brain Networks Using Cloud Based Processing. In: Bebis, G., et al. Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol 13598. Springer, Cham. https://doi.org/10.1007/978-3-031-20713-6_21.
Yao, A. (1982). Protocols for Secure Computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (pp. 160-164). IEEE.
Goldreich, O., Micali, S., & Wigderson, A. (1987). How to play any mental game or A completeness theorem for protocols with honest majority. In Proceedings of the 19th Annual ACM Symposium on Theory of Computing (pp. 218-229). ACM.
Kumar, N., Rathee, M., Chandran, N., Gupta, D., Rastogi, A., & Sharma, R. (2020). CrypTFlow: Secure TensorFlow Inference. In Proceedings of the 41st IEEE Symposium on Security and Privacy (pp. 1247-1264). IEEE.
Rathee, D., Rathee, M., Kumar, N., Chandran, N., Gupta, D., Rastogi, A., & Sharma, R. (2020). CrypTFlow2: Practical 2 Party Secure Inference. In Proceedings of the 27th ACM Conference on Computer and Communications Security (pp. 1639-1656). ACM.
Chandran, N., Gupta, D., Rastogi, A., Sharma, R., & Tripathi, S. (2019). EzPC: Programmable and Efficient Secure Two-Party Computation for Machine Learning. In Proceedings of the 4th IEEE European Symposium on Security and Privacy (pp. 123-138). IEEE.
Gupta, K., Kumaraswamy, D., Chandran, N., Gupta, D. (2022). LLAMA: A Low Latency Math Library for Secure Inference. In Proceedings of the Privacy Enhancing Technologies Symposium (PoPETS).
Do more with your data with Microsoft Cloud for Healthcare
With Azure AI Health Insights, health organizations can transform their patient experience.
Microsoft Tech Community – Latest Blogs – Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants in this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery after the event, we invite you to join us in Hyderabad, Bengaluru, or whichever of the other four cities across the globe is most convenient for you.
Visit the Microsoft AI Partner Training Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing recent, up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, Azure AI services, and Microsoft Copilot
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or one of the other four cities across the globe that’s conveniently located near you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing recent, up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, Azure AI services, and Microsoft Copilot
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or one of the other four cities across the globe that’s conveniently located near you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing recent, up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, Azure AI services, and Microsoft Copilot
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or one of the other four cities across the globe that’s conveniently located near you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing recent, up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, and modern application development platforms, Azure AI services, and Microsoft Copilot
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery after the event, we invite you to join us in Hyderabad, Bengaluru, or whichever of the other four cities across the globe is most convenient for you.
Visit the Microsoft AI Partner Training Roadshow website and register today to get started.
Microsoft Tech Community – Latest Blogs – Read More
IP address changes for Azure Service Bus and IP/DNS Changes for Azure Relay
What is Changing?
The infrastructure layer of Azure Relay and Azure Service Bus is being upgraded, which will cause the IP addresses used by customer namespaces to change. For Azure Relay, the gateway DNS names are also changing.
These changes are part of our continuous platform improvements. As previously communicated for Azure Service Bus and Azure Relay, the IP addresses of our services can change and should not be considered static. There is no added charge for this, nor are there any service interruptions during the migration.
Call to Action
If you are using IP addresses in your egress firewalls to your Azure Relay or Azure Service Bus namespaces, you will need to update them to use the namespace DNS names instead.
Alternative (not recommended!)
As a final alternative, it is possible to use the new IP addresses directly. We strongly recommend against this, as you will need to track any future IP address changes yourself, and your service may be interrupted when they occur.
Azure Service Bus customers
If you are using Azure Service Bus premium, we recommend using service tags, as per our recommendations described in the service documentation. Service tags will automatically be updated if anything changes in our infrastructure.
If you are on Azure Service Bus standard / basic or cannot use service tags on Azure Service Bus Premium, use the fully qualified domain names for your specific namespaces, or the wildcard “*.servicebus.windows.net” domains. These will automatically resolve to the new IP addresses.
For Azure Service Bus, as an unrecommended alternative, the IP address can be found by executing a ping command against the fully qualified domain name of your specific namespace.
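If you do need the current address programmatically, a DNS lookup returns the same answer as ping. Here is a minimal sketch in Python; the namespace name below is a placeholder, so substitute your own namespace’s fully qualified domain name:

```python
import socket

# Placeholder namespace; replace with your own Service Bus namespace FQDN.
fqdn = "mynamespace.servicebus.windows.net"

try:
    # Resolves the FQDN to whatever IPv4 address it maps to right now.
    ip = socket.gethostbyname(fqdn)
    print(f"{fqdn} currently resolves to {ip}")
except socket.gaierror as err:
    # Resolution fails if the namespace does not exist or DNS is unreachable.
    print(f"Could not resolve {fqdn}: {err}")
```

Keep in mind that the address returned can change at any time, which is exactly why DNS names or service tags are the recommended firewall configuration.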
Azure Relay customers
For Azure Relay, configure your firewalls with the DNS names of all the Relay gateways, which can be found by running this script. The script resolves the fully qualified domain names of all the gateways to which you need to establish a connection.
Furthermore, you can use the same script to get the IP addresses of all the gateways to which you need to establish a connection.
What’s new in Windows Autopatch: February 2024
The start of the new year brings a great opportunity for positive change, including the release of new features in Windows Autopatch. We heard your feedback! Here are some improvements made in response to your enterprise needs.
Import Update rings for Windows 10 and later in preview
Update rings allow you to specify how and when Windows as a service updates your Windows 10 or Windows 11 device with feature and quality updates. Update rings are available for Windows 10 and later. And if you’re a Windows Autopatch customer, you can now bring existing Update rings for Windows 10 and later policies into Windows Autopatch Management. For additional information, see Configure Update rings for Windows 10 and later policy in Intune.
Importing existing rings allows you to take advantage of the many capabilities of Windows Autopatch without impacting your existing Windows update schedules. Imported rings will automatically register all targeted devices into Windows Autopatch without the need to redeploy or change your existing update rings. Additionally, imported rings will be reflected in the reporting and release experience.
Learn how to import update rings for Windows 10 and later. If needed, brush up on Windows client updates, channels, and tools.
Customer defined service outcomes in preview
Have you used Windows Autopatch reports to monitor the health and activity of your deployments? The insights from the reports can help you understand if your devices are maintaining update compliance targets.
Previously, deployment success measures were based on a static schedule of 21 days: Windows Autopatch aimed to keep at least 95% of eligible devices on the latest Windows quality update 21 days after release.
With this enhancement, the success of Windows Autopatch deployments will be based on your defined rings. We’ll also introduce new columns in the release blade, as well as in Windows quality and feature update reporting, to show the percentage complete for quality and feature updates. Devices will remain in the “In Progress” status in reporting until they either install the current monthly cumulative update or generate an alert. If an alert is received, the status will change to “Not up to date.”
To learn more, read Service level objectives.
Improved data refresh speed and reporting accuracy
Windows Autopatch reporting provides rich insights into your patch compliance status, so you can make informed choices about protecting against defects and vulnerabilities.
This release is changing the refresh cycle for Windows Autopatch reporting. The refresh cycle refers to the amount of time from when a change is made to when it’s reflected in reporting and other UX components. This time will be reduced from every 24 hours to every 30 minutes. This improvement supports the many data streams that Windows Autopatch uses to provide current update status for all devices enrolled into Windows Autopatch.
To learn more, see Windows quality update reporting.
Take your next step with Windows Autopatch
We hope these enhancements will help you keep your devices secure and up to date with less hassle and more control. Get current and stay current with automation that leads to higher security and lower costs.
The ideas behind these releases originated from conversations, input, and requests from you, our customers. We’d love to hear your feedback and suggestions on how we can continue to make Windows Autopatch even better for you. You can share your thoughts and ideas with us on our feedback hub or by joining our community forum.
If you want to learn more about Windows Autopatch:
Visit our website.
Read our documentation.
Watch our guided demos.
If you want to try Windows Autopatch for yourself, sign up for a free trial or contact us for a demo.
Thank you for choosing Windows Autopatch and stay tuned for more updates and announcements.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Microsoft Tech Community – Latest Blogs –Read More
Vision Transformer Learning with AML (Part 1)
Introduction
Applications of Computer Vision such as object recognition, semantic segmentation, object classification and others have seen recent adoption in healthcare, satellite imagery and visual surveillance among other fields.
This blog series aims to communicate some of the Azure Machine Learning (AML) platform features that have proven highly useful for Enterprise-scale Vision model training and inference.
This blog is organized in two parts:
Part 1 offers a brief overview of vision learning approaches and quickly describes common challenges for large-scale Vision Transformer model training.
Part 2 describes how Azure Machine Learning (AML) may be used to overcome many (if not all!) of these challenges for real-world, Enterprise applications.
Vision Transformers – An Overview
Vision Transformers are a family of Representation Learning algorithms that are trained on visual information (i.e.: images and pictures) to produce various types of output.
While excellent results have been achieved with relatively simple algorithms such as Deep and Convolutional Neural Networks (DNN, CNN), usage in real-world applications is often challenging because many standard approaches require large amounts of labeled training data. A CNN, for example, may be trained to recognize objects, but it requires a large number of highly curated images with pristine, validated labels to be successful. Collecting, curating, labeling and updating such datasets is often prohibitively expensive and intractable at scale.
Case for Self-supervision
Many modern visual information processing approaches attempt to address the labeled data scarcity by leveraging the concept of Self-supervision. Self-supervision stems from the research on text embeddings, which has shown that useful context may be learned from the data’s intrinsic structures. Commonly used text embedding approaches (e.g.: word2vec) learn meaningful structures in text by leveraging relative information between words and sentences. For image processing, self-supervision aims to discover similar semantics from relative positions of pixel patches.
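As a concrete illustration of the idea, here is a minimal, stdlib-only Python sketch of a patch-based pretext task in the spirit of reference [2] (this is illustrative code, not code from the paper): sample a patch and one of its eight neighbours from a grid, and let the training label be the neighbour's relative position, so no human labeling is needed.

```python
import random

# Sketch of a context-prediction pretext task: the "label" is which of the
# 8 neighbouring positions a second patch was sampled from.
GRID = 7  # e.g. a 7x7 grid of patches over one image
OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           (0, -1),           (0, 1),
           (1, -1),  (1, 0),  (1, 1)]

def sample_pair(rng):
    # Pick a centre patch that has all 8 neighbours inside the grid.
    r = rng.randrange(1, GRID - 1)
    c = rng.randrange(1, GRID - 1)
    label = rng.randrange(8)  # the self-supervised target comes for free
    dr, dc = OFFSETS[label]
    return (r, c), (r + dr, c + dc), label

rng = random.Random(0)
centre, neighbour, label = sample_pair(rng)
print(centre, neighbour, label)
```

A real implementation would feed the two patches' pixels through a shared encoder and train a classifier to predict `label`; only the label-generation step is sketched here.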
(Image from reference [2])
An interesting consequence of self-supervised approaches is that the trained network appears to capture non-trivial latent context that seems to generalize at the category level [5]. That is, the self-supervised model appears to spontaneously learn meaningful components across images (such as visual objects, their parts and segments) without any human intervention. This emergent property makes self-supervised algorithms ideal for real-world applications where quality labeled data may be unavailable. Further still, recent research has shown that coupling self-supervised models with other machine learning algorithms (e.g.: classification) produces excellent results with significantly reduced training data requirements [1,2,3,4,5].
Autoencoders
Autoencoders are models that learn a latent representation of their input. PCA and k-means, for example, can be viewed as autoencoders because they map high-dimensional input to lower-dimensional projections with little loss of relative information [3]. For image processing, autoencoders are often used to capture semantic relationships between image sections in a way similar to how language models (e.g.: GPT) generalize linguistic concepts in text.
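To make the "PCA as an autoencoder" view concrete, here is a small stdlib-only Python sketch (illustrative toy data, not part of the original post): it encodes correlated 2-D points into a 1-D code along the leading principal component and decodes them back with little reconstruction error.

```python
import math

# Toy dataset: 2-D points lying near the line y = 2x (highly correlated).
points = [(x, 2 * x + noise) for x, noise in
          [(1, 0.1), (2, -0.2), (3, 0.05), (4, 0.15), (5, -0.1)]]

# Center the data.
mx = sum(p[0] for p in points) / len(points)
my = sum(p[1] for p in points) / len(points)
centered = [(x - mx, y - my) for x, y in points]

# Entries of the 2x2 covariance matrix.
sxx = sum(x * x for x, _ in centered) / len(centered)
syy = sum(y * y for _, y in centered) / len(centered)
sxy = sum(x * y for x, y in centered) / len(centered)

# Leading eigenvector of [[sxx, sxy], [sxy, syy]] (closed form for 2x2).
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
ux, uy = math.cos(theta), math.sin(theta)

# "Encode" each point to 1-D (projection), then "decode" back to 2-D.
codes = [x * ux + y * uy for x, y in centered]
recon = [(c * ux, c * uy) for c in codes]

# Reconstruction error stays small: one direction carries most of the signal.
err = sum((x - rx) ** 2 + (y - ry) ** 2
          for (x, y), (rx, ry) in zip(centered, recon)) / len(centered)
print(err < 0.01)
```

The 1-D `codes` list plays the role of the latent representation; deep autoencoders replace the linear projection with learned nonlinear encoders and decoders.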
Many autoencoder algorithms have been proposed, with Vision Transformer (ViT) and Masked Autoencoder (MAE) gaining popularity in recent years.
Vision Transformer (ViT)
Conceptually, ViT is a special case of the Transformer architecture that has been popular since the advent of BERT and ChatGPT. The Transformer is a flexible machine learning approach that couples encoders with decoding layers (of various complexity) to generate predictions based on a low-dimensional, generalized semantic encoding. ViT is an implementation of the Transformer architecture for image data.
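The "image as words" framing comes down to simple arithmetic, sketched below in illustrative Python using the ViT-Base defaults from the "16×16 words" paper [4]: each fixed-size patch is flattened into one token.

```python
# ViT tokenization arithmetic: an image is split into fixed-size patches,
# and each patch is flattened into a vector treated like a "word" token.
def vit_tokens(image_size, patch_size, channels=3):
    assert image_size % patch_size == 0
    n_patches = (image_size // patch_size) ** 2   # tokens per image
    patch_dim = channels * patch_size * patch_size  # raw vector length
    return n_patches, patch_dim

n, d = vit_tokens(224, 16)
print(n, d)  # 196 patches, each a 768-dimensional vector before projection
```

So a standard 224×224 input becomes a sequence of 196 tokens, which the Transformer processes exactly as it would a sentence of 196 words.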
(Image from reference [5])
Masked Autoencoder (MAE)
Masked Autoencoders (MAE) are a type of ViT [3] that mask a large portion of the input patches and encode only the visible subset, significantly reducing compute and memory requirements.
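The masking step can be sketched as follows (illustrative Python; the 75% mask ratio follows the MAE paper's default, everything else is simplified):

```python
import random

# MAE-style random masking sketch: with a 75% mask ratio the encoder only
# sees 25% of the patch tokens, which is where the compute savings come from.
def mask_patches(n_patches, mask_ratio, rng):
    n_keep = int(n_patches * (1 - mask_ratio))
    idx = list(range(n_patches))
    rng.shuffle(idx)
    return sorted(idx[:n_keep]), sorted(idx[n_keep:])

rng = random.Random(42)
visible, masked = mask_patches(196, 0.75, rng)
print(len(visible), len(masked))  # 49 visible tokens, 147 masked
```

The encoder runs only on the 49 visible tokens; a lightweight decoder then reconstructs the 147 masked patches, and the reconstruction loss drives the learning.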
(Image from reference [4])
Challenges at scale
While ViT, MAE and other approaches have been successful in various applications, there are many technical hurdles that must be overcome to deliver effective real-world solutions. This section outlines some of those challenges and Part 2 of this blog will show how the Azure Machine Learning (AML) platform features may be used to deliver quality visual learning models at scale.
Model Sizing
While vision transformer models can deliver state-of-the-art performance on a variety of tasks, their large sizes make them difficult to train. Indeed, training these models on large image datasets requires a lot of GPU memory to store training inputs, model weights, gradients and optimizer states. Very often, the memory needed is more than a single GPU or Virtual Machine can deliver. Distributing the training process across multiple GPUs in a cluster of VMs is becoming the norm for training these models efficiently.
However, in order to fully reap the benefits of distributed model training, teams need to do a significant amount of configuration and tuning. For example, ensuring high-bandwidth inter-GPU communication, selecting the appropriate training parallelism strategy and minimizing GPU idle time are paramount to speeding up model training. Getting all of this right is complex and can take days if not weeks.
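As a rough illustration of why a single GPU often isn't enough, here is a back-of-the-envelope Python sketch (the 16-bytes-per-parameter figure assumes fp32 weights, gradients, and two Adam moments, and the ~632M parameter count for ViT-Huge is an approximation, not a number from this post):

```python
# Back-of-the-envelope GPU memory for the persistent training state of a
# model trained with Adam in fp32: weights + gradients + two optimizer
# moments = 16 bytes per parameter (activations come on top of this and
# scale with batch size).
def training_state_gib(n_params, bytes_per_value=4):
    weights = grads = n_params * bytes_per_value
    adam_moments = 2 * n_params * bytes_per_value
    return (weights + grads + adam_moments) / 2**30

# ViT-Huge has roughly 632M parameters.
print(round(training_state_gib(632e6), 1))  # ≈ 9.4 GiB before activations
```

Once activations for large image batches are added, the total easily exceeds the memory of a single commodity GPU, which is what pushes training onto multi-GPU clusters.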
In part 2 of this series, we will explore how Azure Machine Learning simplifies the procurement, configuration and scaling of the training infrastructure as well as optimizes distributed training workloads to speed them up without the headaches.
Data Wrangling
Some of the challenges of large-scale machine learning stem from the preprocessing steps, where raw data must be transformed into appropriately shaped tensors. Part 2 of this blog will discuss how AML components, such as Pipeline Designer, may be used to simplify Data Wrangling at scale.
Effective Debugging
Large-scale ML model training (vision or otherwise) must often distribute training processes over a large number of highly choreographed computation and data access resources in order to achieve the desired model quality. Debugging such processes is often challenging as distributed computation is often difficult to replicate in development environments.
Azure Machine Learning offers a number of ways to simplify and streamline debugging efforts by exposing effective logging facilities, direct connectivity to computation cluster members and extensive performance dashboards and metrics.
Model Deployment
Unlike model training, which is often accomplished in batches, model inference is often a continuous process that must be scaled to accept frequent requests from many concurrent users. While simple models (e.g.: scikit-learn) may be scaled easily with cheap CPUs, modern visual models often require several GPUs for the inference stage.
In addition, model performance indicators such as drift are often difficult to capture and visualize. In Part 2, we will discuss how AML may be used to track model performance over time and automatically manage model retraining based on monitoring of relevant KPIs.
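To illustrate the kind of KPI such monitoring tracks, here is a generic drift-check sketch in stdlib Python (an illustrative statistic, not an AML API): compare a live feature's distribution against the training baseline and flag large shifts in the mean.

```python
import random
import statistics

# Minimal drift check: z-score of the difference between the live mean and
# the baseline mean, using the standard error of each sample mean.
def mean_drift_z(baseline, live):
    diff = statistics.fmean(live) - statistics.fmean(baseline)
    pooled = (statistics.pstdev(baseline) ** 2 / len(baseline)
              + statistics.pstdev(live) ** 2 / len(live)) ** 0.5
    return diff / pooled

rng = random.Random(0)
baseline = [rng.gauss(0.0, 1.0) for _ in range(1000)]
live = [rng.gauss(0.5, 1.0) for _ in range(1000)]  # shifted distribution
print(abs(mean_drift_z(baseline, live)) > 3)  # True: drift flagged
```

In production, a threshold crossing on a statistic like this would feed the alerting or retraining trigger rather than a print statement.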
Please stay tuned for Part 2, which will discuss how AML platform features may be used to simplify Model Sizing, Data Wrangling, Debugging and Model Deployment.
References:
1. Revisiting Self-Supervised Visual Representation Learning
2. Unsupervised Visual Representation Learning by Context Prediction
3. Masked Autoencoders Are Scalable Vision Learners
4. An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale ([2010.11929] on arxiv.org)
Visually group shapes in your diagrams with containers in Visio for the web
A container in Visio is a special shape that can hold other shapes inside of it. It’s often used to group related shapes together and can be a powerful tool to help organize and manage complex diagrams. The Visio desktop app has long supported containers, and now Visio for the web does, too! Users with a Visio Plan 1 or Visio Plan 2 license can use containers in Visio for the web to create diagrams that are better organized and easier to understand and navigate.
Add a container from the new Container drop-down
To add a container to your diagram, first select the shapes you want it to contain. Then, from the Insert tab in the ribbon, select the new Container drop-down.
An image of a flowchart in Visio for the web demonstrates how to access new Container options from the Insert tab.
A cropped image of a drawing in Visio for the web highlights the new Container styles.
From here, you will see more than a dozen container styles, including Rounded, Translucent, Horizontal Brackets, Vertical Brackets, Corner Frame, Square, and more. Hover over the container shapes, then select the preferred option to add it to the canvas.
Add a container from the right-click menu
You can also add a container directly from the right-click menu. Simply right-click a shape—or a group of shapes—that you want to add to the new container, then select Insert Container > Add to New Container from the drop-down menu.
A cropped image of a flowchart in Visio for the web highlights the new Container options in the right-click menu.
Note that, when adding shapes to a container via the right-click menu, only the shape—or shapes—selected will be contained. To include adjacent shapes, drag the container so that it encompasses all relevant shapes. Then, right-click the shapes and select Insert Container > Add to Underlying Container to add the additional shapes to the new container.
A video of a flowchart in Visio for the web demonstrates how to add shapes to an underlying container.
You can also add multiple shapes to a container by dragging and dropping them into the container. As shown in the GIF above, when a selected shape is contained in a container, you will see a green highlight around the associated container. Drag the required shapes to the container shape and then, when you see the green highlight, drop (or release) the shapes to add them to the container.
Add a container from the new Container stencil
For simplicity and flexibility, we’ve added a third option for adding a container to your diagram. But first, you’ll need to add the new Container stencil to the Shapes pane. To do this, type “Container” in the Search box at the top of the Shapes pane. Scroll through the list of results and select the magnifying glass to see a preview of the Container stencil. Then, select the Add button to pin the stencil to the Shapes pane.
A video of a flowchart in Visio for the web demonstrates how to add the new Container stencil to the Shapes pane.
Once the stencil is added to the Shapes pane, you can select and drag the preferred container from the stencil onto the canvas. Follow the steps above to increase the size of the container and quickly add additional shapes to the underlying container.
Add a header to your container
Once the container has been added, you can replace the default “Heading” text with your own header name, format the text, and add more containers by following the steps outlined above.
An image of a flowchart in Visio for the web demonstrates how to add header text to a container by selecting the text in the heading field.
Remove a shape from a container or a container from a drawing
To remove a shape from a container, simply select the shape and drag it out of the container. If successfully removed, you’ll notice that the green highlight will no longer appear around the container when you select the removed shape.
You can also delete a container from your drawing without having to delete the contained shapes. Simply right-click the container that you want to delete, then select Delete from the right-click menu. After selecting Delete, you will receive a prompt confirming whether you want to delete not only the container, but also all of its contents. Unchecking the box will allow you to keep the contained shapes as part of your drawing while removing the container.
An image of a message prompt in Visio for the web asks a user, “Are you sure you want to delete this container?”
New Container tab in the ribbon
Select a container in your diagram to access the new Container tab in the ribbon. Here, you’ll see options to: fit the container to its contents, quickly change the current style of the container, rotate the heading text by 90 degrees counterclockwise, and lock the contents of the container to prevent certain actions, such as deleting contained shapes.
A cropped image of a flowchart in Visio for the web highlights the new Container tab in the ribbon.
Example user scenario: Solution architecture diagram
Containers can be used to group related shapes in an architecture diagram. The example below uses Translucent containers to group the three DevTest environments that exist under an Azure DevTest subscription, and clearly separates them from the Production environment.
An image of an Azure architecture diagram in Visio for the web demonstrates how to use containers to group components in DevTest and Production subscriptions.
Example user scenario: Large team org chart
Containers can also be a great way to visually group specific team members within an org chart. The example below uses the Classic 1 container style to indicate those who are part of a virtual team.
An image of a large team org chart in Visio for the web demonstrates how to use containers to identify virtual teams.
Learn more about how to clarify the structure of diagrams by using containers in Visio for the web.
We are listening!
We look forward to hearing your feedback and learning more about how you use containers in Visio for the web. Please share some of your use cases in the comments below or let us know how we can help to improve the experience. You can also send feedback via the Visio Feedback Portal or directly in the Visio web app using “Give Feedback to Microsoft” in the bottom right corner.
Did you know? The Microsoft 365 Roadmap is where you can get the latest updates on productivity apps and intelligent cloud services. Check out what features are in development and coming soon for Visio on the Microsoft 365 Roadmap.
Avoid the complexity of using Entra ID multi-tenants with School or Work and Microsoft accounts
Ideally, it would be convenient to manage the Prod env, Test env, and Dev env within a single Entra ID tenant. However, from the perspective of governance and compliance, it is common to separate the Entra ID tenant for the Prod env from the other envs. In such cases, various complexities arise, such as guest invitations for School or Work accounts, use of Microsoft accounts, and individuals using multiple accounts.
In this post, we outline several patterns for addressing issues that commonly arise in such scenarios and their corresponding solutions.
Dealing with authentication and authorization issues related to Entra ID can be time-consuming in the absence of prior knowledge, so we hope this knowledge proves helpful to all.
I recently faced a challenge when I attempted to control access to Cosmos DB using RBAC, as described in the Use system-assigned managed identities to access Azure Cosmos DB data article. While the topic itself is simple, it becomes more intricate when multiple tenants and account types are involved.
If you don’t fall into any of the categories listed below, you’re fine. However, many of you may encounter issues at some point. The first item alone might not be an issue, but when the second, third, and fourth items come into play, the complexity of Entra ID shows up.
Use your local machine for the development
Leverage Entra ID multi-tenants and guest invitation
Use Microsoft account instead of School or Work account
Use both a Microsoft account and a School or Work account
I have tried several scenarios, so in this post I will share which ones work well and which do not.
#1: A single Entra ID tenant, with the subscription and the School or Work account in that same tenant – this works well
Let’s try the official article at first. The concept is illustrated in the diagram below:
There is no built-in role for Cosmos DB data access, so we need to create one as a custom role. In this example, we create a custom role that can read, write, and delete the data. The JSON file is as follows:
{
"RoleName": "CosmosDBDataAccessRole",
"Type": "CustomRole",
"AssignableScopes": ["/"],
"Permissions": [{
"DataActions": [
"Microsoft.DocumentDB/databaseAccounts/readMetadata",
"Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
"Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
]
}]
}
Then, create this custom role in your environment. The command returns the resource ID of the custom role in the format "XXXXXXX-XXX-XXX-XXXX-XXXXXXXXX" (actually a randomly generated UUID). This will be used later.
$rgName = "your-resource-group-name"
$cosmosdbName = "your-cosmosdb-name"
az cosmosdb sql role definition create -a $cosmosdbName -g $rgName -b role-definition-rw.json
{
"assignableScopes": [
"/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name"
],
"id": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name/sqlRoleDefinitions/XXXXXXX-XXX-XXX-XXXX-XXXXXXXXX",
"name": "XXXXXXX-XXX-XXX-XXXX-XXXXXXXXX",
"permissions": [
{
"dataActions": [
"Microsoft.DocumentDB/databaseAccounts/readMetadata",
"Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/*",
"Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*"
],
"notDataActions": []
}
],
"resourceGroup": "your-resource-group-name",
"roleName": "CosmosDBDataAccessRole",
"type": "Microsoft.DocumentDB/databaseAccounts/sqlRoleDefinitions",
"typePropertiesType": "CustomRole"
}
Next, you need to obtain the ID of your School or Work account using the following "az ad user show" command:
az ad user show --id "myuser@normalian.xxx"
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
"businessPhones": [],
"displayName": "aduser-normalian",
"givenName": "aduser",
"id": "YYYYYYY-YYY-YYY-YYYY-YYYYYYYYY",
"jobTitle": "Principal Administrator",
"mail": "myuser@normalian.xxx",
"mobilePhone": null,
"officeLocation": null,
"preferredLanguage": null,
"surname": "normalian",
"userPrincipalName": "myuser@normalian.xxx"
}
Finally, assign the custom role to the user.
$ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p "YYYYYYY-YYY-YYY-YYYY-YYYYYYYYY" -d "XXXXXXX-XXX-XXX-XXXX-XXXXXXXXX" -s "/"
{
"id": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name/sqlRoleAssignments/787a36f9-a7f3-40c8-a860-99d4c6ae5fd9",
"name": "787a36f9-a7f3-40c8-a860-99d4c6ae5fd9",
"principalId": "b0bde25a-d588-410c-a16d-30fc001661c4",
"resourceGroup": "your-resource-group-name",
"roleDefinitionId": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name/sqlRoleDefinitions/26eeca83-1fd8-4f40-8a5e-b66dac7d3e08",
"scope": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name",
"type": "Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments"
}
In your source code, you can leverage the authentication info. Refer to the Programmatically access the Azure Cosmos DB keys article if you need more detail. This method allows you to access Cosmos DB without secret strings.
using Azure.Identity;
using Microsoft.Azure.Cosmos;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddSingleton<CosmosClient>(serviceProvider =>
{
return new CosmosClient(
accountEndpoint: builder.Configuration["AZURE_COSMOS_DB_NOSQL_ENDPOINT"]!,
tokenCredential: new DefaultAzureCredential()
);
});
var app = builder.Build();
I believe this scenario should be simple.
#2: You have Entra ID multi-tenants, and your subscription is on a different Entra ID tenant from the School or Work account's – this works well
Let’s try another scenario. I believe this one is very popular in large companies. This scenario uses a School or Work account on the production env Entra ID tenant, and uses that account as a guest user on the development env Entra ID tenant. It also assumes that your subscription is on the development env Entra ID tenant. The scenario diagram is as follows:
Here are two key points in this scenario:
How to assign custom roles to the guest user
How to use the development env EntraID tenant with DefaultAzureCredential
First, I tried to get the resource ID for the guest user as follows, but this did not work:
$ az ad user show --id "myuser01@normalian.xxx"
az : ERROR: Resource 'myuser01@normalian.xxx' does not exist or one of its queried reference-property objects are not present.
At line:1 char:1
+ az ad user show --id "myuser01@normalian.xxx"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (ERROR: Resource…re not present.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
You can figure out why this does not work by checking your development env Entra ID tenant: the guest user's user principal name contains "#EXT#@" and appears as follows:
You can get the resource ID there, or with the following command:
az ad user show --id "daichi_mycompany.com#EXT#@normalianxxxxx.onmicrosoft.com"
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
"businessPhones": [],
"displayName": "daichi",
"givenName": null,
"id": "4bf33ec0-dc63-4468-cc7d-edd4c9820fee",
"jobTitle": "CLOUD SOLUTION ARCHITECT",
"mail": "daichi@mycompany.com",
"mobilePhone": null,
"officeLocation": null,
"preferredLanguage": null,
"surname": null,
"userPrincipalName": "daichi_mycompany.com#EXT#@normalianxxxxx.onmicrosoft.com"
}
You can assign the custom role properly with this ID using the "az cosmosdb sql role assignment create" command.
Next, you need to configure your code to use the development env Entra ID tenant. Without any setting, the School or Work account uses the production env Entra ID tenant, and you will see an error along the lines of "organizational account belongs to the production Entra ID, but Azure subscription is under the development Entra ID" (this example is ASP.NET Core):
You can address this with the following code:
using Azure.Identity;
using Microsoft.Azure.Cosmos;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddSingleton<CosmosClient>(serviceProvider =>
{
// var connectionString = builder.Configuration.GetConnectionString("CosmosDB");
// return new CosmosClient(connectionString);
var option = new DefaultAzureCredentialOptions()
{
TenantId = "your-entraid-tenant-id",
};
return new CosmosClient(
accountEndpoint: builder.Configuration["AZURE_COSMOS_DB_NOSQL_ENDPOINT"]!,
tokenCredential: new DefaultAzureCredential(option)
);
});
var app = builder.Build();
By setting the tenant ID here, you can access the proper Entra ID tenant.
#3: Use a Microsoft account invited as a guest user on the Entra ID tenant – this works well
This case follows the same concept as the second use case. The diagram for the scenario is as follows:
Run the "az ad user show" command with the account name plus "#EXT#@" and your Entra ID tenant. It’s also fine to check directly on your Entra ID tenant. Then, run the "az cosmosdb sql role assignment create" command to assign the custom role to the user.
az ad user show --id "warito_test_hotmail.com#EXT#@normalianxxxxx.onmicrosoft.com"
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users/$entity",
"businessPhones": [],
"displayName": "Daichi",
"givenName": "Daichi",
"id": "94bfd636-b2e9-4d44-b895-8d51277e7abe",
"jobTitle": "Principal Normal",
"mail": null,
"mobilePhone": null,
"officeLocation": null,
"preferredLanguage": null,
"surname": "Isamin",
"userPrincipalName": "warito_test_hotmail.com#EXT#@normalianxxxxx.onmicrosoft.com"
}
$ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p "your-user-object-resourceid" -d "your-customrole-resourceid" -s "/"
{
"id": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name/sqlRoleAssignments/7adc585c-74d6-4979-a3ed-3d968de2d27e",
"name": "7adc585c-74d6-4979-a3ed-3d968de2d27e",
"principalId": "b0bde25a-d588-410c-a16d-30fc001961c4",
"resourceGroup": "your-resource-group-name",
"roleDefinitionId": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name/sqlRoleDefinitions/7adc585c-74d6-4979-a3ed-3d968de2d27e",
"scope": "/subscriptions/your-subscription-id/resourceGroups/your-resource-group-name/providers/Microsoft.DocumentDB/databaseAccounts/your-cosmosdb-name",
"type": "Microsoft.DocumentDB/databaseAccounts/sqlRoleAssignments"
}
#4: Create a Service Principal on the development env Entra ID tenant – this does not work
You might come up with the idea, "Why not just create a service principal in the development env Entra ID tenant?" Here is a diagram for this scenario. Unfortunately, this approach does not work.
Note that the Client ID and Object ID of the Service Principal are different. When you run commands to assign the custom role, it appears as follows:
$ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p "your-serviceprincipal-clientid" -d "your-customrole-resourceid" -s "/"
az : ERROR: (BadRequest) The provided principal ID ["your-serviceprincipal-clientid"] was not found in the AAD tenant(s) [b4301d50-52bf-43f0-bfaa-915234380b1a] which are associated with the customer's subscription.
At line:1 char:1
+ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (ERROR: (BadRequ…s subscription.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
ActivityId: b1b3a860-c244-11ee-9296-085bd676b6c2, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0
Code: BadRequest
Message: The provided principal ID ["your-serviceprincipal-clientid"] was not found in the AAD tenant(s) [b4301d50-52bf-43f0-bfaa-915234380b1a] which are associated with the customer's subscription.
ActivityId: b1b3a860-c244-11ee-9296-085bd676b6c2, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0
$ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p "your-serviceprincipal-objectid" -d "your-customrole-resourceid" -s "/"
az : ERROR: (BadRequest) The provided principal ID ["your-serviceprincipal-objectid"] was found to be of an unsupported type : [Application]
At line:1 char:1
+ az cosmosdb sql role assignment create -a $cosmosdbName -g $rgName -p …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (ERROR: (BadRequ…: [Application]:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
ActivityId: f24dccfd-c244-11ee-9712-085bd676b6c2, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0
Code: BadRequest
Message: The provided principal ID ["your-serviceprincipal-objectid"] was found to be of an unsupported type : [Application]
ActivityId: f24dccfd-c244-11ee-9712-085bd676b6c2, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0, Microsoft.Azure.Documents.Common/2.14.0
As evident from the above, the first attempt fails with an error stating that no such ID exists. This is expected, since the client ID was passed instead of the object ID. The second attempt, using the Object ID, fails with an error stating that assigning to an Application is unsupported. Take note of this.
#5: Use multiple accounts on Entra ID multi-tenants – this works
This scenario is popular when you manage multiple customers simultaneously. Let’s assume the following example for illustration.
Account for Project ①: myuser@normalian.xxx – School or Work account
Account for Project ②: personalxxxx@outlook.com – Microsoft account
We have to look back at the authentication priority of DefaultAzureCredential. Refer to the DefaultAzureCredential class for more details.
In this case, we switch accounts using the Azure CLI so that AzureCliCredential is used. First, run the following command to acquire authentication info for your development account:
az login
Next, refer to the following example source code, which configures the credential priority.
using Azure.Identity;
using Microsoft.Azure.Cosmos;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddSingleton<CosmosClient>(serviceProvider =>
{
// var connectionString = builder.Configuration.GetConnectionString("CosmosDB");
// return new CosmosClient(connectionString);
var option = new DefaultAzureCredentialOptions()
{
ExcludeEnvironmentCredential = true,
ExcludeWorkloadIdentityCredential = true,
ExcludeManagedIdentityCredential = true,
ExcludeSharedTokenCacheCredential = true,
ExcludeVisualStudioCredential = true,
ExcludeVisualStudioCodeCredential = true,
TenantId = "your-entraid-tenant-id",
};
return new CosmosClient(
accountEndpoint: builder.Configuration["AZURE_COSMOS_DB_NOSQL_ENDPOINT"]!,
tokenCredential: new DefaultAzureCredential(option)
);
});
Reference
DefaultAzureCredential Class
Use system-assigned managed identities to access Azure Cosmos DB data
How to manage Azure subscriptions with the Azure CLI