Tag Archives: microsoft
Windows Server upgrades
Morning All.
We are in the process of replacing dated Windows Server hardware and OS. I would like to find out what you think of in-place upgrades from 2012 to 2022, and the pros and cons of going with this.
An additional method is to run a restore from a backup on the new server and then do an in-place upgrade to 2022. Will it then be necessary to re-join the user machines to the domain?
Thanks in advance
Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio
This blog series has several versions, each covering different aspects and techniques. Check out the following resources:
– Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide: Detailed instructions for fine-tuning and integrating custom Phi-3 models with Prompt flow using a code-first approach.
– Code-first approach: End-to-end (E2E) sample on Phi-3CookBook: An end-to-end (E2E) sample on Phi-3CookBook, developed based on the “Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide” for a code-first approach.
– Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow in Azure AI Studio: Detailed instructions for fine-tuning and integrating custom Phi-3 models with Prompt Flow in Azure AI / ML Studio using a low-code approach.
Introduction
Phi-3 is a family of small language models (SLMs) developed by Microsoft that delivers exceptional performance and cost-effectiveness. In this tutorial, you will learn how to fine-tune the Phi-3 model and integrate the custom Phi-3 model with Prompt Flow in Azure AI Studio. By leveraging Azure AI / ML Studio, you will establish a workflow for deploying and utilizing custom AI models. This tutorial is divided into three series:
Series 1: Set up Azure resources and Prepare for fine-tuning
Create Azure Machine Learning Workspace: You start by setting up an Azure Machine Learning workspace, which serves as the hub for managing machine learning experiments and models.
Request GPU Quotas: Since Phi-3 model fine-tuning typically benefits from GPU acceleration, you request GPU quotas in your Azure subscription.
Add Role Assignment: You set up a User Assigned Managed Identity (UAI) and assign it necessary permissions (Contributor, Storage Blob Data Reader, AcrPull) to access resources like storage accounts and container registries.
Set up the Project: You create a local environment, set up a virtual environment, install required packages, and create a script (download_dataset.py) to download the dataset (ULTRACHAT_200k) required for fine-tuning.
Series 2: Fine-tune and Deploy the Phi-3 model in Azure ML Studio
Create Compute Cluster: In Azure ML Studio, you create a dedicated GPU compute cluster (Standard_NC24ads_A100_v4) for fine-tuning the Phi-3 model.
Fine-tune the Phi-3 Model: Using the Azure ML Studio interface, you fine-tune the Phi-3 model by specifying training and validation datasets, and configuring parameters like learning rate.
Deploy the Fine-tuned Model: Once fine-tuning is complete, you register the model, create an online endpoint, and deploy the model to make it accessible for real-time inference.
Series 3: Integrate the custom Phi-3 model with Prompt Flow in Azure AI Studio
Create Azure AI Studio Hub and Project: You create a Hub (similar to a resource group) and a Project within Azure AI Studio to manage your AI-related work.
Add a Custom Connection: To integrate the fine-tuned Phi-3 model with Prompt Flow, you create a custom connection in Azure AI Studio, specifying the endpoint and authentication key generated during model deployment in Azure ML Studio.
Create Prompt Flow: You create a new Prompt flow within the Azure AI Studio Project, configure it to use the custom connection, and design the flow to interact with the Phi-3 model for tasks like chat completion.
Note
Unlike the previous tutorial, Fine-Tune and Integrate Custom Phi-3 Models with Prompt Flow: Step-by-Step Guide, which involved running code locally, this tutorial focuses entirely on fine-tuning and integrating your model within the Azure AI / ML Studio environment.
Here is an overview of this tutorial.
Note
For more detailed information and to explore additional resources about Phi-3, please visit the Phi-3CookBook.
Prerequisites
Python
Azure subscription
Visual Studio Code
Table of Contents
Series 1: Set Up Azure resources and Prepare for fine-tuning
Create Azure Machine Learning workspace
Request GPU quotas in Azure subscription
Add role assignment
Set up the project
Prepare dataset for fine-tuning
Series 2: Fine-tune and Deploy the Phi-3 model in Azure ML Studio
Fine-tune the Phi-3 model
Deploy the fine-tuned Phi-3 model
Series 3: Integrate the custom Phi-3 model with Prompt flow in Azure AI Studio
Integrate the custom Phi-3 model with Prompt flow
Chat with your custom Phi-3 model
Congratulations!
Series 1: Set up Azure resources and Prepare for fine-tuning
Create Azure Machine Learning workspace
In this exercise, you will:
Create an Azure Machine Learning Workspace.
Create an Azure Machine Learning Workspace
Type azure machine learning in the search bar at the top of the portal page and select Azure Machine Learning from the options that appear.
Select + Create from the navigation menu.
Select New workspace from the navigation menu.
Perform the following tasks:
Select your Azure Subscription.
Select the Resource group to use (create a new one if needed).
Enter Workspace Name. It must be a unique value.
Select the Region you’d like to use.
Select the Storage account to use (create a new one if needed).
Select the Key vault to use (create a new one if needed).
Select the Application insights to use (create a new one if needed).
Select the Container registry to None.
Select Review + Create.
Select Create.
Request GPU Quotas in Azure Subscription
In this tutorial, you will learn how to fine-tune and deploy a Phi-3 model, using GPUs. For fine-tuning, you will use the Standard_NC24ads_A100_v4 GPU, which requires a quota request. For deployment, you will use the Standard_NC6s_v3 GPU, which also requires a quota request.
Note
Only Pay-As-You-Go subscriptions (the standard subscription type) are eligible for GPU allocation; benefit subscriptions are not currently supported.
For those using benefit subscriptions (such as Visual Studio Enterprise Subscription) or those looking to quickly test the fine-tuning and deployment process, this tutorial also provides guidance for fine-tuning with a minimal dataset using a CPU. However, it is important to note that fine-tuning results are significantly better when using a GPU with larger datasets.
In this exercise, you will:
Request GPU Quotas in your Azure Subscription
Request GPU Quotas in Azure Subscription
Visit Azure ML Studio.
Perform the following tasks to request Standard NCADSA100v4 Family quota:
Select Quota from the left side tab.
Select the Virtual machine family to use. For example, select Standard NCADSA100v4 Family Cluster Dedicated vCPUs, which includes the Standard_NC24ads_A100_v4 GPU.
Select the Request quota from the navigation menu.
Inside the Request quota page, enter the New cores limit you’d like to use. For example, 24.
Inside the Request quota page, select Submit to request the GPU quota.
Perform the following tasks to request Standard NCSv3 Family quota:
Select Quota from the left side tab.
Select the Virtual machine family to use. For example, select Standard NCSv3 Family Cluster Dedicated vCPUs, which includes the Standard_NC6s_v3 GPU.
Select the Request quota from the navigation menu.
Inside the Request quota page, enter the New cores limit you’d like to use. For example, 24.
Inside the Request quota page, select Submit to request the GPU quota.
Add role assignment
To fine-tune and deploy your models, you must first create a User Assigned Managed Identity (UAI) and assign it the appropriate permissions. This UAI will be used for authentication during deployment, so it is critical to grant it access to the storage accounts, container registry, and resource group.
In this exercise, you will:
Create User Assigned Managed Identity(UAI).
Add Contributor role assignment to Managed Identity.
Add Storage Blob Data Reader role assignment to Managed Identity.
Add AcrPull role assignment to Managed Identity.
Create User Assigned Managed Identity(UAI)
Type managed identities in the search bar at the top of the portal page and select Managed Identities from the options that appear.
Select + Create.
Perform the following tasks:
Select your Azure Subscription.
Select the Resource group to use (create a new one if needed).
Select the Region you’d like to use.
Enter the Name. It must be a unique value.
Select Review + create.
Select + Create.
Add Contributor role assignment to Managed Identity
Navigate to the Managed Identity resource that you created.
Select Azure role assignments from the left side tab.
Select +Add role assignment from the navigation menu.
Inside the Add role assignment page, perform the following tasks:
Select the Scope to Resource group.
Select your Azure Subscription.
Select the Resource group to use.
Select the Role to Contributor.
Select Save.
Add Storage Blob Data Reader role assignment to Managed Identity
Type azure storage accounts in the search bar at the top of the portal page and select Storage accounts from the options that appear.
Select the storage account that is associated with the Azure Machine Learning workspace. For example, finetunephistorage.
Perform the following tasks to navigate to Add role assignment page:
Navigate to the Azure Storage account that you created.
Select Access Control (IAM) from the left side tab.
Select + Add from the navigation menu.
Select Add role assignment from the navigation menu.
Inside the Add role assignment page, perform the following tasks:
Inside the Role page, type Storage Blob Data Reader in the search bar and select Storage Blob Data Reader from the options that appear.
Inside the Role page, select Next.
Inside the Members page, select Assign access to Managed identity.
Inside the Members page, select + Select members.
Inside Select managed identities page, select your Azure Subscription.
Inside Select managed identities page, select the Managed identity to User-assigned managed identity.
Inside Select managed identities page, select the Managed Identity that you created. For example, finetunephi-managedidentity.
Inside Select managed identities page, select Select.
Select Review + assign.
Add AcrPull role assignment to Managed Identity
Type container registries in the search bar at the top of the portal page and select Container registries from the options that appear.
Select the container registry that is associated with the Azure Machine Learning workspace. For example, finetunephicontainerregistries.
Perform the following tasks to navigate to Add role assignment page:
Select Access Control (IAM) from the left side tab.
Select + Add from the navigation menu.
Select Add role assignment from the navigation menu.
Inside the Add role assignment page, perform the following tasks:
Inside the Role page, type AcrPull in the search bar and select AcrPull from the options that appear.
Inside the Role page, select Next.
Inside the Members page, select Assign access to Managed identity.
Inside the Members page, select + Select members.
Inside Select managed identities page, select your Azure Subscription.
Inside Select managed identities page, select the Managed identity to User-assigned managed identity.
Inside Select managed identities page, select the Managed Identity that you created. For example, finetunephi-managedidentity.
Inside Select managed identities page, select Select.
Select Review + assign.
Set up the project
To download the datasets needed for fine-tuning, you will set up a local environment.
In this exercise, you will
Create a folder to work inside it.
Create a virtual environment.
Install the required packages.
Create a download_dataset.py file to download the dataset.
Create a folder to work inside it
Open a terminal window and type the following command to create a folder named finetune-phi in the default path.
mkdir finetune-phi
Type the following command inside your terminal to navigate to the finetune-phi folder you created.
cd finetune-phi
Create a virtual environment
Type the following command inside your terminal to create a virtual environment named .venv.
python -m venv .venv
Type the following command inside your terminal to activate the virtual environment.
.venv\Scripts\activate.bat
Note
If it worked, you should see (.venv) before the command prompt.
Install the required packages
Type the following commands inside your terminal to install the required packages.
pip install datasets==2.19.1
Create download_dataset.py
Note
Complete folder structure:
└── YourUserName
    └── finetune-phi
        └── download_dataset.py
Open Visual Studio Code.
Select File from the menu bar.
Select Open Folder.
Select the finetune-phi folder that you created, which is located at C:\Users\yourUserName\finetune-phi.
In the left pane of Visual Studio Code, right-click and select New File to create a new file named download_dataset.py.
Prepare dataset for fine-tuning
In this exercise, you will run the download_dataset.py file to download the ultrachat_200k dataset to your local environment. You will then use this dataset to fine-tune the Phi-3 model in Azure Machine Learning.
In this exercise, you will:
Add code to the download_dataset.py file to download the datasets.
Run the download_dataset.py file to download datasets to your local environment.
Download your dataset using download_dataset.py
Open the download_dataset.py file in Visual Studio Code.
Add the following code into download_dataset.py.
import json
import os
from datasets import load_dataset

def load_and_split_dataset(dataset_name, config_name, split_ratio):
    """
    Load and split a dataset.
    """
    # Load the dataset with the specified name, configuration, and split ratio
    dataset = load_dataset(dataset_name, config_name, split=split_ratio)
    print(f"Original dataset size: {len(dataset)}")
    # Split the dataset into train and test sets (80% train, 20% test)
    split_dataset = dataset.train_test_split(test_size=0.2)
    print(f"Train dataset size: {len(split_dataset['train'])}")
    print(f"Test dataset size: {len(split_dataset['test'])}")
    return split_dataset

def save_dataset_to_jsonl(dataset, filepath):
    """
    Save a dataset to a JSONL file.
    """
    # Create the directory if it does not exist
    os.makedirs(os.path.dirname(filepath), exist_ok=True)
    # Open the file in write mode
    with open(filepath, 'w', encoding='utf-8') as f:
        # Iterate over each record in the dataset
        for record in dataset:
            # Dump the record as a JSON object and write it to the file
            json.dump(record, f)
            # Write a newline character to separate records
            f.write('\n')
    print(f"Dataset saved to {filepath}")

def main():
    """
    Main function to load, split, and save the dataset.
    """
    # Load and split the ULTRACHAT_200k dataset with a specific configuration and split ratio
    dataset = load_and_split_dataset("HuggingFaceH4/ultrachat_200k", 'default', 'train_sft[:1%]')
    # Extract the train and test datasets from the split
    train_dataset = dataset['train']
    test_dataset = dataset['test']
    # Save the train dataset to a JSONL file
    save_dataset_to_jsonl(train_dataset, "data/train_data.jsonl")
    # Save the test dataset to a separate JSONL file
    save_dataset_to_jsonl(test_dataset, "data/test_data.jsonl")

if __name__ == "__main__":
    main()
Type the following command inside your terminal to run the script and download the dataset to your local environment.
python download_dataset.py
Verify that the datasets were saved successfully to your local finetune-phi/data directory.
Note
Note on dataset size and fine-tuning time
In this tutorial, you use only 1% of the dataset (split='train_sft[:1%]'). This significantly reduces the amount of data, speeding up both the upload and fine-tuning processes. You can adjust the percentage to find the right balance between training time and model performance. Using a smaller subset of the dataset reduces the time required for fine-tuning, making the process more manageable for a tutorial.
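For example, if you wanted to try 5% of the training split instead of 1% (an arbitrary illustration, not a recommendation from this tutorial), the only change would be the split string passed in main():

# Hypothetical variation: load 5% of the SFT training split instead of 1%
dataset = load_and_split_dataset("HuggingFaceH4/ultrachat_200k", 'default', 'train_sft[:5%]')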
Series 2: Fine-tune and Deploy the Phi-3 model in Azure ML Studio
Fine-tune the Phi-3 model
In this exercise, you will fine-tune the Phi-3 model in Azure Machine Learning Studio.
In this exercise, you will:
Create compute cluster for fine-tuning.
Fine-tune the Phi-3 model in Azure Machine Learning Studio.
Create compute cluster for fine-tuning
Visit Azure ML Studio.
Select Compute from the left side tab.
Select Compute clusters from the navigation menu.
Select + New.
Perform the following tasks:
Select the Region you’d like to use.
Select the Virtual machine tier to Dedicated.
Select the Virtual machine type to GPU.
Select the Virtual machine size filter to Select from all options.
Select the Virtual machine size to Standard_NC24ads_A100_v4.
Select Next.
Perform the following tasks:
Enter Compute name. It must be a unique value.
Select the Minimum number of nodes to 0.
Select the Maximum number of nodes to 1.
Select the Idle seconds before scale down to 120.
Select Create.
Fine-tune the Phi-3 model
Visit Azure ML Studio.
Select the Azure Machine Learning workspace that you created.
Perform the following tasks:
Select Model catalog from the left side tab.
Type phi-3-mini-4k in the search bar and select Phi-3-mini-4k-instruct from the options that appear.
Select Fine-tune from the navigation menu.
Perform the following tasks:
Select Select task type to Chat completion.
Select + Select data to upload Training data.
Select the Validation data upload type to Provide different validation data.
Select + Select data to upload Validation data.
Tip
You can select Advanced settings to customize configurations such as learning_rate and lr_scheduler_type to optimize the fine-tuning process according to your specific needs.
Select Finish.
In this exercise, you successfully fine-tuned the Phi-3 model using Azure Machine Learning. Please note that the fine-tuning process can take a considerable amount of time. After running the fine-tuning job, you need to wait for it to complete. You can monitor the status of the fine-tuning job by navigating to the Jobs tab on the left side of your Azure Machine Learning Workspace. In the next series, you will deploy the fine-tuned model and integrate it with Prompt flow.
Deploy the fine-tuned model
To integrate the fine-tuned Phi-3 model with Prompt flow, you need to deploy the model to make it accessible for real-time inference. This process involves registering the model, creating an online endpoint, and deploying the model.
In this exercise, you will:
Register the fine-tuned model in the Azure Machine Learning workspace.
Create an online endpoint.
Deploy the registered fine-tuned Phi-3 model.
Register the fine-tuned model
Visit Azure ML Studio.
Select the Azure Machine Learning workspace that you created.
Select Models from the left side tab.
Select + Register.
Select From a job output.
Select the job that you created.
Select Next.
Select Model type to MLflow.
Ensure that Job output is selected; it should be automatically selected.
Select Next.
Select Register.
You can view your registered model by navigating to the Models menu from the left side tab.
Deploy the fine-tuned model
Navigate to the Azure Machine Learning workspace that you created.
Select Endpoints from the left side tab.
Select Real-time endpoints from the navigation menu.
Select Create.
Select the registered model that you created.
Select Select.
Perform the following tasks:
Select Virtual machine to Standard_NC6s_v3.
Select the Instance count you’d like to use. For example, 1.
Select the Endpoint to New to create an endpoint.
Enter Endpoint name. It must be a unique value.
Enter Deployment name. It must be a unique value.
Select Deploy.
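If you prefer to script the registration and deployment steps instead of clicking through the Studio UI, a rough code-first sketch using the azure-ai-ml Python SDK is shown below. Treat it as an illustration only: the resource names, model version, and deployment name are placeholder assumptions, and this tutorial itself uses the Studio workflow described above.

# Rough code-first equivalent of the registration and deployment steps above (azure-ai-ml SDK).
# All names, versions, and sizes below are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Reference the model registered from the fine-tuning job output
model = ml_client.models.get(name="<registered-model-name>", version="1")

# Create an online endpoint with key-based authentication
endpoint = ManagedOnlineEndpoint(name="<endpoint-name>", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the registered model to the endpoint on the same GPU SKU used in the portal steps
deployment = ManagedOnlineDeployment(
    name="<deployment-name>",
    endpoint_name="<endpoint-name>",
    model=model,
    instance_type="Standard_NC6s_v3",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()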
Warning
To avoid additional charges to your account, make sure to delete the created endpoint in the Azure Machine Learning workspace.
Check deployment status in Azure Machine Learning Workspace
Navigate to Azure Machine Learning workspace that you created.
Select Endpoints from the left side tab.
Select the endpoint that you created.
On this page, you can manage the endpoints during the deployment process.
Note
Once the deployment is complete, ensure that Live traffic is set to 100%. If it is not, select Update traffic to adjust the traffic settings. Note that you cannot test the model if the traffic is set to 0%.
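Before moving on to Series 3, you can optionally smoke-test the endpoint outside of Prompt flow. The sketch below reuses the request shape from the Prompt flow integration code later in this tutorial; the endpoint URL and key are the values shown on the endpoint's Consume tab (covered in Series 3) and are placeholders here.

# Minimal smoke test for the deployed endpoint (URL and key are placeholders).
# The request body mirrors the one used in the Prompt flow integration below.
import requests

endpoint_url = "<REST-endpoint-from-the-Consume-tab>"
api_key = "<primary-key-from-the-Consume-tab>"

headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}
payload = {
    "input_data": {
        "input_string": [{"role": "user", "content": "Hello, who are you?"}],
        "parameters": {"temperature": 0.7, "max_new_tokens": 128},
    }
}

response = requests.post(endpoint_url, json=payload, headers=headers)
response.raise_for_status()
print(response.json())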
Series 3: Integrate the custom Phi-3 model with Prompt flow in Azure AI Studio
Integrate the custom Phi-3 model with Prompt flow
After successfully deploying your fine-tuned model, you can now integrate it with Prompt Flow to use your model in real-time applications, enabling a variety of interactive tasks with your custom Phi-3 model.
In this exercise, you will:
Create Azure AI Studio Hub.
Create Azure AI Studio Project.
Create Prompt flow.
Add a custom connection for the fine-tuned Phi-3 model.
Set up Prompt flow to chat with your custom Phi-3 model
Note
You can also integrate the model with Prompt flow using Azure ML Studio; the same integration process applies there.
Create Azure AI Studio Hub
You need to create a Hub before creating the Project. A Hub acts like a Resource Group, allowing you to organize and manage multiple Projects within Azure AI Studio.
Visit Azure AI Studio.
Select All hubs from the left side tab.
Select + New hub from the navigation menu.
Perform the following tasks:
Enter Hub name. It must be a unique value.
Select your Azure Subscription.
Select the Resource group to use (create a new one if needed).
Select the Location you’d like to use.
Select the Connect Azure AI Services to use (create a new one if needed).
Select Connect Azure AI Search to Skip connecting.
Select Next.
Create Azure AI Studio Project
In the Hub that you created, select All projects from the left side tab.
Select + New project from the navigation menu.
Enter Project name. It must be a unique value.
Select Create a project.
Add a custom connection for the fine-tuned Phi-3 model
To integrate your custom Phi-3 model with Prompt flow, you need to save the model’s endpoint and key in a custom connection. This setup ensures access to your custom Phi-3 model in Prompt flow.
Set the API key and endpoint URI of the fine-tuned Phi-3 model
Visit Azure ML Studio.
Navigate to the Azure Machine learning workspace that you created.
Select Endpoints from the left side tab.
Select endpoint that you created.
Select Consume from the navigation menu.
Copy your REST endpoint and Primary key.
Add the Custom Connection
Visit Azure AI Studio.
Navigate to the Azure AI Studio project that you created.
In the Project that you created, select Settings from the left side tab.
Select + New connection.
Select Custom keys from the navigation menu.
Perform the following tasks:
Select + Add key value pairs.
For the key name, enter endpoint and paste the endpoint you copied from Azure ML Studio into the value field.
Select + Add key value pairs again.
For the key name, enter key and paste the key you copied from Azure ML Studio into the value field.
After adding the keys, select is secret to prevent the key from being exposed.
Select Add connection.
Create Prompt flow
You have added a custom connection in Azure AI Studio. Now, let’s create a Prompt flow using the following steps. Then, you will connect this Prompt flow to the custom connection so that you can use the fine-tuned model within the Prompt flow.
Navigate to the Azure AI Studio project that you created.
Select Prompt flow from the left side tab.
Select + Create from the navigation menu.
Select Chat flow from the navigation menu.
Enter Folder name to use.
Select Create.
Set up Prompt flow to chat with your custom Phi-3 model
In the Prompt flow, perform the following tasks to rebuild the existing flow:
Select Raw file mode.
Delete all existing code in the flow.dag.yml file.
Add the following code to the flow.dag.yml file.
inputs:
  input_data:
    type: string
    default: "Who founded Microsoft?"
outputs:
  answer:
    type: string
    reference: ${integrate_with_promptflow.output}
nodes:
- name: integrate_with_promptflow
  type: python
  source:
    type: code
    path: integrate_with_promptflow.py
  inputs:
    input_data: ${inputs.input_data}
Select Save.
Add the following code to the integrate_with_promptflow.py file to use the custom Phi-3 model in Prompt flow.
import logging
import requests
from promptflow import tool
from promptflow.connections import CustomConnection

# Logging setup
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.DEBUG
)
logger = logging.getLogger(__name__)

def query_phi3_model(input_data: str, connection: CustomConnection) -> str:
    """
    Send a request to the Phi-3 model endpoint with the given input data using Custom Connection.
    """
    # "connection" is the name of the Custom Connection; "endpoint" and "key" are the keys in the Custom Connection
    endpoint_url = connection.endpoint
    api_key = connection.key
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}"
    }
    data = {
        "input_data": {
            "input_string": [
                {"role": "user", "content": input_data}
            ],
            "parameters": {
                "temperature": 0.7,
                "max_new_tokens": 128
            }
        }
    }
    try:
        response = requests.post(endpoint_url, json=data, headers=headers)
        response.raise_for_status()
        # Log the full JSON response
        logger.debug(f"Full JSON response: {response.json()}")
        result = response.json()["output"]
        logger.info("Successfully received response from Azure ML Endpoint.")
        return result
    except requests.exceptions.RequestException as e:
        logger.error(f"Error querying Azure ML Endpoint: {e}")
        raise

@tool
def my_python_tool(input_data: str, connection: CustomConnection) -> str:
    """
    Tool function to process input data and query the Phi-3 model.
    """
    return query_phi3_model(input_data, connection)
Note
For more detailed information on using Prompt flow in Azure AI Studio, you can refer to Prompt flow in Azure AI Studio.
Select Chat input, Chat output to enable chat with your model.
Now you are ready to chat with your custom Phi-3 model. In the next exercise, you will learn how to start Prompt flow and use it to chat with your fine-tuned Phi-3 model.
Note
The rebuilt flow should look like the image below:
Chat with your custom Phi-3 model
Now that you have fine-tuned and integrated your custom Phi-3 model with Prompt flow, you are ready to start interacting with it. This exercise will guide you through the process of setting up and initiating a chat with your model using Prompt flow. By following these steps, you will be able to fully utilize the capabilities of your fine-tuned Phi-3 model for various tasks and conversations.
Chat with your custom Phi-3 model using Prompt flow.
Start Prompt flow
Select Start compute sessions to start Prompt flow.
Select Validate and parse input to renew parameters.
Select the Value of the connection to the custom connection you created. For example, connection.
Chat with your custom Phi-3 model
Select Chat.
Here’s an example of the results: Now you can chat with your custom Phi-3 model. It is recommended to ask questions based on the data used for fine-tuning.
Congratulations!
You’ve completed this tutorial
Congratulations! You have successfully completed the tutorial on fine-tuning and integrating custom Phi-3 models with Prompt flow in Azure AI Studio. This tutorial introduced the process of fine-tuning, deploying, and integrating the custom Phi-3 model with Prompt flow using Azure ML Studio and Azure AI Studio.
Clean Up Azure Resources
Clean up your Azure resources to avoid additional charges to your account. Go to the Azure portal and delete the following resources:
The Azure Machine learning resource.
The Azure Machine learning model endpoint.
The Azure AI Studio Project resource.
The Azure AI Studio Prompt flow resource.
Next Steps
Documentation
microsoft/Phi-3CookBook
Azure/azure-llm-fine-tuning
Azure Machine Learning documentation
Azure AI Studio documentation
Prompt flow documentation
Training Content
Prompt flow tutorials
Introduction to Azure AI Studio
Reference
New Video Course: Generative AI for Beginners
It’s hard to deny how big a deal AI has been in recent years. It’s everywhere, from medicine to the entertainment industry. With all the new tech, AI is now accessible to everyone, not just experts.
It’s important for everyone to know something about generative AI. This is a very promising area with some big wins already.
If you want to work in different areas, it’s really important to understand Generative AI. With that in mind, the Microsoft Advocacy Team has put together a Generative AI for Beginners Course to teach you the basics and then show you how to use it to create practical applications.
In this article, we’ll dive a bit deeper into the course and what you can expect from it.
Generative AI for Beginners v.2 – What to Expect?
The Generative AI for Beginners course is a totally free online course that teaches you the basics, all the way to creating your own Generative AI applications.
The course has 18 lessons and lots of practical projects. The Microsoft Advocacy team put it together.
The course includes two types of lessons: Learn lessons, which break down the fundamental concepts of a topic, and Build lessons, which go over those concepts and then show examples made in Python and TypeScript.
Who is the course for?
The course is designed for anyone who wants to learn about generative AI, from beginners to experts. So, you don’t need any prior knowledge or programming expertise to take the course.
What will you learn?
By the end of the course, you will be able to:
Understand the fundamental concepts of Generative AI
Understand the Lifecycle of a Generative AI Application
Create Generative AI applications with Python and TypeScript
Understand Prompt Engineering
Learn about LLMs and GPTs
Use Vector Databases for Generative AI application creation
Use No Code/Low Code applications for Generative AI application creation
Learn about Agents
Fine-Tune LLMs
RAG (Retrieval Augmented Generation)
And much more!
Generative AI Learning Outcomes
The course's 18 lessons include:
Introduction to Generative AI and LLMs
Exploring and comparing different LLMs
Using Generative AI Responsibly
Understanding Prompt Engineering Fundamentals
Building Text Generation Applications
Building Search Apps with Vector Databases
Building Image Generation Applications
Building Low Code AI Applications
Integrating External Applications with Function Calling
Designing UX for AI Applications
Securing Your Generative AI Applications
The Generative AI Application Lifecycle
Retrieval Augmented Generation (RAG) and Vector Databases
Open Source Models and Hugging Face
What if I have any questions? How can I get to the bottom of them?
Don’t worry! You won’t be on this journey alone! The course includes a discussion forum on Discord, where you can ask questions, share knowledge, interact with the course creators and AI specialists from Microsoft, and other students.
Join the Azure AI Community Discord
I love it! I’m ready to get started. How do I go about doing it?
To start the course, just go to: Generative AI for Beginners and follow the instructions.
The course is available in several languages, including:
English
Chinese/Mandarin
Brazilian Portuguese
Japanese
If you prefer learning through videos, the Microsoft Advocacy team has put together a series of videos about the Generative AI for Beginners course, which you can watch below.
Conclusion
Generative AI is a hot topic that’s becoming more accessible to everyone. It’s becoming more and more important for professionals to learn about generative AI.
Take advantage of this opportunity and start the Generative AI for Beginners course now and become an expert in Generative AI!
Additional Resources
Just wanted to let you know about a couple more resources that might be useful for you. Just a heads-up: The resources below go hand-in-hand with the main Generative AI for Beginners course.
Free Course: Get started with Azure OpenAI Service
Free Course: Fundamentals of Generative AI
Collection: Generative AI for Beginners
We hope you enjoyed the article and that you’re interested in taking the Generative AI for Beginners course. If you have any questions, please don’t hesitate to ask!
See you next time!
Protection Label’s watermarks are editable?
Hi,
We are rolling out information protection labels using Purview.
I noticed the limited number of colours for watermarking content within the label settings. This was intriguing, as I don't understand why a colour hex value couldn't be supported, especially as the yellow is extremely hard to read; if I were to change it to a readable orange, that would be better.
I hope I am wrong, but are watermarks just editable text boxes within the header and footer of a Word doc? I can change the text and colour, save the document, and the label's watermark is tampered with just like that?
I was hoping that the label's watermarking wouldn't be editable in a Word doc and would be embedded. I understand the label can still provide technical controls, but tamper-proof watermarks would be useful.
Connect with Application Insights in ‘not Local auth mode’ using OpenTelemetry
TOC
What is it
How to use it
References
What is it
Azure Web Apps or Azure Function Apps frequently communicate with Application Insights to log various levels of data, which can later be reviewed and filtered in the Log Analytics Workspace.
Taking Python as an example, the official documentation mentions that the OpenCensus package will no longer be supported after 2024-09-30.
The article suggests OpenTelemetry as the latest alternative. In response to the growing cybersecurity awareness among many companies, many users have disabled the ‘Local Authentication’ feature in Application Insights to enhance security.
Therefore, this article will focus on how Web Apps/Function Apps can use Managed Identity to communicate with Application Insights and utilize the latest OpenTelemetry package to avoid the predicament of unsupported packages.
How to use it
According to Microsoft Entra authentication for Application Insights – Azure Monitor | Microsoft Learn, sample code using "OpenCensus" will reach end of support after 2024-09-30, which means this method is deprecated from now on (referred to as method 1 in the code snippets).
Currently, Microsoft officially suggests that users apply OpenTelemetry as the new method (referred to as method 2 in the code snippets below).
Step 1:
The Function App should use a system- or user-assigned managed identity to issue credentials for accessing AI (i.e., Application Insights); I chose a system-assigned managed identity in this sample.
In "Role Assignment", add the "Monitoring Metrics Publisher" role on the target AI resource; in this experiment, I assigned it at the parent RG (i.e., resource group) of that AI.
Step 2:
At the code level, I use the Function App Python V1 programming model, but V1 and V2 should achieve the same goal.
[requirements.txt]
# Method 2: opentelemetry
azure-monitor-opentelemetry
azure-identity
[<TriggerName>/__init__.py]
# Method 2: opentelemetry
from azure.monitor.opentelemetry import configure_azure_monitor
from logging import INFO, getLogger
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()
configure_azure_monitor(
    connection_string='InstrumentationKey=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX;IngestionEndpoint=https://XXXXXX-X.in.applicationinsights.azure.com/;LiveEndpoint=https://XXXXXX.livediagnostics.monitor.azure.com/;ApplicationId=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
    credential=credential
)

# Method 2: opentelemetry
logger2 = getLogger(__name__)
logger2.setLevel(INFO)
logger2.info("Method 2: opentelemetry")
logger2.handlers.clear()
The connection_string mentioned in the code can be obtained from the AI resource's overview page.
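As an alternative to hard-coding the value, you could read it from an app setting; the sketch below assumes a setting named APPLICATIONINSIGHTS_CONNECTION_STRING has been configured on the Function App.

# Sketch: read the connection string from an app setting instead of hard-coding it.
# Assumes APPLICATIONINSIGHTS_CONNECTION_STRING is configured on the Function App.
import os
from azure.identity import ManagedIdentityCredential
from azure.monitor.opentelemetry import configure_azure_monitor

configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
    credential=ManagedIdentityCredential(),
)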
Step 3:
After deploying to the Function App, we can use the online Code + Test feature from the Azure portal.
And the corresponding AI resource will get the log.
References:
azure-monitor-opentelemetry · PyPI
Where to complain against OneCard?
compromised, report this immediately from the OneCard app or via phone on 08093-158-918 or email us on email address removed for privacy reasons.
Microsoft 365 Copilot
I tried to use an AI command to convert from a Word document (information source) to PowerPoint; however, it didn't give any result. Has anyone had the same scenario, and can you share any suggestions?
Your network access has been interrupted – MS Access application on Remote Desktop Web Access
We are in the process of moving a Citrix-provided MS Access application to "Remote Desktop Web Access". After the application has been launched for about an hour, it bugs out with "Your network access has been interrupted…". We lose access to our SQL tables, but even more significantly, even local tables within the front end itself throw the same error when attempting to open them – that is, not even attaching to an external table.
Intune Discovered Apps
Hello All,
I am in the process of trying to use Graph to pull the apps installed on user devices from Intune, for a database being created in PowerApps so our IT admins can ensure that licenses are removed from a device after it is returned by the user.
Our process of licensing apps is less than streamlined, so if a user is licensed for something like Adobe or Navisworks, they are manually installing these themselves.
As a result, they aren't visible in detectedApps, only in the Discovered Apps list.
I have had a solid dig through the available resources and various discussion boards but haven't found a way that the Discovered Apps list can be pulled through Graph.
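For reference, the documented Graph resource in this area is detectedApps; a minimal sketch of querying it (placeholders throughout; it assumes an app registration granted DeviceManagementManagedDevices.Read.All and the msal package) looks like this:

# Minimal sketch of pulling detected apps through Microsoft Graph.
# Tenant, client ID, and secret are placeholders for an app registration with
# the DeviceManagementManagedDevices.Read.All application permission.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

headers = {"Authorization": f"Bearer {token['access_token']}"}
url = "https://graph.microsoft.com/v1.0/deviceManagement/detectedApps"

# Page through the results; each detected app can be expanded further via its managedDevices relationship
while url:
    resp = requests.get(url, headers=headers)
    resp.raise_for_status()
    body = resp.json()
    for detected_app in body.get("value", []):
        print(detected_app.get("displayName"), detected_app.get("deviceCount"))
    url = body.get("@odata.nextLink")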
Has anyone found a way to get this data out or is this a feature yet to be made available?
Thanks in advance.
Requesting Access to TSP Marketing SharePoint
Hello,
I believe the way SharePoint functions has changed, and we can no longer access a page to request access. What is the correct process to get access to the Marketing SharePoint for TSPs?
Any assistance would be greatly appreciated.
Removing linkedin learning video after survey completion
Hi there-
We are wondering if it is possible to remove the LinkedIn Learning videos that appear at the bottom of the page after survey completion.
Please see screenshot below-
System Center Orchestrator – OS Upgrade
Hello community,
I am asking for your help to see how we can do an OS upgrade of our System Center Orchestrator infrastructure.
We have our System Center Orchestrator 2019 infrastructure on Windows Server 2016. I would like to upgrade to Windows Server 2022.
What is the best way to do this?
I would like to use new servers and not do an in-place upgrade.
What would be the procedure?
Is SCORCH 2019 supported on 2022? I don’t see it in the documentation.
Thank you very much!!
Teams forwarding calls coming from specific group
Hi friends,
I would like to forward calls coming from a specific group of users to voicemail in Teams, and not calls from every caller. They are internal users, and the group would be a Microsoft 365 group.
Is it possible?
Thanks
Dev Channel update to 128.0.2708.0 is live.
Hello Insiders! We released 128.0.2708.0 to the Dev channel! This includes numerous fixes. For more details on the changes, check out the highlights below.
Seamless SVG copy-paste on the web – Windows Blogs
Microsoft Photos introduces even more AI editing capabilities with Microsoft Designer – Windows Blogs
Added Features:
Added a toast view for the Super Drag and Drop feature to solve an over-triggering problem.
Implemented logic to terminate a browser process stuck in shutdown.
Improved Reliability:
Resolved an issue causing the browser to crash when toggling dark mode while using the picture-in-picture feature.
Fixed a browser narrator crash after the selection of a suggestion.
Resolved a browser crash related to auto grouping on Android.
Resolved a crash occurring after navigating specific webpages in the browser.
Resolved a crash observed during browser shutdown.
Resolved an issue causing the browser to crash when clicking on ‘Settings & More’ button on Xbox.
Changed Behavior:
Resolved an issue where, upon invoking the ‘split screen’ button, keyboard focus did not move inside the ‘split screen’ section, preventing navigation to other screens using the F6 key.
Fixed an issue where keyboard input prevented editing (adding squares, text, etc.) to the captured screenshot.
Fixed an issue where the Vertical tabs toggle button was not showing properly in Tab actions menu.
Resolved an issue where empty group names were inconsistently labeled in the Recently Closed Hub.
Addressed an issue where pressing the ‘Enter’ key did not work when renaming the first half of a favorite’s name.
Android:
Fixed an issue where the icons and names for top sites were not displayed correctly on Android.
Resolved a problem where the browser would open duplicate tabs when clicking links from other apps in Shared Device Mode.
iOS:
Addressed an issue where the keyboard remained active in a pop-up state and could not be deactivated when editing top sites.
Fixed an issue where the edit bar would unexpectedly move up and down while performing a long press on top sites to activate edit mode for renaming.
See an issue that you think might be a bug? Remember to send that directly through the in-app feedback by heading to the … menu > Help and feedback > Send feedback and include diagnostics so the team can investigate.
Thanks again for sending us feedback and helping us improve our Insider builds.
~Gouri
Right click send to email not working
When I select a file, right-click, and choose send to email, nothing happens. I reinstalled Office; no change. The operating system is Windows 10.
SharePoint Document Library Column Not Copying to Another Document Library
Hi,
I have a SharePoint document library and I need to copy the contents to another document library with identical columns. When I use the Copy to option, all the other column data copies except for one Single Line of Text column.
The columns and the column types are identical in both libraries.
How do I get the other column contents to copy to the new library? I have included photos.
Thank you!
Xbox profile picture
I recently had a profile picture of a girl in a dress lying down with nothing showing on her, but I still managed to have my account suspended. Why is that? And what can I not have in my profile picture so this doesn't happen again?
How to prevent all Windows 11 auto-reboots and auto-shutdowns
I'm running Windows 11 on my HP laptop. I have gone into Settings and disabled everything I can find to prevent automatic updates, automatic reboots, and automatic shutdowns. Nonetheless, every 5-10 days, my Windows 11 laptop auto-shuts down. I'm guessing that it is part of some sort of forced security update procedure.
Whatever this is due to, it is totally and completely unacceptable to me.
I need the machine to be up 24×7, because I ssh into it remotely at various random and unpredictable times, and that of course is prevented when the machine is rebooting or shut down.
Is there any way for me to totally prevent these automatic reboots and shutdowns?
PLEASE!!!!!!!!!!
Yes, I understand that automatic security updates are meant for my “protection”. However, I have been working in IT and computer security for decades, and I know much, much better how to protect my own machine than some generic software developed at Microsoft that is meant for millions of non-technical, non-IT-aware people to use.
What Microsoft *SHOULD* do is give us the *option* (i.e., only if we select this option) for the following scenario:
Whenever there is a pending software or security upgrade, we are notified and given a way to either (A) accept it at the moment, or (B) postpone its installation until **WE** decide that **WE** are ready for the upgrade to be installed … and under this proposed option, Microsoft would NEVER (!!!) force any upgrades upon us.
To be clear, I mean this to be an OPTION. In other words, if we don’t select this option, then Microsoft’s standard “force upgrades upon users” policy will continue.
MacOS gives us the option to deal with security upgrades in *exactly* this proposed manner. Most Linux distros give us the option to deal with security upgrades in *exactly* this proposed manner.
Why does Microsoft seem to be refusing to offer its own users this same option?
Or am I missing something about the Windows 11 software that will indeed allow me to optionally set up my computer to behave in this exact, proposed manner regarding auto-reboot, auto-shutdown, and security upgrades? I hope that I *am* missing something!
I am totally willing to accept all risks that Microsoft thinks (even though they would be wrong in my particular case) that I am facing in taking full, personal responsibility for my own machine’s security.
Thank you for any suggestions as to how I can get my Windows 11 machine to never automatically reboot and never automatically shut down.
And again to be clear: by “never”, I don’t mean “infrequently” or “only now and then” or anything like that. I mean “never” to mean *NEVER* !
New Blog | Microsoft Power BI and Microsoft Defender for Cloud
Introduction
As cloud environments grow more complex and threats increase, organizations need robust tools to monitor, analyze, and respond to security issues effectively. Microsoft Defender for Cloud (MDC) offers robust security management, but to unlock its full potential, organizations need powerful visualization and analysis tools.
While Azure Workbooks provide valuable visualizations for MDC data, integrating Microsoft Power BI offers an enhanced approach to data analysis and visualization. Power BI’s advanced features, such as customizable dashboards, interactive elements, and seamless integration with various data sources, make it ideal for enhancing the value derived from MDC data.
This article is the first in a series of correlated blogs that will explore scenarios and applicability in depth. As an introduction to the series, this article provides the foundation on how to start leveraging Power BI to report and dashboard MDC insights.
Benefits of Using Power BI with Microsoft Defender for Cloud
Advanced Data Visualization: Power BI provides a wide array of visualization options, allowing security teams to create highly customized and visually rich dashboards that effectively communicate insights to different stakeholders.
Enhanced Data Analysis: Power BI’s robust analytical tools, including DAX (Data Analysis Expressions) and built-in AI capabilities, enable security teams to perform complex data analysis and uncover deeper insights.
Seamless Integration: Power BI integrates with various data sources, including Azure Resource Graph, allowing you to consolidate data from multiple platforms into a single, unified view.
Collaborative Features: Power BI facilitates collaboration by enabling teams to share dashboards and reports easily, with role-based access controls ensuring data security.
Ease of Use: Power BI’s intuitive drag-and-drop functionality makes it simple for users to create and customize visualizations without extensive technical knowledge, making it accessible to users of all skill levels.
Step-by-Step Guide to Integrating MDC Data into Power BI
To integrate MDC data into Power BI, follow these steps:
Step 1: Set Up Power BI and Azure Resource Graph
Install Power BI Desktop: Download Power BI Desktop.
Enable Azure Resource Graph: Ensure that you have the necessary permissions to access Azure Resource Graph.
Step 2: Connect Power BI to Azure Resource Graph
Open Power BI Desktop: Launch Power BI Desktop on your computer.
Get Data: Click on Get Data on the Home tab.
Select Azure Resource Graph: In the Get Data window, search for Azure Resource Graph and select it.
Connect: Click Connect and sign in with your Azure credentials.
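If you want to preview the kind of data this connector surfaces before building any visuals, the sketch below runs a sample query against Azure Resource Graph with the azure-mgmt-resourcegraph Python SDK. Treat it as an illustration: the subscription ID is a placeholder, and the query (Defender for Cloud assessments from the securityresources table) is just one example of the MDC data you might bring into Power BI.

# Sketch: preview Defender for Cloud assessment data from Azure Resource Graph,
# the same source the Power BI connector queries. Subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest, QueryRequestOptions

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<subscription-id>"],
    query="""
    securityresources
    | where type == 'microsoft.security/assessments'
    | extend statusCode = tostring(properties.status.code), displayName = tostring(properties.displayName)
    | project name, displayName, statusCode, resourceGroup, subscriptionId
    | limit 20
    """,
    options=QueryRequestOptions(result_format="objectArray"),
)

result = client.resources(query)
for row in result.data:
    print(row)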
Read the full post here: Microsoft Power BI and Microsoft Defender for Cloud
By Giulio Astori
date entry
How do I enter dates prior to 1900 that I can sort?