Category Archives: Microsoft
When and how should I perform a QuickBooks payroll tax table update?
Hello community, I’m struggling with updating the payroll tax table in QuickBooks. Despite several attempts, the update fails to complete. Any tips or reliable sources for manual updates would be greatly appreciated. Thank you for your assistance!
Step-by-step: OneDrive Sync Health
0. Overview
This blog shows a step-by-step guide to getting OneDrive Sync Health information using Microsoft Graph Data Connect for SharePoint. This includes detailed instructions on how to extract OneDrive device sync information and use that to run analytics for your tenant.
If you follow these steps, you will have a Power BI dashboard like the one shown below, which includes total devices per OneDrive version, number of devices by Backup Folder enabled and total devices by date last updated. You can also use the many other properties available in the OneDrive Sync Health dataset.
To get there, you can split the process into 3 distinct parts:
Set up your tenant for Microsoft Graph Data Connect, configuring its prerequisites.
Configure and run a pipeline to get OneDrive Sync Health using Azure Synapse.
Use Power BI to read the data about OneDrive Sync Health and show it in a dashboard.
1. Setting up Microsoft Graph Data Connect
The first step in the process is to enable Microsoft Graph Data Connect and configure its prerequisites. You will need to do a few things to make sure everything is ready to run the pipeline:
Enable Data Connect in your Microsoft 365 Admin Center. This is where your Tenant Admin will check the boxes to enable the Data Connect and enable the use of SharePoint datasets.
Create an application identity to run your pipelines. This is an application created in Azure Active Directory which will be granted the right permissions to run your pipelines and access your Azure Storage account.
Create an Azure Resource Group for all the resources we will use for Data Connect, like the Azure Storage account and the Azure Synapse workspace.
Create an Azure Storage account. This is the place in your Azure account where you will store the data coming from your pipeline. This is also the place where Power BI will read the data for creating the dashboards.
Create a container and folder in your Storage Account. This is the location where the data will go.
Grant the application identity the required access to the Storage account. This makes sure that the application identity has permission to write to the storage.
Add your Microsoft Graph Data Connect application in the Azure Portal. Your Microsoft Graph Data Connect application needs to be associated with a subscription, resource group, storage account, application identity and datasets.
Finally, your Global Administrator needs to use Microsoft Admin Center to approve Microsoft Graph Data Connect application access.
Let us look at each one of these.
1a. Enable Microsoft Graph Data Connect
The first preparation step is to go into Microsoft 365 Admin Center and enable Microsoft Graph Data Connect.
Navigate to Microsoft 365 Admin Center at https://admin.microsoft.com/ and make sure you are signed in as a Global Administrator.
Select the option to Show all options on the left.
Click on Settings, then on Org settings.
Select the settings for Microsoft Graph Data Connect.
Check the box to turn Data Connect on.
Make sure to also check the box to enable access to the SharePoint and OneDrive datasets.
IMPORTANT: You must wait 48 hours for onboarding your tenant and another 48 hours for the initial data collection and curation. For example, if you check the boxes on August 1st, you will be able to run your first data pull on August 5th, targeting the data for August 3rd. You can continue with the configuration, but do not trigger your pipeline before that.
1b. Create the Application Identity
You will need to create an Application in Microsoft Entra ID (formerly Azure Active Directory) and set up an authentication mechanism, like a certificate or a secret. For these application configuration tasks, you will need the role of Application Administrator. You will use this Application later when you configure the pipeline.
IMPORTANT: Creating the application must be done by a different user than the user with the Global Administrator role who completes step 1h.
Here are the steps:
Navigate to the Azure Portal at https://portal.azure.com
Find Microsoft Entra ID service in the list of Azure services.
Select the option for App Registration on the list on the left.
Click the link to New Registration to create a new one.
Enter an app name, select “this organizational directory only” and click on the Register button.
On the resulting screen, select the link to Add a certificate or secret.
Select the “Client secrets” tab and click on the option for New client secret.
Enter a description, select an expiration period, and click the Add button.
Copy the secret value (there is a copy button next to it). We will need that secret value later.
Secret values can only be viewed immediately after creation. Save the secret before leaving the page.
Click on the Overview link on the left to view the details about your app registration.
Make sure to copy the application (client) ID. We will need that value later as well.
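If you prefer scripting, the same app registration can also be created with the Az PowerShell module. The following is only a sketch under that assumption; the display name is a placeholder and cmdlet parameters may vary slightly with your Az.Resources version:
# Requires the Az.Resources module and an authenticated session
Connect-AzAccount
# Create the single-tenant app registration (display name is a placeholder)
$app = New-AzADApplication -DisplayName "MGDC-SyncHealth-App" -SignInAudience "AzureADMyOrg"
# Add a client secret and capture the value immediately; it cannot be retrieved later
$secret = New-AzADAppCredential -ObjectId $app.Id
$app.AppId            # application (client) ID, needed later
$secret.SecretText    # client secret value, needed later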
1c. Create the Azure Resource Group
You will need to create an Azure Resource Group for all the resources we will use for Data Connect, including the Storage Account and Synapse Workspace.
Here are the steps.
Navigate to the Azure Portal at https://portal.azure.com
Find the Resource Groups in the list of Azure services.
Click on the Create link to create a new resource group.
Select a name and a region.
IMPORTANT: You must use a region that matches the region of your Microsoft 365 tenant.
Click on Review + Create, make sure you have everything correctly entered and click Create.
1d. Create the Azure Storage Account
You will need to create an Azure Storage Account to store the data coming from SharePoint. This should be an Azure Data Lake Gen2 storage account. You should also authorize the Application you created to write to this storage account. Here are the steps.
Navigate to the Azure Portal at https://portal.azure.com
Find the Storage accounts service in the list of Azure services.
Click on the Create link to create a new storage account.
Select a subscription, resource group (created in step 1c), account name, region, and type (standard is fine).
Make sure your new account name contains only lowercase letters and numbers.
IMPORTANT: You must use a region that matches the region of your Microsoft 365 tenant.
Click on the Advanced tab. Under Data Lake Storage Gen2 check the box to Enable hierarchical namespace.
Click on Review, make sure you have everything correctly entered and click Create.
Wait until the deployment is completed and click on Go to resource.
Click on the Access keys option on the left to see the keys to access the storage account.
Click on Show for one of the two keys and use the copy icon whenever you need the key.
1e. Grant access to the Storage Account
You will need to grant the Application Id the required access to the Storage Account. Here are those steps:
In the Storage account you just created, click the Access Control (IAM) option on the left.
Click on the link to Add on the horizontal bar and click on the option to Add role assignment.
In the Role tab, select the built-in Storage Blob Data Contributor role and click on the Next button.
In the Members tab, select user, group or service principal and click on the Select members link.
In the Select members window, click on the application id you created in item 1b and click the Select button.
Click on the Review + assign button, review the role assignment, and then click Review + assign again.
You’ve now completed the role assignment.
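As an alternative to the portal steps above, here is a hedged PowerShell sketch of the same role assignment. The resource group, storage account name and application ID are placeholders; note that an app registration created through PowerShell does not automatically get a service principal, so the sketch creates one first:
# Create the service principal for the app if it does not exist yet
New-AzADServicePrincipal -ApplicationId "<application-client-id-from-step-1b>"
# Assign Storage Blob Data Contributor on the storage account scope
$storage = Get-AzStorageAccount -ResourceGroupName "mgdc-rg" -Name "mgdcstorageacct"
New-AzRoleAssignment -ApplicationId "<application-client-id-from-step-1b>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $storage.Id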
1f. Create a container and folder in your Storage Account
The next step is to create a container and folder for the data you will bring from Data Connect. Follow these steps:
In the Storage account you just created, click the Containers option on the left.
You will see only the default $logs container in the list. Click on the + Container link on the horizontal bar, give the new container a name, and click Create.
Click on the newly created container and in that container, click on + Add Directory to create a new folder for your dataset. For instance, you could call it “synchealth”.
With that, you have a location to later store your data with the path as container/folder.
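If you would rather script this step, the following PowerShell sketch creates the container and folder; the account, container and folder names are placeholders:
# Get the storage account context (hierarchical namespace is enabled on this account)
$ctx = (Get-AzStorageAccount -ResourceGroupName "mgdc-rg" -Name "mgdcstorageacct").Context
# Create the container
New-AzStorageContainer -Name "mgdcdata" -Context $ctx
# Create the folder (directory) inside the container
New-AzDataLakeGen2Item -Context $ctx -FileSystemName "mgdcdata" -Path "synchealth" -Directory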
1g. Add your Microsoft Graph Data Connect application
Your Microsoft Graph Data Connect application needs to be associated with a subscription, resource group, storage account, application identity and datasets. This will define everything that the app will need to run your pipelines.
Search for the “Microsoft Graph Data Connect” service in the Azure Portal at https://portal.azure.com or navigate directly to https://aka.ms/MGDCinAzure to get started.
Select the option to Add a new application.
Under Application ID, select the one from step 1b and give it a description.
Select Single-Tenant for Publish Type.
Select Azure Synapse for Compute Type.
Select Copy Activity for Activity Type.
Fill the form with the correct Subscription and Resource Group (from step 1c).
Under Destination Type, select Azure Storage Account.
Under Storage Account, select the Storage Account we created in step 1d.
Under Storage Account Uri, select the option with “dfs” in the name.
Click on “Next: Datasets”.
In the dataset page, under Dataset, select BasicDataSet_v0.OneDriveSyncHealth_v0.
Under Columns, select all.
Click on “Review + Create” and click “Create” to finish.
You will now see the app in the list for Graph Data Connect.
1h. Approve the Microsoft Graph Data Connect Application
Your last step in this section is to have a Global Administrator approve the Microsoft Graph Data Connect application.
Make sure this step is performed by a Global administrator who is not the same user that created the application.
Navigate to Microsoft 365 Admin Center at https://admin.microsoft.com/
Select the option to Show all options on the left.
Click on Settings, then on Org settings.
Click on the tab for Security & privacy.
Select the option for settings for Microsoft Graph Data Connect applications.
You will see the app you defined with the status Pending Authorization.
Double-click the app name to start the authorization.
Follow the wizard to review the app data, the datasets, the columns and the destination, clicking Next after each screen.
In the last screen, click on Approve to approve the app.
Note: The Global administrator that approves the application cannot be the same user that created the application. If it is, the tool will say “app approver and developer cannot be the same user.”
2. Run a Pipeline
Next, you will configure a pipeline in either Azure Data Factory or Azure Synapse. We will use Synapse here. You will trigger this pipeline to pull SharePoint data from Microsoft 365 and drop it into the Azure Storage account.
Here is what you will need to do:
Create a new Azure Synapse workspace. This is the place where you create and run your pipelines.
Use the Copy Data tool in Azure Synapse. This tool will help you with the task.
Create a new source to get the OneDrive Sync Health dataset from Microsoft 365.
Create a new destination with a storage folder in Azure Storage to receive the data.
Deploy and trigger the pipeline.
Monitor the pipeline to make sure it has finished running and that the data is available.
Let us look at each one of these.
2a. Create the Azure Synapse workspace
To get started, you need to create an Azure Synapse workspace, if you do not already have one.
Here are the steps:
Navigate to the Azure Portal at https://portal.azure.com
Find the Azure Synapse Analytics service in the list of Azure services.
Click on the Create link to create the new Azure Synapse workspace.
Enter the subscription, resource group (created in step 1c), the new workspace name, region, storage account name (created in step 1d) and new file system name.
IMPORTANT: You must use a region that matches the region of your Microsoft 365 tenant.
Click on the Security tab. Select the option to Use only Microsoft Entra ID authentication (formerly AAD authentication). Click on the Review + create button.
Click Create. Wait until the deployment is completed and click on Go to resource.
Note: After you create an Azure Synapse workspace, you might run into an error that says, “The Azure Synapse resource provider (Microsoft Synapse) needs to be registered with the selected subscription”. You might also run into a validation error later with a message like “Customer subscription GUID needs to be registered with Microsoft.Sql resource provider”. These providers might not be registered with your subscription by default. If you run into these issues, see this doc on how to register a new resource provider and make sure your subscription is registered with both Microsoft.Synapse and Microsoft.Sql resource providers. Thanks to Carl Grzywacz for pointing these out.
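If you hit either of those errors, the registration can also be done from PowerShell. This is a minimal sketch and assumes you are connected to the affected subscription:
# Register the resource providers that Azure Synapse depends on
Register-AzResourceProvider -ProviderNamespace "Microsoft.Synapse"
Register-AzResourceProvider -ProviderNamespace "Microsoft.Sql"
# Registration can take a few minutes; check the state
Get-AzResourceProvider -ProviderNamespace "Microsoft.Synapse" |
    Select-Object ProviderNamespace, RegistrationState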
2b. Use the Copy Data tool in Azure Synapse
Our Azure Synapse pipeline will use a data source (Microsoft 365) and a data sink (Azure Storage). Let us start by configuring the data source.
Follow the steps:
Navigate to the Azure Portal at https://portal.azure.com
Find the Azure Synapse Analytics service in the list of Azure services.
Click on the name of your Azure Synapse workspace (created in item 2a).
Click on the Open link inside the big box for Synapse Studio.
In the Synapse Studio, select the fourth icon on the left to go to the Integrate page.
Click on the big + icon and select the option for the Copy Data tool to start.
Keep options for the Built-in copy task and Run once now. Then click the Next button.
You will then have to define the source and destination.
2c. Define the data source
The first step is to define your data source, which will be Microsoft Graph Data Connect (Data Connect source).
Here are the steps you should take:
On the Source data store page, click on the New connection option.
On the New connection page, enter “365” on the search box and select Microsoft 365 (Office 365).
Click the Continue button to reach the page to define the details of the new connection.
Enter the Name and Description for the new connection
Also enter the Service principal ID and the Service principal key. These are the application ID and the secret that we captured in step 1b.
Click on the Test connection option on the bottom right to make sure the credentials are working.
Then click on the Create button to create the new connection and go back to the Source data store page.
This time around, the connection will be filled in and the list of datasets will be available.
Check the box next to BasicDataSet_v0.OneDriveSyncHealth_v0 and click on the Next button.
In the Apply Filter page, click on Next.
Click on the Next button to finish the source section and move to the destination section.
2d. Define the data destination
Next, you need to point to the location where the data will go, which is an Azure Storage account.
Here are the steps:
On the Destination data store page, click on the New connection option.
Select the option for Azure Data Lake Storage Gen 2
Click the Continue button to reach the page to define the details of the new connection.
Enter the Name and Description for the new connection.
Change the Authentication type to Service Principal, add the Storage account name from the drop-down list.
Enter the Service principal ID and the Service principal key. Again, these are the application id and the secret that we captured in step 1b.
Click on the Test connection option on the bottom right to make sure the credentials are working.
Then click on the Create button to create the new connection and go back to the Destination data store page.
This time around, the connection will be filled in and a few options will be available.
Enter a Folder path. This is the container and folder you created in step 1f and you can browse to it.
Click Next to reach the Review and finish page of the Copy Data tool.
2e. Deploy and trigger the pipeline
Now we will deploy the pipeline and run it.
Follow the steps:
In the Review and finish page, click the Edit link on the top right to enter a name and description for your pipeline. Then click Save.
Click on the Next button to start the deployment.
Once it is all finished, click on the Monitor button to see how the pipeline is running.
2f. Monitor the pipeline
After the data copy tool finishes, you can monitor the running pipeline. You will land in the main “pipeline runs” page, with a list of pipelines.
In your case, there should be only one:
If you click on the Pipeline name, you will see the details for each activity in the pipeline. In this case, you should see only one activity in the pipeline, which is the copy of the dataset.
Wait until the status for the activity and pipeline reaches Succeeded. This could take a few minutes, depending on the number of devices in your environment.
Once the pipeline has finished running, the data will be in Azure Storage, in the container and folder that you have specified. It shows as one or more JSON files, plus a metadata folder with information about the request.
3. Create a Power BI Dashboard
The last step is to use the data you just got to build a Power BI dashboard.
You will need to:
Create a new Power BI file.
Query the data from the Azure Storage account.
Create your dashboard.
3a. Create a new Power BI file
Now that you have the data in Azure Storage, you can bring it into Power BI to build reports and dashboards.
Here is how to get started:
You will start by opening the Power BI desktop application.
If you don’t have the application, download it from https://powerbi.microsoft.com/en-us/downloads/
3b. Query the Data
Now you can bring the data into Power BI, directly from Azure.
In your new Power BI report, in the Home tab, click on the Get Data dropdown menu and click on More.
In the list of sources, select Azure, click on Azure Data Lake Storage Gen2 and click on Connect.
Enter the URL with the full path to the ADLS Gen2 data, with container and folder, in the following format:
https://accountname.dfs.core.windows.net/container/folder
This is the Storage Account name that you created in step 1d, with the container and folder from step 1f.
Click OK.
In the next screen you need to authenticate to the storage account.
Select the option to provide an account key, which was shown in step 1d.
Click Connect.
In the following screen you will see the list of JSON files coming from the storage account.
Note that you get a few JSON files, but keep in mind that two of them are just metadata.
Click on the Transform Data button to load all the files into a Power Query.
The Power Query Editor window will show, with the files listed.
First, change the query Name from Query1 to a more meaningful name.
Next, scroll to the left until you find the Folder Path column.
You should see one of the paths that includes a metadata folder. We want to filter that out.
On the row with the Folder Path that includes the word metadata, right-click that cell, select the Text Filters option and then the Does Not Contain option. By default this filters out only that specific path value, so edit the formula bar to filter any path containing the word metadata:
= Table.SelectRows(Source, each not Text.Contains([Folder Path], "metadata"))
Now that you removed the rows for metadata, scroll all the way to the right to find the Content column.
On the Content column, click on the icon with two down arrows called Combine Files (see arrow below)
At this point Power BI does a whole lot to the data, including loading the JSON file, renaming the columns, and expanding the columns with structures (like Storage Metrics and Owner).
You can now just click on the Close and Apply button to close the Query Editor.
3c. Create the Power BI Dashboard
Now that the data is available in Power BI, let’s create some dashboards.
After you close the Query Editor and go back to the main Power BI window, you will have all the OneDrive Sync Health data available to you to create reports and dashboards. The fields will be listed under the Fields pane on the right.
The schema for this dataset is available publicly at https://github.com/microsoftgraph/dataconnect-solutions/blob/main/Datasets/data-connect-dataset-onedrivesynchealth.md. This shows the data type and a brief description of each column.
You can now drag visualizations and fields to the main canvas. For instance, you can just double-click on the stacked bar chart in the visualizations and resize the chart to span the entire page. Then drag the SyncAppVersion field to the Y-axis and the OneDriveDeviceId to the X-axis. That’s it!
4. Conclusion
You have triggered your first pipeline and populated a dashboard. Now there is a lot more that you could do.
Here are a few suggestions:
Investigate the many datasets in Data Connect, which you can easily use in your Synapse workspace.
Trigger your pipeline on a schedule, to always have fresh data in your storage account.
Extend your pipeline to do more, like join multiple data sources or take the data to a SQL database.
Publish your Power BI dashboard to share with other people in your tenant.
You can read more about Microsoft Graph Data Connect for SharePoint at https://aka.ms/SharePointData. There you will find many details, including a list of datasets available, complete with schema definitions and samples.
issue with my photos
When I insert photos into a document, my other existing photos interfere with them.
Copilot for Sales configuration error with Outlook
Hello – I am having a challenge from Outlook to Copilot for Sales. Steps I am taking:
1. In Outlook, click on the Copilot for Sales add-in
2. Click on Save email to Dynamics 365.
3. Get an error message (attached below)
Please advise. I am following the steps in the docs site Fix mailbox errors in Dynamics 365 – Copilot for Sales | Microsoft Learn, but following them has not resolved the issue.
How can I use Teams for Education if I already use my school email for Teams for Business?
My end goal is to be able to use the FLIP app available in Teams for Education. I intend to use FLIP as a video discussion platform for my university classes (FLIP, formerly Flipgrid, is a third-party tool launched in 2012 and acquired by Microsoft in 2018). FLIP is no longer supported by Microsoft as a third-party tool. Recent news articles indicate that certain features have been integrated into Teams for Education.
– https://help.flip.com/hc/en-us/articles/23985951972375-A-New-Chapter-for-Flip-FAQ
– https://help.flip.com/hc/en-us/articles/115003080054-Microsoft-Teams-integration
– https://answers.microsoft.com/en-us/msteams/forum/all/integrating-flip-into-microsoft-teams/fd43bd57-f755-4c4c-ab0d-2c71f79fdca3
My university already has a Microsoft Teams for Business account. But, FLIP isn’t available for Teams for Business. As part of the tool acquisition deal, Microsoft only agreed to provide it for free for educational purposes. Therefore, the FLIP app was removed from the Teams for Business product on July 1st, 2024. So far as I understand, FLIP is only being integrated into the Teams for Education product. I tried downloading Teams for Education, but when I install it, Teams for Business opens. I can’t open Teams for Education.
Here are my questions:
1. Sales: Can I download and use Teams for Education independently from Teams for Business? Does Teams for Education require purchasing a separate license? Would I need a separate email (apart from the email I use for Teams for Business)?
2. Teams for Business Engineers: Is there a chance similar functionality to the FLIP app will be integrated into Teams for Business? If so, when might I expect that?
Many thanks for helping me get FLIP (features) working for my students for Fall semester.
Comprehensive AI Safety and Security with defense in depth for Enterprises
Azure AI Content Safety APIs
Azure AI Content Safety is a new service that helps detect hateful, violent, sexual, and self-harm content in images and text, and assigns severity scores, allowing businesses to limit and prioritize what content moderators need to review. Unlike most solutions used today, Azure AI Content Safety can handle nuance and context, reducing the number of false positives and easing the load on human content moderator teams.
Prompt Shields (preview)
Identifies and blocks direct and indirect prompt injection attacks before they impact your model by scanning text for the risk of a user input attack on a large language model. Quickstart
Groundedness detection (preview)
Detects model “hallucinations” so you can block or highlight ungrounded responses; it determines whether the text responses of large language models (LLMs) are grounded in the source materials provided by the users. Quickstart
Protected material text detection (preview)
Blocks copyrighted or otherwise known content by scanning AI-generated text for known text content (for example, song lyrics, articles, recipes, selected web content). Quickstart
Custom categories (rapid) API (preview)
Lets you create and deploy your own content filters by defining emerging harmful content patterns and scanning text and images for matches. How-to guide
Analyze text API
Scans text for sexual content, violence, hate, and self-harm with multi-severity levels.
Analyze image API
Scans images for sexual content, violence, hate, and self-harm with multi-severity levels.
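To illustrate how the Analyze text API is typically called, here is a hedged PowerShell sketch using the REST endpoint. The resource name and key are placeholders, and the api-version shown is an assumption based on the public REST reference, so verify it against the current documentation:
$endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"   # placeholder
$key      = "<your-content-safety-key>"                                  # placeholder
# Text to screen for sexual content, violence, hate, and self-harm
$body = @{ text = "Sample text to screen for harmful content." } | ConvertTo-Json
$response = Invoke-RestMethod -Method Post `
    -Uri "$endpoint/contentsafety/text:analyze?api-version=2023-10-01" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $key } `
    -ContentType "application/json" `
    -Body $body
# Each category is returned with a severity score you can use to limit or prioritize reviews
$response.categoriesAnalysis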
Resources
https://github.com/Azure-Samples/genai-gateway-apim
https://aka.ms/SecuringAI/Build
https://aka.ms/purviewai/developerblog
https://github.com/Azure/PyRIT
azure-sdk-for-python/sdk/contentsafety/azure-ai-contentsafety/samples at main · Azure/azure-sdk-for-python (github.com)
azure-sdk-for-net/sdk/contentsafety/Azure.AI.ContentSafety/samples at main · Azure/azure-sdk-for-net (github.com)
Microsoft Threat Modeling Tool overview – Azure | Microsoft Learn
Configure GitHub Advanced Security for Azure DevOps features – Azure Repos | Microsoft Learn
Enterprise AppSec with GitHub Advanced Security
What is Azure AI Content Safety? – Azure AI services | Microsoft Learn
We seek formal approval from Microsoft to use the logo under “Appreciated and Empowered by Microsoft Community”
Dear Team,
Thank you for your prompt and encouraging response! We at ** greatly appreciate Microsoft’s commitment to digital education and the resources offered via the Microsoft Learn platform.
Here are some points we’d like to clarify:
– Integration of Microsoft Learn Courses: We plan to incorporate Microsoft’s online courses into our training modules to provide essential digital skills to about 40 youths. Could you confirm if there are any limitations on the number of participants who can access these courses simultaneously?
– Promotional Activities: To celebrate our collaboration and the achievements of our youth, we propose:
*Sharing success stories featuring our participants who achieve certification.
*Displaying Microsoft’s logo alongside ours in promotional materials, under the theme “Appreciated and Empowered by Microsoft Community.”
*Publicly acknowledging our youth’s certification achievements through our platforms and community engagements.
- Approval Request: We seek formal approval from Microsoft to use the community logo and the phrase “Appreciated and Empowered by Microsoft Community” in our promotions, ensuring that we respect and appropriately acknowledge Microsoft’s support.
We are enthusiastic about the potential impact on our youth and look forward to your guidance on these matters.
Thank you once again for your support and collaboration.
Warm regards,
Jessica Mirza
Chief Learning Officer
Renamed PIM Group names not updating
I configured several Entra security groups with PIM a few months ago. However, the group names were always intended to be temporary. This morning I renamed the groups. On the PIM screens, the original names remain. Does anyone know how I can fix this?
Thanks in advance for help!
MacOS Defender and Full Disk Access
Working on deploying Defender on macOS via Intune…most of it is solid, however I noticed the “Microsoft Defender Endpoint Security Extension” doesn’t have full disk access and needs it…the native “Microsoft Defender” has it OK…it’s deployed as the option for Defender under macOS and not as a LOB app…anyone else run into this?
Outlook.live.com disappearance of find related e-mails from sender
In the past / old version of outlook/hotmail I was able to right-click on a sender and select “Find all related e-mails” from this person.
Now this feature is gone? Where did it go?
If it is gone, what is another way to conveniently find all e-mails from a particular sender besides having to go to the search bar and type their name or e-mail address?
Schematized source: when tables or columns are deleted in the source…?
I’m new to using Purview. I started using Data Map to scan in assets from an Azure SQL Database, and enrich those assets with additional metadata (descriptions, classifications, sensitivity labels). Then I intend to have many end users browsing the Data Catalog to discover data, with this additional metadata.
I was concerned about potentially losing the “additional metadata” that I and my team manually enter into Purview. The schema of our Azure SQL Database does change over time, so I wondered: what happens if in the source database, some columns/tables are dropped or renamed, then we run the scan in Purview again? Would we lose any of the additional metadata we entered into Purview?
I did some quick testing, and it seems like “yes,” in some cases we could. From my testing, it seems like if a column like `column_x` is dropped from an existing table in the source, or renamed to `column_y`; then an incremental Purview scan occurs; then, the new version of the table in the Purview Data Catalog no longer has `column_x`. It’s gone. And whatever additional metadata I had entered for that column is lost.
On the other hand, if a table like `table_x` is dropped in the source; then an incremental Purview scan occurs; the old table with the additional metadata we added is still there in Data Catalog. (And presumably, if `table_x` comes back into existence later, with the same columns, then my additional metadata would still be there and still be applicable. Though I didn’t test this.)
Anyway, I am wondering how people handle this. Like maybe a column gets renamed in the source…but I don’t want my scheduled Purview scans to cause me to lose the additional metadata we had entered for `column_x`! I want it to be retained, or at least I want to re-enter it, for the same column with its new name `column_y`. Copy and paste it. Do you ever back up your additional metadata, by exporting assets and their metadata? (Is this even possible? I think I read you could do this.) Any other solutions?
Check for matching records in a range using conditional formatting
Hi,
colC and colD contain lists of stocks
C contains both the company name and the stock ticker.
Example: 1SPATIAL PLC (XLON:SPA)
D only contains the company name.
Example: 1SPATIAL PLC
The formula should ignore the stock ticker.
Example: (XLON:SPA)
Each matching record will be in the same row.
The formula should return 1SPATIAL PLC
There are many rows.
Checking for duplicates using conditional formatting might be the best option here, but I can be persuaded to use a different approach.
Thank you.
SLM (Small Language Model) with your Data | Data Exposed
Explore the capabilities and considerations for Small Language Models (SLMs). Whether you’re using out-of-the-box SLMs or customizing/fine-tuning them with your own data, we’ll cover practical considerations and best practices. Enhance your language processing capabilities with SLMs!
Resources:
View/share our latest episodes on Microsoft Learn and YouTube!
How to evenly distribute an amount across multiple periods that are variables
I have a total dollar amount that needs to be spread over multiple periods. The periods are tied to the construction of a facility. As such, the timeline needs to be easily adjusted in case the construction takes longer or shorter than expected. The amount to distribute can occur evenly over the time period. I white-knuckled a formula but was wondering if there is a more elegant way to calculate this.
In the below example, the cost to build a Commercial Scale Facility is $15,000. Construction will begin on 9/1/2024 and be completed on 9/1/2027. The cost is evenly distributed between each month of construction.
Autofit height View Ms Lists
I use SharePoint with MS Lists. Before, my list views in the Kanban board showed the “Autofit height” option at the top, but lately that option no longer appears. Do you know how to enable it?
How to look my data in one cell not overlapped
Hello everyone,
A quick question – how can I make the data in the highlighted green area completely visible without occupying more space (or extending the cell length), since we need to print the document on one page? My purpose is to show the data in a way that prints on the page correctly.
Hiding SharePoint Files and Pages from Copilot
How do we stop legacy items from returning as search results in Copilot?
I only know how to exclude specific sites, but we’d like to exclude just particular pages without unpublishing them or revoking access.
I was considering using either the Term Store or the Search schema. I have been unable to find any documentation regarding this, so not sure it is possible yet.
FsLogix Outlook stuck at „Load Profile“
Hi all,
FSLogix profile and Office container are loaded correctly. All apps start without an issue except Outlook. Outlook startup is stuck at „Load Profile“.
The only workaround is to log off the user.
thanks
Azure Backup for SQL Server in Azure VM: Tips and Tricks from the Field
Authored by: Michael Piskorski, Laura Grob, Wilson Souza, Armen Kaleshian, David Pless, Anna Hoffman
Setting the Stage
We recently worked with a customer that had migrated their Windows and SQL Servers to Azure and wanted to use Azure Backup for a consistent enterprise backup experience. The SQL Servers had multiple databases of varying sizes, some of them multi-terabyte. A single Azure Backup vault was deployed using a policy that was distributed to all the SQL Servers. During the migration process, the customer observed issues with the quality of the backups and poor virtual machine performance while the backups were running. We worked through the issues by reviewing the best practices, modifying the Azure Backup configuration, and changing the virtual machine SKU. For this specific example, the customer needed to change their SKU from Standard_E8bds_v5 to Standard_E16bds_v5 to support the additional IOPS and throughput required for the backups. They used premium SSD v1 and the configuration met the IOPS and throughput requirements.
In this post, we share some of the techniques we used to identify and resolve the performance issues that were observed.
Azure Backup Vault configuration
There are three primary areas to consider when defining your Azure Backup vault strategy for SQL database workloads: 1) private network access, 2) DNS resolution, and 3) the limitations of the Backup Vault. In our experience, most customers, especially in regulated industries, are required to disable public network access to the Azure Backup vault, which requires additional configuration.
There are several important considerations to be aware of when applying this restriction to the Azure Backup vault.
When creating your private endpoint make sure you select the correct resource type.
Azure Recovery Services Vault supports private endpoints for Azure Backup and Azure Site Recovery. Ensure you select Recovery Services vault as the Resource and AzureBackup as the Target subresource type. The ‘Create and use private endpoints (v2 experience) for Azure Backup’ article describes the process of configuring private endpoints for use by an Azure Backup Recovery Vault.
Ensure private endpoints for target resources are integrated into an Azure Private DNS zone.
Databases on the virtual machine configured for protection by Azure Backup will require name resolution of the backup vault private endpoint. In addition, having a well architected DNS strategy is necessary to ensure reliability of the service. This area is often challenging for newer Azure customers. The Azure Private Endpoint DNS integration guide provides a good overview of private endpoints and how they are integrated into Azure Private DNS Zones.
Consider Azure Backup limits
Review the current limitations outlined in the Backup Vault Support documentation
Using multiple Azure Recovery Services Vaults in Azure Backup can enhance data security and disaster recovery readiness, but it also increases management complexity. While Azure’s immutable vaults protect against ransomware and other threats, ensuring a clear and efficient recovery process is important.
Balancing the number of vaults is key to aligning with recovery objectives and compliance needs without adding unnecessary administrative overhead.
Review the Azure Backup Reliability document to ensure you select the proper configuration for your Recovery Services Vault to meet your architecture requirements.
In designing a comprehensive and consistent enterprise backup strategy, consider the types of workloads that are protected, the amount of data protected, the sensitivity of the data being backed up, and the total number of protected items that a single vault can support. The best practice is to protect similar workloads. For example, virtual machines are configured against one vault and databases against a separate vault. This allows for ease of management, isolation per least privilege, and autonomy allowing each workload owner to manage their own backups. The Backup cloud and on-premises workloads to cloud guide provides a comprehensive set of best practices and designs for a variety of multiple vault architectures.
Azure Virtual Machine sizing considerations
Given the Azure Well-Architected Framework, selecting the appropriate virtual machine size for a workload requires consideration of its business continuity requirements. Some customers may not take their backup requirements into consideration when collecting the performance metrics to properly size the VM SKU and disk configuration. This means the IOPS, disk throughput, and network throughput metrics may not reflect the backup activity in the data collected during the sizing exercise.
For example, Azure Backup for SQL Server can (by default) support running up to 20 database backups simultaneously, using 200 MB/sec of disk throughput per database. If you have larger databases on an instance, those iterative backup processes can take a long time to complete. You will want to consider the following items when determining how many backups can run concurrently:
Application workloads and other business processes that may run at the same time backups are running
Disk IOPS and throughput required by both the SQL Server application workload and the backups
VM IOPS and throughput limits
Network consumption rate
Ensure you consider the window available for your backups and maintenance tasks as this will help determine if you need to scale up the VM or storage or reduce the number of concurrent backups. If you decide you need to override the default backup configuration, you do have an option to configure the number of concurrent backups. To override the default setting, you will want to create an ExtensionSettingsOverrides.json file on the server located in the C:\Program Files\Azure Workload Backup\bin folder. You would configure the DefaultBackupTasksThreshold parameter to 5 in this example by using the following code:
{"DefaultBackupTasksThreshold": 5}
Once you save your changes and close the file you will need to restart the following service: AzureWLBackupCoordinatorSvc
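As a convenience, here is a minimal PowerShell sketch that writes the override file and restarts the coordinator service; it assumes an elevated session on the SQL Server VM and uses the path and service name referenced above:
# Write the override file (DefaultBackupTasksThreshold = 5 in this example)
$overridePath = "C:\Program Files\Azure Workload Backup\bin\ExtensionSettingsOverrides.json"
@{ DefaultBackupTasksThreshold = 5 } | ConvertTo-Json | Set-Content -Path $overridePath -Encoding UTF8
# Restart the coordinator service so the new setting takes effect
Restart-Service -Name "AzureWLBackupCoordinatorSvc"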
Ensure there is sufficient IOPS and throughput capacity at both the virtual machine and disk layers. You can use the metrics in the Azure portal for the virtual machine and disks to check if the maximum throughput is being reached. If your virtual machine supports less than 200 MB/sec, Azure Backup will not transfer the data at the optimum speed. There is a spreadsheet that can be leveraged to assist with sizing (link).
There are a few items to consider when you are deploying your backup policies. The policy supports all three types of SQL backups (full, differential, and transaction log) and different recovery models. One key point to consider is that the backup compression setting in the policy overrides the SQL Server instance-level setting, so if you want to use backup compression, ensure that you check that box in the policy. If you choose to leverage differential backups, you can trigger one per day in the backup policy. Transaction log backups can be triggered as often as every 15 minutes.
Auto-protection is an option that runs a discovery process, typically every 8 hours, to determine if new databases have been added to the instance. When it finds newly created databases, it will trigger the backup within 32 hours. You can manually run a discovery to ensure new databases are backed up sooner. If you select auto-protect, you cannot exclude databases; it will cover all databases on the instance.
The option to configure writing backups to local storage and the recovery vault simultaneously can be invoked for the backup type of your choice by creating a PluginConfigSettings.json file in the C:\Program Files\Azure Workload Backup\bin\plugins location. An example of the JSON code is shown below:
{
  "EnableLocalDiskBackupForBackupTypes": ["Log"],
  "LocalDiskBackupFolderPath": "E:\\LocalBackup"
}
This example enables simultaneous writes of the transaction log backups to the E drive as well as the recovery services vault.
Azure Backup for SQL Server has several feature considerations and limitations which have changed over time. The most current information can be found in the ‘Support matrix for SQL Server Backup in Azure VMs’ and under the ‘Feature considerations and limitations‘ section.
Troubleshooting tools and tips
In this section we will discuss some tips you can leverage if you run into issues with backups failing or running longer than expected. The two primary areas of focus will include metric data for virtual machine and disk throughput as well as how to correlate the databases to the scheduled tasks for backups.
Instrumentation
One area to consider when experiencing slow backup performance is the constraints on the virtual machine and any attached disk. To determine the cause of the slowness on both the virtual machine and disk, review the metric “VM Uncached Bandwidth Consumed Percentage”. This metric indicates whether your virtual machine is I/O capped. The value is calculated by dividing the total actual uncached throughput on a virtual machine by the maximum provisioned virtual machine throughput. If you observe this metric reaching 100 percent during a backup job, the performance of the backup will be affected because the virtual machine is using all of its uncached bandwidth. This screenshot identifies the metric that is being referenced in this article.
To see if there are constraints on the disk level you would look at the associated disk IOPS and throughput limits using the Logical Disk MB/s and Logical Disk IOPS within the virtual machine’s Insights blade. The screenshots below show the metrics referenced.
Now let us use an example: if you are using the virtual machine SKU “Standard_E8bds_v5”, the max uncached throughput is 650 MBps. When you take into consideration that each running database backup may consume up to 200 MBps of uncached throughput, it is important to ensure your virtual machine and disk configuration can support the number of concurrent backups running.
Leveraging the metrics and logging will help you when troubleshooting performance issues, and below we discuss a script you can leverage to help split database backups into multiple schedules. The key point is to understand the SKU size of the virtual machine and disks in relation to uncached throughput limits, and how many database backups are required on your server in relation to your backup requirements and goals.
The recommended amount of uncached throughput at the virtual machine or disk is above 850 MBps, which is referenced in Azure Backup support matrix for SQL Server Backup in Azure VMs.
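To check this metric outside the portal, the Az.Monitor module can pull it for the backup window. This is a sketch only; the resource group and VM names are placeholders, and you should confirm the exact metric name in the portal’s metrics blade:
# Pull the uncached bandwidth consumption for the last 12 hours at 5-minute grain
$vm = Get-AzVM -ResourceGroupName "sql-rg" -Name "sqlvm01"
Get-AzMetric -ResourceId $vm.Id `
    -MetricName "VM Uncached Bandwidth Consumed Percentage" `
    -TimeGrain 00:05:00 `
    -StartTime (Get-Date).AddHours(-12) `
    -EndTime (Get-Date) `
    -AggregationType Maximum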
Correlate Database Backup to Task Script:
As we have discussed, the disk throughput capacity based on the virtual machine SKU and disk configuration is important in designing a performant and reliable backup strategy in Azure. If you observe throughput levels that are hitting the maximum, causing throttling, you can leverage the script below to determine which database backups are scheduled to run and when. Based on this information, you can then choose to either scale up the virtual machine SKU, modify the disk configuration (depending on where the throttling is occurring), or modify the number of concurrent backups as detailed in an earlier section of this post.
Sample Code:
# Script to associate database name and type of backup to GUID
$tasks = Get-ScheduledTask -TaskPath "*IaaSWorkloadBackup*" | ? { $_.TaskPath -notlike "*HKTask*" -and $_.TaskPath -notlike "*Telemetry*" -and $_.TaskPath -notlike "*WorkloadInquiry*" -and $_.State -ne "Disabled" }

# Datasource catalog files written by the Azure Workload Backup extension
# (adjust this path if the catalog folder layout differs on your installation)
$dirlist = dir "C:\Program Files\Azure Workload Backup\Catalog\WorkloadExtDatasourceCatalog\WorkloadExtDatasourceTable\*"

$tasklist = @()
foreach ($datasource in $dirlist)
{
    $json = Get-Content -Path $datasource.FullName | ConvertFrom-Json
    $schedules = @($tasks | ? { $_.TaskPath -like ('*' + $json.datasourceId + '*') })
    foreach ($task in $schedules)
    {
        $jobsobject = New-Object PSObject
        Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name poname -Value $json.poName
        Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name datasourceid -Value $json.datasourceId
        Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name TaskName -Value $task.URI.Split("\")[-1]
        Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name Container -Value $json.containerUniqueName

        # Infer the backup type from the task trigger: a repeating interval indicates
        # a log backup, a weekly days-of-week trigger a differential, otherwise a full
        switch ($task.Triggers)
        {
            {$_.Repetition.Interval -ne $null} {Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name Type -Value "Log"; break}
            {$_.DaysOfWeek -gt 1} {Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name Type -Value "Differential"; break}
            DEFAULT {Add-Member -InputObject $jobsobject -MemberType NoteProperty -Name Type -Value "Full"}
        }
        $tasklist += $jobsobject
    }
}
$tasklist | Sort-Object poname | ft -AutoSize
Once executed on the virtual machine, this sample code will output all the databases and correlate them to the backup task ID, as shown in the example output below.
This output will show you the database name that is assigned to the datasourceid and the type of backup. Now you can open task scheduler on the virtual machine and relate the database to the scheduled task. You can see in the screenshot below that the “model” database is associated with datasourceid “5422562649858860379” which correlates to the schedule the database is on for log and database backups.
This allows you to see the Azure Backup schedule for your database backups and helps you plan the number of concurrent backups you can run based on the uncached throughput capacity of your virtual machine and disk configuration.
Summary
In this post, we provided details on the main considerations when designing a solution that leverages Azure Backup to protect both your Azure virtual machines and SQL Servers.
Below are the key summary points from this article:
Backup Vault Configuration: It is crucial to consider private network access, DNS resolution, and the limitations of the Backup Vault. Private endpoints must be correctly configured, and the Azure Private DNS zone integrated.
VM Sizing Considerations: Proper VM sizing should account for backup requirements, as Azure Backup for SQL Server can run multiple database backups simultaneously, affecting disk throughput and performance.
Backup Policy Deployment: Policies support full, differential, and transaction log backups. Note that backup compression settings in the policy override SQL Server instance level settings.
Troubleshooting Tools: Utilize metric data for VM and disk throughput and scripts to correlate databases to scheduled tasks for backups, ensuring performance is not hindered by reaching uncached bandwidth limits.
Links to reference materials and scripts to troubleshoot common issues have been provided to facilitate the use of Azure Backup to protect your SQL server workloads. Any scripts, metrics, or limitations may change over time as our products continue to evolve.
Helpful references:
FAQ – Backing up SQL Server databases on Azure VMs – Azure Backup | Microsoft Learn
Back up SQL Server databases to Azure – Azure Backup | Microsoft Learn
Azure Backup support matrix for SQL Server Backup in Azure VMs – Azure Backup | Microsoft Learn
Azure Backup support matrix – Azure Backup | Microsoft Learn
Back up SQL Server databases to Azure – Azure Backup | Microsoft Learn
Restore SQL Server databases on an Azure VM – Azure Backup | Microsoft Learn
Automation in Azure Backup support matrix – Azure Backup | Microsoft Learn
CMMC Workbook
Does anyone know why the CMMC workbook is located in Microsoft Sentinel instead of in Defender for Cloud? Firms that are not using Sentinel cannot use this potentially helpful tool.
Will that workbook function if it is stored elsewhere in Azure?