Author: Tony Redmond
Microsoft Issues Updated Guidance for Defender for Office 365 Licensing
Changes to MDO P2 to Remove Requirements to License All Shared Mailboxes
Last August, I wrote about the issue of unexpected costs for Microsoft 365 customers when Microsoft Defender for Office 365 Plan 2 (MDO P2) was enabled in a tenant because MDO P2 is included as a service plan in Office 365/Microsoft 365 E5 licenses. No administrator action is required to use MDO P2; the presence of an E5 license is enough to activate its protection.
According to the MDO service description (August 2025), when MDO P2 is used by a tenant, “licenses must be acquired for users or mailboxes falling under one or more of the following scenarios:
- All Exchange Online users on the tenant. This is because Plan 2 features and capabilities protect all users in the tenant.
- All shared mailboxes on the tenant.”
In other words, the presence of just one E5 license automatically triggers the need for MDO P2 licenses for every Exchange Online user and shared mailbox. Buying MDO P2 at $5/user/month to remain compliant quickly racks up a substantial bill.
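To put that cost in context, a tenant with 2,000 user mailboxes and 300 shared mailboxes would need 2,300 MDO P2 licenses to be compliant, or $138,000 per year at $5/user/month, even if only a handful of accounts hold E5 licenses.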
Group mailboxes also benefit from MDO P2 protection, but the service description makes no mention of a license requirement for these mailboxes, despite the efforts made by Microsoft over the years to give group mailboxes equivalent functionality to shared mailboxes.
Removing Inconsistency and Incoherence
In short, inconsistencies and incoherence abounded in the MDO P2 licensing requirements. The MDO team agreed to take the issue away to see what could be done to improve matters, and now they’ve come back with a revised licensing scheme.
The big change is the removal of the requirement for MDO P2 licenses for all user and shared mailboxes when E5 licenses are present. The previous position was indefensible and it’s good that Microsoft agreed.
Instead of a “MDO P2 licenses required for all mailboxes” approach, Microsoft now applies the “if you benefit from a feature, you pay for a feature” rule that already applied to MDO P1 licensing. The new licensing terms are shown in Figure 1:

Microsoft Defender for Office 365 P2 can be licensed through any of the following:
“Microsoft Defender for Office 365 Plan 2 standalone, Microsoft 365 E5/A5/G5, Office 365 E5/A5/G5, Microsoft Defender Suite/EDU/GOV/FLW, and Microsoft Defender + Purview Suite FLW provide the rights for a user to benefit from Microsoft Defender for Office 365 Plan 2.”
In other words, tenant administrators must decide which mailboxes should benefit from MDO P2 and then license those mailboxes accordingly. Licensing is automatic for accounts with E5 licenses because the MDO P2 service plan is already present. Shared mailboxes that tenants want to receive MDO protection will need to be licensed.
Custom Policies Required to Scope MDO Coverage
Unless a tenant licenses every user and shared mailbox, the new licensing arrangement means that administrators must create custom scoped policies to enable the MDO P2 safe links, safe attachments, and anti-phishing features for target groups rather than using the scope of the default policy to “cover everyone.” The target group can include user and shared mailboxes.
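As a sketch of what a scoped policy looks like, here’s how a Safe Links policy and its rule might be created with the Exchange Online management module (the policy name and group address are hypothetical placeholders):

# Create a Safe Links policy and use a rule to scope it to a target group
New-SafeLinksPolicy -Name 'MDO Licensed Users' -EnableSafeLinksForEmail $true -EnableSafeLinksForTeams $true
New-SafeLinksRule -Name 'MDO Licensed Users' -SafeLinksPolicy 'MDO Licensed Users' -SentToMemberOf 'MDO-Protected-Users@office365itpros.com'

Equivalent pairs of cmdlets exist for the other features (New-SafeAttachmentPolicy/New-SafeAttachmentRule and New-AntiPhishPolicy/New-AntiPhishRule).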
In large tenants, several custom policies will probably be required to cover different target groups. Dynamic distribution groups aren’t supported for scoped policies, but dynamic Microsoft 365 Groups are. Using dynamic Microsoft 365 Groups creates the requirement for Entra P1 licenses for all users that are members of a dynamic group.
One issue is that the membership rules for dynamic Microsoft 365 Groups don’t offer an off-the-shelf way to find shared mailboxes. Shared mailboxes will need to be marked in some manner such as a value in a custom attribute to allow a membership rule to find and include their accounts in group membership. On the upside, a dynamic Microsoft 365 Group to find shared mailboxes for MDO protection can also assign the MDO P2 license to the mailboxes.
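For example, assuming shared mailboxes are marked with a value in custom attribute 1 (which surfaces as extensionAttribute1 in Entra ID), a dynamic Microsoft 365 group could be created along these lines with the Microsoft Graph PowerShell SDK (all names and values here are hypothetical):

# Mark a shared mailbox so that a dynamic membership rule can find its account
Set-Mailbox -Identity 'Customer.Support' -CustomAttribute1 'MDOProtected'
# Create a dynamic Microsoft 365 group whose rule picks up the marked accounts
$GroupParameters = @{
    DisplayName = 'MDO Protected Shared Mailboxes'
    MailEnabled = $true
    MailNickname = 'MDOProtectedSharedMailboxes'
    SecurityEnabled = $false
    GroupTypes = @('Unified', 'DynamicMembership')
    MembershipRule = '(user.extensionAttribute1 -eq "MDOProtected")'
    MembershipRuleProcessingState = 'On'
}
New-MgGroup @GroupParameters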
I can see why Microsoft has gone down the path of using custom scoped policies to target the mailboxes to receive MDO protection. It’s a feature that already exists and works, but I’m not sure how much use custom scoped MDO policies get in the real world because I have never used these kinds of policies. I’m also unsure about the amount of administrative effort that will be necessary to set up and maintain the policies, especially in large tenants.
Group Mailboxes Don’t Need MDO Licenses
No mention is made about the group mailboxes used by Microsoft 365 Groups. This might be because Microsoft 365 Groups come about through the creation of other Microsoft 365 objects, like Teams and group-connected SharePoint Online sites. By contrast, creating a shared mailbox is a standalone operation to support the work of a team or to preserve a leaver mailbox, so it could be argued that it would be unfair to insist on licensing the automatic operation. In any case, I suspect that some debate will continue on this point.
Guiding Principles
The new licensing arrangement for MDO P2 can be broken down into five guiding principles:
- MDO licenses are required for any mailbox (or rather, the user account that the mailbox belongs to) that comes within the scope of an MDO policy to enable features like safe links and safe attachments.
- The majority of MDO processing happens during mail flow delivery to mailboxes. If a mailbox comes within the scope of an MDO policy (including a policy covering all mailboxes), it gets the benefit of the MDO features. If the account isn’t within the scope of an MDO policy, it doesn’t.
- When considering the protection of shared mailboxes, only include shared mailboxes that actively receive external email that requires protection. Exclude shared mailboxes like those used to retain leaver data (use inactive mailboxes instead), defunct mailboxes (consider their removal), and mailboxes used exclusively to process internal email.
- MDO licenses don’t need to be assigned to the accounts that own shared mailboxes. All Microsoft requires is that the tenant has sufficient MDO licenses to cover the user and shared mailboxes that come within the scope of MDO policies.
- Accounts that benefit from MDO P2 features must be licensed for those features.
The new MDO licensing arrangement is better, but it requires more thought and action from tenant administrators, especially to configure and maintain policies to make MDO P2 features available to user accounts.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
Using the SharePoint Site Attestation Policy
Forcing Owners to Confirm Details of Their Sites
The site attestation policy is part of the site lifecycle management component of SharePoint advanced management (SAM). It’s also one of the SAM features available to tenants with Microsoft 365 Copilot licenses. The basic idea is to force site owners to periodically attest that the settings of their site, including its membership, remain valid. If the site owners can’t or don’t confirm the site details, SharePoint Online can enforce an action such as archiving the site.
Microsoft 365 roadmap item 494159 lists the site attestation policy as generally available from August 2025. However, that’s not quite the case as the policy is still listed in the SharePoint admin center as a preview feature (Figure 1).

Imposing site attestation can clear out many sites that form the digital debris that clogs up Microsoft 365 tenants. Apart from releasing expensive SharePoint “hot” storage by moving the content of non-attested sites into “cold” archive storage, the biggest benefit is to remove the files held in these sites from Copilot processing. This reduces the risk that obsolete and incorrect information will find its way into Copilot responses and improves the overall quality of Copilot processing.
Configuring a Site Attestation Policy
Like the other site lifecycle policies, configuring a site attestation policy is pretty straightforward. The usual process is to configure a policy in simulation mode so that the policy runs to generate a report about the sites within the policy scope for administrators to review.
Scoping means defining what sites the policy should process, like all team-connected sites. In Figure 2, I’ve chosen to combine several criteria to form a precise scope. You can select one or more container management sensitivity labels to use. Filtering by site creation source is interesting because it allows you to select sites created using methods like PnP, PowerShell, or the SharePoint admin center. Running the policy in simulation mode will create a report to tell you exactly what sites match the scope.

The policy configuration specifies how often the policy runs, who must attest sites, and what SharePoint Online should do if attestation doesn’t happen. In Figure 3, we see the configuration for an annual review where lack of attestation by site owners leads to sites being moved to Microsoft 365 Archive.

Given that most SharePoint Online sites are used with Teams and that many Microsoft 365 tenant administrators probably couldn’t differentiate between site owners and site administrators, I wonder if the configuration could be simplified to a single option that combines the two. Just a thought.
After running in simulation to identify any issues and making necessary tweaks, such as including or excluding certain sites, the attestation policy can be launched to do its business.
Site Owner Actions
Turning on the site attestation policy causes SharePoint Online to send Outlook actionable messages to site owners to ask them to confirm site details. I received 63 messages within ten minutes, including duplicate messages for a couple of sites.
The initial message (Figure 4 – left) informs the site owners about their responsibilities and sets an attestation deadline. Pressing the “Yes, settings are accurate” button allows the owner to attest that everything is OK without leaving the message. Acknowledgement happens automatically by updating the same message (Figure 4 – right).

You’ll notice that no button exists for a site owner to declare that the site settings are inaccurate. The assumption is that the site owner will simply ignore the messages sent by SharePoint Online. After three monthly warnings, SharePoint will enforce the action set in the policy. It would be nice to give site owners the ability to accelerate the process with an option to take the policy action immediately. Maybe that will come in a future release.
Removing Digital Debris is Goodness
Regular site attestation seems like a solid idea. Anything to remove debris from a tenant is goodness. One concern that I have is that moving a team-connected site to Microsoft 365 Archive does nothing to affect the team. Users won’t be able to access files in the SharePoint site, but shouldn’t an archive action process everything? After all, Teams supports team archiving.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
Modernizing Sensitivity Label Grouping for App Display
The End of Parent-Child Label Relationships
Message center notification MC1111778 (last updated 24 September 2025, Microsoft 365 roadmap item 386900) announces the modernization of sensitivity label grouping to a “dynamic architecture” consisting of labels and label grouping rather than parent and child labels. The new architecture supports moving sensitivity labels between groups “without losing referential integrity.” In other words, the settings of sensitivity labels remain intact when they are moved from one label group to another.
Removing the Last Vestiges of AIP
When Microsoft launched Azure Information Protection (AIP) labels in 2016, they adopted a two-tier parent-child model for organizing the display of labels. In this model, the parent label functions as a navigation location for child labels and cannot be applied to files. When sensitivity labels took over from AIP labels, the same arrangement was kept. In Figure 1, the Secret label is the parent, and All Company and My Team are child labels.

When details of an assigned label are viewed in client user interfaces, the structure is displayed as Parent\Child (Figure 2).

The problem with the parent-child structure is its strict nature. Once a child label is created and deployed in active use, it becomes very difficult (if not practically impossible) to change the labeling structure to reflect current business requirements. The inflexible nature of the parent-child structure is the main reason why I never recommended its use to customers. It’s difficult enough to construct a workable labeling structure for a tenant without having to deal with inflexible groupings.
Public Preview and Migration
Microsoft is currently deploying the modern label architecture in public preview with the aim of attaining general availability in December 2025. New tenants created after 1 October 2025 must use the new architecture. No administrator action is required before general availability occurs, but it might be a good idea afterwards to review the current label structure to see if sensitivity labels can be presented in a more effective manner to end users.
When a tenant is upgraded, any existing parent-child groups are migrated to the new architecture. During the preview, if a tenant has parent-child label groups, they can use the manual migration method invoked from the Information Protection section of the Purview portal (Figure 3). Migration is an irreversible process, so take the time to read up before plunging ahead, and migrate a set of sensitivity labels in a test tenant first.

Launching the migration is preceded by notification of what the expected outcome will be (Figure 4). My tenant has used sensitivity labels since their AIP predecessors and has accumulated many different sensitivity labels used for content protection and container management over the years, including two parent-child groups (for testing only).

The migration took just a few seconds, and the only difference seen afterwards is that the parent labels are now label groups and the child labels are members of those groups. The Secret parent viewed earlier became a label group and also a standalone sensitivity label. The standalone label has the same name, GUID, and settings as the original parent label. Following the migration, I updated the display name of the affected labels and label groups to make their function obvious.
The new architecture exposes options in the Purview portal to move sensitivity labels into and out of groups. This is the big advantage of the change as administrators can now easily construct and change label groups according to business demands. For instance, I created a label group called Email Labels to organize the sensitivity labels most appropriately used for email to give additional guidance to end users. Figure 5 shows how the new label group appears in OWA.

Notice how all the sensitivity labels in the Email Labels group have the same label color. This change might override the carefully-crafted custom colors assigned to sensitivity labels in the past. Another important change is that the standalone labels moved into the label group have priority orders based on the priority assigned to the label group. Label priority is supposed to indicate the degree of confidentiality or sensitivity of files that labels are applied to, so some rearrangement of labels is probably needed here. A change in label priority can lead to an increase in document mismatch notifications, and that’s not a good thing.
Although you can move container management labels into label groups, there’s no point in doing so. First, organizations tend to have relatively few container management labels, so there’s no need for grouping. Second, the applications that use container management labels, like Teams and SharePoint Online, display container management labels in a simple list.
PowerShell Changes
A set of cmdlets in the security and compliance module supports sensitivity labels. The cmdlets manipulate the same label settings to manage label group membership as were previously used to associate a child label with a parent label. For instance, a label group has the isParent and isLabelGroup settings set to true:
$Label = Get-Label -Identity 'Email Labels'
$Label.Settings

[isparent, True]
[islabelgroup, True]
A sensitivity label in a label group has the isParent property set to false and the identifier for the label group in its ParentId property:
$Label = Get-Label -Identity '1b070e6f-4b3c-4534-95c4-08335a5ca610'
$Label.Settings

[contenttype, File, Email]
[isparent, False]
[parentid, 62acd157-1757-4361-9a53-71ea316279ca]
To move a label into a label group, run the Set-Label cmdlet and update the ParentId parameter with the identifier for the label group. Here’s an example of moving a label into the Email Labels group:
Set-Label -Identity 'Employee Confidential' -ParentId (Get-Label -Identity 'Email Labels').ImmutableId
To move a sensitivity label out of a label group, pass $null or the identifier for another label group as the parent identifier.
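For instance, to make the Employee Confidential label from the previous example standalone again:

Set-Label -Identity 'Employee Confidential' -ParentId $null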
Heading to a New Architecture
Referring to a new way to manage sensitivity labels for display in applications as a new architecture is a stretch. However, it’s still a good change. It will take time for tenants to figure out how to best use label groups, but that will come in time. In the meantime, the task is to migrate to the new architecture, either manually or by waiting just a few more weeks.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
Auto-Updating Teams Work Location is Not Employee Monitoring
Setting Teams Work Location by Reference to a Wi-Fi Network
I’m amazed at some of the commentary flowing from MC1081568 (last updated 24 October 2025, Microsoft 365 roadmap item 488800) about a new Teams feature to automatically set a work location based on connecting to a Wi-Fi network or known peripherals such as Teams Rooms devices. The way some people described it, you’d think that this is tantamount to Microsoft making a method available for managers to keep an eye on employee work habits. The simple truth is that automatic work location detection is not employee monitoring, and anyone who thinks that it is reveals a woeful lack of knowledge about how Teams works.
Setting work location has been a feature in Teams and Outlook for quite a while (Figure 1). The idea is that people can collaborate more effectively with co-workers if everyone knows where everyone is. Knowing where people are is important from a support perspective too, especially when Teams Phone serves as the corporate phone system.

Today, users must set their location manually. I forget to do so as a matter of course, just like I suspect many others do. But Teams knows when people connect to a work network. At least, it can if automatic detection is configured in Microsoft Places. In addition, the tenant must configure a Teams work location detection policy to enable automatic detection because the feature is off by default.
Managing the Work Location Detection Policy with PowerShell
To configure the policy, connect to Microsoft Teams PowerShell and either run the Set-CsTeamsWorkLocationDetectionPolicy cmdlet to switch automatic detection on by default for all users or (recommended) run the New-CsTeamsWorkLocationDetectionPolicy cmdlet to create a new work location detection policy and assign that policy to the users who you want the policy to apply to. This command creates a new policy:
New-CsTeamsWorkLocationDetectionPolicy -Identity AutoDetectNetwork -EnableWorkLocationDetection $true
To assign the policy to user accounts, use the Grant-CsTeamsWorkLocationDetectionPolicy cmdlet:
Grant-CsTeamsWorkLocationDetectionPolicy -Identity Lotte.Vetler@office365itpros.com -PolicyName AutoDetectNetwork
The Get-CsTeamsWorkLocationDetectionPolicy reports which work location detection policies enable automatic detection:
Identity              EnableWorkLocationDetection
--------              ---------------------------
Global                False
Tag:NetworkDetectOn   True
Tag:AutoDetectNetwork True
It’s important to remember that Teams clears location information at the end of the working day and does not update locations outside working hours (based on Outlook settings).
Keeping an Eye on User Locations
For those who suspect that managers will monitor their locations to check where people are, my response is that managers can do this today by checking the user profiles for their employees where their location is displayed (Figure 2).

Having been a senior manager in several organizations, my view is that any manager who devotes time to this kind of checking needs to reevaluate how they allocate their time. It is something that might be justified when monitoring a problem employee, but not elsewhere. If people are really worried about management oversight, they can use the Teams browser or mobile clients. Detecting location automatically only works for the Teams desktop clients for Windows and macOS.
Privacy is Important
People are right to worry about their privacy, and they should understand the potential impact of new functionality on how they work. In this case, I don’t think that there’s much to complain about. There are better tools available if an organization wants to monitor employee productivity. Automatic work location detection by Teams to register if someone is in the office is not going to worry the people who build employee monitoring software. It shouldn’t worry you either.
Learn about managing Teams and the rest of Microsoft 365 by subscribing to the Office 365 for IT Pros eBook. Use our experience to understand what’s important and how best to protect your tenant.
Stealing Access Token Secrets from Teams is Hard Unless a Workstation is Compromised
French Security Company Highlights Stealing Teams Access Tokens from the Local State File
On October 23, 2025, a French security company called Randorisec published an article about stealing Microsoft Teams access tokens in 2025. Over the next few hours, I received several messages asking if the news as reported was serious and required action. My response was “Nope.”
I don’t think that the article surfaces any new information. More importantly, the compromise as described is only possible if attackers first manage to gain control over a workstation running Teams. In that scenario, the problem is more serious than fetching a few access tokens to use to send messages with the Graph API. Let’s discuss what the article reveals and why I’m sanguine about its findings.
The Teams Local State File
The discussion centers on fetching content from the local state file used by Teams, which is found in:
%LocalAppData%\Packages\MSTeams_8wekyb3d8bbwe\LocalCache\Microsoft\MSTeams\EBWebView\Local State
The article explains how to fetch and decrypt cookies protected using the Chromium Data Protection API (DPAPI), which in turn are used to fetch access tokens. I’m not sure that there’s anything new here because I found several articles to explain the process (here’s a good example). Chromium-based browsers use JSON-formatted local state files to store information needed for browser sessions, including encrypted keys used to protect sensitive information like user passwords.
Why Does Teams Use a Local State File?
What people might not understand is why Teams uses a local state file to hold information about the current client configuration, software version, other client settings, and encrypted content (Figure 1). The answer is that the Teams V2 client architecture depends on the WebView2 component. WebView2 uses the Edge rendering engine to display content within apps, including Teams, the new Outlook for Windows, and features shared between Outlook clients like the Room Finder. Microsoft includes the WebView2 component with Office and other products.

Because the Teams clients are deeply integrated with WebView2, it makes sense to adopt other Chromium constructs, like the local state file and DPAPI, and that’s probably why you end up with a Teams-specific local state file that behaves much like the local state file used by Chromium browsers.
Access Tokens for Teams
Eventually, the researchers end up with access tokens that can be used to interact with Teams via the Graph API. Getting to the access tokens requires fetching them from the cookies SQLite database. This file is found in the %LocalAppData%\Packages\MSTeams_8wekyb3d8bbwe\LocalCache\Microsoft\MSTeams\EBWebView\WV2Profile_tfw\Network folder and is locked when a Teams client is active.
The assertion that they can use the tokens to send email is erroneous. As pointed out in the article, the tokens are for use with Teams, not Exchange Online, so the permissions granted in the tokens do not permit use of the Mail Send API.
Local State File is Inaccessible Unless a Device is Compromised
Don’t get me wrong. Security researchers do a great job of finding weaknesses in products before attackers figure out how to use those weaknesses to do damage. I applaud the efforts of the Randorisec team, but I just don’t think that there’s anything surprising to become too concerned about. The attempt to hype the problem by Cyber Security News is also regrettable. I wonder if either the researchers or reporter actually know anything about how Teams works, but hey, all publicity is good.
I keep on going back to the simple fact that before an attacker can access the Teams local state file and cookies database, they’ve broken into the workstation and therefore have full access to whatever’s on that device. In all probability, they can start the Teams client and can send chats and channel messages without needing to fetch and decrypt information.
The best defence is to stop attackers from compromising user accounts by deploying strong multifactor authentication. If you can do that, you shouldn’t need to worry about the details of Teams, WebView2, and the cookies file.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Allowing Users to Add Enterprise Apps to Entra ID is a Bad Idea
Reviewing Enterprise Apps is a Good Idea
Over the years, I have advised Microsoft 365 tenants to check and clean up enterprise apps regularly. Initially, the Graph APIs available to report information about enterprise apps weren’t too approachable and lacked some data. However, the situation has improved and it’s now easier to get a solid handle on the enterprise apps present in a tenant, the usage of those apps, and the permissions used by apps to access data.
Given that the original clean-up script dates back to April 2020, I’ve been writing a new script based on the Microsoft Graph PowerShell SDK to demonstrate how to generate review data. (Microsoft released V2.32 of the SDK on October 20, 2025; so far, the new version appears to be solid.) In any case, once I’ve finished tweaking the code, I’ll write up details about what the script does and release it via the Office 365 for IT Pros GitHub repository.
The Case of the Newly-Added Enterprise Application
One of the checks performed by the script highlights recently added service principals. After writing the code, I was interested to discover the presence of an enterprise app called GuideAnts, added on 15 October 2025 by my account. I couldn’t remember anything about adding such an app. Advancing age has a nasty habit of eroding immediate recall.
In any case, running an audit log search confirmed that my account had added the service principal (Use the Search-UnifiedAuditLog cmdlet to search the audit log for events with operations = “Add Service Principal.”). Here’s an extract from the audit log:
Actor                         : {@{ID=Tony.Redmond@office365itpros; Type=5}, @{ID=1003BFFD805C87B0; Type=3}, @{ID=Azure ESTS Service; Type=1}, @{ID=00000001-0000-0000-c000-000000000000; Type=2}…}
InterSystemsId                : e5fce0de-688c-4e1e-bf64-22d9246ba0e6
IntraSystemId                 : 00000000-0000-0000-0000-000000000000
SupportTicketId               :
Target                        : {@{ID=ServicePrincipal_d448e5cc-80cc-4c95-8aca-356068dc2972; Type=2},@{ID=d448e5cc-80cc-4c95-8aca-356068dc2972; Type=2}, @{ID=ServicePrincipal; Type=2},@{ID=guideants; Type=1}…}
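For reference, a search along these lines retrieves service principal additions (a sketch; check the exact operation string logged in your tenant):

[array]$Records = Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-90) -EndDate (Get-Date) -Operations 'Add service principal.' -ResultSize 500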
Having still no memory of doing such a thing, I exported my browser history and loaded the CSV file into PowerShell to check it:
$History = Import-CSV browserhistory.csv
$History | Where-Object {$_.pagetitle -like "*GuideAnts*"} | Format-table DateTime, PageTitle, NavigatedToURL
DateTime                 PageTitle                 NavigatedToUrl
--------                 ---------                 --------------
2025-10-15T20:26:54.855Z GuideAnts Notebooks       https://go.guideants.ai/access
2025-10-15T20:26:30.514Z GuideAnts Notebooks       https://go.guideants.ai/login
2025-10-15T20:26:29.801Z GuideAnts Notebooks       https://go.guideants.ai/
This is the kind of interaction captured when someone goes through the consent process to add an enterprise app (Figure 1) and consents on behalf of the organization. There was no doubt. I was the culprit.

This is an example of bad practice in action. I might have been tired, and I might have wanted to check out the app because I was writing about ISV AI-powered add-ins for Microsoft 365 at the time, but these are not acceptable excuses.
Consent Approval Workflow for Enterprise Apps
I violated my personal standards in three ways. First, I added an enterprise app without much consideration, perhaps because the permissions sought for the app were pretty benign. Second, I added an unverified app. Enterprise apps published by ISVs should go through the Microsoft verification process to give tenants some additional trust that the app comes from a reputable publisher.
Third, I used my administrator account. Had I used my normal account, I wouldn’t have been able to add an enterprise app because the tenant settings would block immediate app creation by users. Instead, a request to add the app would have gone through a consent approval workflow for approval by an administrator (Figure 2). Even if that administrator was me, being forced to go through the approval process might have caused me to think why an enterprise app was needed, or to review the reply URLs used by the app and ask myself why these URLs are required.

We live and learn from our mistakes. I hope that I won’t make the same mistake again!
GuideAnts AI Notebooks
Apart from noting the unverified nature of the enterprise app, none of the above is criticism of the GuideAnts app (an AI-powered notebook). The app’s author is Doug Ware, an ex-MVP, who publishes some interesting AI-related content on Elumenotion.com. The app is currently in preview. You can read more about GuideAnts here and decide if you want its enterprise app to exist in your tenant. Use invite code 22VG6Y if you want to join the preview.
Learn how to exploit the data available to Microsoft 365 tenant administrators through the Office 365 for IT Pros eBook. We love figuring out how things work.
Updating the Entra ID Password Protection Policy with the Microsoft Graph PowerShell SDK
Use SDK Cmdlets to Create or Update Password Protection Policy Settings
A reader asks if the script written for the article about updating the Entra ID banned password list can be used to update other settings in the Entra ID password protection policy. The answer is “of course.” The code is PowerShell, and it can be adapted to update any of the password protection settings found in the Entra admin center (Figure 1).

A few considerations must be remembered when updating the Entra ID password protection policy:
- You don’t need additional licenses to use the default password protection policy. If you create a custom policy by updating settings, user accounts must be licensed with Entra P1 or P2.
- Custom password policy settings are immediately effective across the entire tenant. You can’t assign a custom password policy to specific users or groups.
- In a hybrid environment, password protection can extend to Active Directory.
Creating a Password Protection Policy
The underlying concepts for creating a custom password policy are similar to the management of other Entra ID policies (like the Microsoft 365 groups policy):
Check if a custom policy exists, or rather, a directory setting object created using the directory setting template for password rules. The template always has the identifier 5cf42378-d67d-4f36-ba46-e8b86229381d, so we can check if a custom password protection policy exists as follows:
$Policy = (Get-MgBetaDirectorySetting | Where-Object {$_.TemplateId -eq "5cf42378-d67d-4f36-ba46-e8b86229381d"})

A client-side filter is used because the Graph API does not support server-side filtering against template identifiers.
If a password policy object is not available, you can create one. The values for the policy settings are in a hash table containing an array of values. Each value (a setting) is a hash table consisting of the setting name and its value. For example, this code creates the hash table to hold the setting for lockout duration:
$Value5 = @{}
$Value5.Add("Name", "LockoutDurationInSeconds")
$Value5.Add("Value", $LockoutDuration -as [int32])
After populating values for all settings (or just the ones that are different from the default), run the New-MgBetaDirectorySetting cmdlet to create the new custom password policy:
$NewBannedListParameters = @{}
$NewBannedListParameters.Add("templateId", "5cf42378-d67d-4f36-ba46-e8b86229381d")
$NewBannedListParameters.Add("values", ($Value1, $Value2, $Value3, $Value4, $Value5, $Value6))
$Policy = New-MgBetaDirectorySetting -BodyParameter $NewBannedListParameters -ErrorAction Stop
Updating the Password Protection Policy
If a custom policy already exists, fetch the policy settings, update the value for the settings that you want to change, and use the Update-MgBetaDirectorySetting cmdlet to update the policy. This example changes the lockout duration to 120 seconds (the default is 60 seconds):
[array]$PolicyValues = Get-MgBetaDirectorySetting -DirectorySettingId $Policy.Id | Select-Object -ExpandProperty Values
($PolicyValues | Where-Object {$_.Name -eq "LockoutDurationInSeconds"}).Value = 120
Update-MgBetaDirectorySetting -DirectorySettingId $Policy.id -Values $PolicyValues -ErrorAction Stop
The code for these operations is the same as used in the script to update the banned passwords list. Grab what you need from that script and repurpose it to do whatever you need to. For instance, some organizations like to validate that the password policy settings in the tenants that they manage are consistent and up to date. This is easily done on a periodic basis by creating a PowerShell runbook in Azure Automation. I imagine that checking the password policy would only be one of the Entra ID configuration checks that such a runbook would process. At least, that’s how I would do it.
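As a minimal sketch of such a check, using the $Policy object fetched earlier and a hypothetical set of expected values:

# Hypothetical expected values for the password protection policy settings
$Expected = @{ 'LockoutDurationInSeconds' = '120'; 'LockoutThreshold' = '10' }
[array]$PolicyValues = Get-MgBetaDirectorySetting -DirectorySettingId $Policy.Id | Select-Object -ExpandProperty Values
ForEach ($Name in $Expected.Keys) {
    $Current = ($PolicyValues | Where-Object {$_.Name -eq $Name}).Value
    If ($Current -ne $Expected[$Name]) {
        Write-Output ("Setting {0} is {1} but the expected value is {2}" -f $Name, $Current, $Expected[$Name])
    }
}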
Next Step – Testing Configurations
The Maester utility includes some checks against the password policy and it would be easy to expand test coverage to whatever aspect of the password policy you consider needs to be checked. Once you’ve mastered programmatic manipulation of the Entra ID password protection policy settings, anything is possible.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
Important Change Coming for Entra ID Passkeys in November 2025
Passkey Settings Behavior Change After Introduction of New Passkey Profiles
If your focus is on Entra ID or security, you probably agree with the statement that passkeys are the future for authentication. Or at least, the immediate next step. Who knows what might happen after passkeys are fully deployed? After all, it wasn’t so long ago that people congratulated themselves for using SMS messages for multifactor authentication.
In any case, message center notification MC1097225 (first published 17 June 2025, updated 20 October 2025) marks an important point in the evolution of passkey support within Entra ID. Where today Entra ID supports tenant-wide controls for passkeys as an authentication method, from November 2025 (December 2025 for government clouds), the preview Entra ID feature will support up to ten passkey profiles per tenant. The intention behind the change is to allow tenants to exert more granular control over which users can use what passkeys for authentication.
Granular control is usually goodness, and there’s goodness in this change. You’ll be able to create a passkey profile for departments or other groups and dictate what kind of passkeys the users within the scope of the profile can use.
Passkey Authenticator Attestation
A potential downside exists that should be understood before rushing to embrace the change. When a tenant opts in to use the new approach, Entra ID switches to a new schema to describe what passkey policies are. Logically enough, the existing passkey settings become the default passkey policy, and if the setting to enforce attestation is disabled, Entra ID will become less strict about the kind of passkeys it accepts as an authentication method.
Passkeys have an Authenticator Attestation GUID (AAGUID), a 128-bit identifier to identify the make and model. In enterprise environments, it is common practice to decide on a set of passkeys or FIDO2 keys that the tenant wishes to support. This decision is enforced by specifying the AAGUIDs in the passkey settings.
But as part of the change to the new passkey schema, Microsoft says that “if Enforce attestation is disabled (in a policy), we (Entra ID) will start accepting security key or passkey providers using the following attestation statements:
- “none”
- “tpm”
- “packed” (AttCA type only)
- Custom attestation formats ≤ 32 characters
This will allow a wider range of security keys and passkey providers to be accepted for registration and authentication in Microsoft Entra ID.”
That doesn’t sound too serious, but it does mean that if your current passkey settings do not enforce attestation (Figure 1), anyone covered by the default policy created when the switchover happens will be able to choose whatever passkey type they like.

A Passkey Setting Worth Checking
Some tenants might not care very much about the non-enforcement of attestation. Others will care deeply because of the work they’ve done previously to figure out what kind of passkeys should be used within the tenant. In either case, it’s worthwhile considering the topic and deciding if attestation should be enforced.
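One way to check the current state is to query the FIDO2 authentication method configuration with the Microsoft Graph PowerShell SDK (a read-only sketch; the isAttestationEnforced value reflects the tenant-wide setting until the new profile schema arrives):

Connect-MgGraph -Scopes 'Policy.Read.All'
$Fido2 = Get-MgPolicyAuthenticationMethodPolicyAuthenticationMethodConfiguration -AuthenticationMethodConfigurationId 'Fido2'
$Fido2.AdditionalProperties['isAttestationEnforced']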
Microsoft says that there’s no administrator action necessary for the change. It will be deployed automatically to tenants, and you might not realize that anything has happened if you don’t have the need to review authentication methods.
APIs Not Ready for Change
MC1097225 contains an important note: “If you continue using Graph API or third-party tools to modify the policy, the schema will not change until General Availability.” Remember, what comes in November is a preview and it takes time for APIs to catch up with change. Customers who have built tools to manage authentication methods can continue to use those methods until general availability happens, which will probably be in early to mid-2026 (my guess). When that happens, I guess I’ll revisit my password and authentication methods report script.
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
Automating Microsoft 365 with PowerShell November 2025 Update
Updated PDF and EPUB Files Available for Automating Microsoft 365 with PowerShell eBook
The Office 365 for IT Pros team is happy to announce the availability of version 17 of the Automating Microsoft 365 with PowerShell eBook. Updated PDF and EPUB files are available for download from Gumroad.com by subscribers of the Office 365 for IT Pros eBook and by those who bought the PowerShell book separately. Remember, when you subscribe to these books, you’re entitled to receive any updates we release for the edition.
We’re still working on the November 2025 update of the main Office 365 for IT Pros eBook and anticipate that it will be ready for subscribers to download on November 1, 2025.
Final Retirement of AzureAD and AzureADPreview Modules
This month marks the final retirement of the AzureAD and AzureADPreview modules. Microsoft made the original announcement about the retirement of these and the MSOL module on August 26, 2021. Fifty months and multiple postponements later, Microsoft has finally managed to cajole, persuade, and force customers to dump the old modules and embrace the Graph. At least, Microsoft wants customers to replace old code with cmdlets from the Microsoft Graph PowerShell SDK or the Microsoft Entra module. Naturally, Automating Microsoft 365 with PowerShell is absolutely the best text to consult for anyone who needs to upgrade old scripts. The worked-out code examples are of great help when figuring out cmdlet syntax.
The Entra module is based on the Microsoft Graph PowerShell SDK. It features cmdlets to work with Entra objects like users, groups, and devices, with aliases to make the cmdlets work like their AzureAD equivalents, where equivalents exist. I don’t recommend using the Entra module because I think it’s better that administrators and developers understand how to use the full Graph.
Paperbacks at TEC
The TEC 2025 conference was at the start of October. During the event (enjoyable as always), I ordered some copies of the paperback version of Automating Microsoft 365 with PowerShell for delivery to the hotel (Figure 1).

After looking at the Word and PDF versions of the book for months, I wanted to see how the content looked after going through Amazon’s print-on-demand process to verify that people who buy the paperback will be happy. I think they will because the quality surpassed my expectations. It’s definitely not in the same class as the production quality seen in books like the Microsoft Press Inside Out series, but the book is perfectly acceptable.
Point Updates
Those who pay close attention (or who have time to spare) might notice that point releases appear for Automating Microsoft 365 with PowerShell. For instance, the current release is version 17.2, two point releases from version 17.0. Last month, we issued 16.0 through 16.4.
We issue point releases when we correct minor errors or add some material that’s important and we want readers to benefit from without waiting for a monthly update. Minor errors include grammatical and spelling errors, like an annoying “Get-MgServicePrincipall” discovered in V17.0. Code errors like an incorrect parameter also justify a point release, as does the inclusion of a new example. There’s no point in using electronic publishing if you can’t take advantage of the mechanism to improve the quality and content of the book on an ongoing basis.
Our release cadence poses problems for the paperback version because we obviously can’t update printed books. The books I had delivered to TEC 2025 were version 16.0 and the text printed on those pages will always remain the same. Such is the downside of committing words to print instead of an electronic medium.
Sharing Knowledge
We continue to add content to Automating Microsoft 365 with PowerShell. It’s become my go-to notebook to capture experiences, hints, and insights acquired by working with different Graph APIs and SDK cmdlets. It’s been quite a journey so far and I anticipate that there’s much more to come. Stay tuned.
New Audio-Only Recording Option for Teams Meetings
Audio-Only Recording to Protect User Privacy During Recording Playback
In 2023, mesh avatars were the focus for helping people who didn’t like to appear with their video turned on in Teams meetings. To some, it seemed utterly cool to be able to hand over their visual online presence to an avatar that they created with care to be broadly similar to their real self. Avatars are dumb (your voice remains your voice), but they can express some visual reactions to what happened during meetings.
Earlier this year, Microsoft released the ability to create a mesh avatar from a photo in an attempt to make the avatars more realistic. Figure 1 shows the avatar I created from my photo. My efforts didn’t create a very realistic digital presence.

The Avatars for Teams service plan is included with many Microsoft 365 and Office 365 products, so most of the Teams installed base of 320 million monthly active users can use avatars. According to the Teams Avatar app, 83.8K people installed the app to create or update their avatars in the last month, so interest remains in having a way to attend meetings in a visual sense without projecting our real selves, flawed and imperfect as they might be.
Audio-Only Recording for Teams Meetings
Which brings us neatly to the news announced in message center notification MC1173926 (16 October 2025) that the Teams meeting recording feature will soon be able to create an audio-only recording. Deployment of the feature to make it generally available has started and should be complete in late November 2025.
What’s interesting is that Microsoft says that making an audio-only recording for a meeting offers “a more comfortable and convenient recording experience.” Microsoft goes on to note that audio-only recording “alleviates concerns about facial information exposure when recording is necessary, offering a more privacy-conscious approach to recording meetings.”
I thought avatars were all about making the visual side of meetings more comfortable for users. However, it’s important to remember that using avatars is a personal choice to customize the video feed for people who opt to use avatars. Audio-only recording is a meeting option to suppress the video feed that flows into the meeting recording for all users, no matter whether they use Teams desktop, browser, or mobile clients. Participants can have their cameras turned on during meetings, but only the audio feeds will make it into the .MP4 file created in the meeting organizer’s OneDrive for Business account.
Suppressing the video feed for the recording means that anyone who plays the recording afterwards cannot see how the participants appeared during the meeting, including if any avatars are used. All the playback can deliver is the audio stream. This is what Microsoft means when they refer to a more privacy-conscious approach. It seems reasonable to say that if you’re not in a meeting, privacy of the participants is better respected if you cannot see how people appeared during the meeting.
The ability to generate a meeting transcript depends on the audio feed, so suppressing the video feed has no impact on the transcript.
No Administrative Controls
There doesn’t seem to be any administrative control in the Teams meeting policy for an organization to decide that audio-only recording is the default or only option for Teams meetings. Microsoft says that administrator intervention is unnecessary because audio-only recording “integrates into existing workflows.” In other words, “Audio only” is an option in a drop-down menu for an organizer to decide what to record for a meeting (Figure 2).

See the Microsoft documentation for more information about recordings for Teams meetings.
Little Value in the Video Stream in Recordings
It took me a little while to work out why Microsoft wanted to introduce audio-only recordings for Teams meetings. After thinking things through, I think this is a good idea. Few of us really want our visual appearance to be replayed in recordings, and it’s uncertain if the video stream adds much value to those who listen to recordings after an event. The transcript is a much more valuable artifact, especially if Microsoft 365 Copilot can reason over it to produce a summary and action items.
Learn about managing Teams and the rest of Microsoft 365 by subscribing to the Office 365 for IT Pros eBook. Use our experience to understand what’s important and how best to protect your tenant.
Outlook Gets AI Drafting of Meeting Agendas
Agenda Auto-Draft Available for OWA and the New Outlook
Microsoft is doing its level best to convince Microsoft 365 tenants to invest in Copilot. Given the massive capital investment in datacenters to power AI experiences, it’s unsurprising that engineering groups are busy infusing Copilot features into as many applications as possible. Features like Copilot memory add value and help dissuade tenants from investigating other options, such as the ChatGPT Connector for SharePoint Online and OneDrive for Business.
Of course, a SharePoint connector is limited when compared to the breadth of integration of Copilot across the Microsoft 365 apps. Because Copilot works well for some and not for others, work continues apace to find new ways to integrate AI in daily tasks. This brings me to message center notification MC1171854 (13 Oct 2025), which describes “Intelligent agenda suggestions for calendar events.” The feature is available now, but only to users with Microsoft 365 Copilot licenses.
Agenda Auto-Draft Uses AI to Generate Some Bullet Points
At first glance, I didn’t see much to get excited about. The description says that AI is used “to automatically generate a proposed agenda when users create or edit a calendar event, making it easier to align meeting goals, participants, and discussion topics.” I’ve never had any problems coming up with a few salient points for a draft meeting agenda, and agendas have a nasty habit of changing as soon as meetings start. However, I can see the value of being able to create some bullet points to frame an agenda.
What happens is that Microsoft has updated the calendar scheduling form to add an “auto draft an agenda” option to the set of prompts available when the Draft with Copilot button is used. When the auto draft option is used, Copilot uses the meeting subject to generate an agenda composed of some introductory text and some bullet points. Copilot has always been good at generating bullet points in document and message summaries!
In Figure 1, the meeting subject is Review Chapter Updates for Office 365 for IT Pros. Copilot’s suggested agenda items seem reasonable, and it looks as if Copilot discovered that Office 365 for IT Pros is an eBook from information found internally or on the web (Bing search).

If the meeting organizer doesn’t like the draft agenda, they can simply instruct Copilot to retry or adjust the text by making the agenda longer or shorter. The changes proposed in further versions are not dramatic, likely due to using the meeting subject as the core input to the AI processing.
Eventually, the suggested text is accepted or rejected. If accepted, it can be further edited before the meeting notice is sent.
Now Available Worldwide
Auto-draft of meeting agendas is now a default feature that is enabled in OWA and the new Outlook. According to Microsoft, the feature was enabled worldwide from October 9, 2025.
There’s no administrative control to enable or disable auto-draft for meeting agendas. Given the dramatic difference between the scheduling interface of Outlook classic and that of the new Outlook, it’s unlikely that auto-draft of agendas will find its way into the older client.
New Feature that Won’t Move the Needle
Agenda auto-draft won’t move the needle at all when the time comes for Microsoft 365 tenants to decide whether to embrace Microsoft 365 Copilot. It’s a feature that will please some people (those who schedule meetings and discover how to use agenda auto-draft). For most, I suspect that this is one of the Copilot features that will pass them by because they never need to create an agenda. But that’s always true for new software features.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Using the Secret Management PowerShell Module with Azure Key Vault and Azure Automation
Use Secret Management to Store and Manage Secrets Needed by Azure Automation Runbooks
Storing hard-coded account credentials in PowerShell scripts is a big security no-no. Previously, I’ve discussed using Azure Key Vault to store passwords and other credentials that might be needed by PowerShell scripts or Azure Automation runbooks. Another method that I’ve used with runbooks is to store the credentials as an automation account resource, most recently when using the Connect-IPPSSession cmdlet to update a Microsoft 365 retention policy. Unhappily, the security and compliance cmdlets don’t currently support managed identities.
Although storing credential objects is the easiest way to make credentials available to an automation account, I would prefer to avoid duplication by using Azure Key Vault everywhere. This decision brought me to the Secret Management PowerShell module. Essentially, the module supports an easy-to-use connection to Azure Key Vault (and other repositories) to access secrets stored in the vault, like usernames and passwords.
Installing the Secret Management Module
First, install the necessary module from the PowerShell gallery.
Install-Module Microsoft.PowerShell.SecretManagement -Repository PSGallery -Force -Scope AllUsers
Remember to make the module available in any Azure Automation runtime environments where runbooks will use the module to fetch secrets. The module only supports PowerShell core, so I tested runbooks with a custom PowerShell 7.4 runtime environment.
Registering Azure Key Vault with Secret Management
Before you can fetch any secrets from a vault, you must register the vault for the current session. The Secret Management module supports access to Azure Key Vault through one of its default extensions, but first a connection is needed to an Azure account that’s linked to a subscription. Interactively, you’d do something like this:
Connect-AzAccount -Subscription 25429342-a1a5-4427-9e2d-551840f2ad25
In an Azure automation runbook, you can use a managed identity:
Connect-AzAccount -Identity
In this case, the secrets I need to use are stored in an Azure Key Vault called “Office365ITPros.” To access Azure Key Vault, the signed-in account must have permission to the target vault granted via a legacy access policy or an appropriate Azure RBAC role. This requirement also applies to the automation account used to execute runbooks, where permission is granted to the automation account’s service principal.
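For the RBAC model, a role assignment like this grants read access to secrets (the object identifier for the managed identity is a hypothetical placeholder):

# Grant the automation account's managed identity read access to vault secrets
New-AzRoleAssignment -ObjectId '00000000-0000-0000-0000-000000000000' -RoleDefinitionName 'Key Vault Secrets User' -Scope (Get-AzKeyVault -VaultName 'Office365ITPros').ResourceId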
With the necessary access, I can use the Register-SecretVault cmdlet to connect to Azure Key Vault for the current session as follows. The call to the Get-SecretVault cmdlet is to confirm that the registration worked.
$parameters = @{
    Name = 'Azure'
    ModuleName = 'Az.KeyVault'
    VaultParameters = @{
        AZKVaultName = 'Office365ITPros'
        SubscriptionId = (Get-AzContext).Subscription.Id
    }
    DefaultVault = $true
}
Register-SecretVault @parameters
Get-SecretVault
Name  ModuleName  IsDefaultVault
----  ----------  --------------
Azure Az.KeyVault True
Fetching and Using Secrets in an Azure Automation Runbook
Once a vault is properly registered, the Get-Secret cmdlet can fetch secrets from the target vault. We need to combine the secrets holding the username and password for an Exchange Online administrator account into a credentials object. The object can then be used with the Connect-ExchangeOnline and Connect-IPPSSession cmdlets to connect to Exchange Online and the compliance endpoint before running the cmdlets necessary to complete whatever task is required.
This example shows how to list the sensitivity labels defined in the tenant after making all the necessary connections and registrations. The full code is listed below. Figure 1 shows the output from the Azure automation test pane.
# Sign in with the automation account's managed identity
Connect-AzAccount -Identity

# Register the Azure Key Vault as the default secret vault for this session
$parameters = @{
    Name = 'Azure'
    ModuleName = 'Az.KeyVault'
    VaultParameters = @{
        AZKVaultName = 'Office365ITPros'
        SubscriptionId = (Get-AzContext).Subscription.Id
    }
    DefaultVault = $true
}
Register-SecretVault @parameters
Get-SecretVault

# Fetch the username (plain text) and password (secure string) secrets
$UserName = Get-Secret -Name ExoAccountName -AsPlainText -Vault Azure
$Password = Get-Secret -Name ExoAccountPassword -Vault Azure
$Credentials = New-Object 'Management.Automation.PsCredential' $UserName, $Password

# Connect to Exchange Online and the compliance endpoint, then run the task
Connect-ExchangeOnline -Credential $Credentials -DisableWAM
Connect-IPPSSession -Credential $Credentials
Get-Label | Format-Table ImmutableId, DisplayName

Secret Management is an Alternative to Credential Resources
There’s no doubt that storing credential objects as Azure Automation resources is the easiest way to manage credentials used with runbooks. However, the credential objects are associated with individual automation accounts and not shared elsewhere. Putting credentials in Azure Key Vault and accessing those credentials using the Secret Management module isn’t much harder, and those credentials are available to any user or service principal that’s allowed access to the key vault. You pay your money and make your choice…
Need help to write and manage PowerShell scripts for Microsoft 365, including Azure Automation runbooks? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.
The My Sign-Ins Portal, Applications, and Conditional Access
Making Conditional Access and the My Sign-Ins Portal Work Better
A couple of weeks ago, I attended a keynote at the TEC 2025 conference where Alex Simons, Microsoft Corporate VP for Entra, discussed the investments Entra is making to develop agents to help tenant administrators to work smarter. There’s a cost to these agents in the form of Entra premium licenses and the security compute units required to run the agents. Microsoft’s bet is that they can deliver sufficient value to customers through agents to take the cost question off the table. Time will tell.
The Conditional Access optimization agent is one of the agents Microsoft has available in preview. I think these agents can do more and have said so both in print and in person. At this point, the conditional access agent seems more practical and likely to have an impact simply because it’s so easy to screw up conditional access policies.
Which brings me to a LinkedIn post by David Nündel reporting that Microsoft has exposed several additional first-party applications in the Entra admin center. There’s nothing really surprising here because Microsoft 365 and Entra ID are constructed from many multitenant applications. Instances of these applications exist in customer tenants (or rather, service principals for the applications) that can then be used in different aspects of tenant management.
Applications and the My Sign-Ins Portal
What is surprising and useful is that the newly-exposed applications relate to the My Sign-ins portal where users can perform actions such as changing their password, removing themselves as guest accounts from other Microsoft 365 tenants, and viewing recent sign-in activity (Figure 1).

The point is that the My Sign-ins portal relies on access to several applications to display the information revealed by the various menu options. If access to the applications is blocked by something like a conditional access policy, then the portal cannot function. And as it so happens, the newly revealed applications are those that are needed by the My Sign-Ins portal. Six applications are in the set with the following display names and application identifiers:
- My Signins: 19db86c3-b2b9-44cc-b339-36da233a3be2
- My Profile: 8c59ead7-d703-4a27-9e55-c96a0054c8d2
- Microsoft App Access Panel: 0000000c-0000-0000-c000-000000000000
- AADReporting: 1b912ec3-a9dd-4c4d-a53e-76aa7adb28d7
- Windows Azure Active Directory: 00000002-0000-0000-c000-000000000000
- Azure Credential Configuration Endpoint Service: ea890292-c8c8-4433-b5ea-b09d0668e1a6
Checking Service Principals for the My Sign-Ins Portal Applications
Service principals for most or maybe all of these applications are likely already present in your tenant. When I checked using the Microsoft Graph PowerShell SDK command shown below, only the My SignIns application was missing:
Get-MgServicePrincipal -Filter "displayName eq 'Azure Credential Configuration Endpoint Service' or displayName eq 'Windows Azure Active Directory' or displayName eq 'AADReporting' or displayName eq 'Microsoft App Access Panel' or displayName eq 'My Profile' or displayName eq 'My SignIns'" | Format-Table DisplayName, Id, AppId

DisplayName                                     Id                                   AppId
-----------                                     --                                   -----
My Profile                                      1f1f813e-0778-4b5b-a379-a924c97e023f 8c59ead7-d703-4a27-9e55-c96a0054c8d2
AADReporting                                    31bd9b44-bc6b-42df-9be6-3030109b84a5 1b912ec3-a9dd-4c4d-a53e-76aa7adb28d7
Microsoft App Access Panel                      10334c63-ac46-4b2a-a80a-dc9c62e34dd8 0000000c-0000-0000-c000-000000000000
Windows Azure Active Directory                  2be71509-6ab9-44d7-bfd8-eff4e50bfc7c 00000002-0000-0000-c000-000000000000
Azure Credential Configuration Endpoint Service 6d1fdc7c-f64b-4aeb-9133-5246b467035c ea890292-c8c8-4433-b5ea-b09d0668e1a6
The problem was easily fixed by running the New-MgServicePrincipal cmdlet:
New-MgServicePrincipal -AppId 19db86c3-b2b9-44cc-b339-36da233a3be2

DisplayName Id                                   AppId                                SignInAudience      ServicePrincipalType
----------- --                                   -----                                --------------      --------------------
My Signins  a7cda215-2932-4042-8e3e-631ecf7ae23b 19db86c3-b2b9-44cc-b339-36da233a3be2 AzureADMultipleOrgs Application
The command to create a service principal from an application identifier works because the My SignIns application is a multitenant application owned by Microsoft. We can prove this by using the tenant relationship API to check the value of the identifier for the owning tenant. Using the Find-MgTenantRelationshipTenantInformationByTenantId cmdlet requires the Graph CrossTenantInformation.ReadBasic.All permission:
$AppTenantOwner = (Get-MgServicePrincipal -ServicePrincipalId a7cda215-2932-4042-8e3e-631ecf7ae23b).AppOwnerOrganizationId
$TenantInfo = Find-MgTenantRelationshipTenantInformationByTenantId -TenantId $AppTenantOwner
Write-Host ("The tenant name is {0} and its default domain is {1}" -f $TenantInfo.DisplayName, $TenantInfo.DefaultDomainName)
The tenant name is Microsoft Services and its default domain is sharepoint.com
No Point in Repeating What’s Already Available
With all the applications in place, you can use them in conditional access policies. I don’t like repeating information that’s already online, and I hate seeing many different descriptions of a new feature published by people who haven’t bothered to add any personal insight or knowledge to help others understand the technology better.
With that point in mind, you can read about how these applications could be used in a description of configuring conditional access for guest users by MVP Kenneth Van Surksum. Kenneth adds a few more applications to the “must exclude from blocking” list, so it’s important that you read the article. Excluding applications in conditional access policies simply allows users to access applications that they need to do their jobs, or to make functionality work, like the exclusion required by Outlook to handle sensitivity labels.
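To illustrate the idea, here’s a rough sketch (not a production policy) of a conditional access policy that blocks guest access to all apps while excluding some of the My Sign-Ins applications, created with the Microsoft Graph PowerShell SDK. The display name and conditions are illustrative; adjust everything to your own requirements and test in report-only mode first:

# Build the policy body: block guests everywhere except the listed apps
$PolicyBody = @{
    displayName = "Block guests except My Sign-Ins apps (sketch)"
    state       = "enabledForReportingButNotEnforced"   # report-only while testing
    conditions  = @{
        users        = @{ includeUsers = @("GuestsOrExternalUsers") }
        applications = @{
            includeApplications = @("All")
            excludeApplications = @(
                "19db86c3-b2b9-44cc-b339-36da233a3be2", # My Signins
                "8c59ead7-d703-4a27-9e55-c96a0054c8d2", # My Profile
                "0000000c-0000-0000-c000-000000000000"  # Microsoft App Access Panel
            )
        }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
New-MgIdentityConditionalAccessPolicy -BodyParameter $PolicyBody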
Now all I want to know is whether the Entra conditional access optimization agent is ready to optimize for this condition. I suspect not, because it’s clear that first generation agents solve immediate issues (like stopping people from locking themselves out) rather than delivering great insight into more subtle policy details.
Changing the Offline Access Period for Sensitivity Labels
Offline Access Lets Clients Like Outlook Work with Protected Content
The use of Microsoft Purview sensitivity labels to protect confidential files and messages seems to be more widespread. Although Microsoft doesn’t publish data to say how many Microsoft 365 tenants use sensitivity labels or the percentage of files stored in SharePoint Online and OneDrive for Business that are protected by sensitivity labels, my guess is that use has grown steadily over the last few years. Certainly, Microsoft is encouraging the use of sensitivity labels by extending their use in different places: for example, implementing dynamic watermarking, preventing Microsoft 365 Copilot from using content from documents with specific sensitivity labels in AI-generated responses, and removing the requirement to pay to use the Graph API to assign sensitivity labels programmatically. These are all good signs that the sensitivity label framework is developing and building out nicely.
Offline Access to Protected Content
Protecting files with encryption applied by assigning a sensitivity label is a core piece of functionality. Encryption is managed by the Azure Rights Management service, which controls the interpretation and enforcement of the access rights assigned to users through sensitivity label settings.
When an authenticated user attempts to access a protected item, they obtain a use license from the Azure Rights Management service. The use license is a certificate containing the access rights for the item (like whether the user can print the item), the encryption key used to encrypt the content, and the date, if any, when access expires. Importantly, the validity of the use license is limited.
If access to the item is not date-limited, the service issues a use license with a validity period based on the offline access setting contained in the sensitivity label (by default, 30 days). The validity period controls when the user must next authenticate to continue to have access to the item. In practical terms, during the validity period, the existence of the use license means that the user doesn’t need to prove their right to access the content. This is the basis for offline access to protected content by clients such as Outlook. The use license is available on the workstation and can be used to access the protected item even when a network connection is unavailable.
Once the validity period expires, the user is prompted to reauthenticate. During the reauthentication process, the service checks the label settings and evaluates group membership (if used to grant access rights) to establish precisely what rights the user has to the item before it issues a new use license.
Setting the Access Period for a Sensitivity Label
You can restrict the maximum period for offline access on a per-label or tenant-wide basis. To change the validity period for a label, edit the Allow offline access setting (Figure 1) and select the number of days for offline access. Always means that the label uses the maximum validity period for the tenant. Never means that items protected by the label cannot be accessed offline.

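If you prefer PowerShell, the same setting appears to be exposed through the EncryptionOfflineAccessDays parameter of the Set-Label cmdlet in Security & Compliance PowerShell. Treat this as a sketch and verify against your own label configuration (the label name is illustrative):

# Connect to the compliance endpoint, then set offline access to 7 days
# for a label called 'Confidential' (the label name is illustrative)
Connect-IPPSSession
Set-Label -Identity 'Confidential' -EncryptionOfflineAccessDays 7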
Changing the Maximum Validity Period for a Tenant
A sensitivity label cannot have a longer offline access period than the tenant maximum validity period. While 30 days is a good balance between frequent user reauthorization and maintaining security for offline content, some believe that a shorter period is better because it limits the ability of people who leave the organization to access sensitive information. A use license is bound to the device where access occurred, so to continue to have access to the protected content, the person who left must have access to the device.
In any case, a tenant administrator can change the validity period setting for the tenant with PowerShell using the Set-AipServiceMaxUseLicenseValidityTime cmdlet from the AIPService module. The AIPService module only supports Windows PowerShell (5.1). Don’t bother trying to run it on PowerShell 7. Here’s an example of setting the period to 14 days:
Import-Module AIPService
Connect-AipService
Set-AipServiceMaxUseLicenseValidityTime 14

WARNING: The MaxUseLicenseValidityTime will be updated by this operation.

Confirm
Are you sure you want to perform this action?
Performing the operation "Set-AipServiceMaxUseLicenseValidityTime" on target "current organization".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y
The MaxUseLicenseValidityTime for the Azure Information Protection service has been successfully set to 14.

Get-AipServiceMaxUseLicenseValidityTime

14
The adjusted validity period only applies to newly-issued use licenses. The new value can be anything from 0 to 65535 days (which should be enough for anyone).
Test Before Deployment
As always, it’s best to make changes to settings like the maximum validity period in a test tenant to assess if the change breaks anything. I don’t think it will, but it’s always best to test, assess, and then deploy.
Learn about managing sensitivity labels and the rest of Microsoft Purview Information Protection by subscribing to the Office 365 for IT Pros eBook. Use our experience to understand what’s important and how best to protect your tenant.
ChatGPT Enterprise Connects to SharePoint Online
SharePoint Connector Throws Down the Gauntlet to Microsoft 365 Copilot

An October 8 LinkedIn post announced that OpenAI business customers can “centrally deploy SharePoint for their entire workspace.” The move throws down the gauntlet to Microsoft 365 Copilot by delivering the same kind of ability to reason over files stored in SharePoint Online and OneDrive for Business. While Microsoft 365 Copilot boasts more points of integration with Microsoft 365 apps, including SharePoint agents, the new Knowledge agent (in preview), and the ability to consume SharePoint content in custom agents built with Copilot Studio, I don’t think anyone in Microsoft will be happy to see OpenAI offer customers the opportunity to fully exploit the information stored in SharePoint Online.
Given that Microsoft 365 Copilot uses the OpenAI models, including GPT-5, it’s hard to know why companies opt for ChatGPT Enterprise, especially if those companies use SharePoint Online (which implies that they use Microsoft 365). List prices for the two offerings are comparable, but Microsoft 365 Copilot delivers more integrated functionality.
OpenAI and SharePoint Online
OpenAI has long offered the ability for individual users to connect to OneDrive for Business accounts and SharePoint Online sites. Access is granted through OAuth authentication against Entra ID and is limited to the information accessible to the user, just like any other app that uses the Graph API to interact with SharePoint Online and OneDrive for Business. Because the OpenAI connector is an app, the app can be blocked to prevent users from being able to upload information to OpenAI.
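One hedged way to block the app is to disable its service principal with the Microsoft Graph PowerShell SDK, assuming you can locate the enterprise app for the connector in your tenant. The display name used here is a guess; check the actual app name in the Entra admin center before running anything like this:

# Find the service principal for the ChatGPT connector app (name is a guess)
$Sp = Get-MgServicePrincipal -Filter "displayName eq 'ChatGPT'"
# Disabling the service principal blocks sign-ins through the app
Update-MgServicePrincipal -ServicePrincipalId $Sp.Id -AccountEnabled:$false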
The description of the ChatGPT SharePoint connector says: “The admin-managed sync connector lets an administrator authenticate once and deploy across the entire organization. Users don’t need to set up anything themselves—it just works.” To configure the connector, administrators must be both a SharePoint Online (or tenant) administrator and a ChatGPT administrator. During the configuration, the administrator can choose to synchronize all files or scope the connector to specific sites and folders, with the synchronized copies appearing in ChatGPT as “admin-managed” files. According to OpenAI, new files or updates made to SharePoint files are available to ChatGPT within an hour.
Access to files is governed by “strict email domain matching between SharePoint and ChatGPT. A user’s SharePoint account must match their ChatGPT account email.” I guess this means that user principal names must match the email addresses used to create ChatGPT accounts for ChatGPT to allow access to synchronized files. Of course, Microsoft 365 does not insist that user principal names match a user’s primary SMTP address, so there’s some opportunity for mismatches here.
OpenAI notes that synchronized connectors are only available to customers based in the U.S. that enable data residency or international customers who don’t mind that their data is stored in the U.S. They note that “We don’t yet support in-region storage for non-US data residency configurations.”
The SharePoint Connector
Overall, it seems like the new version of the ChatGPT connector uses application permissions like Sites.Read.All and Files.Read.All to access SharePoint and OneDrive content and synchronize it to ChatGPT, while User.Read.All, Group.Read.All, and GroupMember.Read.All permissions are used for account matching. An example of an app using Graph permissions to read SharePoint is available here.
One thing that’s become painfully obvious since the introduction of Microsoft 365 Copilot is that Microsoft 365 tenants store some complete rubbish in SharePoint Online. Old files and misleading, inaccurate content sit alongside interesting and useful information, but Copilot can’t tell the difference between the two. Add in some sensitive and confidential information that should never appear in AI-generated output, and you can understand why Microsoft has struggled to make Copilot work for SharePoint in the real world (rather than carefully curated demos). Solutions like Restricted Content Discovery and the DLP Policy for Copilot allow organizations to hide content from Copilot or stop Copilot from using information in its responses. It’s taken time for these solutions to arrive, but things are much better now.
OpenAI has the advantage of learning from Microsoft’s toils. It seems like OpenAI uses scoping to restrict what SharePoint content ChatGPT can process, which is kind of like what Restricted Content Discovery does.
Why Use the OpenAI Connector?
Apart from avoiding having to buy Microsoft 365 Copilot licenses, I could never understand why Microsoft 365 tenants let people upload corporate information to ChatGPT for processing. The enterprise SharePoint connector is even worse in my eyes, even if OpenAI guarantees that the information loaded through the connector is never used to train its models.
The notion of synchronizing SharePoint files to ChatGPT so that people can use that content with ChatGPT seems a little crazy. As far as I can tell, OpenAI offers none of the compliance functionality that Microsoft has developed to protect and secure SharePoint Online. For instance, how does ChatGPT deal with files protected by sensitivity labels?
It seems like once the connector copies SharePoint Online sites to ChatGPT, a Microsoft 365 tenant runs some risk of losing control over information. It’s hard enough to persuade people to store important files in SharePoint Online rather than OneDrive for Business. Adding ChatGPT to the mix makes the task of managing corporate files even harder.
Insight like this doesn’t come easily. You’ve got to know the technology and understand how to look behind the scenes. Benefit from the knowledge and experience of the Office 365 for IT Pros team by subscribing to the best eBook covering Office 365 and the wider Microsoft 365 ecosystem.
Microsoft 365 Copilot Usage Report API General Availability
It’s Nice to be GA, but What Can You Do with the Copilot Usage Report API?
MC877369 first appeared in August 2024 to announce the availability of Microsoft 365 Copilot usage data through the Graph usage reports API (Microsoft 365 roadmap item 396562). The most recent update (6 Oct 2025) sets out a new timeline for general availability of the APIs, which is now expected to roll out in late October 2025 for worldwide completion in late November 2025. Microsoft doesn’t say why the latest delay occurred or why it’s taken so long to move the API from preview to GA.
Still at the Beta Endpoint
Although the Copilot usage report API is heading for general availability, it’s still only accessible through the beta endpoint. There’s nothing wrong with that, providing the API works. Normally, Microsoft Graph APIs accessible through the beta endpoint are under active development to solve performance or reliability problems, or to complete the features necessary to move to production (V1.0) status.
Using the Copilot Usage Report API
I first looked at the API in September 2024 and concluded that most value can be derived from the Copilot user activity detail API. Knowing what apps people use Copilot in is valuable information if you want to do things like:
- Know which departments Copilot is being used in and which need a little help to get going. By including user data from Entra ID with Copilot usage data, we can slice and dice the usage data to generate additional insights (Figure 1).

- Look for user accounts with expensive ($360/year) Microsoft 365 Copilot licenses and automatically remove underused licenses so that the licenses can be reallocated to people who might use them more. The folks who lose the Microsoft 365 Copilot licenses might be happy with the no-charge Microsoft Copilot chat capability. Or they might be the folks in the company who are using ChatGPT and other AI tools instead of Copilot.
- A variation on the theme is to integrate Microsoft 365 audit data with Copilot usage report data to drill down into what people are doing with Copilot. The intention once again is to weed out underused Microsoft 365 Copilot licenses so that others might be assigned those licenses.
- I have a script to create a composite picture of user activity across multiple workloads. It would be easy to add the Copilot usage data to the mix.
Example PowerShell scripts are available to demonstrate the principles explored in each scenario. The point is that usage data is interesting in its own right, but it becomes more powerful when combined with other easily-accessible Microsoft 365 data sources about user activity.
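As a starting point, a sketch like this fetches the Copilot user activity detail from the beta endpoint and imports it for processing. The report function name and period reflect my reading of the beta documentation, and the CSV column names are assumptions, so verify both before relying on this:

# Download the Copilot usage user detail report (CSV) for the last 90 days
Connect-MgGraph -Scopes Reports.Read.All
$Uri = "https://graph.microsoft.com/beta/reports/getMicrosoft365CopilotUsageUserDetail(period='D90')"
Invoke-MgGraphRequest -Uri $Uri -Method GET -OutputFilePath 'CopilotUsage.csv'
# Import and inspect the data (column names with spaces are an assumption)
$UsageData = Import-Csv 'CopilotUsage.csv'
$UsageData | Format-Table 'User Principal Name', 'Last Activity Date'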
Remember to allow full display of usernames and other information for the report data. If you don’t, the usage data is obfuscated (concealed) and you won’t be able to match it up with data from other Microsoft 365 sources.
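The concealed-names switch can be flipped in the Microsoft 365 admin center (Org settings > Reports) or with the Graph SDK; a quick sketch:

# Requires the ReportSettings.ReadWrite.All permission
Connect-MgGraph -Scopes ReportSettings.ReadWrite.All
# Turn off obfuscation so usage reports show real user names
Update-MgAdminReportSetting -BodyParameter @{ displayConcealedNames = $false }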
Other Usage Report APIs
Microsoft 365 supports a bunch of other usage reports APIs for different workloads. Not all workloads featured in the Microsoft 365 admin center are available through a Graph API (like Forms, Project, Visio, and Viva Learning). The same is true for some sub-reports (like Copilot agents). However, there’s enough data available to be able to build a good picture of how people use Microsoft 365 across the board.
The issue with reporting SharePoint URLs (first reported in September 2023) persists. Some security issue is apparently cramping Microsoft’s ability to include site URLs in the site activity report (powered by the getSharePointSiteUsageDetail API), which means that the usage data returned for a site looks like this:
Report Refresh Date      : 2025-10-07
Site Id                  : 66bbf297-2f09-43ec-ab94-9333deacf769
Site URL                 :
Owner Display Name       : Project Haycock Owners
Is Deleted               : False
Last Activity Date       : 2025-05-23
File Count               : 375
Active File Count        : 131
Page View Count          : 0
Visited Page Count       : 0
Storage Used (Byte)      : 110786012
Storage Allocated (Byte) : 27487790694400
Root Web Template        : Group
Owner Principal Name     : projecthaycock@office365itpros.com
Report Period            : 180
The Site Id can be used to find the website URL:
(Get-MgSite -SiteId '66bbf297-2f09-43ec-ab94-9333deacf769').WebUrl

https://office365itpros.sharepoint.com/sites/projecthaycock
It’s a mystery why Microsoft won’t or can’t fix this irritating issue. Just one of those cloud mysteries…
Exchange 2016 and 2019 End of Life and Some Interesting Exchange Online Developments
October 14 2025 is a Big Day for Exchange Server

On October 14, 2025, Exchange Server 2019 reaches its formal end of life. The same is true for Exchange Server 2016, which leaves Exchange Server Subscription Edition (SE) as the only supported version of on-premises Exchange. No cataclysmic event will happen on October 14, and servers will continue to work as before rather than burst into flames spontaneously, but the transition to subscription-based licensing is a big event in the 29-year history of Exchange Server (to date).
To be fair to Microsoft, they have made the technical aspect of the upgrade to Exchange SE very simple. Exchange SE is essentially the same as Exchange 2019 CU15 with some extra tweaks. I’ve only heard of minor difficulties during server upgrades. The biggest issue customers seem to have is understanding exactly what licenses they need to run SE (the same as Exchange 2019), especially in hybrid environments where the organization has Microsoft 365 licenses and there is a very small (but important) presence on-premises.
Moving to Cloud First Identity
Speaking of the on-premises presence, Microsoft released its Guide for Cloud-First Identity Management (guidance for IT architects) to lay out principles for transferring the Source of Authority (SOA) for user and group management from Active Directory to Entra ID in what Microsoft calls a “phased, low-risk migration path” to minimize the use of Active Directory.
There are many threads involved here. Organizations want to improve their security posture and remove a dependency on Active Directory that might be exploited by attackers. Moving as much as possible to Entra ID makes sense from an administration perspective too because better tools and APIs are available for that platform. Microsoft wants customers to move to Entra ID not only to improve security but also to enable a market for its Entra premium licenses and products, like ID Governance. Apps that depend on Active Directory for authentication are the usual blocker because these apps must be upgraded to authenticate with Entra ID, and sometimes there isn’t the knowledge or drive to do this work.
Microsoft can encourage the move to cloud-first identities by helping organizations to move system objects like users and groups to Entra ID. Exchange Server has a big influence over Active Directory. Exchange 2000 was the first enterprise application to exploit Active Directory (based in many ways on the Exchange 5.5 X.500 Directory Store) and the two have stayed in lockstep since. Moving mail-enabled recipients like accounts with mailboxes, contacts, public folders, and groups from on-premises to the cloud enables the removal of the last Exchange Server, unless one is needed to provide SMTP routing for apps and/or devices.
The HVE Conundrum
Speaking of SMTP routing, last year I wrote about Microsoft’s High Volume Email (HVE) and Email Communication Services (ECS) solutions. Both are based on Exchange Online. In an attempt to clarify the roles of the two products, Microsoft removed the limited ability to send email to external recipients from HVE and points customers who want to send large quantities of email outside their tenant to ECS.
HVE is still in preview, with general availability now scheduled for March 2026. Microsoft posted the latest update five months ago to say that support for basic authentication will persist in HVE until September 2028. The extension is indicative of the pressure from customers because of the issues involved in upgrading apps and devices to use modern (OAuth 2.0) authentication. I’m not sure that even the new date is feasible because I hear that many organizations have multi-function devices that use SMTP to send email via Exchange Online with zero chance of being upgraded. Will customers cave in and junk these devices, or will more pressure be put on Microsoft to extend the retirement date for basic authentication even further? We shall see.
Auto-Archiving for Exchange Online
On October 7, Microsoft announced auto-archiving for Exchange Online, due to be rolled out to commercial tenants later this month and into the government cloud in November 2025. Archiving has been around since Exchange 2010 and Exchange retention policies can configure a move to archive action for items after they reach a certain age (still an unsupported action for Microsoft 365 retention policies).
The new feature moves the oldest items from user mailboxes to their archive mailboxes when mailboxes become 90% full. For most Exchange Online mailboxes, that point is reached when mailboxes use 90 GB of the normal 100 GB quota. The idea is that “threshold-based” archiving is more proactive and effective than when the Managed Folder Assistant only moves items based on date. It seems like a good idea and I’m looking forward to seeing it in action (not that my mailbox is close to 90 GB).
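Since items can only move to an archive that exists, it seems sensible to check which mailboxes don’t yet have an archive enabled before the feature arrives. A minimal sketch, run after connecting to Exchange Online:

# Find user mailboxes without an archive and enable one for each
# (auto-archiving needs an archive mailbox as the destination)
[array]$Mailboxes = Get-ExoMailbox -RecipientTypeDetails UserMailbox -ResultSize Unlimited -Properties ArchiveStatus
$Mailboxes | Where-Object {$_.ArchiveStatus -ne 'Active'} | ForEach-Object {
    Enable-Mailbox -Identity $_.UserPrincipalName -Archive
}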
Two Types of Contacts
You might not know this, but Exchange Online supports two types of contact object. The MailContact object is a mail-enabled object (for example, every guest account has a matching mail contact object), and the Contact object is not. Microsoft has decided to deprecate the Contact object from December 2025. I don’t think this should cause any disruption because as far as I can tell, Contact objects are vestiges of long-forgotten synchronization with on-premises Exchange.
Don’t use the PowerShell example from the article to check your tenant for Contact objects. Always use server-side filtering, so the right command is:
Get-Contact -RecipientTypeDetails Contact -ResultSize Unlimited
On the other hand, who cares if a single PowerShell command isn’t as fast as it can be? You’ll only run it once.
Teams Support for Emojis in Chat and Channels Section Names
The Need to Make Teams Chat Section Names into Visual Anchors
Still amazed by the news that Teams reactions to chat and channel conversations support up to 20 emojis (apparently to convey nuanced responses), I found my mind quite blown by the news delivered in MC1166877 (6 October 2025, Microsoft 365 roadmap item 503300) that Teams will support emojis in section names for chats and channels.
Microsoft says that they’re introducing the feature to allow “users to personalize and visually organize their workspace more expressively, aligning with familiar experiences from other collaboration platforms like Slack.” In other words, because Slack plasters emojis around its interface, Teams must follow. In this case, the desktop and browser clients get the feature first, followed by mobile clients, with deployment scheduled for targeted release tenants in early November 2025. If all goes well (and what can go wrong with an emoji?), general availability will follow in late November 2025 for all commercial and education tenants. Think of it as a Thanksgiving present.
Chat and Channel Sections
Teams introduced sections as part of the new Chat and Channels experience in late 2024. Sections allow users to organize chats and channels into convenient groupings that make sense to the user. For example, I have a section for chats with the individual members of the Office 365 for IT Pros author team. I have another section for chats with people who work at Microsoft, and I use another section for the channels that I think most important in terms of checking for new messages daily, and so on.
Until now, section names have been confined to simple text. When the update lands in your tenant, you’ll be able to enliven section names with emojis. You can create a new section or rename existing sections and insert as many emojis as you like up to the 50-character limit for a section name (Figure 1).

To access the set of available emojis on Windows, press the Windows key and . (period) together. I believe Control + Command + Space is the equivalent method to insert emojis on macOS.
Figure 2 shows the kind of “visual anchors” that emojis create for sections. Beauty is in the eye of the beholder, but I’m not sure that the emojis add much to my ability to navigate. Maybe the new section names will grow on me.

No Custom Emojis
Disappointingly, Teams doesn’t support custom emojis for section names. When I wrote about custom emojis last year, I created several new emojis, including a rather good Mickey Mouse. However, it seems like the set of emojis revealed for picking is limited to emojis supported by the operating system rather than Teams emojis.
No Administrator Control Over Teams Chat Section Names
I know that some tenant administrators will see emojis in section names as a mere frippery, something that Microsoft is wasting time on instead of fixing other problems, so let me note that there’s no control over allowing emojis to be used. Adding emojis to sections is base functionality that cannot be switched off, so the only thing a tenant can do is keep their users in a state of blissful ignorance and hope that no one ever finds out what they can do to create “visual anchors” to navigate through Teams chats and channel conversations.
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across the Microsoft 365 ecosystem. Subscribe to the Office 365 for IT Pros eBook to receive insights updated monthly into what happens within Microsoft 365, why it happens, and what new features and capabilities mean for your tenant.
Chromium 141 Update Will Affect Offline Access for SharePoint Online and OneDrive for Business
Make Sure Policy is in Place to Maintain Offline Capability for Chromium 141-Based Browsers
The content of Microsoft 365 message center notifications often seems to assume that the reader understands the basic concepts underlying the news communicated in notifications. That’s not always the case. There’s so much change in the Microsoft 365 environment that few can track everything that happens and understand why a change is important.
MC1150662 is an example. This notification was originally published on 9 September 2025 and revised on 3 October 2025. It contains some important information and an action that tenant administrators should take before Microsoft rolls out an update as part of the deployment of Chromium 141 later this month. At least, I assume a further update is coming because the updates for Edge, Chrome, and Brave today all moved to Chromium version 141 (Figure 1) and the predicted issues have not emerged in any of those browsers.

What Microsoft says will happen is that users accessing OneDrive for Business, Microsoft Lists, and SharePoint Online document libraries will be prompted to allow access to the local network (Figure 2).

Allowing access via the prompt controls the ability of OneDrive and SharePoint to use offline access to data and to use programs like Microsoft Nucleus, a synchronization engine for data-oriented information like Lists that’s also used by the Microsoft OneDrive Sync Service. Essentially, if access is not allowed, OneDrive synchronization and the intelligent incremental synchronization used by SharePoint Online will stop working.
That’s a pretty serious situation, but I’m not sure that people will realize this from MC1150662. Or maybe people are much smarter than I am, and I should stop worrying.
Updating the OneDrive Client
The easiest item on the must-do list is to make sure that workstations update to at least build 25.164 of the OneDrive sync client. The build number is revealed in the About page of the client settings (Figure 3).

Updating Policies for Workstations
The next item is to deploy the LocalNetworkAccessAllowedForUrls policy to workstations to allow the SharePoint Online and OneDrive for Business URLs to access local network endpoints (the workstation). The registry update file that I used to prepare Edge is:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\LocalNetworkAccessAllowedForUrls]
"1"="https://office365itpros.sharepoint.com"
"2"="https://office365itpros-my.sharepoint.com"
Figure 4 shows the update when applied to the system registry.

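If you’d rather script the change than import a .reg file, a PowerShell equivalent might look like this (run elevated; the URLs are the same tenant-specific values used above):

# Create the Edge policy key and add the allowed URLs (run as administrator)
$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Edge\LocalNetworkAccessAllowedForUrls'
New-Item -Path $Key -Force | Out-Null
New-ItemProperty -Path $Key -Name '1' -Value 'https://office365itpros.sharepoint.com' -PropertyType String -Force | Out-Null
New-ItemProperty -Path $Key -Name '2' -Value 'https://office365itpros-my.sharepoint.com' -PropertyType String -Force | Out-Null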
See the Microsoft documentation for more details about how to apply the LocalNetworkAccessAllowedForUrls policy update for other browsers and for macOS. One topic that isn’t covered in the documentation is whether users with guest accounts need to update the system registry on their workstations to use B2B Sync to download files from SharePoint Online document libraries in host tenants. Without seeing the updated software in action, I assume that the change will require users to add the SharePoint endpoint for the tenants that they want to synchronize with to the registry. For example, if you have a guest account in the Contoso.com domain and want to synchronize files from a SharePoint Online document library that you have access to, you’ll need to include an entry for https://contoso.sharepoint.com.
Microsoft also recommends checking the values for the DisableNucleusSync and DisableOfflineMode policies to make sure that their settings are as expected to allow users to work in offline mode with synchronized data.
Keep an Eye Out When Chromium 141 Arrives
It took me a while to get my head around the importance and impact of the information contained in MC1150662. The text above is my best interpretation of what Microsoft communicates in the notification. It’s hard to be definite until all the moving parts are available. Feel free to disagree!
Support the work of the Office 365 for IT Pros team by subscribing to the Office 365 for IT Pros eBook. Your support pays for the time we need to track, analyze, and document the changing world of Microsoft 365 and Office 365. Only humans contribute to our work!
What’s the Best Way to Find SharePoint Online Sites with Graph PowerShell?
Considering the Use of the Get-MgSite and Get-MgAllSite Cmdlets

A reader asked if the getAllSites Graph API is the best way to retrieve details of SharePoint Online sites with PowerShell. On the surface, the API seems like a good way for apps to fetch details of SharePoint Online sites for processing, especially in multi-geo scenarios, which is why the API exists. The API can also fetch sites for a single-geo tenant. It is implemented as the Get-MgAllSite cmdlet in the Microsoft Graph PowerShell SDK.
The Get Site API also fetches information about sites (implemented as the Get-MgSite cmdlet). The two APIs return sites ordered by creation date, with the most recent sites returned first. Two major differences exist. First, the Get Site API only supports the retrieval of sites for single-geo tenants. Second, the Get Site API supports both delegated and application permissions, whereas the getAllSites API only supports application permissions.
The permissions gap means that a SharePoint administrator can sign into an interactive Microsoft Graph PowerShell SDK session and list all sites with Get-MgSite, but they cannot use Get-MgAllSite because it doesn’t support delegated permissions.
The problem is easily solved by creating an Entra ID registered app and assigning the Sites.Read.All application permission to the app. You can then use an X.509 certificate (recommended) or app secret (for testing only) to authenticate in app-only mode and use the Get-MgAllSite cmdlet.
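For example, after registering the app and uploading a certificate, an app-only connection looks something like this (the tenant identifier, client identifier, and thumbprint are placeholders):

# Connect in app-only mode using a certificate (identifiers are placeholders)
Connect-MgGraph -TenantId 'contoso.onmicrosoft.com' `
  -ClientId '00000000-0000-0000-0000-000000000000' `
  -CertificateThumbprint '<certificate-thumbprint>'
# Now Get-MgAllSite works, for instance to fetch all non-personal sites
[array]$Sites = Get-MgAllSite -Filter "isPersonalSite eq false"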
In terms of performance, Get-MgAllSite seems to be slightly faster than Get-MgSite. Both cmdlets retrieve the same set of site properties, so there’s no obvious reason why one might be better than the other. In any case, from a practical perspective, there’s nothing to choose in terms of speed when retrieving all sites. Let’s see how to find sites with the two cmdlets.
Filtering to Find Sites
To find all sites in a tenant, run the Get-MgSite cmdlet and use the All parameter:
[array]$Sites = Get-MgSite -All
Get-MgAllSite doesn’t have an All parameter and although I have tested it to find > 600 sites without a hitch, I don’t know how it will cope with larger tenants. However, the cmdlet can use filtering to avoid the need to fetch all sites. For example, here’s how to find the set of personal (OneDrive for Business) sites in a tenant:
[array]$Sites = Get-MgAllSite -filter "isPersonalSite eq true"
Filtering against a site display name is also supported:
Get-MgAllSite -Filter "displayname eq 'Ultimate Guide to Office 365'"
Another example is using the startsWith operator to find sites:
$Site = Get-MgAllSite -filter "startsWith(displayName,'Ultimate Guide')"
You can also filter to find sites based on the creation date. This example shows how to retrieve sites created in the last month.
$FirstDayOfMonth = (Get-Date -Day 1).ToString('yyyy-MM-ddT00:00:00Z')
Get-MgAllSite -Filter "createdDateTime ge $FirstDayOfMonth"
Get-MgSite can’t filter using the isPersonalSite, displayName, or createdDateTime properties and responds with “Get-MgSite_List: Cannot enumerate sites.”
The closest the Get-MgSite cmdlet gets to filtering is via the Search parameter (which isn’t supported by the Get-MgAllSite cmdlet):
Get-MgSite -Search "Ultimate Guide"
Why Filter Sites with One Cmdlet and Not the Other?
I don’t know why the Get Site API doesn’t support filtering in the same way as the getAllSites API does. Given the apparent similarities between the two APIs in terms of performance and output, there doesn’t appear to be a good reason why the developers chose not to implement filtering for the Get Site API.
Even though the Get-MgSite cmdlet can retrieve all sites, perhaps the reason for the different behavior across the two APIs is because the purpose of the Get Site API is to retrieve details of individual sites. By comparison, the getAllSites API exists to retrieve sets of sites when filtering obviously becomes more important and is therefore implemented. If so, the documentation could clarify the situation better than it does.
Avoid the Top Parameter
Although Get-MgAllSite supports filtering, it currently has a problem when combining filters with the Top parameter. Two issues exist. First, performance (both cmdlet and API request) is much slower when the Top parameter is used. Second, the API ignores the Top parameter and returns all sites that match the filter instead of the number of hits specified in the parameter. For instance, this command returns all the sites created in the last month:
Get-MgAllSite -Filter "createdDateTime ge $FirstDayOfMonth" -Top 1
The problem is reported as issue #3405 for V2.30 of the Microsoft Graph PowerShell SDK.
Mix and Match as the Need Arises
My advice is to use Get-MgSite whenever an app needs to retrieve details of a single site and Get-MgAllSite to fetch details of multiple sites (but only in app-only mode). Both cmdlets include the site identifier in the properties they return, and that’s the critical piece of information for further interaction with sites through Microsoft Graph PowerShell SDK cmdlets or API requests (like this example of accessing site pages or this one of creating a list in a site).
Need some assistance to write and manage PowerShell scripts for Microsoft 365, including Azure Automation runbooks? Get a copy of the Automating Microsoft 365 with PowerShell eBook, available standalone or as part of the Office 365 for IT Pros eBook bundle.