Tag Archives: microsoft
Extend allow entries in the Tenant Allow/Block List in a transparent, data-driven manner
This feature is available to customers who have Exchange Online Protection or Defender for Office 365 Plan 1 or Plan 2 across WW, GCC, GCCH, and DoD.
Transparency inside Tenant Allow/Block List
Recently we launched the last used date for allowed or blocked domains, email addresses, URLs, or files inside the Microsoft Defender XDR. For block entries, the last used date is updated when the entity is encountered by the filtering system (at time of click or during mail flow). For allow entries, when the filtering system determines that the entity is malicious (at time of click or during mail flow), the allow entry is triggered and the last used date is updated.
Time for data-driven allow management
Now you can edit existing allowed domains, email addresses, URLs, or files inside the Tenant Allow/Block List to have the Remove allow entry after value of 45 days after last used date.
As a member of a security team, you create an allow entry in the Tenant Allow/Block List through the submissions page if you find a legitimate email being delivered to the Junk Email folder or quarantine.
The last used date for allow entries will update in real time until the filtering system has learned that the entity is clean. You can view the last used date in the Tenant Allow/Block List experience or via the Get-TenantAllowBlockListItems cmdlet in Exchange Online PowerShell. Once the filtering system learns that the entity is clean, the allow entry last used date will no longer be updated, and the allow entry will be removed 45 days after this last used date (if the entry is configured this way). This behavior prevents legitimate email from being sent to junk or quarantine while you have full visibility into what is going on. Spoof allow entries don’t expire, so they aren’t affected in this case.
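As a rough sketch in Exchange Online PowerShell (the Get-TenantAllowBlockListItems cmdlet is mentioned above; the -RemoveAfter parameter on Set-TenantAllowBlockListItems and the exact property names are assumptions based on the standard module, and the sender address is a placeholder):
Get-TenantAllowBlockListItems -ListType Sender | Format-Table Value, LastUsedDate, ExpirationDate -AutoSize
Set-TenantAllowBlockListItems -ListType Sender -Entries "newsletter@contoso.com" -RemoveAfter 45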
Here’s an example for better understanding. Suppose you created an allow entry on July 1 with the Remove allow entry after value of 45 days after last used date. Suppose the filtering system finds the entity to be malicious through July 29 and then finds it to be clean on July 30. From July 1 to July 29, the last used date is updated whenever the entry is encountered during mail flow or at time of click. From July 30, the last used date of the allow entry is no longer updated, because the entity is clean. The allow entry will be removed on September 12, which is 45 days after July 29. The following alert will be raised in the Alerts and Incidents section of the Defender XDR portal: Removed an entry in Tenant Allow/Block List.
As a security professional, your job of managing the allow entries in the Tenant Allow/Block List just got easier in a data driven, transparent manner.
To learn more, check out these articles:
Allow or block email using the Tenant Allow/Block List
Allow or block URL using the Tenant Allow/Block List
Allow or block file using the Tenant Allow/Block List
Let Us Know What You Think!
We are excited for you to experience automatic Tenant Allow/Block List expiration management for allow entries. Let us know what you think by commenting below.
If you have other questions or feedback about Microsoft Defender for Office 365, engage with the community and Microsoft experts in the Defender for Office 365 forum.
Decommissioning Exchange Server 2016
Exchange 2016 is approaching the end of extended support and will be out of support on October 14, 2025. If you are using Exchange Server 2019, you will be able to perform an in-place upgrade to the next version, Exchange Server Subscription Edition (SE); Exchange Server 2016, however, will need to be decommissioned at some point.
This article will focus on the removal of Exchange 2016 from an environment which already has Exchange 2019 installed. We will not focus on any of the steps already documented for upgrading to Exchange 2019. To get those details, see the Exchange Deployment Assistant and create a custom step-by-step deployment checklist for your environment. Also check out the Exchange Server documentation for details on upgrading to a newer version of Exchange Server.
If you plan to stay on-premises, we recommend moving to Exchange 2019 as soon as possible. Only Exchange 2019 will support in-place upgrades to Exchange SE, marking the first time in many years that you can perform an in-place upgrade on any Exchange release. You should start decommissioning Exchange 2016 servers in favor of Exchange 2019 now, to be ready for easy in-place upgrades to Exchange SE when it becomes available.
Prepare for Shutdown
Once you’ve completed your migration from Exchange 2016 to a newer version of Exchange Server, you can prepare the Exchange 2016 servers to be decommissioned.
Inventory and upgrade third-party applications
Make a list of all applications using Exchange 2016 servers and configure each of them to use the newer Exchange Server infrastructure. If you are using a shared namespace for these applications, minimal configuration would be required. Contact the providers of those applications to ensure they are supported on your latest version of Exchange Server.
Client Access Services
Review Exchange virtual directory namespaces
Review all client connectivity namespaces and ensure they are routing to the latest Exchange server(s) in the environment. These include all names published for your Exchange virtual directories. If the newer Exchange environment is using the same namespaces, you can reuse the existing SSL certificate. If the new Exchange environment is using a new namespace that does not exist as a Subject Alternative Name (SAN) on your current SSL certificate, a new certificate will need to be obtained with the appropriate names.
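A quick way to review which names are on the certificate currently bound to IIS is sketched below, using the standard Get-ExchangeCertificate cmdlet (adjust the filter to your environment):
Get-ExchangeCertificate | Where-Object { $_.Services -match "IIS" } | Format-List Thumbprint, Subject, CertificateDomains, NotAfter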
Tip: Verify that all clients including ActiveSync, Outlook (MAPI/HTTP or RPC/HTTP), EWS, OWA, OAB, POP3/IMAP, and Autodiscover are no longer connecting to legacy Exchange servers. Review each Client Access server’s IIS Logs with Log Parser Studio (LPS). LPS is a GUI for Log Parser 2.2 that greatly reduces the complexity of parsing logs, and it can parse large sets of logs concurrently (we have tested with total log sizes of >60GB). See this blog post for details.
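For a quick spot check without LPS, a rough PowerShell sketch like the following can surface recent client hits on a legacy server (the default IIS log path is assumed and may differ in your environment; LPS remains the better tool for large log sets):
Get-ChildItem "C:\inetpub\logs\LogFiles\W3SVC1\*.log" | Sort-Object LastWriteTime -Descending | Select-Object -First 1 | Select-String -Pattern "ActiveSync|MAPI|EWS|OAB|Autodiscover" | Select-Object -First 20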
Review Service Connection Point objects in Active Directory
Run the following command to obtain the value of the Autodiscover service connection point (SCP). The Autodiscover SCP is used by internal clients to look up connection information from Active Directory:
Get-ExchangeServer | Where-Object {$_.AdminDisplayVersion -like "Version 15.1*"} | Get-ClientAccessService | Format-Table Name, FQDN, AutoDiscoverServiceInternalUri -AutoSize
If present, ensure the AutoDiscoverServiceInternalURI routes to the new Exchange Servers or load-balanced VIP.
Get the URI from a 2019 server:
$2019AutoDURI = (Get-ExchangeServer <Ex2019 ServerName> | Get-ClientAccessService).AutoDiscoverServiceInternalURI.AbsoluteURI
Then set it on the Exchange 2016 server so its SCP points to the newer environment:
Set-ClientAccessService -Identity <Ex2016 ServerName> -AutoDiscoverServiceInternalUri $2019AutoDURI
You can also remove this value by setting AutoDiscoverServiceInternalUri to $null.
Mailflow
Next, review all mail flow connectors to ensure that the server is ready to be decommissioned.
Review the send connectors
Review the send connectors and ensure that the Exchange 2016 servers have been removed and the newer Exchange servers have been added. Most organizations only permit outbound network traffic on port 25 to a small number of IP addresses, so you may also need to review and update the outbound network configuration.
Get-SendConnector | Format-Table Name, SourceTransportServers -AutoSize
Get-ForeignConnector | Format-Table Name, SourceTransportServers -Autosize
Review the receive connectors
Review the receive connectors on the Exchange 2016 servers and ensure they are recreated on the new Exchange servers (e.g., SMTP relay, anonymous relay, partner, etc.). Review all namespaces used for inbound mail routing and ensure they deliver to the new Exchange servers. If your Exchange 2016 servers have any custom or third-party connectors, ensure they can be recreated on the newer Exchange servers; you can export the existing configuration by using the Export-CLIXML command.
Get-ReceiveConnector -Server <ServerToDecommission> | Export-CLIXML C:\temp\OldReceive.xml
Tip: Check the SMTP logs to see if any services are still sending SMTP traffic to the servers via hard coded names or IP addresses. To enable logging, review Configure Protocol Logging. Ensure you capture message logs from a period long enough to account for any apps or processes which relay for weekly or monthly events, such as payroll processing or month-end reporting, as these may not be captured in a small sample size of SMTP Protocol logs.
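If protocol logging isn't already on, a minimal sketch for enabling it on the legacy server's receive connectors and finding the log locations follows (standard transport cmdlets; paths vary by installation):
Get-ReceiveConnector -Server <ServerToDecommission> | Set-ReceiveConnector -ProtocolLoggingLevel Verbose
Get-TransportService <ServerToDecommission> | Format-List ReceiveProtocolLogPath, SendProtocolLogPath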
The decommissioning process is a great opportunity to audit your mail flow configuration, ensuring all the required connectors are properly configured and secured. It’s the perfect time to get rid of any anonymous relay connectors that are no longer in use in your environment. Or, if Exchange is deployed in hybrid, consider relaying through Office 365 instead.
Edge Servers
If you have one or more Edge Transport servers, you must install a newer version of the Edge Transport role (i.e., Exchange 2019). If an Edge server is subscribed to an Active Directory site, recreate the Edge Subscription as documented here.
If you plan to decommission your Edge servers without replacing them, ensure your firewall rules are updated to route incoming traffic to the Mailbox servers. The Mailbox servers also need to be able to communicate outbound over TCP port 25.
Mailboxes
Move all Exchange 2016 mailboxes to a newer version of Exchange Server
Exchange 2016 cannot be decommissioned until all mailboxes are migrated to the new Exchange servers. Migrations are initiated from the newest version of Exchange. For example, when migrating to Exchange 2019, you create all migration batches and move requests from Exchange 2019; move all your Arbitration Mailboxes to the newest Exchange server first.
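For example, a rough sketch of moving the arbitration mailboxes first and then watching progress looks like this (the database name is a placeholder; run from the newer server's EMS):
Get-Mailbox -Server <Ex2016 ServerName> -Arbitration | New-MoveRequest -TargetDatabase "<Ex2019 DatabaseName>"
Get-MoveRequest | Get-MoveRequestStatistics | Format-Table DisplayName, Status, PercentComplete -AutoSize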
Once all moves have been completed, delete all migration batches and move requests. Any lingering move requests or mailboxes will block uninstalling Exchange 2016.
Run the following commands in the Exchange Management Shell (EMS) to identify any mailboxes that need to move to a newer Exchange Server:
Set-ADServerSettings -ViewEntireForest $True
Get-Mailbox -Server <Ex2016 ServerName> -Arbitration
Get-Mailbox -Server <Ex2016 ServerName> -ResultSize Unlimited
Get-Mailbox -Server <Ex2016 ServerName> -Archive -ResultSize Unlimited
Get-SiteMailbox
Get-Mailbox -AuditLog
You may also need to run Get-SiteMailbox -DeletedSiteMailbox if any site mailboxes had been previously removed (as this can still be a blocker for removing databases).
If any mailboxes are found, migrate them to a newer version of Exchange before moving on. Additional information can be found in Manage on-premises mailbox moves in Exchange Server.
After ensuring all arbitration and user mailboxes have been moved, ensure all public folder mailboxes have been migrated:
Get-Mailbox -Server <Ex2016 ServerName> -PublicFolder -ResultSize Unlimited
Additional information on public folder migrations can be found in Migrate public folders from Exchange 2013 to Exchange 2016 or Exchange 2019.
After all mailboxes have been moved to newer Exchange servers, and after reviewing the moves and migration batches, you can remove the moves and batches. Run the command first with the -WhatIf parameter, and after confirming all listed moves and batches can be removed, run it again without the -WhatIf parameter.
All completed move requests can be removed using the following command – see Remove-MoveRequest
Get-MoveRequest -ResultSize Unlimited | Remove-MoveRequest -Confirm:$false -WhatIf
All migration batches can be removed using the following command – see Remove-MigrationBatch
Get-MigrationBatch | Remove-MigrationBatch -Confirm:$false -WhatIf
Decommissioning the Database Availability Group
Verify no mailboxes exist on Exchange 2016 servers
Run the following command:
Get-Mailbox -ResultSize Unlimited | Where-Object {$_.AdminDisplayVersion -like "Version 15.1*"}
If any mailboxes are found, migrate them to newer Exchange servers or remove them.
Remove mailbox database copies
Every mailbox database copy on Exchange 2016 must be removed. You can do this in the Exchange admin center (EAC) or using the EMS. Details for using either tool are in Remove a mailbox database copy.
Note that removing a database copy does not remove the actual database files or transaction logs from the server.
To find and remove passive copies on a per-server basis, run:
Get-MailboxDatabaseCopyStatus -Server <Ex2016 ServerName> | Where-Object {$_.Status -notlike "*mounted"} | Remove-MailboxDatabaseCopy
To find and remove passive copies on a per-database basis, run:
Get-MailboxDatabaseCopyStatus <DatabaseName> | Where-Object {$_.Status -notlike "*mounted"} | Remove-MailboxDatabaseCopy
Remove mailbox databases
Assuming best practices were followed for the Exchange 2016 environment, you will have a DAG for HA/DR capabilities. After all mailboxes are migrated off Exchange 2016 and all passive database copies are removed, you can tear down the DAG and remove any leftover databases from the Exchange 2016 environment.
Run the following command with the -WhatIf parameter to confirm that all listed databases can be removed, and then run the command without the -WhatIf parameter to remove them.
Get-MailboxDatabase -Server <ServerToDecommission> | Remove-MailboxDatabase -Confirm:$false -WhatIf
If any mailboxes are present in a database, you cannot remove the database. The attempt will fail with the following error:
This mailbox database contains one or more mailboxes, mailbox plans, archive mailboxes, public folder mailboxes or arbitration mailboxes, audit mailboxes.
If you have verified that no mailboxes reside in the database but you are still unable to remove it, review this article. The database you’re trying to remove might contain an archive mailbox for a primary mailbox in a different database. Bear in mind that mailboxes on In-Place Hold or Litigation Hold will block removal; ensure it’s safe to remove each hold before continuing.
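A minimal sketch for spotting mailboxes with holds before you remove databases (standard Get-Mailbox properties; scope the query as needed for large environments):
Get-Mailbox -ResultSize Unlimited | Where-Object { $_.LitigationHoldEnabled -or $_.InPlaceHolds } | Format-Table Name, Database, LitigationHoldEnabled, InPlaceHolds -AutoSize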
Note: If you run into an issue trying to remove a mailbox database that hosts no active mailboxes, one way to identify which objects still point to the database is to run these commands:
Set-ADServerSettings -ViewEntireForest $True
$DN = (Get-MailboxDatabase "DBNAME").DistinguishedName
Get-AdObject -Filter '(homemdb -eq $DN -or msExchArchiveDatabaseLink -eq $DN) -and (Name -notlike "HealthMailbox*" -and Name -notlike "SystemMailbox*")'
Remove all members from your Database Availability Group(s)
Each DAG member must be removed from the DAG before the DAG can be removed. You can do this using the EAC or the EMS. Details for using either tool are in Manage database availability group membership.
Remove DAGs
Once all database copies have been removed, and all members have been removed from the DAG, the DAG can be deleted using the EAC or the EMS. Details for using either tool are in Remove a database availability group.
Tip: If you have a DAG with a file share witness, don’t forget to decommission the file share witness used for the Exchange 2016 DAG.
A note about the Unified Messaging Role
This post does not cover Unified Messaging, because that feature has been removed from Exchange 2019. For detailed steps on migrating Unified Messaging to another solution, see Plan for Skype for Business Server and Exchange Server migration – Skype for Business Hybrid. Note, though, that if your Exchange 2016 users have UM-enabled mailboxes, do not move them to Exchange 2019 before you move them to Skype for Business Server 2019, or they will have a voice messaging outage.
Put Exchange 2016 servers into maintenance mode
Once everything is moved from Exchange 2016 to a newer version of Exchange Server, put the Exchange 2016 servers into maintenance mode for one week to observe any unforeseen issues. If issues are experienced, they can be resolved before you remove Exchange 2016. If no issues occur, you can uninstall your Exchange 2016 servers. Note that we do not recommend shutting down the Exchange 2016 servers, as this can cause issues if resources aren’t fully migrated, unless you plan to do so within a change control window.
The goal is to verify that nothing is trying to connect to these Exchange 2016 servers. If you find something that is, update it to use the new Exchange servers, or return the Exchange 2016 servers back to service until updates can occur.
Even after reviewing messaging and connectivity logs, it’s not uncommon for an organization to keep their legacy Exchange servers online (in Maintenance Mode) for a period long enough to find issues with unknown processes, unexpected recovery efforts, etc.
To put an Exchange server into maintenance mode, see the Performing maintenance on DAG members section of Manage database availability groups in Exchange Server.
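As a condensed, illustrative sketch only (the documented procedure linked above is authoritative, and the transport steps may not all apply once databases and copies are already gone):
Set-ServerComponentState <Ex2016 ServerName> -Component HubTransport -State Draining -Requester Maintenance
Redirect-Message -Server <Ex2016 ServerName> -Target <OtherServerFQDN>
Set-ServerComponentState <Ex2016 ServerName> -Component ServerWideOffline -State Inactive -Requester Maintenance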
For additional information on Exchange Server component states, see this blog post.
Uninstall Exchange 2016
Review best practices
Start by reviewing the Best Practices section of Upgrade Exchange to the latest Cumulative Update, as they also apply when uninstalling Exchange (e.g., reboot the server before and after running Setup, disable antivirus, etc.).
Remove health mailboxes
Prior to uninstalling Exchange 2016, use the following command to remove all Exchange 2016 health mailboxes:
Get-Mailbox -Monitoring | Where-Object {$_.AdminDisplayVersion -like "Version 15.1*"} | Remove-Mailbox -Confirm:$false
Uninstall Exchange 2016
Before you begin the uninstall process, close EMS and any other programs that might delay the uninstall process (e.g., programs using .NET assemblies, antivirus, and backup agents). Then, uninstall Exchange 2016 using either of these recommended methods (we do not recommend using Control Panel):
Use the unattended setup mode: Setup.exe /mode:Uninstall
Run Setup.exe from the setup file location
Perform post-uninstallation tasks
After uninstalling Exchange, there are some general housekeeping tasks that remain. These may vary depending on the steps taken during your upgrade process and depending upon your organization’s operational requirements.
Examples include:
Removing the Exchange 2016 computer accounts from Active Directory (including the DAG’s Cluster Name Object and Kerberos ASA object).
Removing the Exchange 2016 servers as targets to other services (e.g., backup software, antivirus/security agents, network monitoring).
Removing Exchange 2016 name records from DNS.
Ensuring the folders on the DAG’s file share witness (FSW) servers were successfully removed.
Removing the Exchange Trusted Subsystem from the FSW servers’ local Administrators group unless these servers are witnesses for other DAGs.
Removing old firewall rules that open ports to the Exchange 2016 environment.
Removing and disposing of the Exchange 2016 environment’s physical equipment.
Deleting any Exchange 2016 virtual machines.
In summary, when decommissioning Exchange 2016, the most important considerations are:
Planning for removal (by updating anything that relies on Exchange to use newer Exchange servers)
Monitoring to ensure nothing tries to connect to the servers being removed
If you have any questions or would like to discuss a specific scenario, feel free to ask in the Exchange Tech Community forum.
Jason Lockridge, Dylan Stetts, Robin Tinnie and Josh Hagen
Return userform values based on 2 search criteria
Hi all,
I am using the code below through a userform that will populate labels, textboxes, etc. with client information based on the client name in column A (i.e., client location, badge #, active status, etc.). Each client has different types of equipment (i.e., batons, handcuffs, etc.), and each piece of equipment has a unique serial number for individual clients; however, there may be a risk of duplicate serial numbers across all clients.
My question is this: Is there a way to add additional criteria to the code below to narrow down search results within the spreadsheet to include client name and serial number? This would ensure that users are able to display the proper equipment for the client.
Thanks in advance!
Dim f As Range
Dim ws As Worksheet
Dim rng As Range
Dim answer As Integer
With SearchClient
Set f = Sheets("DTT").Range("D4:D1503").Find(.Value, , xlValues, xlWhole, , , False)
If Not f Is Nothing Then
ClientNameModifyProfile.Caption = Sheets("CLIENT PROFILES").Range("C" & f.Row).Value
BadgeModifyProfile.Caption = Sheets("CLIENT PROFILES").Range("E" & f.Row).Value
ActiveOfficerModifyProfile.Caption = Sheets("CLIENT PROFILES").Range("F" & f.Row).Value
ActiveClientGroup.Value = Sheets("CLIENT PROFILES").Range("O" & f.Row).Value
NotesClientProfile.Value = Sheets("CLIENT PROFILES").Range("J" & f.Row).Value
HomePositionClientProfile.Value = Sheets("CLIENT PROFILES").Range("G" & f.Row).Value
HomeUnitClientProfile.Value = Sheets("CLIENT PROFILES").Range("H" & f.Row).Value
HomeLocationClientProfile.Value = Sheets("CLIENT PROFILES").Range("I" & f.Row).Value
TempPositionType.Caption = Sheets("CLIENT PROFILES").Range("K" & f.Row).Value
TempPosition.Caption = Sheets("CLIENT PROFILES").Range("L" & f.Row).Value
TempUnit.Caption = Sheets("CLIENT PROFILES").Range("M" & f.Row).Value
TempLocation.Caption = Sheets("CLIENT PROFILES").Range("N" & f.Row).Value
Else
MsgBox "No Client Profile exists for this individual."
Exit Sub
End If
End With
MS Teams Async Media Storage
Does anyone know where the async media storage is located, and whether an admin can access it to find a recording within the 21 days mentioned below?
Async media storage
If a Teams meeting recording fails to successfully upload to OneDrive because the organizer, co-organizers and recording initiator don’t have OneDrive accounts, or the storage quota is full, an error message appears. The recording is instead temporarily saved to async media storage. Once the recording is in async media storage, no retry attempts are made to automatically upload the recording to OneDrive or SharePoint. During that time, the organizer must download the recording. The organizer can try to upload the recording again if they get a OneDrive or SharePoint license, or clear some space in their storage quota. If not downloaded within 21 days, the recording is deleted.
How to be part of Microsoft Cloud for Healthcare
Hello,
For some time now we have been wanting to know what steps we should follow to start collaborating in Microsoft Cloud for Healthcare or who we should contact to be able to enroll to become a partner in this Microsoft branch.
We at PartnerHelper have a health platform powered by Microsoft technology. It is divided into an EHR and an AI application that allows the doctor to focus on the patient rather than the computer, and above all provides Clinical Decision Support through AI algorithms. Therefore, we see enrollment in Microsoft Cloud for Healthcare as essential.
Thank you for your help!
Cybersecurity incident correlation in the unified security operations platform
The exponential growth of threat actors, coupled with the proliferation of cybersecurity solutions has inundated security operation centers (SOCs) with a flood of alerts. SOC teams receive an average of 4,484 alerts per day and spend up to 3 hours manually triaging to separate genuine threats from noise. In response, alert correlation has become an indispensable tool in the defender’s arsenal, allowing SOCs to consolidate disparate alerts into cohesive incidents, dramatically reducing the number of analyst investigations.
Earlier this year, we announced the general availability of Microsoft’s unified security operations platform that brought together the full capabilities of an industry-leading cloud-native security information and event management (SIEM), comprehensive extended detection and response (XDR), and generative AI built specifically for cybersecurity.
As part of the unified platform, we also evolved our leading correlation engine, which is projected to save 7.2M analyst hours annually, or $241M across our customers per year.
In this blog post we will share deep insights into the innovative research that infuses powerful data science and threat intelligence to correlate detections across first and third-party data via Microsoft Defender XDR & Microsoft Sentinel with 99% accuracy.
The Challenges of Incident Correlation
Cybersecurity incident correlation is critical for any SOC – the correlation helps connect individual security alerts and events to spot patterns and uncover hidden threats that might be missed if looked at individually. It enables organizations to detect and respond to sophisticated cyberattacks more quickly and holistically, but challenges with traditional technologies remain:
Mitigating false correlations. False correlations pose a significant risk and can lead to unwarranted actions on benign devices or users, disrupting vital company operations. Additionally, over-correlation can result in "black hole" incidents where all alerts within an enterprise begin to correlate indiscriminately
Minimizing missed correlations. Avoiding false negatives is equally important, as a missed correlation could be the difference between the key context required to disrupt a cyberattack, preventing the loss of valuable data and intellectual property
Scalability and timeliness. Ingesting billions of alerts with varying degrees of fidelity across a multitude of security products presents a monumental correlation challenge, requiring a robust infrastructure and an efficient methodology. Furthermore, these correlations need to happen in near real-time to keep SOCs up to date
TI and Domain Knowledge. Correlation across diverse entity types such as IP addresses and files often requires customers to rely on specialized threat intelligence (TI) and domain knowledge to mitigate false positive and false negative correlations
Microsoft’s Unified Security Operations Provides Unique Correlation Technology
Microsoft’s XDR and SIEM solutions have long provided effective incident correlation to customers, saving millions of analyst hours and delivering an effective response to attacks.
In the unified security operations platform, we brought together Microsoft Defender XDR and Microsoft Sentinel, which allowed us to evolve and reshape how traditional correlation technologies work. Security analysts now benefit from a scale framework designed to correlate billions of security alerts even more effectively. Unlike traditional methods that rely on predefined conditions and fixed logic to identify relationships and patterns—and struggle to adapt and scale to the evolving and intricate nature of enterprise security landscapes—the correlation engine in the unified security operations platform employs a geo-distributed, graph-based approach that continuously integrates fresh threat intelligence and security domain knowledge to adapt to the evolving security landscape. This allows us to seamlessly handle the vast complexities of alert correlation across numerous enterprises by leveraging data from Defender workloads and third-party sources ingested via Microsoft Sentinel.
This framework infuses expert domain knowledge and real-time threat intelligence, ensuring accurate, context-driven correlations that significantly reduce false positive and false negative correlations. Additionally, the correlation engine dynamically adapts using a self-learning model, continuously refining its processes by mining incident patterns and incorporating feedback from security experts to offer a scalable and precise solution to modern cybersecurity challenges.
Key Innovations
We introduced multiple key innovations tailored to ensure accurate and scalable incident correlation (see Figure 1):
Geo-distributed architecture. Enhances data handling efficiency by distributing processing across multiple geographic locations and PySpark clusters
Graph-based approach. Utilizes graph mining algorithms to optimize the correlation process, making the system scalable to billions of alerts
Breaking the boundary between 1st and 3rd party alerts. Every hour, we profile first and third-party detectors to ensure they meet key correlation safety checks before allowing cross-detector correlation (outlined below)
Domain knowledge and Threat Intelligence integration. We are now combining real-time threat intelligence with expert security insight to create highly contextualized and accurate incidents
Continuous adaptation. Features a human-in-the-loop feedback system that mines incident patterns and refines the correlation process, ensuring the framework evolves to tackle emerging threats
High accuracy. Extensive analysis shows that our correlations are over 99% accurate, significantly up-leveling the incident formation process
Ensuring High Fidelity Correlations for any Data Source
Most organizations have detections from multiple data sources and consume data in various ways, whether through an XDR or a data connector. Data consumed through an XDR is native to the vendor, normalized, and of higher fidelity, compared to data that comes through a connector, which can produce a lot of noise at lower fidelity. This is where correlation becomes extremely important, because alerts with varying degrees of fidelity are difficult to analyze and slow down response time if a pattern is missed or misidentified.
To ensure alerts can be correlated across any data source, we introduced three safety checks to activate cross-detector correlation:
Low volume detector. We examine the historical alert volume for each detector to ensure it is below a set threshold
Low evidence detector. The average historical number of distinct values per entity type in a detector should not exceed predetermined values
Low evidence alert. Similarly, the number of distinct entities associated with an individual alert is constrained to the same thresholds as the generating detector
Together, these checks ensure incident quality by correlating high-fidelity third-party alerts with first-party ones and creating separate incidents for low-fidelity third-party alerts that do not pass all three safety checks. By filtering out low-fidelity alerts from key incidents, the SOC can focus on quality detections for their threat hunting needs across any data source.
Looking ahead
Defending against cyberattacks hinges on the ability to accurately correlate alerts at scale across numerous sources and alert types. By leveraging a unified platform that consolidates alerts across multiple workloads, organizations not only streamline their security operations but also gain deeper insights into potential threats and vulnerabilities. This integrated approach enhances response times, reduces false positives, and allows for more proactive threat mitigation strategies. Ultimately, the unified platform optimizes the efficiency and efficacy of security measures, enabling organizations to stay ahead of evolving cyber threats and safeguard their critical assets more effectively.
Learn More
Check out our resources to learn more about the new incident correlation engine and our recent security announcements:
Read the unified security operations platform GA announcement
Read the full paper on the correlation engine that was accepted into CIKM 2024 here
New Blog | Bridging the On-premises to Cloud Security Gap: Cloud Credentials Detection
Identities lie at the heart of cloud security. One of the most common tactics used to breach cloud environments is Credential Access. User credentials may be obtained using various techniques. Credentials may be cracked through brute force attempts, obtained in social engineering campaigns, or stolen from compromised resources, where they are stored and used.
In this blog, we demonstrate that properly securing cloud environments requires securing credentials in the organization’s non-cloud environments. To this end, we dive into our innovative capability to detect cloud credentials in on-premises environments and user devices. By integrating it with Microsoft Security Exposure Management, customers are able to identify attack paths starting in non-cloud environments and reaching critical cloud assets using cloud credentials. Customers are then able to effectively prioritize and mitigate those attack paths, thereby improving their enterprise and cloud security posture.
Read the full post here: Bridging the On-premises to Cloud Security Gap: Cloud Credentials Detection
By Tamir Friedman
Auto-labeling condition with document properties for sensitivity labels
Hi, is there a comprehensive documentation regarding the rules conditions using document properties?
Valid search queries often do not work in auto-labeling conditions…
Questions:
– Can all default managed properties with predefined refinables be used (excluding custom properties)?
– Can we use AND and OR in a one-line condition, e.g., Path:xxxx AND Subject:XXXX?
– Are double quotes supported in the query, or do we need to encode them (e.g., URL encoding)?
– Are multiline conditions evaluated as OR (it looks like they are)?
Thanks.
FY25 Co-op: Start earning and spending your eligible co-op funds today
Learn about Cooperative Marketing Funds (Co-op), including FY25 changes, eligible activities, how to view, and when to use your funds.
FY25 Co-op: Start earning and spending your eligible co-op funds today | Microsoft
French, Spanish, and Portuguese available from the “Language” drop down on the main menu bar.
Create a Dax to find source of value while using dependent dax measures in stacked column chart
Hi,
I have a visual as below in my report:
This visual contains a measure called ‘Total switch new’ as below:
Total switch new =
SUMX ( 'Accruals', [Switch Units] )
‘Switch Units’ dax referenced above is as below:
Switch Units =
COALESCE (
[Priority 1 units],
[Priority 2 units],
[Priority 3 units],
[Parameter Value units]
)
‘Priority 1 units’,’Priority 2 units’, ‘Priority 3 units’ & ‘Parameter Value units’ used above are all dax and they are:
Priority 1 units =
SWITCH (
SELECTEDVALUE ( 'Parameter 4'[Parameter Fields] ),
"'DAX_Units'[Profile units]", [Profile units],
"'DAX_Units'[Direct units]", [Direct units],
"'DAX_Units'[Target units]", [Target units],
"'DAX_Units'[Priority 4]", [Units Parameter Value]
)
Priority 2 units =
SWITCH (
SELECTEDVALUE ( 'Parameter 5'[Parameter Fields] ),
"'DAX_Units'[Profile units]", [Profile units],
"'DAX_Units'[Direct units]", [Direct units],
"'DAX_Units'[Target units]", [Target units],
"'DAX_Units'[Priority 4]", [Units Parameter Value]
)
Priority 3 units =
SWITCH (
SELECTEDVALUE ( 'Parameter 6'[Parameter Fields] ),
"'DAX_Units'[Profile units]", [Profile units],
"'DAX_Units'[Direct units]", [Direct units],
"'DAX_Units'[Target units]", [Target units],
"'DAX_Units'[Priority 4]", [Units Parameter Value]
)
Parameter Value units comes from Custom Units table
The 'Priority 1 units', 'Priority 2 units', and 'Priority 3 units' measures above use 3 base DAX measures listed below (I have given only the names here; you will find their definitions in the attached file):
Profile units
Direct units
Target units
We have used 3 field parameter tables(Parameter 4,Parameter 5 & Parameter 6) to be used as a slicer for reporting.
FYR, i have given table format in page 1 which will give you an idea on how above measures are dependent on each other and how field parameters are used.
Now, I need to show in the visual below whether the monthly values are made up of Profile units, Direct units, or Target units (in different colors).
For example, for the month of July, the Total switch new measure shows that the value 1166 comes from Profile units (refer to page 1 in the report), and for November it shows that the value 910 comes from Target units.
Then our expected visual would display something like below(different color code to show whether it comes from Profile or Direct or Target):
PFA file here Financial Management -Tanvi Trial – Copy.pbix
Thanks in advance!
Duplicate Values
I’m running into issues with cleaning data within Excel files.
I’m cross referencing two Excel spreadsheets to get accurate store locations.
When I drop in my data (in column B) next to provided data (column C), it highlights almost all data values (including ones that do not match).
I’ve tried changing the number format (from general to number to text) and run into the same result. Any suggestions on how to fix this?
My work currently uses Microsoft® Excel® for Microsoft 365 MSO (Version 2405 Build 16.0.17628.20006) 64-bit.
I desperately want to use this forum but the site takes forever to load
This is my second post about the loading time for the tech community site and I have to highlight it again so someone will improve the horrible user experience I am having.
It takes approximately 20 seconds to load every page.
I have done everything for optimization of the browser. Please see my previous post, but here is a video to demonstrate the pain I go through to get a page loaded.
https://app.screencast.com/W1x41SWTinGZl
Monitoring GPU Metrics in AKS with Azure Managed Prometheus, DCGM Exporter and Managed Grafana
Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure.
The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.
Azure Managed Prometheus recently announced general availability of Operator and CRD support, which will enable customers to customize metrics collection and add scraping of metrics from workloads and applications using Service and Pod Monitors, similar to the OSS Prometheus Operator.
This blog will demonstrate how we leveraged the CRD/Operator support in Azure Managed Prometheus and used the Nvidia DCGM Exporter and Grafana to enable GPU monitoring.
GPU monitoring
As the use of GPUs has skyrocketed for deploying large language models (LLMs) for both inference and fine-tuning, monitoring these resources becomes critical to ensure optimal performance and utilization. Prometheus, an open-source monitoring and alerting toolkit, coupled with Grafana, a powerful dashboarding and visualization tool, provides an excellent solution for collecting, visualizing, and acting on these metrics.
Essential metrics such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies serve as fundamental indicators of GPU consumption, offering invaluable insights into the performance and efficiency of graphics processing units, and thereby enabling us to reduce our COGs and improve operations.
Using Nvidia’s DCGM Exporter with Azure Managed Prometheus
The DCGM Exporter is a tool developed by Nvidia to collect and export GPU metrics. It runs as a pod on Kubernetes clusters and gathers various metrics from Nvidia GPUs, such as utilization, memory usage, temperature, and power consumption. These metrics are crucial for monitoring and managing the performance of GPUs.
You can integrate this exporter with Azure Managed Prometheus. The section below describes the steps and changes needed to deploy the DCGM Exporter successfully.
Prerequisites
Before we jump straight to the installation, ensure your AKS cluster meets the following requirements:
GPU Node Pool: Add a node pool with the required VM SKU that includes GPU support.
GPU Driver: Ensure the NVIDIA Kubernetes device plugin driver is running as a DaemonSet on your GPU nodes.
Enable Azure Managed Prometheus and Azure Managed Grafana on your AKS cluster.
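Before installing the exporter, you can sanity-check these prerequisites with a couple of quick commands. This is a rough sketch; it assumes the NVIDIA device plugin runs in the kube-system namespace and that you substitute a real GPU node name:
kubectl get daemonset -n kube-system | Select-String nvidia
kubectl describe node <gpu-node-name> | Select-String "nvidia.com/gpu"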
Refactoring Nvidia DCGM Exporter for AKS: Code Changes and Deployment Guide
Updating API Versions and Configurations for Seamless Integration
As per the official documentation, the best way to get started with the DCGM Exporter is to install it using Helm. When installing on AKS with Managed Prometheus, you might encounter the below error:
Error: Installation Failed: Unable to build Kubernetes objects from release manifest: resource mapping not found for name: "dcgm-exporter-xxxxx" namespace: "default" from "": no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1". Ensure CRDs are installed first.
To resolve this, follow these steps to make necessary changes in the DCGM code:
Clone the Project: Go to the GitHub repository of the DCGM Exporter and clone the project or download it to your local machine.
Navigate to the Template Folder: The code used to deploy the DCGM Exporter is located in the template folder within the deployment folder.
Modify the service-monitor.yaml File: Find the file service-monitor.yaml. The apiVersion key in this file needs to be updated from monitoring.coreos.com/v1 to azmonitoring.coreos.com/v1. This change allows the DCGM Exporter to use the Azure managed Prometheus CRD.
apiVersion: azmonitoring.coreos.com/v1
4. Handle Node Selectors and Tolerations: GPU node pools often have tolerations and node selector tags. Modify the values.yaml file in the deployment folder to handle these configurations:
nodeSelector:
  accelerator: nvidia
tolerations:
- key: "sku"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"
Helm: Packaging, Pushing, and Installation on Azure Container Registry
We followed the MS Learn documentation for pushing and installing the package through Helm on Azure Container Registry. For a comprehensive understanding, you can refer to the documentation. Here are the quick steps for installation:
After making all the necessary changes in the deployment folder of the source code, stay in that directory to package the chart. Log in to your registry to proceed further.
1. Package the Helm chart and login to your container registry:
helm package .
helm registry login <container-registry-url> --username $USER_NAME --password $PASSWORD
2. Push the Helm Chart to the Registry:
helm push dcgm-exporter-3.4.2.tgz oci://<container-registry-url>/helm
3. Verify that the package has been pushed to the registry on Azure portal.
4. Install the chart and verify the installation:
helm install dcgm-nvidia oci://<container-registry-url>/helm/dcgm-exporter -n gpu-resources
#Check the installation on your AKS cluster by running:
helm list -n gpu-resources
#Verify the DCGM Exporter:
kubectl get po -n gpu-resources
kubectl get ds -n gpu-resources
You can now check that the DCGM Exporter is running on the GPU nodes as a DaemonSet.
Exporting GPU Metrics and Configuring Azure Managed Grafana Dashboard
Once the DCGM Exporter DaemonSet is running across all GPU node pools, you need to export the GPU metrics generated by this workload to Azure Managed Prometheus. This is accomplished by deploying a PodMonitor resource. Follow these steps:
Deploy the PodMonitor: Apply the following YAML configuration to deploy the PodMonitor:
apiVersion: azmonitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: nvidia-dcgm-exporter
  labels:
    app.kubernetes.io/name: nvidia-dcgm-exporter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nvidia-dcgm-exporter
  podMetricsEndpoints:
  - port: metrics
    interval: 30s
  podTargetLabels:
2. Check if the PodMonitor is deployed and running by executing:
kubectl get podmonitor -n <namespace>
3. Verify Metrics export: Ensure that the metrics are being exported to Azure Managed Prometheus on the portal by navigating to the “Metrics” page on your Azure Monitor Workspace.
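As a quick illustration, you can try PromQL queries such as the following against the exported data; the metric names are dcgm-exporter defaults (for example DCGM_FI_DEV_GPU_UTIL and DCGM_FI_DEV_FB_USED) and may differ depending on your exporter configuration:
avg by (gpu, Hostname) (DCGM_FI_DEV_GPU_UTIL)
DCGM_FI_DEV_FB_USED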
Create the DCGM Dashboard on Azure Managed Grafana
The GitHub repository for the DCGM Exporter includes a JSON file for the Grafana dashboard. Follow the MS Learn documentation to import this JSON into your Managed Grafana instance.
After importing the JSON, the dashboard displaying GPU metrics will be visible on Grafana.
Operator/CRD support with Azure Monitor managed service for Prometheus is now Generally Available
We are excited to announce that custom resource definitions (CRD) support with Azure Monitor managed service for Prometheus is now generally available.
Azure Monitor managed service for Prometheus is a component of Azure Monitor Metrics, allowing you to collect and analyze metrics at scale using a Prometheus-compatible monitoring solution, based on the Prometheus project from the Cloud Native Computing Foundation. This fully managed service enables using the Prometheus query language (PromQL) to analyze and alert on the performance of monitored infrastructure and workloads.
What’s new?
With this new update, customers can customize scraping targets using Custom Resources (Pod Monitors and Service Monitors), similar to the OSS Prometheus Operator. Enabling the Managed Prometheus add-on on an AKS or Arc-enabled AKS cluster deploys the Pod and Service Monitor custom resource definitions, allowing you to create your own custom resources. If you are already using Prometheus Service and Pod monitors to collect metrics from your workloads, you can simply change the apiVersion in the Service/Pod monitor definitions to use them with Azure Managed Prometheus.
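For example, a minimal Service Monitor retargeted at Azure Managed Prometheus only needs its apiVersion changed; the name, label, and port below are illustrative placeholders:
apiVersion: azmonitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s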
Earlier, customers who did not have access to the kube-system namespace were not able to customize metrics collection. With this update, customers can create custom resources to enable custom configuration of scrape jobs in any namespace. This is especially useful in a multitenancy scenario where customers are running workloads in different namespaces.
Here is how a leading Public Sector Banking and Financial Services and Insurance (BFSI) company in India has used Service and Pod monitors custom resources to enable monitoring of GPU metrics with Azure Managed Prometheus, DCGM Exporter, and Azure Managed Grafana.
“Azure Monitor managed service for Prometheus provides a production-grade solution for monitoring without the hassle of installation and maintenance. By leveraging these managed services, we can focus on extracting insights from your metrics and logs rather than managing the underlying infrastructure.
The integration of essential GPU metrics—such as Framebuffer Memory Usage, GPU Utilization, Tensor Core Utilization, and SM Clock Frequencies—into Azure Managed Prometheus and Grafana enhances the visualization of actionable insights. This integration facilitates a comprehensive understanding of GPU consumption patterns, enabling more informed decisions regarding optimization and resource allocation.”
-A leading public sector BFSI company in India
Get started today!
To use CRD support with Azure Managed Prometheus, enable Managed Prometheus add-on on your AKS or Arc-enabled AKS cluster. This will automatically deploy the custom resource definitions (CRD) for service and pod monitors. To add Prometheus exporters to collect metrics from third-party workloads or other applications, and to see a list of workloads which have curated configurations and instructions, see Integrate common workloads with Azure Managed Prometheus – Azure Monitor | Microsoft Learn.
For more details refer to this article, or our documentation.
We would love to hear from you – Please share your feedback and suggestions in Azure Monitor · Community.
Skilling snack: Go cloud first with Windows device management
The future is cloud first, and it’s already here. Cloud-native device management is secure, dynamic, and most suited for remote work across future-ready organizations. Wherever you are on your cloud-native journey, these resources will help you take the next step with confidence.
Time to learn: 124 minutes
READ
Three benefits of going cloud native
If in doubt about the cloud, start here! Find our definition of cloud native followed by its three main benefits. Explore how organizations experience greater security and cost savings, transformed endpoint management, and readiness for the future.
(7 mins)
Identity + Management + Autopilot + Security + AI
READ
Best practices in moving to cloud-native endpoint management
Follow this guidance to accelerate your transition to cloud-native device management. Get ready to enable workloads in Microsoft Intune, enroll existing Windows devices, and go direct to cloud native.
(8 mins)
Intune + Microsoft Entra hybrid join + ConfigMgr + Zero Trust + Remote
LEARN
Explore Windows 365, the Microsoft cloud-based PC management solution. Learn how to configure and administer it for a secure and personalized Windows 11 experience. Earn 700 XP points for completing this learning module.
(18 mins)
Windows 365 + Cloud + Setup + Management + Security + Deployment + Licensing
READ
Deployment guide for Windows device management
Protect and manage Windows apps and endpoints using Microsoft Intune in 10 easy deployment steps. Start with the prerequisites and a plan, then create compliance policies and configure endpoint security and device settings. Learn how to set up secure authentication methods, deploy apps, and enroll devices. Finally, run remote actions and help other users.
(15 mins)
Modern device management + Permissions + Compliance + Security + Apps + Intune Company Portal
WATCH
AMA: Finding your way to “cloud first”
How can you accelerate the transition to cloud-native endpoint management for your Windows estate? What’s the logical process for moving workloads? Watch and read our experts tackle questions from the live chat.
(60 mins)
Windows + Intune + ConfigMgr + GPO + EPM + ISV + LOB + Apps
READ
Myths and misconceptions: Windows 11 and cloud native
Consider a parallel move to Windows 11 and cloud-native management. Find answers to five common misconceptions that can hinder IT admins.
(9 mins)
Windows 11 + Cloud + ConfigMgr + Autopatch + Intune + Microsoft Entra ID + App Compat + App Assure + UI + TCO
READ
How to achieve cloud-native endpoint management with Microsoft Intune
Here’s more guidance for your conversations with strategic leadership and tactical execution. Consider the change in vision, a change-in-process approach, and multiple supporting resources.
(7 mins)
Automation + FastTrack + Intune + AI
INTERACT
Use this online tool to quickly calculate the return on investment (ROI) with Intune. Compare different Intune plans and review license information.
(time varies)
Intune + ROI + EMS + E3 + E5 + F1/F3 + Business Premium
Learn more about cloud-native endpoints with our tutorials and documentation.
Bon appétit! Come back for more skilling snacks every other week and leave us a comment below with topic ideas for future learning!
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X and on LinkedIn. Looking for support? Visit Windows on Microsoft Q&A.
Microsoft Tech Community – Latest Blogs –Read More
New Certification for Dynamics 365 customer experience analysts
We’re looking for Dynamics 365 customer experience analysts to take our new beta exam. Are you responsible for configuring, customizing, and expanding the functionality of Dynamics 365 Sales to create business solutions that support, automate, and accelerate your organization’s sales process? Do you use your knowledge of customer experience capabilities in Dynamics 365 Sales and Microsoft Power Platform to configure Dynamics 365 Sales standard and premium features, implement collaboration features, and configure the security model? Additional helpful qualifications include the ability to perform Dynamics 365 Sales customizations, extend Dynamics 365 Sales with Microsoft Power Platform, and deploy the Dynamics 365 App for Outlook.
If this is your skill set, we have a new Certification for you. The Microsoft Certified: Microsoft Dynamics 365 Customer Experience Analyst Associate Certification validates your expertise in this area and offers you the opportunity to prove your skills. To earn this Certification, pass Exam MB-280: Microsoft Dynamics 365 Customer Experience Analyst, currently in beta.
Is this the right Certification for you?
As a customer experience analyst, you’re responsible for participating in Dynamics 365 Sales implementations, understanding your organization’s sales process, and demonstrating the capabilities of Dynamics 365 Customer Insights – Data and Dynamics 365 Customer Insights – Journeys. You have experience configuring model-driven apps in Power Apps. You understand accounts, contacts, and activities; leads and opportunities; the components of model-driven apps, such as forms, views, charts, and dashboards; model-driven app personal settings; and Dataverse, including tables, columns, and relationships. Plus, you have experience working with Dataverse solutions and you’re familiar with Power Automate cloud flow concepts, such as connectors, triggers, and actions. Additionally, you have an understanding of the Dataverse security model and features, including business units, security roles, and row ownership and sharing.
Ready to prove your skills?
Take advantage of the discounted beta exam offer. The first 300 people who take Exam MB-280 (beta) on or before September 6, 2024, can get 80 percent off the market price.
To receive the discount, when you register for the exam and are prompted for payment, use code MB280LMhiking. This is not a private access code. The seats are offered on a first-come, first-served basis. As noted, you must take the exam on or before September 6, 2024. Please note that this beta exam is not available in Turkey, Pakistan, India, or China.
Get ready to take Exam MB-280 (beta):
Review the Exam MB-280 (beta) exam page for details. The Exam MB-280 study guide alerts you to key topics covered on the exam.
Skill up with the Microsoft Learn Official Collection Level Up: Dynamics 365 Customer Experience Analyst.
Want even more in-depth training? Connect with a Microsoft Training Services Partner in your area for in-person offerings.
Need other preparation ideas? Check out my blog post Just How Does One Prepare for Beta Exams?
Read about our new and improved exam UI in Reimagining the Microsoft Certification exam UI experience.
Did you know that you can take any role-based exam online? Online delivered exams—taken from your home or office—can be less hassle, less stress, and even less worry than traveling to a test center, especially if you’re adequately prepared for what to expect. To find out more, read my blog post Online proctored exams: What to expect and how to prepare.
The rescore process starts on the day an exam goes live, and final scores for beta exams are released approximately 10 days after that. For details on the timing of beta exam rescoring and results, check out my post Creating high-quality exams: The path from beta to live.
Ready to get started?
Remember, the number of spots is limited to the first 300 candidates taking Exam MB-280 (beta) on or before September 6, 2024.
Related resources
Evolving Microsoft Credentials for Dynamics 365
Dynamics 365 Sales documentation on Microsoft Learn
Dynamics 365 Customer Insights documentation on Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
Duplicate emails in Inbox in Office365
I have been receiving duplicate copies of the majority of my emails in Office 365. I use a Windows computer, and this recently started happening. It is very frustrating because not all of my emails are duplicated, so I end up having to open each message to check. This has been time consuming.
Is there any way to stop getting duplicate emails?
How to download multiple APKs in one place
APKs are among the most downloaded items on the internet because people love their modified functions. The problem used to be that different APKs had to be downloaded from different platforms. MODOBR solves this problem: now you can download multiple APKs from a single website. Give modobr a try and you will fall in love with it, and save yourself the time spent searching.
[ADO] Work Items custom states
Hello.
We're shifting to ADO as a project management tool (not for software development and delivery, at least not directly).
I'm creating our own custom process (inheriting from Agile) and was wondering if there is any way for us (my team) to have custom workflow states created by default, instead of having to add them manually for each work item type we create.
This is what we would like to have. As we have several WI types to create, it would make our lives easier and possibly future-proof the process in case we need to make changes later (adding or editing – through delete+add – more WIs).
Thank you.
Feature request: (DLP) Add new passport numbering format for Canada
Since May 2023, new versions of Canadian passports have used a new numbering format.
Before May 2023, the format was AB123456 (two letters followed by six digits); the new format is A123456BC (one letter, six digits, then two letters).
Please update this article to include the new numbering format, and update Purview to include a regex for the new passport number format -> https://learn.microsoft.com/en-us/purview/sit-defn-canada-passport-number
Most likely, both formats will remain in use for another 10 years or so, after which the old-format passports will have expired (given the maximum validity allowed for passports).
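As a rough illustration of the two formats described above (not an official Purview sensitive information type definition), a minimal Python sketch of the corresponding regular expressions might look like the following; the word-boundary anchors and the helper name are assumptions for illustration only.

import re

# Old Canadian passport format: two letters followed by six digits (e.g., AB123456).
OLD_FORMAT = re.compile(r"\b[A-Z]{2}\d{6}\b")

# New format (since May 2023): one letter, six digits, then two letters (e.g., A123456BC).
NEW_FORMAT = re.compile(r"\b[A-Z]\d{6}[A-Z]{2}\b")

def looks_like_canadian_passport(text: str) -> bool:
    """Return True if the text contains a string matching either format."""
    return bool(OLD_FORMAT.search(text) or NEW_FORMAT.search(text))

# Example usage
print(looks_like_canadian_passport("Passport AB123456 on file"))   # True (old format)
print(looks_like_canadian_passport("Passport A123456BC on file"))  # True (new format)
print(looks_like_canadian_passport("Order 12345678"))              # False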