Category: Microsoft
Logic Apps Standard – Service Bus In-App connector improvements for Peek-lock operations
In collaboration with Divya Swarnkar and Aprana Seth.
The Service Bus In-App connector is introducing new triggers and actions for peek-lock operations. These changes allow peek-lock operations on messages in queues and topics that don't require sessions to be started and completed from any instance of the runtime available in the pool of resources, removing the previous requirements for VNET integration and a fixed number of role instances, which were imposed by the underlying client SDK used by the connector.
The new triggers and actions will be the default operations for peek-lock, but they will not impact existing workflows. Read through the next sections to learn more about this update and its impact.
New triggers
Starting from bundle version 1.81.x, you will find new triggers for messages available in a queue or topic subscription using the peek-lock method:
New Actions
Starting from bundle version 1.81.x, you will find new actions for managing messages in queues and topic subscriptions using the peek-lock method.
What is the difference between this version and the previous version of the connector?
The new connector actions require details of the entity holding the message (queue name, or topic and subscription name) as well as the lock token, where the previous version required the message id.
This allows the connector to reuse or initialize a client in any instance of the runtime available in the pool of resources. With that, not only are the prerequisites of VNET integration and a fixed number of role instances removed, but so is the requirement that the same Message Receiver that peeked the message be the one that executes all the actions. For more information about the previous connector requirements, check this Tech Community post.
What is the impact of existing workflows that used the previous version of the Service Bus actions?
The previous actions and triggers are now marked as internal. This is how Logic Apps indicates that the actions defined in existing workflows are still supported by the runtime, both at design time and during workflow execution, but shouldn't be used for new workflows.
The impact for you as a developer is:
Workflows with the old version of the trigger and actions will show normally in the designer and be fully supported by the runtime. This means that if you have existing workflows, you will not need to change them.
The runtime does not support the new and old versions of the actions in the same workflow. You can have workflows that use each version independently, but you can't mix and match versions in the same workflow.
This means that if you need to add Service Bus actions to a workflow that already has actions from the previous version of the connector, all actions must be changed to the new version. Notice that all properties from the old version exist in the new one, so you can simply replace the individual actions, providing the required parameters.
What happens if my workflow requires session support?
If your workflow requires sessions, you will be using the existing trigger and actions that are specific to sessions. Those actions are the same as in the previous version, as the underlying SDK doesn't support executing actions against a message in a session-enabled entity from any client instance.
That means that the VNET integration requirement, which existed for sessions in the previous connector, still applies. The requirement for a fixed number of role instances was removed in a previous update, when the connector received concurrency support. You can read more about the Service Bus connector support for sessions here.
What happens if I am using the Export Tool to migrate my ISE Logic Apps?
As customers are still making their final push to migrate Logic Apps from ISE to Logic Apps Standard, with many migration processes underway, we decided to keep the previous version of the Service Bus connector as the migrated connector. Lots of customers are still actively migrating their ISE logic app fleet, with some workflows already migrated and others still in progress; having two different connectors coming from the same export process would confuse customers and complicate their support during runtime.
After the ISE Retirement is completed, we will update the export tool to support the latest version of the connector.
MDEClientAnalyzer not working on Suse 12
We are having issues running MDEClientAnalyzer on SUSE 12. SUSE 12 is officially supported by MDE, so I assume MDEClientAnalyzer is as well. However, when we run it according to the MS instructions,
we receive the error "could not run command /bin/hostname exception: RAN: /bin/hostname -A".
It looks like on SUSE Linux the hostname command does not support the -A parameter. On RHEL it works perfectly and shows the FQDN when running this command. On SUSE it should be hostname -f, but MDEClientAnalyzer is not editable as it is a binary. Does anyone know how to fix it?
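One possible (untested) workaround, assuming root access and that the analyzer really does invoke /bin/hostname by its absolute path as the error suggests: move the real binary aside and install a tiny shim in its place that rewrites the unsupported -A flag to -f. The shim below is a sketch of that idea; the path /bin/hostname.real is my own convention.

```python
#!/usr/bin/env python3
# Hypothetical shim for /bin/hostname on SUSE: the real binary is first
# moved aside (e.g. mv /bin/hostname /bin/hostname.real), and this script
# takes its place, rewriting the unsupported -A flag to -f before
# delegating to the real binary.
import os
import sys

def rewrite_args(args):
    # Map -A (show all FQDNs) to -f (show the FQDN); pass everything
    # else through unchanged.
    return ["-f" if a == "-A" else a for a in args]

if __name__ == "__main__" and os.path.exists("/bin/hostname.real"):
    # Only delegate when the real binary has actually been moved aside.
    os.execv("/bin/hostname.real",
             ["/bin/hostname.real"] + rewrite_args(sys.argv[1:]))
```

After installing the shim (chmod +x /bin/hostname) the analyzer run should succeed; remember to restore the original binary afterwards with mv /bin/hostname.real /bin/hostname.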
Show or hide the Discover feed in Microsoft Teams
Hi, Microsoft 365 Insiders,
We’re excited to introduce a new enhancement in Microsoft Teams: the ability to show or hide the Discover feed. This personalized, relevance-based feed helps you stay informed and engaged with important content while managing information overload.
Check out our latest blog: Show or hide the Discover feed in Microsoft Teams
Thanks!
Perry Sjogren
Microsoft 365 Insider Community Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
Ihor Zahorodnii
DataOps for the modern data warehouse
This article describes how a fictional city planning office could use this solution. The solution provides an end-to-end data pipeline that follows the MDW architectural pattern, along with corresponding DevOps and DataOps processes, to assess parking use and make more informed business decisions.
Architecture
The following diagram shows the overall architecture of the solution.
Dataflow
Azure Data Factory (ADF) orchestrates the pipeline, and Azure Data Lake Storage (ADLS) Gen2 stores the data:
The Contoso city parking web service API is available to transfer data from the parking spots.
There’s an ADF copy job that transfers the data into the Landing schema.
Next, Azure Databricks cleanses and standardizes the data. It takes the raw data and conditions it so data scientists can use it.
If validation reveals any bad data, it gets dumped into the Malformed schema.
Important
People have asked why the data isn’t validated before it’s stored in ADLS. The reason is that the validation might introduce a bug that could corrupt the dataset. If you introduce a bug at this step, you can fix the bug and replay your pipeline. If you dumped the bad data before you added it to ADLS, then the corrupted data is useless because you can’t replay your pipeline.
There’s a second Azure Databricks transform step that converts the data into a format that you can store in the data warehouse.
Finally, the pipeline serves the data in two different ways:
Databricks makes the data available to data scientists so they can train models.
PolyBase moves the data from the data lake to Azure Synapse Analytics, and Power BI accesses the data and presents it to the business user.
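The validate-and-route step described above can be sketched in a few lines. This is an illustrative Python sketch only, not the solution's actual Databricks code; the record fields and the validation rule are hypothetical.

```python
# Illustrative sketch of the validate-and-route step: records that fail
# validation are diverted to a "Malformed" store so the pipeline can be
# replayed after a bug fix, while valid records continue downstream.
# Field names and the validation rule are hypothetical.

def is_valid(record):
    # A parking-availability record must have a non-empty spot id
    # and a non-negative available-spot count.
    return bool(record.get("spot_id")) and record.get("available", -1) >= 0

def route(records):
    standardized, malformed = [], []
    for r in records:
        (standardized if is_valid(r) else malformed).append(r)
    return standardized, malformed

raw = [
    {"spot_id": "A1", "available": 3},
    {"spot_id": "", "available": 2},      # malformed: empty id
    {"spot_id": "B7", "available": -1},   # malformed: negative count
]
good, bad = route(raw)
```

Because the raw data already landed in ADLS before this step, the malformed records can be reprocessed once the validation bug is fixed, which is exactly the replay property the note above argues for.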
Components
The solution uses these components:
Azure Data Factory (ADF)
Azure Databricks
Azure Data Lake Storage (ADLS) Gen2
Azure Synapse Analytics
Azure Key Vault
Azure DevOps
Power BI
Scenario details
A modern data warehouse (MDW) lets you easily bring all of your data together at any scale. It doesn't matter if it's structured, unstructured, or semi-structured data. You can gain insights from an MDW through analytical dashboards, operational reports, or advanced analytics for all your users.
Setting up an MDW environment for both development (dev) and production (prod) environments is complex. Automating the process is key. It helps increase productivity while minimizing the risk of errors.
Solution requirements
Ability to collect data from different sources or systems.
Infrastructure as code: deploy new dev and staging (stg) environments in an automated manner.
Deploy application changes across different environments in an automated manner:
Implement continuous integration and continuous delivery (CI/CD) pipelines.
Use deployment gates for manual approvals.
Pipeline as Code: ensure the CI/CD pipeline definitions are in source control.
Carry out integration tests on changes using a sample data set.
Run pipelines on a scheduled basis.
Support future agile development, including the addition of data science workloads.
Support for both row-level and object-level security:
The security feature is available in SQL Database.
You can also find it in Azure Synapse Analytics, Azure Analysis Services (AAS) and Power BI.
Support for 10 concurrent dashboard users and 20 concurrent power users.
The data pipeline should carry out data validation and filter out malformed records to a specified store.
Support monitoring.
Centralized configuration in a secure storage like Azure Key Vault.
More details here: https://learn.microsoft.com/en-us/azure/architecture/databases/architecture/dataops-mdw
New Blog | Leveraging Azure DDoS protection with WAF rate limiting
By Saleem Bseeu
Introduction
In an increasingly interconnected world, the need for robust cybersecurity measures has never been more critical. As businesses and organizations migrate to the cloud, they must address not only the conventional threats but also more sophisticated ones like Distributed Denial of Service (DDoS) attacks. Azure, Microsoft’s cloud computing platform, offers powerful tools to protect your applications and data. In this blog post, we will explore how to leverage Azure DDoS Protection in combination with Azure Web Application Firewall (WAF) rate limiting to enhance your security posture.
Understanding DDoS Attacks
Distributed Denial of Service attacks are a malicious attempt to disrupt the normal functioning of a network, service, or website by overwhelming it with a flood of internet traffic. These attacks can paralyze online services, causing severe downtime and financial losses. Azure DDoS Protection is a service designed to mitigate such attacks and ensure the availability of your applications hosted on Azure.
Combining Azure DDoS Protection with WAF Rate Limiting
While Azure DDoS Protection can mitigate many types of attacks, it's often beneficial to combine it with a Web Application Firewall for comprehensive security. Azure WAF provides protection at the application layer, inspecting HTTP/HTTPS traffic and identifying and blocking malicious requests. One of the key features of Azure WAF is rate limiting, which allows you to control the number of incoming requests from a single IP address or geographic location. By setting appropriate rate-limiting rules, you can mitigate application-layer DDoS attacks.
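To make the rate-limiting idea concrete, here is a minimal sketch of a fixed-window per-IP limiter. This illustrates the general concept only, not Azure WAF's actual implementation; the threshold and window values are made up.

```python
# Conceptual sketch of per-IP rate limiting, the idea behind a WAF
# rate-limit rule: allow at most `threshold` requests per source IP
# within a time window, and block the rest. Not Azure WAF's actual
# implementation; threshold and window are hypothetical values.
from collections import defaultdict

class FixedWindowRateLimiter:
    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        self.counts = defaultdict(int)
        self.window_start = defaultdict(float)

    def allow(self, ip, now):
        # Reset this IP's counter once its window has elapsed.
        if now - self.window_start[ip] >= self.window:
            self.window_start[ip] = now
            self.counts[ip] = 0
        self.counts[ip] += 1
        return self.counts[ip] <= self.threshold

limiter = FixedWindowRateLimiter(threshold=3, window_seconds=60)
decisions = [limiter.allow("203.0.113.7", now=t) for t in (0, 1, 2, 3)]
# The fourth request from the same IP inside one window is blocked.
```

A real WAF rule adds refinements (sliding windows, geo grouping, distributed counters), but the block/allow decision follows this same shape.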
In this article, we will delve into DDoS protection logs, exploring how to harness this valuable data to configure rate limiting on the Application Gateway WAF. By doing so, we fortify our defenses at various layers, ensuring a holistic approach to DDoS protection.
Read the full post here: Leveraging Azure DDoS protection with WAF rate limiting
New Blog | Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
In our previous blog, we explored how Power BI can complement Azure Workbook for consuming and visualizing data from Microsoft Defender for Cloud (MDC). In this second installment of our series, we dive into a common limitation faced when working with Azure Resource Graph (ARG) data – the 1000-record limit – and how Power BI can effectively address this constraint to enhance your data analysis and security insights.
The 1000-Record Limit: A Bottleneck in Data Analysis
When querying Azure Resource Graph (ARG) programmatically or using tools like Azure Workbook, users often face a limitation where the results are truncated to 1000 records. This limitation can be problematic for environments with extensive data, such as those with numerous subscriptions or complex resource configurations. Notably, this limit does not apply when accessing data through the Azure Portal’s built-in Azure Resource Graph Explorer, where users can query and view larger datasets without restriction. This difference can create a significant bottleneck for organizations relying on programmatic access to ARG data for comprehensive analysis.
Power BI and ARG Data Connector: Breaking Through the Limit
One of the key advantages of using Power BI’s ARG data connector is its ability to bypass the 1000-record limit imposed by Azure Workbook and other similar tools. By leveraging Power BI’s capabilities, users can access and visualize a comprehensive dataset without the constraints that typically come with ARG queries.
The Power BI ARG data connector provides a robust solution by enabling the extraction of larger datasets, which allows for more detailed and insightful analysis. This feature is particularly useful for organizations with extensive resource configurations and security plans, as it facilitates a deeper understanding of their security posture.
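Programmatic clients can also page past the truncation themselves using the `$skipToken` that the ARG REST API returns with each page (the Power BI connector handles this under the hood). A minimal Python sketch follows; the HTTP call is represented by a `query_page` callable and stubbed here, since a real call needs Azure credentials and the `providers/Microsoft.ResourceGraph/resources` endpoint.

```python
# Sketch of paging Azure Resource Graph results past the 1000-record
# cap using the $skipToken returned with each page of the REST API.
# `query_page` stands in for the authenticated POST and is stubbed.

def fetch_all(query_page):
    records, skip_token = [], None
    while True:
        page = query_page(skip_token)          # one API round-trip
        records.extend(page["data"])
        skip_token = page.get("$skipToken")    # absent on the last page
        if not skip_token:
            return records

# Stub simulating a 2500-record result served in pages of 1000.
def fake_query_page(skip_token):
    start = int(skip_token or 0)
    end = min(start + 1000, 2500)
    page = {"data": list(range(start, end))}
    if end < 2500:
        page["$skipToken"] = str(end)
    return page

all_records = fetch_all(fake_query_page)   # 2500 records across 3 pages
```

The same loop works against the real endpoint by passing the token back in the request's `options.$skipToken` field on each subsequent call.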
Read the full post here: Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
By Giulio Astori
Import old email into Outlook
I am a newbie. I have Microsoft Professional Plus 2024 and Microsoft 365. I want to move my old emails from my eM Client program (I have five email addresses with five years of email history) into my new Outlook folders. I cannot find out how to do this. I am also trying to import my contacts (People) into Outlook.
SQL Server Virtualization and S3 – Authentication Error
We are experimenting with data virtualization in SQL Server 2022, where we have data in S3 that we want to access from our SQL Server instances. I have completed the configuration according to the documentation, but I am getting an error when trying to access the external table. SQL Server says it cannot list the contents of the directory. Logs in AWS indicate that it cannot connect due to an authorization error where the header is malformed.
I verified that I can access that bucket with the same credentials using the AWS cli from the same machine, but I cannot figure out why it is failing or what the authorization header looks like. Any pointers on where to look?
Enable Polybase
select serverproperty('IsPolyBaseInstalled') as IsPolyBaseInstalled
exec sp_configure @configname = 'polybase enabled', @configvalue = 1
Create Credentials and data source
create master key encryption by password = '<some password>'
go
create credential s3_dc with identity = 'S3 Access Key', SECRET = '<access key>:<secret key>'
go
create external data source s3_ds
with (
location = 's3://<bucket_name>/<path>/',
credential = s3_dc,
connection_options = '{
"s3":{
"url_style":"virtual_hosted"
}
}'
)
go
Create External Table
CREATE EXTERNAL FILE FORMAT ParquetFileFormat WITH(FORMAT_TYPE = PARQUET)
GO
CREATE EXTERNAL TABLE sample_table(
code varchar,
the_date date,
ref_code varchar,
value1 int,
value2 int,
value3 int,
cost numeric(12,2),
peak_value varchar
)
WITH (
LOCATION = '/sample_table/',
DATA_SOURCE = s3_ds,
FILE_FORMAT = ParquetFileFormat
)
GO
Getting last email for Microsoft 365 Group via Graph
Hello,
Is there a way to get information about Last Received mail for Microsoft 365 Group using Graph?
In the past I used:
Get-ExoMailboxFolderStatistics -Identity $mailbox -IncludeOldestAndNewestItems -FolderScope Inbox
but it takes too long if there are many mailboxes.
I also tried https://graph.microsoft.com/v1.0/users/<M365Group_mailAddress>/mailFolders?$top=1
but that didn't work, most likely because the mailbox doesn't exist from an Exchange perspective.
Any ideas?
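One possible avenue (untested, and an assumption on my part): mail sent to a Microsoft 365 Group lands in the group's conversations, which Graph does expose via GET /groups/{id}/conversations, and each conversation carries a lastDeliveredDateTime. A sketch that picks the newest delivery time from a fetched conversations page; the HTTP call itself is stubbed since it needs an authenticated Graph client.

```python
# Hypothetical approach: read the group's conversations
# (GET /groups/{id}/conversations), which carry lastDeliveredDateTime,
# and take the newest value client-side. The page below stands in for
# the "value" array of a real Graph response.

def newest_delivery(conversations):
    # Pick the most recent lastDeliveredDateTime from a conversations page.
    # ISO-8601 UTC timestamps of the same format compare correctly as strings.
    times = [c["lastDeliveredDateTime"] for c in conversations
             if "lastDeliveredDateTime" in c]
    return max(times) if times else None

# Stubbed page, shaped like a Graph /conversations response "value" array.
page = [
    {"topic": "Weekly sync", "lastDeliveredDateTime": "2024-05-01T09:00:00Z"},
    {"topic": "Budget", "lastDeliveredDateTime": "2024-06-12T14:30:00Z"},
]
latest = newest_delivery(page)
```

Whether this is faster than Get-ExoMailboxFolderStatistics at scale would need testing, but it avoids the per-mailbox Exchange cmdlet entirely.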
New Blog | Detect compromised RDP sessions with Microsoft Defender for Endpoint
By SaarCohen
Human operators play a significant part in planning, managing, and executing cyber-attacks. During each phase of their operations, they learn and adapt by observing the victims’ networks and leveraging intelligence and social engineering. One of the most common tools human operators use is Remote Desktop Protocol (RDP), which gives attackers not only control, but also Graphical User Interface (GUI) visibility on remote computers. Because RDP is such a popular tool in human-operated attacks, defenders can use the RDP context as a strong indicator of suspicious activity, and therefore detect Indicators of Compromise (IOCs) and act on them.
That’s why today Microsoft Defender for Endpoint is enhancing the RDP data by adding a detailed layer of session information, so you can more easily identify potentially compromised devices in your organization. This layer provides you with more details into the RDP session within the context of the activity initiated, simplifying correlation and increasing the accuracy of threat detection and proactive hunting.
Read the full post here: Detect compromised RDP sessions with Microsoft Defender for Endpoint
Remote session information
The new layer adds 8 extra fields, represented as new columns in Advanced Hunting, and expands the schema across various tables. These columns enrich process information with session details, augmenting the contextual data related to remote activities.
InitiatingProcessSessionId – Windows session ID of the initiating process
CreatedProcessSessionId – Windows session ID of the created process
IsInitiatingProcessRemoteSession – Indicates whether the initiating process was run under a remote desktop protocol (RDP) session (true) or locally (false).
IsProcessRemoteSession – Indicates whether the created process was run under a remote desktop protocol (RDP) session (true) or locally (false).
InitiatingProcessRemoteSessionDeviceName – Device name of the remote device from which the initiating process’s RDP session was initiated.
ProcessRemoteSessionDeviceName – Device name of the remote device from which the created process’s RDP session was initiated.
InitiatingProcessRemoteSessionIP – IP address of the remote device from which the initiating process’s RDP session was initiated.
ProcessRemoteSessionIP – IP address of the remote device from which the created process’s RDP session was initiated.
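To illustrate the kind of correlation these columns enable, here is a small Python sketch (not the Defender API; the sample records below are hypothetical) that filters exported process events down to those initiated under an RDP session and groups them by the remote source IP:

```python
# Illustrative only: group remote-session process launches by the remote
# device's IP, mirroring what the new columns enable in Advanced Hunting.
from collections import defaultdict

events = [
    {"FileName": "powershell.exe", "IsInitiatingProcessRemoteSession": True,
     "InitiatingProcessRemoteSessionIP": "203.0.113.7"},
    {"FileName": "notepad.exe", "IsInitiatingProcessRemoteSession": False,
     "InitiatingProcessRemoteSessionIP": ""},
    {"FileName": "mimikatz.exe", "IsInitiatingProcessRemoteSession": True,
     "InitiatingProcessRemoteSessionIP": "203.0.113.7"},
]

by_remote_ip = defaultdict(list)
for e in events:
    if e["IsInitiatingProcessRemoteSession"]:  # keep only RDP-initiated activity
        by_remote_ip[e["InitiatingProcessRemoteSessionIP"]].append(e["FileName"])

print(dict(by_remote_ip))  # processes launched per remote source IP
```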
The data will be available in the following tables:
DeviceEvents – Initiating process: Yes; Created process: Yes, where relevant
DeviceProcessEvents – Initiating process: Yes; Created process: Yes
DeviceFileEvents – Initiating process: Yes; Created process: No
DeviceImageLoadEvents – Initiating process: Yes; Created process: No
DeviceLogonEvents – Initiating process: Yes; Created process: No
DeviceNetworkEvents – Initiating process: Yes; Created process: No
DeviceRegistryEvents – Initiating process: Yes; Created process: No
Detect human-operated ransomware attacks that use RDP
Defender for Endpoint machine learning models use data from remote sessions to identify patterns of malicious activity. They assess user interactions with devices via RDP by examining more than 100 characteristics and apply a machine learning classifier to determine if the behavior is consistent with hands-on-keyboard-based attacks.
Detect suspicious RDP sessions
Another model uses remote session information to identify suspicious remote sessions. Outlined below is an example of a suspect RDP session where harmful tools, commonly used by attackers in ransomware campaigns and other malicious activities, are deployed, setting off a high-severity alert.
This context is also available in Advanced Hunting for custom detection and investigation purposes.
An Advanced Hunting query can be used to display all processes initiated by a source IP during an RDP session. This query can be adjusted to fit all the supported tables.
DeviceProcessEvents
| where Timestamp >= ago(1d)
| where IsInitiatingProcessRemoteSession == true
| where InitiatingProcessRemoteSessionIP == "X.X.X.X" // Insert your IP address here
| project InitiatingProcessFileName, InitiatingProcessAccountSid, InitiatingProcessCommandLine, FileName, ProcessCommandLine
Another query can be used to highlight actions performed remotely by a compromised account. This query can be adjusted to fit all the supported tables.
DeviceProcessEvents
| where Timestamp >= ago(7d)
| where InitiatingProcessAccountSid == "SID" // Insert the compromised account SID here
| where IsInitiatingProcessRemoteSession == true
| project InitiatingProcessFileName, InitiatingProcessAccountSid, InitiatingProcessCommandLine, FileName, ProcessCommandLine
You can also hunt for tampering attempts. Conducting this remotely across numerous devices can signal a broad attempt at tampering prior to an attack being launched.
DeviceRegistryEvents
| where Timestamp >= ago(7d)
| where RegistryKey == @"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender"
| where RegistryValueName == "DisableAntiSpyware"
| where RegistryValueType == "Dword"
| where RegistryValueData == "1"
| where IsInitiatingProcessRemoteSession == true
Comprehensive endpoint security
The ability to identify malicious use of RDP in Defender for Endpoint gives admins more granular visibility and control over detection, investigation, and hunting in unique edge cases, and helps them stay one step ahead of the evolving threat landscape.
For more information:
Learn more about Advanced Hunting in Microsoft Defender XDR: Overview – Advanced hunting | Microsoft Learn
Learn more about Defender for Endpoint: Microsoft Defender for Endpoint | Microsoft Security
Not a Defender for Endpoint customer? Start a free trial today.
Microsoft Tech Community – Latest Blogs –Read More
New Outlook for Windows: A guide for Delegates – part 1
The new Outlook for Windows brings a new, powerful email experience that can help executive administrators and delegates become more productive in their everyday work. This blog captures some tips to help delegates get started in the new Outlook.
1. Toggling into new Outlook
If your organization has enabled access to new Outlook, you will see a ‘Try the new Outlook’ toggle on the top right of your classic Outlook app. Turn this toggle on to try the new Outlook experience. You can toggle off to classic Outlook any time. Learn more about getting started in the new Outlook here.
We recommend that you select the option to ‘Import settings’ from classic Outlook to make the new Outlook experience more familiar. You can learn more about the settings that are imported here.
Note: You can use classic Outlook and new Outlook side-by-side by toggling off and launching both apps independently.
2. Customize the Outlook ribbon
In the new Outlook, the simplified ribbon is enabled by default to offer a clean and simple experience. However, if you prefer the classic Outlook ribbon layout, you can change it from the ribbon drop-down by selecting ‘Classic ribbon’.
3. Manage your settings
You can navigate to the new Outlook Settings from the gear icon in the upper right corner. Changes you make to settings in the new Outlook for Windows will also be reflected in Outlook on the web.
4. View shared calendars
We sometimes hear feedback that shared/delegate calendars are missing in new Outlook because these are not visible by default.
To view shared calendars, click ‘Show all’ in the calendar list and view shared calendars under “People’s calendars”. Then, select the calendar you are interested in and then select ‘split view’ in the ribbon to view multiple calendars side by side.
We plan to automatically select the same calendars and view, including shared calendars, when users switch to new Outlook.
5. Add new shared/ delegate calendars
You can add a new shared/ delegate calendar either from the email you receive to manage the invite, or directly from the calendar.
To add directly from your calendar, click on ‘Add calendar’. Then choose ‘Add from directory’ and select the executive or team member whose calendar you would like to add.
Tip – you can add any team member’s calendar and see their default calendar sharing details (for most organizations, this is usually free/busy sharing).
6. Add and view shared/ delegate mailboxes and folders
To add shared or delegate mailboxes and folders, click on the three dots next to the ‘shared with me’ folder under your account and select ‘Add shared folder or mailbox’. Then select the shared/delegate account you want to add. You can then view the shared mailbox or folders under ‘shared with me’.
Share feedback
We encourage you to try the new Outlook and share your feedback. You can submit feedback on the new Outlook experience from Help > Feedback.
Please mention “I am an EA” or “I am a delegate” when adding comments.
To stay updated with the latest features in new Outlook, follow the roadmap.
This guide will also be published as a support article that will be linked here once available.
Thanks!
Link is stripped from email in Workflow
I have created email notifications whenever a message is posted to channels in teams. In the email notification I am trying to paste a link to corresponding channel, but after I save the link gets stripped out whenever I go back in to edit it or review something again. Is there something special I need to do to keep the link from being stripped out? The text I used to anchor the link stays, but the link is removed.
Users not on account getting notifications
We have multiple users with separate accounts. Within the past week or so, users are getting notifications for Bookings calendars they are not even a part of. Anyone else experiencing this?
Function for finding percentage of sum
What function do I use and how do I write it when trying to find the percentage of: (A10+A11)/5?
Changing sender without losing the whole e-mail
I started using the New Outlook today. In the desktop app, I use my private and my work e-mail.
I got an e-mail on my private address that I wanted to answer with my work address. So I went to ‘From’ and clicked my work address. Instead of just changing the sending e-mail address (and keeping everything else), it opened a new window with a completely empty e-mail (with my work address as the sender). So now I had to manually copy-paste everything (the message I wanted to answer, the people I wanted to send it to, the subject line) from one screen to the other…
On the old Outlook I could just change the sender as easily as I could change receivers…
Marketplace Customer Office Hours: the marketplace + Azure, August 8th, at 8:30 am
Our customer office hours series is an opportunity for both customers and partners who want to understand customer FAQs. In this upcoming session focused on the marketplace + Azure, customers will get guidance on how to align Azure investments to the marketplace to help their organizations increase efficiency and spend smarter.
Register today for the marketplace + Azure.
Cannot assign SMTP service to certificate
Hi,
is this a place to ask for support? Not sure, it’s called “conversations”… 🙂
My problem is that I cannot assign SMTP service to my freshly installed Letsencrypt certificate (new installation of Exchange 2019 on Server 2022 core). I ran automated win-acme client and the certificate now is visible in EAC. All seems to be fine so far. Now I try to assign IIS and SMTP service, but this only works for IIS service. The assignment for SMTP is not retained without any message appearing. I have tried it via EMS, no difference. Can anyone help?
Regards,
Stefano
Enhancements to the Outbound Messages in Transit Security Report
Today, we are excited to announce enhancements to the Outbound Messages in Transit Security report that help you track and optimize the security of your outbound email.
To help you identify and reduce the number of emails that are sent in plain text, we have added two new elements to the outbound messages in transit report: a new field in the Messages Sent section, and a new page called Recipient Domains Not Supporting TLS.
We have split the ‘Opportunistic TLS’ category in the Messages Sent section of the mail flow report into two categories, ‘TLS’ and ‘No-TLS’, so there are now five security categories.
With the addition of Recipient Domains Not Supporting TLS, the Outbound Messages in Transit Security report now has 3 views:
The Messages Blocked section compiles data for tenant admins on any SMTP DANE with DNSSEC or MTA-STS issues encountered during attempts to send messages to domains that use these security protocols.
The Messages Sent section provides time-series data for emails secured by SMTP DANE with DNSSEC, MTA-STS, Both SMTP DANE with DNSSEC and MTA-STS, TLS, or No-TLS.
Recipient Domains Not Supporting TLS provides time series data for messages that were sent to a destination domain unencrypted (in plain text) because the destination didn’t support TLS. Exchange Online always attempts to send using TLS, but if the destination server or domain doesn’t support it then the default behavior is to send the email.
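The bucketing behind the Messages Sent categories can be sketched roughly as follows; this is a hypothetical illustration of the logic, not Exchange Online's actual implementation:

```python
# Hypothetical sketch: bucket an outbound message into one of the report's
# five security categories based on which protections applied to the send.
def categorize(dane_dnssec: bool, mta_sts: bool, tls: bool) -> str:
    if dane_dnssec and mta_sts:
        return "Both SMTP DANE with DNSSEC and MTA-STS"
    if dane_dnssec:
        return "SMTP DANE with DNSSEC"
    if mta_sts:
        return "MTA-STS"
    # TLS is always attempted; if the destination doesn't support it, the
    # message falls back to plain text (the default behavior described above).
    return "TLS" if tls else "No-TLS"

print(categorize(False, False, False))  # a plain-text send: "No-TLS"
```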
How to access the new features
These updates are available right now! To access the report, go to the Exchange admin center, and then click Reports > Mail flow. Once the page loads, select Outbound Messages in Transit Security report.
To learn more about the report, visit Outbound messages in Transit Security report in the Exchange Admin Center for Exchange Online | Microsoft Learn
How to use the data to improve your email security
The data in the Outbound Messages in Transit Security report can help you monitor and improve email security in several ways. Here are some examples of how you can use the data:
If you see a high number of emails sent in plain text to an organization, you can contact the receiving organization and ask them to enable TLS on their email servers.
If you see a sudden spike in the number of emails experiencing SMTP DANE with DNSSEC or MTA-STS failures, you can alert the destination organization, so they take corrective measures.
If you see a consistent pattern of emails being blocked or sent in plain text to certain domains, you can consider alternative ways of communicating with those domains. For example, you can use secure file sharing services or secure web portals to exchange information with those domains.
We hope that you will find these enhancements helpful. If you have any feedback or suggestions, please let us know in the comments below!
Microsoft 365 Messaging Team
(Formerly Exchange Online Transport Team)
Leveraging Azure DDoS protection with WAF rate limiting
Introduction
In an increasingly interconnected world, the need for robust cybersecurity measures has never been more critical. As businesses and organizations migrate to the cloud, they must address not only the conventional threats but also more sophisticated ones like Distributed Denial of Service (DDoS) attacks. Azure, Microsoft’s cloud computing platform, offers powerful tools to protect your applications and data. In this blog post, we will explore how to leverage Azure DDoS Protection in combination with Azure Web Application Firewall (WAF) rate limiting to enhance your security posture.
Understanding DDoS Attacks
Distributed Denial of Service attacks are a malicious attempt to disrupt the normal functioning of a network, service, or website by overwhelming it with a flood of internet traffic. These attacks can paralyze online services, causing severe downtime and financial losses. Azure DDoS Protection is a service designed to mitigate such attacks and ensure the availability of your applications hosted on Azure.
Combining Azure DDoS Protection with WAF Rate Limiting
While Azure DDoS Protection can mitigate many types of attacks, it’s often beneficial to combine it with a Web Application Firewall for comprehensive security. Azure WAF provides protection at the application layer, inspecting HTTP/HTTPS traffic and identifying and blocking malicious requests. One of the key features of Azure WAF is rate limiting, which allows you to control the number of incoming requests from a single IP address or Geo location. By setting appropriate rate limiting rules, you can mitigate application-layer DDoS attacks.
In this article, we will delve into DDoS protection logs, exploring how to harness this valuable data to configure rate limiting on the Application Gateway WAF. By doing so, we fortify our defenses at various layers, ensuring a holistic approach to DDoS protection.
Note: Rate limiting for Application gateway WAF is currently in GA, you can find more information here Azure Web Application Firewall (WAF) rate limiting | Microsoft Learn
Example Attack scenario
In this scenario, we outline a two-phase DDoS (Distributed Denial of Service) attack for illustration purposes. The attacker initiates with a Layer 4 TCP SYN flood attack by a bot network. This targets the network infrastructure with a flood of TCP (Transmission Control Protocol) SYN packets, primarily targeting Layer 4, the transport layer. The objective is to overwhelm network resources, including bandwidth and processing capacity, disrupting access for legitimate users. Azure DDoS Protection detects and mitigates this Layer 4 attack.
Subsequently, attackers transition to Phase 2, launching a Layer 7 (L7) DDoS attack with the same bot network. Here, the focus shifts to Layer 7, the application layer. In this scenario, they deploy a Layer 7 flood attack, exploiting application-level vulnerabilities in the target application. The goal remains consistent: disrupting the target’s application by leveraging Layer 7 weaknesses. Real-world DDoS attacks may employ various vectors, depending on application vulnerabilities. Azure DDoS Protection, combined with complementary security measures like Web Application Firewall (WAF) rate limiting, forms a robust defense against these attacks, ensuring service continuity and protection against evolving DDoS tactics.
Note: In our testing environment, we’re using spoofed Layer 4 DDoS attacks instead of those carried out by a bot network. In actual real-world situations, the attack vectors can vary widely, adapting to the specific vulnerabilities and targets. In this scenario, we assume that the attackers use the same source IPs since they are focused on launching attacks in quick succession and do not expect the target to respond quickly enough. This scenario serves as a simplified representation to highlight the importance of multi-layered defenses and the role of Azure DDoS Protection and WAF rate limit in mitigating DDoS attacks.
Prerequisites
Set up an Application Gateway with the WAF V2 SKU and select the latest WAF engine by choosing CRS 3.2 as the default rule set.
Associate a public IP address with your application gateway and activate Azure DDoS Protection (Network or IP SKU).
Ensure that logging is enabled for your public IP resource and on your Application Gateway.
Setting up DDoS protection
Ensure that Azure DDoS Protection is activated for your application gateway’s public IP. You can do this by navigating to the public IP address resource and verifying that DDoS protection is correctly configured.
To enable logging for your public IP address, access your public IP resource. Within the Diagnostic settings, create a new diagnostic configuration. Ensure that you select the DDoS logs categories and specify your preferred destination log analytics workspace.
Investigating and understanding Azure DDoS protection logs
Navigate to your log analytics workspace logs and run the following query to confirm that your public IP endpoint was under active DDoS mitigation:
AzureDiagnostics
| where Category == "DDoSProtectionNotifications"
Note: Azure DDoS protection logs are generated only during active DDoS mitigation.
As shown below, there’s a log type called “MitigationStarted,” confirming the occurrence of a DDoS attack. The message field provides details about the targeted public IP.
Next, let’s determine the source IPs responsible for this DDoS attack. Run the following query:
AzureDiagnostics
| where Category == "DDoSMitigationFlowLogs"
| where Message <> "Packet was forwarded to service"
| project Message, SourceIPAddress = tostring(sourcePublicIpAddress_s)
| summarize LogCount = count() by Message, SourceIPAddress
| order by LogCount desc
This query filters Azure Diagnostics logs for “DDoSMitigationFlowLogs,” extracts log messages and source IP addresses, and summarizes how many times each unique combination of message and source IP address appears in the logs. The results are sorted in descending order of log counts.
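The aggregation this query performs can be mirrored in plain Python (with made-up sample rows) to make the summarize and order steps concrete:

```python
# Illustrative sketch of the KQL summarize/order steps: count each
# (Message, SourceIPAddress) pair and sort by count, descending.
from collections import Counter

flow_logs = [  # hypothetical sample rows
    ("protocol violation invalid TCP syn", "198.51.100.4"),
    ("protocol violation invalid TCP syn", "198.51.100.4"),
    ("protocol violation invalid TCP syn", "198.51.100.9"),
    ("rate limit exceeded", "198.51.100.9"),
]

counts = Counter(flow_logs)
for (message, ip), n in counts.most_common():
    print(n, message, ip)
```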
In the query results below, we see that the highest log counts contain the message ‘protocol violation invalid TCP syn’, indicating that this traffic was identified as malicious by the DDoS mitigation system.
Another method for identifying malicious source IPs in DDoS attacks is by utilizing the Sentinel DDoS Protection solution. This solution includes two analytical rules, triggering incidents when specific thresholds are reached. I’ve implemented the PPS threshold, which led to the incident described below.
As illustrated, the entities displayed represent the source IPs detected during this DDoS mitigation, aligning with the source IPs previously identified through the logs query.
For additional details on the Sentinel DDoS Protection solution, see here Azure DDoS Solution for Microsoft Sentinel – Microsoft Community Hub
Configuring rate limit on Application gateway WAF
Now that we have pinpointed the malicious source IPs behind the DDoS attacks, we can employ this data to set up rate limiting in our Web Application Firewall (WAF). Rate limiting is configured through custom rules, and you have the flexibility to attach the policy either globally to your Application Gateway or on a per-site/URI basis. For instance, if your Application Gateway serves four distinct sites and you wish to tailor the WAF configuration for each site, you can attach different policies to individual listeners to accommodate site-specific WAF settings. For more information, see Configure per-site WAF policies using PowerShell – Azure Web Application Firewall | Microsoft Learn
Within the custom rules section, create a new rule and select ‘rate limit’ as the rule type. Here, you have the flexibility to choose the rate limit duration, ranging from 1 to 5 minutes, as well as the rate limit request threshold, which defines the maximum number of requests permitted within the specified rate limit duration. Given that we have identified the source IPs, choose ‘client address’ as the group rate limit traffic option. In the ‘conditions’ section, choose the match type ‘IP address,’ and then add the identified malicious IP addresses.
Note: While it is possible to configure a complete block on the identified IP addresses, it’s worth noting that attackers occasionally compromise legitimate users’ machines to launch DDoS attacks. Therefore, we opt for rate limiting to avoid outright blocking, allowing for a more nuanced approach to security.
The optimal rate limit setting depends on your specific environment and traffic patterns. One useful metric to guide you is the ‘WAF Total Requests’ found under your Application Gateway instance metrics. By selecting this metric and extending the timeline to at least 30 days, you can gather more comprehensive data to make an informed decision. Another method of rate limiting you can utilize with this information is to group by ‘None’ instead of ‘ClientAddr’ or ‘GeoLocation’. This approach groups all traffic together and counts it against the threshold of the rate limit rule you set up. Since the metric shows total WAF requests, you can use this group-by option to set the threshold against all traffic without maintaining counters for each client IP address or geography. Keep in mind that this is a powerful setting, and you should be careful when configuring it, as it could block legitimate traffic to your resources.
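As a loose illustration of the two grouping options (a simplified sketch, not how Azure WAF is implemented), a fixed-window limiter keyed either per client IP or globally might look like this:

```python
# Simplified fixed-window rate limiter. Grouping by client address keeps one
# counter per IP; grouping by "None" keeps a single counter for all traffic.
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    def __init__(self, threshold, window_seconds=60, group_by_client=True):
        self.threshold = threshold
        self.window = window_seconds
        self.group_by_client = group_by_client
        self.counters = defaultdict(int)      # key -> request count in window
        self.window_start = time.monotonic()

    def allow(self, client_ip):
        now = time.monotonic()
        if now - self.window_start >= self.window:  # new window: reset counts
            self.counters.clear()
            self.window_start = now
        key = client_ip if self.group_by_client else "ALL"
        self.counters[key] += 1
        return self.counters[key] <= self.threshold

limiter = FixedWindowRateLimiter(threshold=3)
results = [limiter.allow("203.0.113.7") for _ in range(5)]
print(results)  # first 3 requests allowed, the rest blocked within the window
```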
As an alternative approach, you have the option to set up rate limiting based on geo-location, which clusters traffic based on the geographical origin of their source IP addresses. By using the Azure DDoS Protection mitigation logs, you can pinpoint the countries from which the attacks originate and subsequently fine-tune your rate limiting rules accordingly. To find the post-mitigation logs, run the query below:
AzureDiagnostics
| where Category == "DDoSMitigationReports"
| where ReportType_s == "Post mitigation"
By leveraging the Post-Mitigation Report logs, you gain valuable insights into the countries of origin for the source IPs, along with other useful details such as top source ASNs (Autonomous System Numbers), top continents, drop reasons, and protocols. This information can be used in configuring rate limiting based on geographic locations, utilizing the top source countries data.
Investigating WAF metrics and logs
Navigate to your Application Gateway metrics tab and add these two metrics, “WAF Total Requests” and “WAF Custom Rule Matches”, to get a view on total requests inspected by WAF and the custom rules hit. As you can see below, there’s an increase in matched custom rules due to rate limiting.
To confirm that rate limiting is actively working, we can investigate WAF logs by running the following query:
AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| where priority_d == 30 // Replace 30 with your rate limiting custom rule priority
Benefits from combining Azure DDoS Protection with Azure Web Application Firewall rate limiting
Comprehensive Protection: You have multi-layered security, addressing both network-level and application-level threats.
Customization: You can fine-tune your rate limiting rules to suit your application’s unique requirements.
Visibility: Azure provides detailed traffic telemetry and analytics, allowing you to gain insights into potential threats.
Rate limiting on Azure Front Door WAF
The concepts explained for Application Gateway rate limiting in this post are also applicable to Azure Front Door WAF rate limiting. Azure Front Door (AFD) offers rate limiting capabilities as part of its Web Application Firewall (WAF) features. This allows you to control the number of requests a user can make to your application within a set time frame, effectively protecting against Layer 7 DDoS attacks. The rate limiting is configured through custom WAF rules, where you can specify the threshold for the number of web requests allowed from each socket IP address within a period of one or five minutes. Additionally, you can set up multiple rate limits for different paths within your application to ensure comprehensive protection.
This approach ensures that the rate limiting strategies discussed for Application Gateway in this blog post are equally applicable and effective when implemented on Azure Front Door WAF, offering a robust solution for your application’s security needs.
Conclusion
Protecting your applications and data from DDoS attacks is a top priority in today’s digital landscape. Azure DDoS Protection, combined with Azure Web Application Firewall rate limiting, offers a powerful defense strategy. By implementing these services on either Application Gateway or Azure Front Door, you can protect your resources, maintain high availability, and provide a secure online experience for your users.
Resources
Rate Limiting Feature for Azure WAF on Application Gateway now in Preview. – Microsoft Community Hub
Application DDoS protection – Azure Web Application Firewall | Microsoft Learn
Azure DDoS Solution for Microsoft Sentinel – Microsoft Community Hub
Configure Azure DDoS Protection diagnostic logging through portal | Microsoft Learn