Tag Archives: microsoft
A Comprehensive Guide to the Landing Zone for Red Hat Enterprise Linux (RHEL) on Azure
The Essence of the Landing Zone for RHEL on Azure: The landing zone for RHEL on Azure is both a set of guidelines and a blueprint for success in the cloud. It encompasses a range of critical considerations, from identity and access management to network topology, security, and compliance. This document lays out a path for organizations to follow, ensuring that their RHEL systems are deployed with resiliency and aligned with enterprise-scale design principles.
Reference Architecture
The following diagram shows the Landing zone for RHEL on Azure architecture.
The following design areas provide design recommendations and considerations for the landing zone for RHEL on Azure to accelerate your journey.
Management Group and Subscription Organization
Identity and access management
Network topology and connectivity
Business continuity and disaster recovery
Governance and compliance
Security
Management and monitoring
Platform automation & DevOps
Overview
It provides design recommendations and a reference architecture, allowing organizations to make critical design decisions quickly and at scale.
The document emphasizes the importance of a Standard Operating Environment (SOE) and the advantages of implementing the Red Hat Infrastructure Standard.
It delves into the intricacies of identity and access management, offering insights into the integration of Red Hat Enterprise Linux with Microsoft Active Directory and Microsoft Entra ID.
Identity and Access Management
Red Hat Identity Management (IdM) integrates with Microsoft Active Directory and Microsoft Entra ID, providing a centralized Linux identity authority that increases operational efficiency and access control visibility.
The document recommends automating the deployment, configuration, and day-2 operations of Red Hat Identity Management using the redhat.rhel_idm certified Ansible collection.
Network Topology and Connectivity
The landing zone for RHEL on Azure emphasizes the importance of a well-designed network topology to support the deployment of RHEL systems in Azure, along with methods for a zero-trust network model and deeper micro-segmentation for enhanced security.
Deployment, Management, and Patching
Deployment of RHEL instances within Azure is performed using a system image prepared for Azure, with options available through the Azure Marketplace or Red Hat Cloud Access.
For infrastructure as code, use Azure Verified Modules, which enable and accelerate consistent solution development and delivery of cloud-native or migrated applications and their supporting infrastructure by codifying Microsoft guidance (such as the Well-Architected Framework) with best-practice configurations.
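As a simple illustration of that guidance, the sketch below deploys a Bicep template with the Az PowerShell module; the resource group name, location, template path, and parameters are placeholders and not part of the published guidance, and the template itself could reference Azure Verified Modules from the public registry (br/public:avm/...).
# Minimal sketch, assuming the Az module and an existing ./main.bicep
Connect-AzAccount
New-AzResourceGroup -Name 'rg-rhel-lz' -Location 'westeurope'
New-AzResourceGroupDeployment `
  -ResourceGroupName 'rg-rhel-lz' `
  -TemplateFile './main.bicep' `
  -TemplateParameterObject @{ environment = 'dev' }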
Red Hat Satellite and Red Hat Satellite Capsule are recommended for automating the software lifecycle and delivering software to systems wherever they are deployed.
Business Continuity & Disaster Recovery (BCDR):
The document outlines the use of Azure on-demand capacity reservation to ensure sufficient availability for RHEL deployments in Azure regions.
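As a rough sketch of what an on-demand capacity reservation can look like with the Az.Compute module (resource names, VM size, and quantity are illustrative; verify the parameter set against your module version):
# Hedged sketch: reserve compute capacity ahead of a RHEL deployment
New-AzCapacityReservationGroup -ResourceGroupName 'rg-rhel-lz' `
  -Name 'crg-rhel' -Location 'westeurope'
New-AzCapacityReservation -ResourceGroupName 'rg-rhel-lz' `
  -ReservationGroupName 'crg-rhel' -Name 'cr-rhel-d4s' `
  -Location 'westeurope' -Sku 'Standard_D4s_v5' -CapacityToReserve 4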
It discusses the importance of geographical deployment considerations for IdM infrastructure to reduce latencies and ensure no single point of failure in replication.
These examples demonstrate the comprehensive approach taken in the document to cover various critical design areas for deploying RHEL on Azure.
A scalable and repeatable approach
One of the standout features of the landing zone for RHEL on Azure is that it is built on accumulated learnings and best practices, including reference architecture. Organizations can adapt the landing zone solution to fit their specific needs, putting them on a path to sustainable scalability and automation. The document provides guidelines for creating a landing zone solution that is both robust and flexible, capable of evolving alongside the organization’s requirements.
Conclusion: The landing zone for RHEL on Azure documentation is a testament to the collaborative effort of industry leaders to provide a structured and secure approach to cloud deployment. It is a resource that empowers organizations to harness the full potential of RHEL on Azure, paving the way for a future where cloud infrastructure is synonymous with innovation and excellence. We encourage you to check out the published document and explore how it can benefit your organization today!
Logic Apps Standard – Service Bus In-App connector improvements for Peek-lock operations
In collaboration with Divya Swarnkar and Aprana Seth.
The Service Bus In-App connector is bringing new triggers and actions for peek-lock operations. These changes allow peek-lock operations on messages in queues and topic subscriptions that don’t require sessions to be started and completed from any instance of the runtime available in the pool of resources, removing the previous requirements for VNET integration and a fixed number of role instances, which were needed because of the underlying client SDK used by the connector.
The new triggers and actions will be the default operations for peek-lock, but they will not impact existing workflows. Read through the next sections to learn more about this update and its impact.
New triggers
Starting from bundle version 1.81.x, you will find new triggers for messages available in a queue and topic using the peek-lock method:
New Actions
Starting from bundle version 1.81.x, you will find new actions for managing messages in queues and topic subscriptions using the peek-lock method.
What is the difference between this version and the previous version of the connector?
The new connector actions require details of the entity holding the message (queue name, or topic and subscription name) as well as the lock token, whereas the previous version required the message ID.
This allows the connector to reuse or initialize a client on any instance of the runtime available in the pool of resources. With that, not only are the prerequisites of VNET integration and a fixed number of role instances removed, but so is the requirement that the same message receiver that peeked the message be the workflow instance that executes all subsequent actions. For more information about the previous connector requirements, check this Tech Community post.
What is the impact on existing workflows that used the previous version of the Service Bus actions?
The previous actions and triggers are marked as internal actions now. This is how Logic Apps indicates that the actions defined in existing workflows are still supported by the runtime, both at design time and during workflow execution, but shouldn’t be used for new workflows.
The impact for you as a developer is:
Workflows with the old version of the triggers and actions will show normally in the designer and remain fully supported by the runtime. This means that if you have existing workflows, you will not need to change them.
The runtime does not support the new and old versions of the actions in the same workflow. You can have workflows that use each version independently, but you can’t mix and match versions in the same workflow.
This means that if you need to add Service Bus actions to a workflow that already has actions from the previous version of the connector, all actions must be changed to the new version. Notice that all properties from the old version exist in the new one, so you can simply replace the individual actions, providing the required parameters.
What happens if my workflow requires session support?
If your workflow requires sessions, you will be using the existing triggers and actions that are specific to sessions. Those actions are the same as in the previous version, as the underlying SDK doesn’t support executing actions against a message in a session-enabled entity from any client instance.
That means that the VNET integration requirement, which existed for sessions in the previous connector, still applies. The requirement for a fixed number of role instances was removed in a previous update, when the connector received concurrency support. You can read more about the Service Bus connector support for sessions here.
What happens if I am using the Export Tool to migrate my ISE Logic Apps?
As customers run their last efforts to migrate Logic Apps from ISE to Logic Apps Standard, with many migration processes underway, we decided to keep the previous version of the Service Bus connector as the migrated connector. The reason for that decision is that many customers are still actively migrating their ISE Logic Apps fleet, with some workflows already migrated and others still in progress. Having two different connectors coming from the same export process would confuse customers and complicate their support during runtime.
After the ISE Retirement is completed, we will update the export tool to support the latest version of the connector.
MDEClientAnalyzer not working on Suse 12
We are having issues running MDEClientAnalyzer on SUSE 12. SUSE 12 is officially supported by MDE, so I assume MDEClientAnalyzer is as well. However, when we run it according to the MS instructions,
we receive the error "could not run command /bin/hostname exception: RAN: /bin/hostname -A".
It looks like the hostname command on SUSE Linux doesn't support the -A parameter. On RHEL it works perfectly and shows the FQDN when running this command. On SUSE it should be hostname -f, but MDEClientAnalyzer is not editable as it is a binary. Does anyone know how to fix it?
Show or hide the Discover feed in Microsoft Teams
Hi, Microsoft 365 Insiders,
We’re excited to introduce a new enhancement in Microsoft Teams: the ability to show or hide the Discover feed. This personalized, relevance-based feed helps you stay informed and engaged with important content while managing information overload.
Check out our latest blog: Show or hide the Discover feed in Microsoft Teams
Thanks!
Perry Sjogren
Microsoft 365 Insider Community Manager
Become a Microsoft 365 Insider and gain exclusive access to new features and help shape the future of Microsoft 365. Join Now: Windows | Mac | iOS | Android
DataOps for the modern data warehouse
By Ihor Zahorodnii
This article describes how a fictional city planning office could use this solution. The solution provides an end-to-end data pipeline that follows the MDW architectural pattern, along with corresponding DevOps and DataOps processes, to assess parking use and make more informed business decisions.
Architecture
The following diagram shows the overall architecture of the solution.
Dataflow
Azure Data Factory (ADF) orchestrates and Azure Data Lake Storage (ADLS) Gen2 stores the data:
The Contoso city parking web service API is available to transfer data from the parking spots.
There’s an ADF copy job that transfers the data into the Landing schema.
Next, Azure Databricks cleanses and standardizes the data. It takes the raw data and conditions it so data scientists can use it.
If validation reveals any bad data, it gets dumped into the Malformed schema.
Important
People have asked why the data isn’t validated before it’s stored in ADLS. The reason is that the validation might introduce a bug that could corrupt the dataset. If you introduce a bug at this step, you can fix the bug and replay your pipeline. If you dumped the bad data before you added it to ADLS, then the corrupted data is useless because you can’t replay your pipeline.
There’s a second Azure Databricks transform step that converts the data into a format that you can store in the data warehouse.
Finally, the pipeline serves the data in two different ways:
Databricks makes the data available to the data scientist so they can train models.
Polybase moves the data from the data lake to Azure Synapse Analytics, and Power BI accesses the data and presents it to the business user.
Components
The solution uses these components:
Azure Data Factory (ADF)
Azure Databricks
Azure Data Lake Storage (ADLS) Gen2
Azure Synapse Analytics
Azure Key Vault
Azure DevOps
Power BI
Scenario details
A modern data warehouse (MDW) lets you easily bring all of your data together at any scale. It doesn’t matter if it’s structured, unstructured, or semi-structured data. You can gain insights to an MDW through analytical dashboards, operational reports, or advanced analytics for all your users.
Setting up an MDW environment for both development (dev) and production (prod) environments is complex. Automating the process is key. It helps increase productivity while minimizing the risk of errors.
This article describes how a fictional city planning office could use this solution. The solution provides an end-to-end data pipeline that follows the MDW architectural pattern, along with corresponding DevOps and DataOps processes, to assess parking use and make more informed business decisions.
Solution requirements
Ability to collect data from different sources or systems.
Infrastructure as code: deploy new dev and staging (stg) environments in an automated manner.
Deploy application changes across different environments in an automated manner:
Implement continuous integration and continuous delivery (CI/CD) pipelines.
Use deployment gates for manual approvals.
Pipeline as Code: ensure the CI/CD pipeline definitions are in source control.
Carry out integration tests on changes using a sample data set.
Run pipelines on a scheduled basis.
Support future agile development, including the addition of data science workloads.
Support for both row-level and object-level security:
The security feature is available in SQL Database.
You can also find it in Azure Synapse Analytics, Azure Analysis Services (AAS) and Power BI.
Support for 10 concurrent dashboard users and 20 concurrent power users.
The data pipeline should carry out data validation and filter out malformed records to a specified store.
Support monitoring.
Centralized configuration in a secure storage like Azure Key Vault.
More details here: https://learn.microsoft.com/en-us/azure/architecture/databases/architecture/dataops-mdw
New Blog | Leveraging Azure DDoS protection with WAF rate limiting
By Saleem Bseeu
Introduction
In an increasingly interconnected world, the need for robust cybersecurity measures has never been more critical. As businesses and organizations migrate to the cloud, they must address not only the conventional threats but also more sophisticated ones like Distributed Denial of Service (DDoS) attacks. Azure, Microsoft’s cloud computing platform, offers powerful tools to protect your applications and data. In this blog post, we will explore how to leverage Azure DDoS Protection in combination with Azure Web Application Firewall (WAF) rate limiting to enhance your security posture.
Understanding DDoS Attacks
Distributed Denial of Service attacks are a malicious attempt to disrupt the normal functioning of a network, service, or website by overwhelming it with a flood of internet traffic. These attacks can paralyze online services, causing severe downtime and financial losses. Azure DDoS Protection is a service designed to mitigate such attacks and ensure the availability of your applications hosted on Azure.
Combining Azure DDoS Protection with WAF Rate Limiting
While Azure DDoS Protection can mitigate many types of attacks, it’s often beneficial to combine it with a Web Application Firewall for comprehensive security. Azure WAF provides protection at the application layer, inspecting HTTP/HTTPS traffic and identifying and blocking malicious requests. One of the key features of Azure WAF is rate limiting, which allows you to control the number of incoming requests from a single IP address or geolocation. By setting appropriate rate-limiting rules, you can mitigate application-layer DDoS attacks.
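For illustration, a rate-limit custom rule on the Application Gateway WAF can be sketched with Az.Network roughly as below. The cmdlet and parameter names reflect recent Az.Network versions, and the threshold values are placeholders, so verify them against the current module documentation before use.
# Hedged sketch: block any client IP exceeding 100 requests per minute
$var   = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
$cond  = New-AzApplicationGatewayFirewallCondition -MatchVariable $var `
           -Operator IPMatch -MatchValue '0.0.0.0/0'
$gbVar = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr
$gbUs  = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $gbVar
$rule  = New-AzApplicationGatewayFirewallCustomRule -Name RateLimitPerClient `
           -Priority 10 -RuleType RateLimitRule -RateLimitDuration OneMin `
           -RateLimitThreshold 100 -MatchCondition $cond `
           -GroupByUserSession $gbUs -Action Block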
In this article, we will delve into DDoS protection logs, exploring how to harness this valuable data to configure rate limiting on the Application Gateway WAF. By doing so, we fortify our defenses at various layers, ensuring a holistic approach to DDoS protection.
Read the full post here: Leveraging Azure DDoS protection with WAF rate limiting
New Blog | Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
In our previous blog, we explored how Power BI can complement Azure Workbook for consuming and visualizing data from Microsoft Defender for Cloud (MDC). In this second installment of our series, we dive into a common limitation faced when working with Azure Resource Graph (ARG) data – the 1000-record limit – and how Power BI can effectively address this constraint to enhance your data analysis and security insights.
The 1000-Record Limit: A Bottleneck in Data Analysis
When querying Azure Resource Graph (ARG) programmatically or using tools like Azure Workbook, users often face a limitation where the results are truncated to 1000 records. This limitation can be problematic for environments with extensive data, such as those with numerous subscriptions or complex resource configurations. Notably, this limit does not apply when accessing data through the Azure Portal’s built-in Azure Resource Graph Explorer, where users can query and view larger datasets without restriction. This difference can create a significant bottleneck for organizations relying on programmatic access to ARG data for comprehensive analysis.
Power BI and ARG Data Connector: Breaking Through the Limit
One of the key advantages of using Power BI’s ARG data connector is its ability to bypass the 1000-record limit imposed by Azure Workbook and other similar tools. By leveraging Power BI’s capabilities, users can access and visualize a comprehensive dataset without the constraints that typically come with ARG queries.
The Power BI ARG data connector provides a robust solution by enabling the extraction of larger datasets, which allows for more detailed and insightful analysis. This feature is particularly useful for organizations with extensive resource configurations and security plans, as it facilitates a deeper understanding of their security posture.
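Outside of Power BI, the same limit can also be worked around programmatically. A minimal sketch with the Az.ResourceGraph PowerShell module (the query text is illustrative) pages through results using the skip token:
# Minimal sketch: page through ARG results past the first 1000 records
Import-Module Az.ResourceGraph
$query = "securityresources | where type == 'microsoft.security/assessments'"
$all  = @()
$page = Search-AzGraph -Query $query -First 1000
$all += $page
while ($page.SkipToken) {
    $page = Search-AzGraph -Query $query -First 1000 -SkipToken $page.SkipToken
    $all += $page
}
"$($all.Count) records retrieved"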
Read the full post here: Microsoft Power BI and Defender for Cloud – Part 2: Overcoming ARG 1000-Record Limit
By Giulio Astori
Import old email into Outlook
I am a newbie. I have Microsoft Professional Plus 2024 / Microsoft 365. I want to move my old emails from my eM Client program (I have five email addresses with a history of five years of emails) into my new Outlook folders. I cannot find out how to do this. I am also trying to import my contacts (People) information into Outlook.
SQL Server Virtualization and S3 – Authentication Error
We are experimenting with data virtualization in SQL Server 2022, where we have data in S3 that we want to access from our SQL Server instances. I have completed the configuration according to the documentation, but I am getting an error when trying to access the external table. SQL Server says it cannot list the contents of the directory. Logs in AWS indicate that it cannot connect due to an authorization error where the header is malformed.
I verified that I can access that bucket with the same credentials using the AWS cli from the same machine, but I cannot figure out why it is failing or what the authorization header looks like. Any pointers on where to look?
Enable Polybase
select serverproperty('IsPolyBaseInstalled') as IsPolyBaseInstalled
exec sp_configure @configname = 'polybase enabled', @configvalue = 1
reconfigure
Create Credentials and data source
create master key encryption by password = '<some password>'
go
create credential s3_dc with identity = 'S3 Access Key', SECRET = '<access key>:<secret key>'
go
create external data source s3_ds
with (
location = 's3://<bucket_name>/<path>/',
credential = s3_dc,
connection_options = '{
"s3":{
"url_style":"virtual_hosted"
}
}'
)
go
Create External Table
CREATE EXTERNAL FILE FORMAT ParquetFileFormat WITH(FORMAT_TYPE = PARQUET)
GO
CREATE EXTERNAL TABLE sample_table(
code varchar,
the_date date,
ref_code varchar,
value1 int,
value2 int,
value3 int,
cost numeric(12,2),
peak_value varchar
)
WITH (
LOCATION = '/sample_table/',
DATA_SOURCE = s3_ds,
FILE_FORMAT = ParquetFileFormat
)
GO
Getting last email for Microsoft 365 Group via Graph
Hello,
Is there a way to get information about Last Received mail for Microsoft 365 Group using Graph?
In the past I used:
Get-ExoMailboxFolderStatistics -Identity $mailbox -IncludeOldestAndNewestItems -FolderScope Inbox
but it takes too long if there are many mailboxes.
I also tried https://graph.microsoft.com/v1.0/users/<M365Group_mailAddress>/mailFolders?`$top=1
but that didn't work, most likely because the mailbox doesn't exist from an Exchange perspective.
Any ideas?
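One avenue that may be worth trying, sketched below on the assumption that the group's object ID is known: Microsoft 365 Groups expose their mailbox content through the /groups/{id}/conversations endpoint, and each conversation carries a lastDeliveredDateTime property.
# Hedged sketch (Microsoft.Graph module): read the most recent group conversation
# The group ID is a placeholder; Group.Read.All permission is required
Connect-MgGraph -Scopes 'Group.Read.All'
$groupId = '00000000-0000-0000-0000-000000000000'
$uri = "https://graph.microsoft.com/v1.0/groups/$groupId/conversations?`$top=1"
$result = Invoke-MgGraphRequest -Method GET -Uri $uri -OutputType PSObject
$result.value | Select-Object topic, lastDeliveredDateTime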
New Blog | Detect compromised RDP sessions with Microsoft Defender for Endpoint
By SaarCohen
Human operators play a significant part in planning, managing, and executing cyber-attacks. During each phase of their operations, they learn and adapt by observing the victims’ networks and leveraging intelligence and social engineering. One of the most common tools human operators use is Remote Desktop Protocol (RDP), which gives attackers not only control, but also Graphical User Interface (GUI) visibility on remote computers. As RDP is such a popular tool in human operated attacks, it allows defenders to use the RDP context as a strong incriminator of suspicious activities. And therefore, detect Indicators of Compromise (IOCs) and act on them.
That’s why today Microsoft Defender for Endpoint is enhancing the RDP data by adding a detailed layer of session information, so you can more easily identify potentially compromised devices in your organization. This layer provides you with more details into the RDP session within the context of the activity initiated, simplifying correlation and increasing the accuracy of threat detection and proactive hunting.
Remote session information
The new layer adds eight extra fields, represented as new columns in Advanced Hunting, expanding the schema across various tables. These columns enrich process information with session details, augmenting the contextual data related to remote activities.
InitiatingProcessSessionId – Windows session ID of the initiating process.
CreatedProcessSessionId – Windows session ID of the created process.
IsInitiatingProcessRemoteSession – Indicates whether the initiating process was run under a remote desktop protocol (RDP) session (true) or locally (false).
IsProcessRemoteSession – Indicates whether the created process was run under a remote desktop protocol (RDP) session (true) or locally (false).
InitiatingProcessRemoteSessionDeviceName – Device name of the remote device from which the initiating process’s RDP session was initiated.
ProcessRemoteSessionDeviceName – Device name of the remote device from which the created process’s RDP session was initiated.
InitiatingProcessRemoteSessionIP – IP address of the remote device from which the initiating process’s RDP session was initiated.
ProcessRemoteSessionIP – IP address of the remote device from which the created process’s RDP session was initiated.
The data will be available in the following tables:
Table Name | Initiating process | Created process
DeviceEvents | Yes | Yes, where relevant
DeviceProcessEvents | Yes | Yes
DeviceFileEvents | Yes | No
DeviceImageLoadEvents | Yes | No
DeviceLogonEvents | Yes | No
DeviceNetworkEvents | Yes | No
DeviceRegistryEvents | Yes | No
Detect human-operated ransomware attacks that use RDP
Defender for Endpoint machine learning models use data from remote sessions to identify patterns of malicious activity. They assess user interactions with devices via RDP by examining more than 100 characteristics and apply a machine learning classifier to determine if the behavior is consistent with hands-on-keyboard-based attacks.
Detect suspicious RDP sessions
Another model uses remote session information to identify suspicious remote sessions. Outlined below is an example of a suspicious RDP session in which harmful tools, commonly used by attackers in ransomware campaigns and other malicious activities, are deployed, setting off a high-severity alert.
This context is also available in Advanced Hunting for custom detection and investigation purposes.
An Advanced Hunting query can be used to display all processes initiated by a source IP during an RDP session. This query can be adjusted to fit all the supported tables.
DeviceProcessEvents
| where Timestamp >= ago(1d)
| where IsInitiatingProcessRemoteSession == true
| where InitiatingProcessRemoteSessionIP == "X.X.X.X" // Insert your IP address here
| project InitiatingProcessFileName, InitiatingProcessAccountSid, InitiatingProcessCommandLine, FileName, ProcessCommandLine
Another query can be used to highlight actions performed remotely by a compromised account. This query can be adjusted to fit all the supported tables.
DeviceProcessEvents
| where Timestamp >= ago(7d)
| where InitiatingProcessAccountSid == "SID" // Insert the compromised account SID here
| where IsInitiatingProcessRemoteSession == true
| project InitiatingProcessFileName, InitiatingProcessAccountSid, InitiatingProcessCommandLine, FileName, ProcessCommandLine
You can also hunt for tampering attempts. Conducting this remotely across numerous devices can signal a broad attempt at tampering prior to an attack being launched.
DeviceRegistryEvents
| where Timestamp >= ago(7d)
| where RegistryKey == @"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender"
| where RegistryValueName == "DisableAntiSpyware"
| where RegistryValueType == "Dword"
| where RegistryValueData == "1"
| where IsInitiatingProcessRemoteSession == true
Comprehensive endpoint security
The ability to identify malicious use of RDP in Defender for Endpoint gives admins more granular visibility and control over detection, investigation, and hunting in unique edge cases, and helps them stay one step ahead of the evolving threat landscape.
For more information:
Learn more about Advanced Hunting in Microsoft Defender XDR: Overview – Advanced hunting | Microsoft Learn
Learn more about Defender for Endpoint: Microsoft Defender for Endpoint | Microsoft Security
Not a Defender for Endpoint customer? Start a free trial today.
New Outlook for Windows: A guide for Delegates – part 1
The new Outlook for Windows brings a new, powerful email experience that can help executive administrators and delegates become more productive in their everyday work. This blog captures some tips to help delegates get started in the new Outlook.
1. Toggling into new Outlook
If your organization has enabled access to new Outlook, you will see a ‘Try the new Outlook’ toggle on the top right of your classic Outlook app. Turn this toggle on to try the new Outlook experience. You can toggle back to classic Outlook at any time. Learn more about getting started in the new Outlook here.
We recommend that you select the option to ‘Import Settings’ from classic Outlook to make the new Outlook experience more familiar. You can learn more about the settings that are imported here.
Note: You can use classic Outlook and new Outlook side-by-side by toggling off and launching both apps independently.
2. Customize the Outlook ribbon
In the new Outlook, the simplified ribbon is enabled by default to offer a clean and simple experience. However, if you prefer the classic Outlook ribbon layout, you can change it from the ribbon drop-down by selecting ‘classic ribbon’.
3. Manage your settings
You can navigate to the new Outlook Settings from the gear icon in the upper right corner. Changes you make to settings in the new Outlook for Windows will also be reflected in Outlook on the web.
4. View shared calendars
We sometimes hear feedback that shared/delegate calendars are missing in new Outlook because these are not visible by default.
To view shared calendars, click ‘Show all’ in the calendar list and view shared calendars under “People’s calendars”. Then select the calendar you are interested in, and select ‘split view’ in the ribbon to view multiple calendars side by side.
We plan to automatically select the same calendars and view, including shared calendars, when users switch to new Outlook.
5. Add new shared/delegate calendars
You can add a new shared/delegate calendar either from the sharing invitation email you receive, or directly from the calendar.
To add directly from your calendar, click on ‘Add calendar’. Then choose ‘Add from directory’ and select the executive or team member whose calendar you would like to add.
Tip – you can add any team member’s calendar and see their default calendar sharing details (for most organizations, this is usually free/busy sharing).
6. Add and view shared/delegate mailboxes and folders
To add shared or delegate mailboxes and folders, click on the three dots next to the ‘shared with me’ folder under your account and select ‘Add shared folder or mailbox’. Then select the shared/delegate account you want to add. You can then view the shared mailbox or folders under ‘shared with me’.
Share feedback
We encourage you to try the new Outlook and share your feedback. You can submit feedback on the new Outlook experience from Help > Feedback.
Please mention “I am an EA” or “I am a delegate” when adding comments.
To stay updated with the latest features in new Outlook, follow the roadmap.
This guide will also be published as a support article that will be linked here once available.
Thanks!
Link is stripped from email in Workflow
I have created email notifications for whenever a message is posted to channels in Teams. In the email notification, I am trying to paste a link to the corresponding channel, but after I save, the link gets stripped out whenever I go back in to edit it or review something again. Is there something special I need to do to keep the link from being stripped out? The text I used to anchor the link stays, but the link is removed.
Users not on account getting notifications
We have multiple users with separate accounts. Within the past week or so, users are getting notifications for Bookings calendars they are not even a part of. Is anyone else experiencing this?
Function for finding percentage of sum
What function do I use and how do I write it when trying to find the percentage of: (A10+A11)/5?
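One way to do this, assuming the goal is to display (A10+A11)/5 as a percentage: no special function is needed. Enter =(A10+A11)/5 in a cell and apply the Percentage number format (Home > Number > %), or use =TEXT((A10+A11)/5,"0.0%") if you want the result as formatted text.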
Changing sender without losing the whole e-mail
I started using the New Outlook today. In the desktop app, I use my private and my work e-mail.
I got an e-mail on my private address that I wanted to answer with my work address. So I went to ‘From’ and clicked my work address. Instead of just changing the sending e-mail address (and keeping everything else), it opened a new window with a completely empty e-mail (with my work address as the sender). So now I had to manually copy-paste everything (the message I wanted to answer, the people I wanted to send it to, the subject line) from one screen to the other…
On the old Outlook I could just change the sender as easily as I could change receivers…
Marketplace Customer Office Hours: the marketplace + Azure, August 8th, at 8:30 am
Our customer office hours series is an opportunity for both customers and partners who want to understand customer FAQs. In this upcoming session focused on the marketplace + Azure, customers will get guidance on how to align Azure investments to the marketplace to help their organizations increase efficiency and spend smarter.
Register today for the marketplace + Azure.
Cannot assign SMTP service to certificate
Hi,
is this a place to ask for support? Not sure, it’s called “conversations”… 🙂
My problem is that I cannot assign the SMTP service to my freshly installed Let's Encrypt certificate (new installation of Exchange 2019 on Server 2022 Core). I ran the automated win-acme client, and the certificate is now visible in EAC. All seems to be fine so far. Now I try to assign the IIS and SMTP services, but this only works for the IIS service. The SMTP assignment is not retained, and no message appears. I have tried it via EMS, with no difference. Can anyone help?
Regards,
Stefano
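For anyone hitting the same issue, one avenue worth checking is sketched below (the thumbprint is a placeholder): in the Exchange Management Shell, Enable-ExchangeCertificate prompts before replacing the default SMTP certificate, and a declined or suppressed prompt may be why the assignment doesn't stick; -Force overwrites without prompting.
# Hedged sketch: assign the SMTP service from the Exchange Management Shell
Get-ExchangeCertificate | Format-Table Thumbprint, Services, Subject, NotAfter
Enable-ExchangeCertificate -Thumbprint '<thumbprint>' -Services SMTP -Force
# Verify that SMTP now shows up under Services
Get-ExchangeCertificate -Thumbprint '<thumbprint>' | Format-List Services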
Enhancements to the Outbound Messages in Transit Security Report
Today, we are excited to announce enhancements to the Outbound Messages in Transit Security report that help you track and optimize the security of your outbound email.
To help you identify and reduce the number of emails that are sent in plain text, we have added two new elements to the outbound messages in transit report: a new field in the Messages Sent section, and a new page called Recipient Domains Not Supporting TLS.
We have split the ‘Opportunistic TLS’ category in the Messages Sent section of the mail flow report into two categories, ‘TLS’ and ‘No-TLS’, so there are now five security categories.
With the addition of Recipient Domains Not Supporting TLS, the Outbound Messages in Transit Security report now has three views:
The Messages Blocked section compiles data for tenant admins on any SMTP DANE with DNSSEC or MTA-STS issues encountered during attempts to send messages to domains that use these security protocols.
The Messages Sent section provides time-series data for emails secured by SMTP DANE with DNSSEC, MTA-STS, Both SMTP DANE with DNSSEC and MTA-STS, TLS, or No-TLS.
Recipient Domains Not Supporting TLS provides time-series data for messages that were sent to a destination domain unencrypted (in plain text) because the destination didn’t support TLS. Exchange Online always attempts to send using TLS, but if the destination server or domain doesn’t support it, the default behavior is to send the email anyway.
How to access the new features
These updates are available right now! To access the report, go to the Exchange admin center, and then click Reports > Mail flow. Once the page loads, select the Outbound Messages in Transit Security report.
To learn more about the report, visit Outbound messages in Transit Security report in the Exchange Admin Center for Exchange Online | Microsoft Learn
How to use the data to improve your email security
The data in the Outbound Messages in Transit Security report can help you monitor and improve email security in several ways. Here are some examples of how you can use the data:
If you see a high number of emails sent in plain text to an organization, you can contact the receiving organization and ask them to enable TLS on their email servers (see the sketch after this list for a quick way to check a domain yourself).
If you see a sudden spike in the number of emails experiencing SMTP DANE with DNSSEC or MTA-STS failures, you can alert the destination organization, so they take corrective measures.
If you see a consistent pattern of emails being blocked or sent in plain text to certain domains, you can consider alternative ways of communicating with those domains. For example, you can use secure file sharing services or secure web portals to exchange information with those domains.
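Before reaching out to a receiving organization, a quick spot-check of whether a domain's primary MX host advertises STARTTLS is sketched below. The domain is a placeholder, outbound port 25 is often blocked from client networks, and the reply parsing is deliberately simplistic.
# Hedged sketch: does the domain's primary MX advertise STARTTLS?
$domain = 'example.com'
$mx = (Resolve-DnsName -Name $domain -Type MX | Sort-Object Preference)[0].NameExchange
$client = New-Object System.Net.Sockets.TcpClient($mx, 25)
$stream = $client.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.NewLine = "`r`n"; $writer.AutoFlush = $true
$null = $reader.ReadLine()            # 220 banner
$writer.WriteLine('EHLO tls-check')
Start-Sleep -Seconds 1
$ehlo = while ($stream.DataAvailable) { $reader.ReadLine() }
if ($ehlo -match 'STARTTLS') { "$mx advertises STARTTLS" } else { "$mx does not advertise STARTTLS" }
$writer.WriteLine('QUIT'); $client.Close()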
We hope that you will find these enhancements helpful. If you have any feedback or suggestions, please let us know in the comments below!
Microsoft 365 Messaging Team
(Formerly Exchange Online Transport Team)