IP address changes for Azure Service Bus and IP/DNS Changes for Azure Relay
What is Changing?
The infrastructure layer of Azure Relay and Service Bus is being upgraded, which will cause the IP addresses used by customer namespaces to change. For Azure Relay, the gateway DNS names are also changing.
These changes are being made as part of our continuous improvements to the platform. As previously communicated for Azure Service Bus and Azure Relay, the IP addresses of our services can change and should not be considered static. There is no added charge for this migration, nor are there any service interruptions during it.
Call to Action
If you are using IP addresses in your egress firewalls to your Azure Relay or Azure Service Bus namespaces, you will need to update them to use the namespace DNS names instead.
Alternative (not recommended!)
As a final alternative, it is possible to use the new IP addresses. We highly recommend against this, as you will need to keep track of any IP address changes yourself, and your service may be interrupted.
Azure Service Bus customers
If you are using Azure Service Bus premium, we recommend using service tags, as per our recommendations described in the service documentation. Service tags will automatically be updated if anything changes in our infrastructure.
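For illustration, here is a minimal Azure PowerShell sketch (the NSG and resource group names are placeholders, not from this announcement) of an outbound network security group rule that references the ServiceBus service tag instead of hard-coded IP addresses:
# Assumption: an existing NSG named "myNsg" in "myResourceGroup"
$nsg = Get-AzNetworkSecurityGroup -Name "myNsg" -ResourceGroupName "myResourceGroup"
# Allow outbound HTTPS/AMQP traffic to the ServiceBus service tag
Add-AzNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name "Allow-ServiceBus-Outbound" `
    -Access Allow -Direction Outbound -Priority 100 -Protocol Tcp `
    -SourceAddressPrefix VirtualNetwork -SourcePortRange * `
    -DestinationAddressPrefix ServiceBus -DestinationPortRange 443, 5671, 5672 |
    Set-AzNetworkSecurityGroup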
If you are on Azure Service Bus standard / basic or cannot use service tags on Azure Service Bus Premium, use the fully qualified domain names for your specific namespaces, or the wildcard “*.servicebus.windows.net” domains. These will automatically resolve to the new IP addresses.
For Azure Service Bus, as an alternative that we do not recommend, the IP address can be found by running a ping command against the fully qualified domain name of your specific namespace.
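As a quick PowerShell alternative to ping (the namespace name below is a placeholder), you can resolve the namespace FQDN to see which IP address it currently points to, keeping in mind that the result can change over time, which is exactly why DNS names are recommended over raw IP addresses:
# Resolve the namespace FQDN and list the current A records
Resolve-DnsName -Name "mynamespace.servicebus.windows.net" -Type A |
    Where-Object { $_.QueryType -eq "A" } |
    Select-Object Name, IPAddress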
Azure Relay customers
For Azure Relay, configure your firewalls with the DNS names of all the Relay gateways, which can be found by running this script. The script resolves the fully qualified domain names of all the gateways to which you need to establish a connection.
Furthermore, you can use the same script to get the IP addresses of all those gateways.
What’s new in Windows Autopatch: February 2024
The start of the new year brings a great opportunity for positive change, including the release of new features in Windows Autopatch. We heard your feedback! Here are some improvements made in response to your enterprise needs.
Import Update rings for Windows 10 and later in preview
Update rings allow you to specify how and when Windows as a service updates your Windows 10 or Windows 11 device with feature and quality updates. Update rings are available for Windows 10 and later. And if you’re a Windows Autopatch customer, you can now bring existing Update rings for Windows 10 and later policies into Windows Autopatch Management. For additional information, see Configure Update rings for Windows 10 and later policy in Intune.
Importing existing rings allows you to take advantage of the many capabilities of Windows Autopatch without impacting your existing Windows update schedules. Imported rings automatically register all targeted devices into Windows Autopatch without the need to redeploy or change your existing update rings. Additionally, imported rings are reflected in the reporting and release experience.
Learn how to import update rings for Windows 10 and later. If needed, brush up on Windows client updates, channels, and tools.
Customer defined service outcomes in preview
Have you used Windows Autopatch reports to monitor the health and activity of your deployments? The insights from the reports can help you understand if your devices are maintaining update compliance targets.
Previously, deployment success measures were based on a static schedule of 21 days. This meant that Windows Autopatch aimed to keep at least 95% of eligible devices on the latest Windows quality update 21 days after release.
With this enhancement, the success of Windows Autopatch deployments will be based on your defined rings. We'll also introduce new columns in the release blade, as well as in Windows quality and feature update reporting, to show the percentage complete for quality and feature updates. Devices will remain in the "In Progress" status in reporting until they either receive the current monthly cumulative update or generate an alert. If an alert is received, the status changes to "Not up to date."
To learn more, read Service level objectives.
Improved data refresh speed and reporting accuracy
Windows Autopatch reporting provides rich insights into your patch compliance status, so you can make informed choices about protecting against defects and vulnerabilities.
This release is changing the refresh cycle for Windows Autopatch reporting. The refresh cycle refers to the amount of time from when a change is made to when it’s reflected in reporting and other UX components. This time will be reduced from every 24 hours to every 30 minutes. This improvement supports the many data streams that Windows Autopatch uses to provide current update status for all devices enrolled into Windows Autopatch.
To learn more, see Windows quality update reporting.
Take your next step with Windows Autopatch
We hope these enhancements will help you keep your devices secure and up to date with less hassle and more control. Get current and stay current with automation that leads to higher security and lower costs.
The ideas behind these releases originated from conversations, input, and requests from you, our customers. We’d love to hear your feedback and suggestions on how we can continue to make Windows Autopatch even better for you. You can share your thoughts and ideas with us on our feedback hub or by joining our community forum.
If you want to learn more about Windows Autopatch:
Visit our website.
Read our documentation.
Watch our guided demos.
If you want to try Windows Autopatch for yourself, sign up for a free trial or contact us for a demo.
Thank you for choosing Windows Autopatch and stay tuned for more updates and announcements.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Security review for Microsoft Edge version 121
We are pleased to announce the security review for Microsoft Edge, version 121!
We have reviewed the new settings in Microsoft Edge version 121 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 117 security baseline continues to be our recommended configuration which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 121 introduced 11 new computer settings and 11 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
Please continue to give us feedback through the Security Baselines Discussion site or this post.
APIs in Action: Unlocking the Potential of APIs in Today’s Digital Landscape
In today’s world, APIs (Application Programming Interfaces) are essential for connecting applications and services, driving digital innovation. But with the rise of hybrid and multi-cloud setups, effective API management becomes essential for ensuring security and efficiency. That’s where APIs in Action, a virtual event dedicated to unlocking the full potential of APIs, comes in.
Join us for a full-day virtual event focused on exploring API management for integration, hybrid and multi-cloud, and AI workloads. Learn from industry experts about the latest trends and best practices shaping the API landscape. Our immersive event delves deep into APIs and API management, highlighting innovative architectures that drive business growth. Our experts will guide you through transforming existing services and making your data easily accessible to developers, both internally and externally.
Whether you’re a seasoned professional or just starting out, APIs in Action equips you with the knowledge and tools to use APIs effectively in your hybrid and multi-cloud environment. Register now and join the conversation! Experience a day filled with insightful discussions, demos, and actionable insights that will empower you to navigate the evolving landscape of API management with confidence.
Sessions are listed below with their title, abstract, and speaker(s).
The role of API Management in Azure Integration Services
A successful integration platform built with Azure Integration Services will have API Management at the heart of the solution. In this session we will discuss some of the common scenarios where you will find API Management used.
Mike Stephenson
API management for microservices in a hybrid and multi-cloud world
Microservices are on the cusp of becoming the dominant style of software architecture. This hands-on demonstration will show how enterprises can make the transition to API-first architectures and microservices in a hybrid, multi-cloud world.
Tom Kerkhove
Leveraging API Management for OpenAI Applications: Use Azure API Management (APIM) to manage, secure, and scale your LLM-based applications
This session navigates the intersection of APIM and OpenAI technologies, discussing how APIM enhances the deployment, security, and scalability of OpenAI-powered applications. Attendees will learn about APIM basics, OpenAI’s capabilities, integration strategies, security challenges, and real-world applications.
Elena Neroslavskaya, Chris Ayers
Azure API Management from a developer perspective
As organizations adopt an API-first mindset, the need for good management of your APIs grows. This session will explain the benefits of Azure API Management (APIM) through the eyes of a developer. What's in it for the developer, and how can Azure APIM help maximize the potential and security of your APIs?
Toon Vanhoutte
OpenAPI now vs. the future
Discover the essential role of OpenAPI in unlocking your API’s full potential and expanding your customer base. In this session, explore how OpenAPI is integral to the AI-driven future, providing crucial insights for staying ahead in the dynamic API landscape. Elevate your strategy and position your API for success by embracing OpenAPI.
Darrel Miller
API Design First with SwaggerHub and Azure API Management
Still designing in the dark ages with interface design documents and outdated documentation? Come see how SwaggerHub and Azure API Management can enable you to utilize the API Design First methodology to create live documentation that allows architects and stakeholders to design software together.
Joël Hébert
API DevEx
The developer experience for APIs can be difficult for new API developers and can add complexity to existing API projects due to new toolchains and evolving cloud services. In this session, we will demystify the API developer experience, leveraging tools like GitHub Copilot, Azure API Center, Azure API Management, and OpenAPI extensions.
Josh Garverick
Better API Governance with Azure API Center
An API catalog brings together the different roles involved in an API program and, by promoting collaboration between them, fosters API reuse, ensures compliance, and improves developer productivity. In this session we will explore what Azure API Center is and how to integrate it into your API design workflow.
Massimo Crippa
Leverage Postman to Collaboratively Test your APIs from design to deployment and beyond
Learn firsthand how to wield Postman effectively throughout the API Lifecycle, boosting your API implementation and fortifying security from the start with the right testing strategies.
Whether you’re in the business of creating or consuming APIs, discover how Postman and Azure API Management complement each other to enhance collaboration and streamline productivity.
Sandeep Murusupalli, Garrett London
Build a warp speed time-to-market API with DAB, APIM and Azure Container Apps
In this session we will delve into how Data API builder (DAB) enables swift and secure database object exposure through REST or GraphQL endpoints, allowing data access on any platform, language, or device. By combining DAB with Azure Container Apps and API Management, we will build and secure a serverless data API without writing a single line of code.
Massimo Crippa
Harnessing the Power of Azure API Management: Building Robust and Secure APIs
In this session, which combines theoretical knowledge with real-world scenarios, we will delve into the advanced features of Azure API Management, with a focus on building robust, secure, and scalable APIs. Attendees will learn about security best practices, policy management, and how to effectively use Azure’s tools to enhance API performance and security.
Hamida Rebai
Building a resilient API landscape with Azure API Management
Cloud service failure is inevitable. When building platforms, it is crucial to ensure that you handle failures seamlessly and are resilient to them. Learn how Azure API Management helps you mitigate and recover from failures by using built-in load balancing and circuit-breaking capabilities.
Tom Kerkhove
Enhance your API security posture with Microsoft Defender for APIs
Azure Defender for APIs brings security insights and ML-based detections to APIs that are exposed via Azure API Management. In this session we will see how to leverage Defender for APIs to enhance your security posture, which kinds of scenarios are covered, and our learnings from observing production workloads.
Massimo Crippa
Gain Understanding of APIs and Integrations with Azure Application Insights
Use Application Insights to create a correlated, end-to-end view of integrations across APIM, Logic Apps, and Functions. Learn how to record insights, including business data, then create queries to view the data and observe it through dashboards. Through Workbooks, we can create meaningful, insightful custom visuals that allow support and business teams to gain the insights they want.
Dave Phelps
GitOps for API-Management
In this talk, we will present our experience with a GitOps workflow for implementing and managing API-Management within an Integration Platform for an international corporation. We will describe how we automated infrastructure and deployment for the whole platform, addressing key aspects such as governance, permissions management, testing and documentation.
Christine Robinson, Maximiliane Ott
APIOps: Transforming Azure APIM Deployments with GitOps and DevOps Methodologies
This talk offers a deep dive into the principles and practices of automating and managing APIs in Azure API Management. Attendees will gain insights into how APIOps applies the concepts of GitOps and DevOps to API deployment. By using practices from these two methodologies, APIOps can enable everyone involved in the lifecycle of API design, development, and deployment with self-service and automated tools to ensure the quality of the specifications and APIs that they’re building.
Wael Kdouh
Sysmon v15.14
Azure SQL Managed Instance – Log Space Growth Alert using Azure Runbook/PowerShell
Introduction
There are scenarios in which customers want to monitor their transaction log space usage. Options are available to monitor Azure SQL Managed Instance metrics such as CPU, RAM, and IOPS using Azure Monitor, but there is no built-in alert for transaction log space usage.
This blog will guide you through setting up an Azure Runbook and scheduling the execution of DMVs to monitor transaction log space usage and take appropriate actions.
Overview
Microsoft Azure SQL Managed Instance enables a subset of dynamic management views (DMVs) to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on.
Using DMVs, we can also track log growth: find the usage as a percentage, compare it to a threshold value, and raise an alert.
In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE permissions.
GRANT VIEW SERVER STATE TO database_user;
Monitor log space use by using sys.dm_db_log_space_usage. This DMV returns information about the amount of log space currently used and indicates when the transaction log needs truncation.
For information about the current log file size, its maximum size, and the auto grow option for the file, you can also use the size, max_size, and growth columns for that log file in sys.database_files.
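As a hedged example of checking both views interactively from PowerShell (the server, database, and credentials below are placeholders, and the SqlServer PowerShell module is assumed):
$query = @"
SELECT used_log_space_in_percent, log_space_in_bytes_since_last_backup
FROM sys.dm_db_log_space_usage;
SELECT name, size, max_size, growth, is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG';
"@
# Run both checks against the Managed Instance public endpoint
Invoke-SqlCmd -ServerInstance "tcp:<your-mi>.<dns-zone>.database.windows.net,3342" `
    -Database "<your-database>" -Credential (Get-Credential) -Query $query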
Solution
The PowerShell script below can be used inside an Azure Runbook, and alerts can be created to notify the user about the log space used so that they can take the necessary actions.
# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process
$Threshold = 70 # Change this to your desired threshold percentage
try
{
    "Logging in to Azure..."
    Connect-AzAccount -Identity
}
catch
{
    Write-Error -Message $_.Exception
    throw $_.Exception
}
# Managed Instance endpoint (the public endpoint uses port 3342) and target database
$ServerName = "tcp:xxx.xx.xxx.database.windows.net,3342"
$databaseName = "AdventureWorks2017"
# Credential asset stored in the Automation account
$Cred = Get-AutomationPSCredential -Name "xxxx"
$Query = "USE [AdventureWorks2017]; SELECT ROUND(used_log_space_in_percent, 0) AS used_log_space_in_percent FROM sys.dm_db_log_space_usage;"
$Output = Invoke-SqlCmd -ServerInstance $ServerName -Database $databaseName -Username $Cred.UserName -Password $Cred.GetNetworkCredential().Password -Query $Query
if ($Output.used_log_space_in_percent -ge $Threshold)
{
    # Raise an alert
    $alertMessage = "Log space usage on database $databaseName is above the threshold. Current usage: $($Output.used_log_space_in_percent)%."
    Write-Output "Alert: $alertMessage"
    # Send the alert using any desired method, e.g. call a Logic App to send an
    # email or run DBCC commands (see the alert options below)
}
else
{
    Write-Output "Log space usage is within acceptable limits."
}
There are different alert options you can use to send an alert when log space exceeds its limit, as listed below.
Alert Options
Send email using Logic Apps or SMTP (a sketch of calling a Logic App from the runbook follows this list) – https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-smtp
Azure functions – https://learn.microsoft.com/en-us/samples/azure-samples/e2e-dotnetcore-function-sendemail/azure-net-core-function-to-send-email-through-smtp-for-office-365/
Run dbcc command to shrink log growth – https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/file-space-manage?view=azuresql-mi#ShrinkSize
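As a hedged sketch of the first option, assuming you have created a Logic App with an HTTP request trigger and stored its callback URL as an Automation variable (the variable name and payload shape below are illustrative, not a fixed contract), the runbook's alert branch could post to it like this:
# Fetch the Logic App callback URL stored as an Automation variable (assumed name)
$logicAppUri = Get-AutomationVariable -Name "LogSpaceAlertLogicAppUri"
$body = @{
    database = $databaseName
    usedLogSpacePercent = $Output.used_log_space_in_percent
} | ConvertTo-Json
# Trigger the Logic App, which sends the notification email
Invoke-RestMethod -Uri $logicAppUri -Method Post -Body $body -ContentType "application/json"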
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please contact the Data SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide
Azure Database for MySQL – Single Server retirement – Key updates and migration tooling available
Azure Database for MySQL – Single Server is scheduled for retirement by September 16, 2024.
As part of this retirement, we stopped support for creating new Single Server instances via the Azure portal as of January 16, 2023, and beginning March 19, 2024, we’ll no longer support creating new Single Server instances via the Azure CLI. Should you still need to create Single Server instances to meet your business continuity needs, please raise an Azure support ticket. Note that you’ll still be able to create read replicas and perform restores (PITR and geo-restore) for your existing Single Server instance until the sunset date, September 16, 2024.
If you currently have an Azure Database for MySQL – Single Server production server, we’re pleased to let you know that you can migrate your Azure Database for MySQL – Single Server instance to the Azure Database for MySQL – Flexible Server service free of charge by using one of the following migration tooling options.
Azure Database for MySQL Import CLI
You can leverage the Azure Database for MySQL Import CLI (General Availability) to migrate your Azure Database for MySQL – Single Server instances to Flexible Server using snapshot backup and restore technology with a single CLI command. Based on user inputs, this functionality provisions your target Flexible Server instance, takes a backup of the source server, and then restores it to the target. It copies the following properties and files from the Single Server instance to the Flexible Server instance (a hedged sketch of the command follows the list):
Data files
Server parameters
Compatible firewall rules
Server properties such as tier, version, SKU name, storage size, location, geo-redundant backups settings, public access settings, tags, auto grow settings and backup-retention days settings
Admin username and password
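As a heavily hedged sketch, the command shape looks roughly like the following; the command name and flags are assumptions based on the Azure Database for MySQL Import documentation, so verify the exact syntax with `az mysql flexible-server import create --help` before use:
# Placeholder names throughout; flags are assumptions, not confirmed syntax
az mysql flexible-server import create `
    --data-source-type "mysql_single_server" `
    --data-source "<single-server-name>" `
    --resource-group "<target-resource-group>" `
    --name "<target-flexible-server-name>"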
In-place auto-migration
In-place auto-migration (General Availability) from Azure Database for MySQL – Single Server to Flexible Server is an in-place upgrade during a planned maintenance window for select Single Server database workloads. If you have a Single Server workload based on the Basic or General Purpose SKU with <= 20 GiB of used storage and without complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself for auto-migration by submitting your server details using this form.
Azure Database Migration Service (DMS)
Azure Database Migration Service (DMS) (General Availability) is a fully managed service designed to enable seamless online and offline migration from Azure Database for MySQL – Single Server to Flexible Server. DMS supports cross-region, cross-version, cross-resource group, and cross-subscription migrations.
Conclusion
Take advantage of one of these options to migrate your Single Server instances to Flexible Server at no cost!
For more questions on Azure Database for MySQL Single Server retirement, see our Frequently Asked Questions.
Simplifying Azure Kubernetes Service Authentication Part 2
Welcome to the second installment of our multipart series on simplifying Azure Kubernetes Service (AKS) authentication. In this article, we delve deeper into the intricacies of AKS setup, focusing on critical aspects such as deploying demo applications, configuring Cert Manager for TLS certificates (enabling HTTPS), establishing a static IP address, creating a DNS label, and laying the groundwork for robust authentication. You can find the first part here: Part 1.
Let’s dive in!
Deploy two demo applications
In the previous post we set up our AKS cluster and configured NGINX. Now we will create two sample applications and deploy them. You can follow the official documentation here: Create an unmanaged ingress controller.
First create the following two YAML files that define our two applications:
aks-helloworld-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
aks-helloworld-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
Then run the following commands to deploy the applications:
kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
Now let's check the pods, services, and deployments:
List the pods and verify the STATUS is Running for both applications
kubectl get pods -n ingress-basic
List the service and notice the CLUSTER-IP assigned to each service
kubectl get service -n ingress-basic
List the deployment and notice the READY state
kubectl get deployment -n ingress-basic
Create an ingress route
We will proceed to create a Kubernetes Ingress resource YAML file, enabling us to efficiently route traffic to each of our deployed applications. As a reminder, our ingress controller has been configured to utilize NGINX, as discussed in our previous post. Consequently, we will leverage the NGINX configuration to effectively manage traffic for the following services:
EXTERNAL_IP/hello-world-one to aks-helloworld-one
EXTERNAL_IP/hello-world-two to aks-helloworld-two
EXTERNAL_IP/static to aks-helloworld-one
First, create the following YAML file and save it as hello-world-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
Then create the resource with the following command:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
You will need your public IP obtained from the last post. Now visit the deployed application in the web browser by navigating to:
PUBLICIP/hello-world-two or PUBLICIP/hello-world-one
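Optionally, here is a quick PowerShell sanity check (replace the placeholder with your ingress public IP) to confirm both routes return HTTP 200 from the demo applications:
$publicIp = "<PUBLIC_IP>"
foreach ($path in "hello-world-one", "hello-world-two") {
    # Each path should be served by the matching demo application
    $response = Invoke-WebRequest -Uri "http://$publicIp/$path" -UseBasicParsing
    "{0} -> HTTP {1}" -f $path, $response.StatusCode
}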
Upload cert manager images to your ACR
We will proceed to configure images for the certificate manager by importing the necessary images into our Azure Container Registry (ACR) instance. Before executing the following commands, ensure that you include the -TargetTag <your tag name> flag. Although the Microsoft documentation for using Transport Layer Security (TLS) with an ingress controller on AKS does not explicitly require this flag, it is advisable to include it. Doing so allows you to specify the ACR repository names, such as jetstack/cert-manager-cainjector, jetstack/cert-manager-controller, and jetstack/cert-manager-webhook. For detailed steps, you can refer to the official documentation here: Use TLS with an ingress controller on Azure Kubernetes Service (AKS).
Enter the following commands in PowerShell to upload the cert manager images to your ACR:
$RegistryName = "<REGISTRY_NAME>"
$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.Name -eq $RegistryName}).ResourceGroupName
$CertManagerRegistry = "quay.io"
$CertManagerTag = "v1.8.0"
$CertManagerImageController = "jetstack/cert-manager-controller"
$CertManagerImageWebhook = "jetstack/cert-manager-webhook"
$CertManagerImageCaInjector = "jetstack/cert-manager-cainjector"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageController}:${CertManagerTag}" -TargetTag "${CertManagerImageController}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}" -TargetTag "${CertManagerImageWebhook}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}" -TargetTag "${CertManagerImageCaInjector}:${CertManagerTag}"
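To optionally verify the import (a sketch; assumes the Az.ContainerRegistry module is installed), list the registry's repositories and confirm the three jetstack repositories are present:
# Returns repository names as strings, e.g. "jetstack/cert-manager-controller"
Get-AzContainerRegistryRepository -RegistryName $RegistryName |
    Where-Object { $_ -like "jetstack/*" }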
Create a static IP address
In the context of configuring the NGINX ingress controller, a static IP address is needed for reliable routing. During the NGINX setup in the previous post, a static IP address may already have been assigned, so there may be no immediate need to allocate a new one. However, to be certain that a static IP address is in use, it is advisable to assign a fresh one to the load balancer exposed by NGINX. This additional step does no harm, but it remains discretionary; depending on your specific deployment scenario, it may or may not be essential.
First get the resource group name of your AKS cluster:
(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
Then run the following command to create a static IP address:
(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
You should get an IP address. Keep a note of this IP.
Set the DNS label, static IP, and health probe using Helm
Create a DNS label name that will be used to generate an FQDN for navigating to your applications. This can be any name, but it must be unique. Additionally, add the static IP address obtained above and set the health monitoring request path. Run the following commands to configure the NGINX ingress controller:
$DnsLabel = "<DNS_LABEL>"
$Namespace = "ingress-basic"
$StaticIP = "<STATIC_IP>"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx `
    --namespace $Namespace `
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
    --set controller.service.loadBalancerIP=$StaticIP `
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
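Once the upgrade completes, a quick way to confirm the label works (a sketch; the region segment of the FQDN below is an assumption, so match it to your cluster's region) is to resolve it and check that it returns the static IP:
# The DNS label resolves as <label>.<region>.cloudapp.azure.com
Resolve-DnsName -Name "$DnsLabel.eastus.cloudapp.azure.com" -Type A |
    Select-Object Name, IPAddress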
This marks the conclusion of the second installment in our series. In the upcoming segment, we will delve further into the setup process. Specifically, we’ll configure the certificate manager, update our ingress routes, establish passwords and secrets for authentication, and prepare for the configuration of our OAuth2 proxy. Stay tuned for the next part, where we continue our journey toward a robust and secure system.
Intune moving to support Android 10 and later for user-based management methods in October 2024
We’ve heard your feedback asking to understand the plan for Intune’s support for Android operating system (OS) versions.
In October 2024 (after Google’s expected release of Android 15), Intune will revise its operating system support statement to move to supporting only Android 10 and later for user-based management methods, which include:
Android Enterprise personally owned with a work profile.
Android Enterprise corporate owned work profile.
Android Enterprise fully managed.
Android Open Source Project (AOSP) user-based.
Android Device administrator.
App protection policies.
App configuration policies for managed apps.
The following aren’t impacted by this change:
Android Enterprise dedicated devices: Will continue to be supported on Android 8 or later.
AOSP user-less: Will continue to be supported on Android 8 or later.
Microsoft Teams certified Android devices: Will be supported on versions listed in Microsoft Teams certified Android device documentation.
Microsoft Teams certified Android devices
Teams Rooms certified systems and peripherals
We plan to gradually move to only supporting the four most recent Android versions for our user-based management methods to keep enrolled devices secure. As Google continues to release new Android versions annually, we’ll stop supporting one or two older versions every October until we support only the four most recent versions. After that, we’ll end support for one version annually in October to maintain our support statement for the four latest versions.
Impact of ending support
For user-based management methods (as listed above), Android devices running Android 9 or earlier will no longer be supported. For devices on unsupported Android OS versions:
Intune technical support will no longer be provided.
Intune will no longer be making changes to address bugs or issues.
New and existing features are not guaranteed to work.
While Intune won’t prevent enrollment or management of devices on unsupported Android OS versions, functionality isn’t guaranteed, and use isn’t recommended.
How can you prepare?
Use Intune reporting to identify which devices or users might be affected:
For devices with mobile device management (MDM), go to Devices > All devices and filter by OS.
For devices with app protection policies, go to Apps > Monitor > App Protection status and use the Platform and Platform version columns to filter.
For devices with app configuration policies, go to Apps > Monitor > App Configuration status and use the Platform and Platform version columns to filter.
Warn users that they should update their Android version:
For devices with MDM, utilize a device compliance policy for Android Enterprise, Android AOSP, or Android device administrator and set the action for noncompliance to send an email or push notification to users before marking them noncompliant.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that warns users.
Block devices from accessing corporate resources until they update their Android version:
For devices with MDM, you can use either or both of these methods:
Set enrollment restrictions to prevent enrollment on devices running older versions.
Utilize a device compliance policy to make devices noncompliant if they are running older versions.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that blocks users from app access.
For more information, see Manage operating system versions with Intune. If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Join Teams for work or school meetings with personal account
We are improving the ways to join Teams meetings and have started to roll out an improvement enabling you to join a Teams meeting organized by a work or school user with your signed-in personal account. Read more on the Teams Insider blog and join Teams Insider to try this in Teams free on Windows 11 today!
Join Teams for work or school meeting with your personal account – Teams Insider
Intelligent App Chronicles: Azure API Management as an Enterprise API Gateway
The Intelligent App Chronicles for Healthcare is a webinar series designed to provide health and life sciences companies with a comprehensive guide to building intelligent healthcare applications.
The series will cover a wide range of topics including Azure Container Services, Azure AI Services, Azure Integration Services, and innovative solutions that can accelerate your intelligent app journey. By attending these webinars, you will learn how to leverage the power of intelligent systems to build scalable and secure healthcare solutions that can transform the way you deliver care. Our hosts will be Shelly (Finch) Avery | LinkedIn and Matthew Anderson | LinkedIn.
Our next session will be on Feb 20th at 9:00 PT / 10:00 MT / 11:00 CT / 12:00 ET – Click here to Register.
Overview:
Please join us for an informative session on using Azure API Management as an enterprise API gateway to create intelligent and secure healthcare applications.
Our speaker this week is Rob McKenna, Principal Technical Specialist for Azure Apps and Innovation. He will cover topics such as:
Benefits of a centralized and shared API gateway
The steps to get your enterprise teams started
Networking considerations for regulated industries
How to ensure the internal and external availability of your APIs
How to improve your developer velocity, and how to use DevOps for API management and developer experience tooling.
Don't miss this opportunity to learn from the experts and take your healthcare applications to the next level. Register now for the Intelligent App Chronicles for Healthcare webinar series here!
Thanks for reading!
Please follow aka.ms/HLSBlog for all of this great content.
Thanks for reading, Shelly Avery | Email, LinkedIn
Hunting for QR Code AiTM Phishing and User Compromise
In the dynamic landscape of adversary-in-the-middle (AiTM) attacks, the Microsoft Defender Experts team has recently observed a notable trend – QR code-themed phishing campaigns. The attackers employ deceptive QR codes to manipulate users into accessing fraudulent websites or downloading harmful content.
These attacks exploit the trust and curiosity of users who scan QR codes without verifying their source or content. Attackers can create QR codes that redirect users to phishing sites that mimic legitimate ones, such as banks, social media platforms, or online services. The targeted user scans the QR code, subsequently being redirected to a phishing page. Following user authentication, attackers steal the user’s session token, enabling them to launch various malicious activities, including Business Email Compromise attacks and data exfiltration attempts. Alternatively, attackers can create QR codes that prompt users to download malware or spyware onto their devices. These attacks can result in identity theft, financial loss, data breach, or device compromise.
This blog explains the mechanics of QR code phishing, and details how Defender Experts hunt for these phishing campaigns. Additionally, it outlines the procedures in place to notify customers about the unfolding attack narrative and its potential ramifications.
Why is QR code phishing a critical threat?
The Defender Experts team has observed that QR code campaigns are often massive and large-scale in nature. Before launching these campaigns, attackers typically conduct reconnaissance attempts to gather information on targeted users. The campaigns are then sent to large groups of people within an organization, often exceeding 1,000 users, with varying parameters across subject, sender, and body of the emails.
The identity compromises and stolen session tokens resulting from these campaigns are proportional to their large scale. In recent months, Defender Experts have observed QR code campaigns growing from 10% to 30% of total phishing campaigns. Since the campaigns do not follow a template, it can be difficult to scope and evaluate the extent of compromise. It is crucial for organizations to be aware of this trend and take steps to protect their employees from falling victim to QR code phishing attacks.
Understanding the intent of QR code phishing attacks
A QR code phishing email can have one of the following intents:
Credential theft: The majority of these campaigns are designed to redirect the user to an AiTM phishing website for session token theft. The authentication method can be single-factor authentication, where only the user's password is compromised and the sign-in attempts are unsuccessful; in these scenarios, the attacker signs in later with the compromised password and bypasses multifactor authentication (MFA) through MFA fatigue attacks. Alternatively, the user can be redirected to an AiTM phishing page where the credentials, MFA parameters, and session token are compromised in real time.
Malware distribution: In these scenarios, once the user scans the QR code, malware/spyware/adware is automatically downloaded on the mobile device.
Financial theft: These campaigns use QR codes to trick the user into making a fake payment or giving away their banking credentials. The user may scan the QR code and be taken to a bogus payment gateway or a fake bank website. The attacker can then access the user’s account later and bypass the second factor authentication by contacting the user via email or phone.
How Defender Experts approach QR code phishing
In QR code phishing attempts, the targeted user scans the QR code on their personal non-managed mobile device, which falls outside the scope of the Microsoft Defender protected environment. This is one of the key challenges for detection. In addition to detections based on Image Recognition or Optical Character Recognition, a novel approach was necessary to detect the QR code phishing attempts.
Defender Experts have researched identifying patterns across the QR code phishing campaigns and malicious sign-in attempts and devised the following detection approaches:
Pre-cursor events: User activities
Suspicious Senders
Suspicious Subject
Email Clustering
User Signals
Suspicious Sign-in attempts
1. Hunting for user behavior:
This is one of the primary detections that helps Defender Experts surface suspicious sign-in attempts from QR code phishing campaigns. Although the user scans the QR code from an email on their personal mobile device, in the majority of the scenarios, the phishing email being accessed is recorded with MailItemsAccessed mail-box auditing action.
The majority of the QR code campaigns have image (png/jpg/jpeg/gif) or document (pdf/doc/xls) attachments – yes, QR codes are embedded in Excel attachments too! The campaigns can also include a legitimate URL that redirects to a phishing page with a malicious QR code.
A malicious sign-in attempt with session token compromise that follows the QR code scan is always observed from non-trusted devices with medium/high risk score for the session.
This detection approach correlates a user accessing an email with image/document attachments and a risky sign-in attempt from non-trusted devices in closer proximity and validates if the location from where the email item was accessed is different from the location of sign-in attempt.
Advanced Hunting Query:
let successfulRiskySignIn = materialize(AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
| project Timestamp, ReportId, IPAddress, AccountUpn, AccountObjectId, SessionId, Country, State, City
);
let suspiciousSignInUsers = successfulRiskySignIn
| distinct AccountObjectId;
let suspiciousSignInIPs = successfulRiskySignIn
| distinct IPAddress;
let suspiciousSignInCities = successfulRiskySignIn
| distinct City;
CloudAppEvents
| where Timestamp > ago(1d)
| where ActionType == "MailItemsAccessed"
| where AccountObjectId in (suspiciousSignInUsers)
| where IPAddress !in (suspiciousSignInIPs)
| where City !in (suspiciousSignInCities)
| join kind=inner successfulRiskySignIn on AccountObjectId
| where (Timestamp - Timestamp1) between (-5min .. 5min)
| extend folders = RawEventData.Folders
| mv-expand folders
| extend items = folders.FolderItems
| mv-expand items
| extend InternetMessageId = tostring(items.InternetMessageId)
| project Timestamp, ReportId, IPAddress, InternetMessageId, AccountObjectId, SessionId, Country, State, City
2. Hunting for sender patterns:
The sender attributes play a key role in the detection of QR code campaigns. Since the campaigns are typically large scale in nature, 95% of them do not involve phishing emails from compromised trusted vendors. The predominant emails are sent from newly created domains or domains that are not prevalent in the organization.
Since the attack involves multiple user actions, scanning the QR code from a mobile device and completing the authentication, unlike typical phishing with simple URL clicks, the attackers induce a sense of urgency by impersonating IT support, HR support, payroll, or the administrator team, or by using a display name that indicates the email is sent on behalf of a known high-value target in the organization (e.g., "Lara Scott on behalf of CEO").
In this detection approach, we correlate emails from non-prevalent senders in the organization with impersonation intents.
Advanced Hunting Query:
let PhishingSenderDisplayNames = ()
{
pack_array("IT", "support", "Payroll", "HR", "admin", "2FA", "notification", "sign", "reminder", "consent", "workplace",
"administrator", "administration", "benefits", "employee", "update", "on behalf");
};
let suspiciousEmails = EmailEvents
| where Timestamp > ago(1d)
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailAttachmentInfo
| where Timestamp > ago(1d)
| where isempty(SenderObjectId)
| where FileType has_any ("png", "jpg", "jpeg", "bmp", "gif")
) on NetworkMessageId
| where SenderDisplayName has_any (PhishingSenderDisplayNames())
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId;
let suspiciousSenders = suspiciousEmails | distinct SenderFromDomain;
let prevalentSenders = materialize(EmailEvents
| where Timestamp between (ago(7d) .. ago(1d))
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where SenderFromDomain in (suspiciousSenders)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| distinct SenderFromDomain);
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId
Correlating suspicious emails with image attachments from a new sender with risky sign-in attempts for the recipients can also surface the QR code phishing campaigns and user compromises.
3. Hunting for subject patterns:
In addition to impersonating IT and HR teams, attackers also craft the campaigns with actionable subjects. (e.g., MFA completion required, Digitally sign documents). The targeted user is requested to complete the highlighted action by scanning the QR code in the email and providing credentials and MFA token.
In most cases, these automated phishing campaigns also include a personalized element, where the user’s first name/last name/alias/email address is included in the subject. The email address of the targeted user is also embedded in the URL behind the QR code. This serves as a unique tracker for the attacker to identify emails successfully delivered and QR codes scanned.
In this detection, we track emails with suspicious keywords in subjects or personalized subjects. To detect personalized subjects, we track campaigns where the first three words or last three words of the subject are the same, but the other values are personalized/unique.
For example:
Alex, you have an undelivered voice message
Bob, you have an undelivered voice message
Charlie, you have an undelivered voice message
Your MFA update is pending, Alex
Your MFA update is pending, Bob
Your MFA update is pending, Charlie
Advanced Hunting Query:
Personalized campaigns based on the first few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstWord = tostring(words[0]), secondWord = tostring(words[1]), thirdWord = tostring(words[2]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstWord, secondWord, thirdWord, SenderFromAddress
| where SubjectsCount >= 10
Personalized campaigns based on the last few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstLastWord = tostring(words[-1]), secondLastWord = tostring(words[-2]), thirdLastWord = tostring(words[-3]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstLastWord, secondLastWord, thirdLastWord, SenderFromAddress
| where SubjectsCount >= 10
Campaign with suspicious keywords:
let PhishingKeywords = ()
{
pack_array("account", "alert", "bank", "billing", "card", "change", "confirmation",
"login", "password", "mfa", "authorize", "authenticate", "payment", "urgent", "verify", "blocked");
};
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| where Subject has_any (PhishingKeywords())
4. Hunting for attachment name patterns:
Based on historical QR code campaign investigations, Defender Experts have identified that the attachment names in these campaigns are usually randomized by the attackers, meaning every email has a differently named QR code attachment with a high level of randomization. Emails with randomly named attachments from the same sender to multiple recipients, typically more than 50, can potentially indicate a QR code phishing campaign.
Campaign with randomly named attachments:
// Note: hasNonPrevalentSenders, emailStartTime, emailEndTime, and
// nonPrevalentSenders are placeholders for scoping values produced by the
// sender-pattern hunting in section 2
EmailAttachmentInfo
| where hasNonPrevalentSenders
| where Timestamp between (emailStartTime .. emailEndTime)
| where SenderFromAddress in (nonPrevalentSenders)
| where FileType in ("png", "jpg", "jpeg", "gif", "svg")
| where isnotempty(FileName)
| extend firstFourFileName = substring(FileName, 0, 4)
| summarize RecipientsCount = dcount(RecipientEmailAddress), FirstFourFilesCount = dcount(firstFourFileName), suspiciousEmails = make_set(NetworkMessageId, 10) by SenderFromAddress
| where FirstFourFilesCount >= 10
5. Hunting for user signals/clusters
In order to craft effective large-scale QR code phishing attacks, the attackers perform reconnaissance across social media to gather target user email addresses, their preferences, and much more. These campaigns are sent to 1,000+ users in the organization with luring subjects and contents based on those preferences. However, Defender Experts have observed that at least one user usually finds the campaign suspicious and reports the email, which generates this alert: "Email reported by user as malware or phish."
This alert can be another starting point for hunting activity to identify the scope of the campaign and compromises. Since the campaigns are specifically crafted for each group of users, scoping based on sender/subject/filename might not be an effective approach. Microsoft Defender for Office 365 offers a heuristic approach based on email content as a solution to this problem: emails with similar content that are likely to be from one attacker are clustered together, and the cluster ID is populated in the EmailClusterId field of the EmailEvents table.
The clusters can include all phishing attempts from the attackers against the organization so far; based on similarity, a cluster can aggregate emails with malicious URLs, attachments, and QR codes as one. Hence, this is a powerful approach to explore the attacker's persistent phishing techniques and the repeatedly targeted users.
Below is a sample query on scoping a campaign from the email reported by the end user. The same scoping logic can be used on the previously discussed hunting hypotheses as well.
let suspiciousClusters = EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where NetworkMessageId in (<List of suspicious Network Message Ids from Alerts>)
| distinct EmailClusterId;
EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where EmailClusterId in (suspiciousClusters)
| summarize make_set(Subject), make_set(SenderFromDomain), dcount(RecipientObjectId), dcount(SenderDisplayName) by EmailClusterId
6. Hunting for suspicious sign-in attempts:
In addition to detecting the campaigns, it is critical that we identify the compromised identities. To surface the identities compromised by AiTM, we can utilize the below approaches.
Risky sign-in attempt from a non-managed device
Any sign-in attempt from a non-managed, non-compliant, untrusted device should be taken into consideration, and a risk score for the sign-in attempt increases the anomalous nature of the activity. Monitoring these sign-in attempts can surface the identity compromises.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IsManaged != 1
| where IsCompliant != 1
// Filtering only for medium and high risk sign-ins
| where RiskLevelDuringSignIn in (50, 100)
| where ClientAppUsed == "Browser"
| where isempty(DeviceTrustType)
| where isnotempty(State) or isnotempty(Country) or isnotempty(City)
| where isnotempty(IPAddress)
| where isnotempty(AccountObjectId)
| where isempty(DeviceName)
| where isempty(AadDeviceId)
| project Timestamp, IPAddress, AccountObjectId, ApplicationId, SessionId, RiskLevelDuringSignIn, BrowserId
Suspicious sign-in attributes
Sign-in attempts from untrusted devices with an empty user agent or operating system, or with an anomalous BrowserId, can also be an indication of identity compromise from AiTM.
Defender Experts also recommend monitoring the sign-ins from known malicious IP addresses. Although the mode of delivery of the phishing campaigns differ (QR code, HTML attachment, URL), the sign-in infrastructure often remains the same. Monitoring the sign-in patterns of compromised users, and continuously scoping the sign-in attempts based on the known patterns can also surface the identity compromises from AiTM.
Mitigations
Apply these mitigations to reduce the impact of this threat:
Educate users about the risks of QR code phishing emails.
Implement Microsoft Defender for Endpoint – Mobile Threat Defense on mobile devices used to access enterprise assets.
Enable Conditional Access policies in Microsoft Entra, especially risk-based access policies. Conditional Access policies evaluate sign-in requests using additional identity-driven signals, such as user or group membership, IP address location information, and device status, and enforce them for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant device requirements, trusted IP address requirements, or risk-based policies with proper access control. If you are still evaluating Conditional Access, use security defaults as an initial baseline set of policies to improve your identity security posture.
Implement continuous access evaluation.
Leverage Microsoft Edge to automatically identify and block malicious websites, including those used in this phishing campaign, and Microsoft Defender for Office 365 to detect and block malicious emails, links, and files.
Monitor suspicious or anomalous activities in Microsoft Entra ID Protection. Investigate sign-in attempts with suspicious characteristics (e.g., location, ISP, user agent, and use of anonymizer services).
Implement Microsoft Entra passwordless sign-in with FIDO2 security keys.
Turn on network protection in Microsoft Defender for Endpoint to block connections to malicious domains and IP addresses.
If you’re interested in learning more about our Defender Experts services, visit the following resources:
Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page
Azure Data @ Microsoft Fabric Community Conference 2024 | Data Exposed Exclusive
In this Data Exposed Exclusive, join Anna Hoffman, Bob Ward, and Jason Himmelstein as they discuss everything you need to know about the upcoming Microsoft Fabric Community Conference!
Microsoft Fabric Community Conference registration: https://aka.ms/fabcon (Enter the code DATAEXPOSED100 for a $100 savings)
Enforcement of Defender CSPM for Premium DevOps Security Capabilities
Microsoft Defender for Cloud will begin enforcing the Defender Cloud Security Posture Management (DCSPM) plan check for premium DevOps security value on March 7, 2024. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you’ll continue to receive premium code-to-cloud DevOps capabilities at no additional cost. If you aren’t a Defender CSPM customer, you have until March 7, 2024 to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7, 2024, follow the enablement documentation outlined here.
Microsoft Defender CSPM provides advanced security posture capabilities, including agentless vulnerability scanning, attack path analysis, integrated data-aware security posture, code-to-cloud contextualization, and an intelligent cloud security graph. Pricing depends on cloud environment size, with billing based on server, storage account, and database counts. There is no additional charge for DevOps resources with this enforcement.
More Information
For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see our documentation outlining feature availability.
For more information about DevOps Security in Defender for Cloud, see the overview documentation.
For more information on the code to cloud security capabilities in Defender CSPM, see how to protect your resources with Defender CSPM.
For more information on Defender CSPM pricing, see the pricing page.
Azure Verified Modules – Monthly Update Jan ’24
Azure Verified Modules (AVM) is an initiative to consolidate and set the standards for what a good Infrastructure-as-Code module looks like. Spanning languages (Bicep, Terraform, etc.), AVM is a unified approach that provides a common code base and a toolkit for our customers, our partners, and Microsoft.
AVM is a community-driven aspiration, inside and outside of Microsoft. If you are not familiar with AVM yet, check out this video on YouTube:
What Is This Series?
For Azure Verified Modules, we will be producing monthly updates in which we share the latest news and features of Azure Verified Modules, including:
Module updates
Updates to the AVM framework
Our community engagement
In some months we may also focus on a highlight module: a pattern or workflow that the community (you!) would like to learn more about from the module owner.
AVM Module Summary
The AVM team is excited that our community has been busy building AVM modules. As of January 31st, the AVM footprint looks like this:
Bicep: 84 modules published, 35 in development
Terraform: 18 modules published, 37 in development
Bicep Resource Modules Published in January:
The full list of Bicep Resource Modules is available here: AVM Bicep Resource Index
analysis-services/server
app/container-app
cache/redis
compute/disk
compute/disk-encryption-set
compute/image
compute/proximity-placement-group
compute/virtual-machine
consumption/budget
container-registry/registry
container-service/managed-cluster
data-protection/backup-vault
databricks/access-connector
databricks/workspace
db-for-my-sql/flexible-server
health-bot/health-bot
net-app/net-app-account
network/ddos-protection-plan
network/firewall-policy
network/front-door
network/front-door-web-application-firewall-policy
network/local-network-gateway
network/nat-gateway
network/virtual-network-gateway
network/vpn-gateway
service-bus/namespace (updates)
storage/storage-account
web/site
web/static-site
Terraform Resource Modules:
The full list of Terraform Resource Modules is available here: AVM Terraform Resource Index
authorization-roleassignment
network-azurefirewall
network-firewallpolicy
network-networkmanager
operationalinsights-workspace
Terraform Pattern Modules
The full list of Terraform Pattern Modules is available here: AVM Terraform Pattern Index
alz-management
network-virtualwan (update)
Updates and Improvements
We have also made some updates and improvements to the existing Azure Verified Modules, based on your feedback and suggestions. Some of the highlights are:
Bicep
Improved the module publishing workflow to allow better IntelliSense when using the Visual Studio Code extension for Bicep.
Extended compliance tests to include AVM Bicep CI framework files.
Added an automatic issue life-cycle management workflow (ref) that tracks the stability of a module and its owner.
Improved pipeline handling and readability (ref).
Added the ability to batch disable and enable GitHub workflows in user forks (Bicep).
Terraform
Implemented a GREPT workflow for repository linting and governance (Link to Matt’s video).
Added OpenID Connect integration for Terraform test validation.
Put an MVP of the centralized module testing framework in place, utilizing Docker for both local and GitHub Actions testing.
AVM General
Automated issue creation for tracking GitHub Teams alignment to the specs required for AVM modules.
Further Resources
Interim guidance for DST changes announced by Palestinian authority for 2024, 2025.
The Palestinian authority has decided to delay the start of Daylight Saving Time (DST) in 2024 and 2025. The Ministry of Communications and Information Technology of the Palestinian authority has conveyed this decision in an article dated January 30, 2024.
This moves the DST entry date further away from the month of Ramadan and the Eid ul Fitr holiday, which marks the end of Ramadan.
The Palestinian authority also announced that the 2025 DST date will be delayed by one week.
The impact of this change is as follows:
Clocks will be set forward 1 hour on Saturday, April 20, 2024, from 02:00 (2 am) to 03:00 (3 am) local time.
Clocks will be set back 1 hour on Saturday, October 26, 2024, from 02:00 (2 am), to 01:00 (1 am) local time.
The following platforms will receive an update to support this time zone change as part of the March 2024 non-security preview update or the April 2024 security update:
Windows Server, version 23H2
Windows 11, version 22H2 and version 23H2
Windows 11, version 21H2
Windows 10, version 22H2; Windows 10, version 21H2
Windows Server 2022
Windows 10 Enterprise LTSC 2019; Windows Server 2019
Windows 10 Enterprise LTSC 2016; Windows Server 2016
Windows 10 Enterprise 2015 LTSB
Windows Server 2012
Windows Server 2008 SP2
Windows 8.1
Windows 7 SP1
For additional information, please review our official policy page and How Windows manages time zone changes.
Deep Dive of Microsoft-managed Conditional Access Policies in Microsoft Entra ID
This blog was originally published on the Entra ID blog on February 6.
In November 2023 at Microsoft Ignite, we announced Microsoft-managed policies and the auto-rollout of multifactor authentication (MFA)-related Conditional Access policies in customer tenants. Since then, we’ve rolled out report-only policies for over 500,000 tenants. These policies are part of our Secure Future Initiative, which includes key engineering advances to improve security for customers against cyberthreats that we anticipate will increase over time.
This follow-up blog will dive deeper into these policies to provide you with a comprehensive understanding of what they entail and how they function.
Multifactor authentication for admins accessing Microsoft admin portals
Admin accounts with elevated privileges are more likely to be attacked, so enforcing MFA for these roles protects these privileged administrative functions. This policy covers 14 admin roles that we consider to be highly privileged, requiring administrators to perform multifactor authentication when signing into Microsoft admin portals. This policy targets Microsoft Entra ID P1 and P2 tenants, where security defaults aren’t enabled.
Multifactor authentication for per-user multifactor authentication users
Per-user MFA is when users are enabled individually and are required to perform multifactor authentication each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on). For customers who are licensed for Entra ID P1, Conditional Access offers a better admin experience with many additional features, including user group and application targeting, more conditions (such as risk-based and device-based), integration with authentication strengths, session controls, and report-only mode. This can help you be more targeted in requiring MFA, lowering end user friction while maintaining security posture.
This policy covers users with per-user MFA. These users are targeted by Conditional Access and are now required to perform multifactor authentication for all cloud apps. It aids organizations’ transition to Conditional Access seamlessly, ensuring no disruption to end user experiences while maintaining a high level of security.
This policy targets licensed users with Entra ID P1 and P2, where the security defaults policy isn’t enabled and there are fewer than 500 per-user MFA enabled/enforced users. There will be no change to the end user experience due to this policy.
Multifactor authentication and reauthentication for risky sign-ins
This policy will help your organization achieve the Optimal level for Risk Assessments in the NIST Zero Trust Maturity Model because it provides a key layer of added security assurance that triggers only when we detect high-risk sign-ins. “High-risk sign-in” means there is a very high probability that a given authentication request isn’t from the authorized identity owner and could indicate a brute force, password spray, or token replay attack. By dynamically responding to sign-in risk, this policy disrupts active attacks in real time while remaining invisible to most users, particularly those who don’t have high sign-in risk. When Identity Protection detects an attack, your users will be prompted to self-remediate with MFA and reauthenticate to Entra ID, which will reset the compromised session.
This policy covers all users in Entra ID P2 tenants, where security defaults aren’t enabled, all active users are already registered for MFA, and there are enough licenses for each user. As with all policies, ensure you exclude any break-glass or service accounts to avoid locking yourself out.
Microsoft-managed Conditional Access policies have been created in all eligible tenants in Report-only mode. These policies are suggestions from Microsoft that organizations can adapt and use for their own environment. Administrators can view and review these policies in the Conditional Access policies blade. To enhance the policies, administrators are encouraged to add customizations such as excluding emergency accounts and service accounts. Once ready, the policies can be moved to the ON state. For additional customization needs, administrators have the flexibility to clone the policies and make further adjustments.
Call to Action
Don’t wait – take action now. Enable the Microsoft-managed Conditional Access policies and/or customize them according to your organizational needs. Your proactive approach to implementing multifactor authentication policies is crucial in fortifying your organization against evolving security threats. To learn more about how to secure your resources, visit our Microsoft-managed policies documentation.
Nitika Gupta
Principal Group Product Manager, Microsoft
Learn more about Microsoft Entra:
See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
Join the conversation on the Microsoft Entra discussion space and Twitter
Learn more about Microsoft Security
School-parent Communities in Teams
We just published a new blog post about how Richard Cloudesley School uses Communities in Teams to engage with parents. Read more here:
https://insider.teams.com/blog/school-parent-communities-in-teams/
Advancing key protection in Windows using VBS
Today, we are excited to bring you the next step in key protection for Windows. Now in Windows 11 Insider Preview Build 26052 and Windows Server Insider Preview Build 26052, developers can use the Cryptography API: Next Generation (CNG) framework to help secure Windows keys with virtualization-based security (VBS). With this new capability, keys can be protected from admin-level key theft attacks with negligible effect on performance, reliability, or scale.
Now let’s explore how you can create, import, and protect your keys using VBS.
The current state of key protection in Windows
As attackers advance their techniques to steal keys and credentials, Microsoft continues to evolve capabilities to help protect valuable assets across Windows. This is crucial work: when attackers get hold of important keys, they can impersonate users and access resources without their knowledge or consent. Consider the theft of third-party encryption keys – these types of attacks may have privacy and security consequences and could compromise the availability of applications and services.
The default method of protecting keys in Windows is to store them in the memory of a local system process known as the Local Security Authority (LSA). LSA is a great option for storing keys that do not protect high-value assets or require the best performance available. While LSA helps prevent code injection and non-authorized processes from reading memory, an admin or system-level attacker can still steal keys from this memory space.
For a more secure option, the industry is moving towards hardware-based isolation, where keys are stored directly on a hardware security processor like a managed HSM (Hardware Security Module), Trusted Platform Module (TPM) or a Microsoft Pluton security processor, which help provide stronger security against tampering with and exporting keys. While hardware isolation should be used for keys wherever possible, if there are performance or scale requirements that require usage of the central processing unit (CPU) core, VBS is a robust alternative that helps offer stronger security than currently available software protection.
Introducing key protection with VBS in Windows
The security capability we’re introducing today addresses the limitations in the current software and hardware key protection mechanisms on Windows. You can now protect your keys with VBS, which uses the virtualization extension capability of the CPU to create an isolated runtime outside of the normal OS. When in use, VBS keys are isolated in a secure process, allowing key operations to occur without ever exposing the private key material outside of this space. At rest, private key material is encrypted by a TPM key which binds VBS keys to the device. Keys protected in this way cannot be dumped from process memory or exported in plain text from a user’s machine, preventing exfiltration attacks by any admin-level attacker.
VBS helps to offer a higher security bar than software isolation, with stronger performance compared to hardware-based solutions, since it is powered by the device’s CPU. While hardware keys offer strong levels of protection, VBS is helpful for services with high security, reliability, and performance requirements.
The following section will show you how to use these capabilities by creating and using VBS keys with NCrypt, which is part of the Cryptography API: Next Generation (CNG) framework.
Tutorial: Leverage the NCrypt API to create and use VBS keys
The core functionality to create and import VBS keys is as simple as passing an additional flag to the NCrypt API.
NCryptCreatePersistedKey and NCryptImportKey accept two flags to request that VBS should be leveraged to protect the client key’s private material:
NCRYPT_REQUIRE_VBS_FLAG: Indicates a key must be protected with VBS. The operation will fail if VBS is not available.
NCRYPT_PREFER_VBS_FLAG: Indicates a key should be protected with VBS. The operation will generate a software-isolated key if VBS is not available.
When it comes to creating VBS keys, the standard CNG encryption algorithms and key lengths for software keys are supported.
Ephemeral and per-boot keys
By default, NCryptCreatePersistedKey and NCryptImportKey create a cross-boot persisted key that is stored on disk and survives reboot cycles.
Calling NCryptCreatePersistedKey with pszKeyName == NULL creates an ephemeral key rather than a persisted key, and its lifetime is managed by the client process. Ephemeral keys are not written to disk and live in secure memory. An additional flag can be passed in along with the above VBS flags to indicate that a per-boot key, rather than the default cross-boot key, should be used to help protect the client key (see the sketch after the flag description below).
NCRYPT_USE_PER_BOOT_KEY_FLAG: Instructs VBS to help protect the client key with a per-boot key that is stored on disk but can’t be reused across boot cycles.
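Putting the two together, the following minimal sketch creates an ephemeral key whose VBS protection is bound to a per-boot key. It assumes the flag combination behaves as described above, and error handling is trimmed for brevity:
NCRYPT_PROV_HANDLE hProv = 0;
NCRYPT_KEY_HANDLE hKey = 0;
SECURITY_STATUS status;
status = NCryptOpenStorageProvider(&hProv, MS_KEY_STORAGE_PROVIDER, 0);
// pszKeyName == NULL requests an ephemeral key that lives only in secure memory
status = NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_ECDSA_P256_ALGORITHM, NULL, 0, NCRYPT_REQUIRE_VBS_FLAG | NCRYPT_USE_PER_BOOT_KEY_FLAG);
status = NCryptFinalizeKey(hKey, 0);
// ... use the key, then free the handles
NCryptFreeObject(hKey);
NCryptFreeObject(hProv);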
Example: Creating a key with virtualization-based security
The following sample code shows how to create a 2048-bit VBS key with the RSA algorithm:
void
CreatePersistedKeyGuardKey(
    void
    )
{
    SECURITY_STATUS status;
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE hKey = 0;
    DWORD dwKeySize = 2048;

    status = NCryptOpenStorageProvider(&hProv, MS_KEY_STORAGE_PROVIDER, 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptOpenStorageProvider failed with %x\n", status);
        goto clean;
    }

    // NCRYPT_REQUIRE_VBS_FLAG mandates VBS protection; the call fails if VBS is unavailable
    status = NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM, L"MyKeyName", 0, NCRYPT_REQUIRE_VBS_FLAG);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptCreatePersistedKey failed with %x\n", status);
        goto clean;
    }

    // Request a 2048-bit key before finalizing
    status = NCryptSetProperty(hKey, NCRYPT_LENGTH_PROPERTY, (PBYTE)&dwKeySize, sizeof(DWORD), 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptSetProperty failed with %x\n", status);
        goto clean;
    }

    status = NCryptFinalizeKey(hKey, 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptFinalizeKey failed with %x\n", status);
        goto clean;
    }

    wprintf(L"Created a persisted Key Guard key!\n");

clean:
    if (hKey)
    {
        NCryptFreeObject(hKey);
    }
    if (hProv)
    {
        NCryptFreeObject(hProv);
    }
}
Using VBS keys
Beyond stricter key export policies, a VBS key can be treated like any other CNG key when it comes to API usage, so developers can refer to the NCrypt API documentation here. This applies to use cases like signing and encryption.
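For example, signing with a VBS-protected key uses the same NCrypt calls as any other key. The following minimal sketch assumes the “MyKeyName” key from the sample above already exists and signs a precomputed SHA-256 digest with RSA PKCS#1 padding; error handling is trimmed for brevity:
BCRYPT_PKCS1_PADDING_INFO padInfo = { BCRYPT_SHA256_ALGORITHM };
NCRYPT_PROV_HANDLE hProv = 0;
NCRYPT_KEY_HANDLE hKey = 0;
BYTE rgbHash[32] = { 0 }; // SHA-256 digest of the message to sign
DWORD cbSignature = 0;
NCryptOpenStorageProvider(&hProv, MS_KEY_STORAGE_PROVIDER, 0);
// Open the existing VBS-protected key by name; no special flags are needed to use it
NCryptOpenKey(hProv, &hKey, L"MyKeyName", 0, 0);
// The first call sizes the signature buffer; the second call produces the signature
NCryptSignHash(hKey, &padInfo, rgbHash, sizeof(rgbHash), NULL, 0, &cbSignature, BCRYPT_PAD_PKCS1);
PBYTE pbSignature = (PBYTE)HeapAlloc(GetProcessHeap(), 0, cbSignature);
NCryptSignHash(hKey, &padInfo, rgbHash, sizeof(rgbHash), pbSignature, cbSignature, &cbSignature, BCRYPT_PAD_PKCS1);
// ... use pbSignature, then clean up
HeapFree(GetProcessHeap(), 0, pbSignature);
NCryptFreeObject(hKey);
NCryptFreeObject(hProv);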
Try protecting your keys with VBS today
This feature is now in preview and accessible via the Windows Insider Program for both client (Windows 11 Insider Preview Build 26052) and server (Windows Server Insider Preview Build 26052). The following requirements must be met:
VBS enabled
VBS also has several hardware requirements to run, including Hyper-V (Windows hypervisor), 64-bit architecture, and IOMMU support. See the full list of VBS hardware requirements.
TPM enabled: For bare-metal environments, TPM 2.0 is required. For VM environments, vTPM (Virtual TPM) is supported.
UEFI with Secure Boot enabled
Having trouble?
Enable event log to investigate errors:
Search for “Event Viewer” in the Start menu
On the left panel, open Applications and Services Logs > Microsoft > Windows > Crypto-NCrypt
Right-click Operational and select Enable Log (it may already be enabled)
Right-click error events with Event ID 13, 14, or 15 and Task Category “VBS Key Isolation Operation”
We recommend sending any suggestions, questions, or logs through Feedback Hub under Security and Privacy > VBS Key Protection.
You may also reach out to VBSkeyprotection@microsoft.com with questions.
What’s next?
Stay on the lookout for further announcements to support key protection with VBS, and we’ll continue updating our documentation and support guidelines accordingly. We hope that you’ll be able to leverage this security capability to help protect your keys on Windows.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Enable Chat History on Azure AI Studio with Azure Cosmos DB
Azure AI Studio offers a feature that allows you to enable chat history for your web app users. This feature provides your users with access to their previous queries and responses, allowing them to easily reference past conversations. Check out the blog below for the full details on how to enable it today!
Benefits of enabling chat history
With Azure AI Studio, developers can build a chatbot with cutting-edge models that draws on your own data for informed and custom responses to customers’ questions. In addition, you can incorporate multimodality – enabling your app to see, hear, and speak – by pairing Azure OpenAI Service with Speech and Vision models.
Streamline customer support: Chat history serves as a powerful ally for streamlining customer support services. By referencing past chat logs, support teams gain the ability to quickly find solutions for customers. This enhances the efficiency of issue resolution while enabling support agents to manage request volumes effectively, leading to improved customer satisfaction.
Data Analytics: Analyzing past interactions provides valuable insights into user behavior, preferences, and recurring issues. Armed with this data, you can make informed decisions to optimize user experiences, tailor content, and refine your application’s performance. The analytics derived from chat history pave the way for data-driven strategies, ensuring your application evolves in tune with user needs and expectations.
Product Enhancements: By studying past interactions, you gain a comprehensive view of user feedback, pain points, and preferences. This user-centric insight becomes a compass for product enhancement. Whether it’s refining features, addressing common concerns, or identifying opportunities for innovation, chat history becomes a valuable resource in the iterative process of improving your product for end-users.
How to enable chat history?
To enable chat history, deploy or redeploy your model as a web app using Azure AI Studio. Once that completes, activate chat history by clicking the dedicated enablement button within the Azure AI Studio interface. With chat history enabled, users gain control over their interactions.
In the top right corner, they can show or hide their chat history. When displayed, users can rename or delete conversations, giving them full control of the chat history experience. Conversations are automatically ordered from newest to oldest, simplifying navigation. Each conversation is named based on the initial query, making it easy for users to locate and reference past interactions.
Enabling chat history in Azure AI Studio provides a valuable resource for your web app users, allowing them to easily reference past conversations and queries.
Important! Please note that enabling chat history with Azure Cosmos DB will incur additional charges for the storage used.
About Azure AI Advantage Offer
About Azure Cosmos DB
Azure Cosmos DB is a fully managed, serverless NoSQL database for high-performance applications of any size or scale. It is a multi-tenant, distributed, shared-nothing, horizontally scalable database that provides planet-scale NoSQL capabilities. It offers APIs for Apache Cassandra, MongoDB, Gremlin, Tables, and Core (SQL).
Get started
Azure Cosmos DB Docs
Check us out on YouTube
Follow us on X (Twitter)
About Azure AI Studio
Azure AI Studio is a trusted and inclusive platform that empowers developers of all abilities and preferences to innovate with AI and shape the future. Seamlessly explore, build, test, deploy, and manage AI innovations at scale. Integrate cutting-edge AI tools and models, prompt orchestration, app evaluation, model fine-tuning, and responsible AI practices. Directly from Azure AI Studio, interact with your projects in a code-first environment using the Azure AI SDK and Azure AI CLI.
Build with Azure AI Studio
Learn more about Azure AI Studio
Watch the Demo!
Azure AI Studio Documentation
Microsoft Learn: Intro to Azure AI Studio
Enabling Chat History Microsoft Docs