Month: February 2024
Join Teams for work or school meetings with personal account
We are improving how you join Teams meetings and have started to roll out an improvement that lets you join a Teams meeting organized by a work or school user with your signed-in personal account. Read more on the Teams Insider blog and join Teams Insider to try this in Teams free on Windows 11 today!
Join Teams for work or school meeting with your personal account – Teams Insider
Intelligent App Chronicles: Azure API Management as an Enterprise API Gateway
The Intelligent App Chronicles for Healthcare is a webinar series designed to provide health and life sciences companies with a comprehensive guide to building intelligent healthcare applications.
The series will cover a wide range of topics including Azure Container Services, Azure AI Services, Azure Integration Services, and innovative solutions that can accelerate your intelligent app journey. By attending these webinars, you will learn how to leverage the power of intelligent systems to build scalable and secure healthcare solutions that can transform the way you deliver care. Our hosts will be: Shelly (Finch) Avery | LinkedIn, Matthew Anderson | LinkedIn
Our next session will be on Feb 20th at 9:00 PT / 10:00 MT / 11:00 CT / 12:00 ET – Click here to Register.
Overview:
Please join us for an informative session on how to use Azure API Management as an enterprise API gateway to create intelligent and secure healthcare applications.
Our speaker this week is Rob McKenna, Principal Technical Specialist for Azure Apps and Innovation. He will cover topics such as:
Benefits of a centralized and shared API gateway
Steps to get your enterprise teams started
Networking considerations for regulated industries
How to ensure the internal and external availability of your APIs
How to improve your developer velocity, and how to use DevOps for API management and developer experience tooling
Don’t miss this opportunity to learn from the experts and take your healthcare applications to the next level. Register now for the Intelligent App Chronicles for Healthcare webinar series!
Thanks for reading!
Please follow aka.ms/HLSBlog for all this great content.
Thanks for reading, Shelly Avery | Email, LinkedIn
Hunting for QR Code AiTM Phishing and User Compromise
In the dynamic landscape of adversary-in-the-middle (AiTM) attacks, the Microsoft Defender Experts team has recently observed a notable trend – QR code-themed phishing campaigns. The attackers employ deceptive QR codes to manipulate users into accessing fraudulent websites or downloading harmful content.
These attacks exploit the trust and curiosity of users who scan QR codes without verifying their source or content. Attackers can create QR codes that redirect users to phishing sites that mimic legitimate ones, such as banks, social media platforms, or online services. The targeted user scans the QR code, subsequently being redirected to a phishing page. Following user authentication, attackers steal the user’s session token, enabling them to launch various malicious activities, including Business Email Compromise attacks and data exfiltration attempts. Alternatively, attackers can create QR codes that prompt users to download malware or spyware onto their devices. These attacks can result in identity theft, financial loss, data breach, or device compromise.
This blog explains the mechanics of QR code phishing, and details how Defender Experts hunt for these phishing campaigns. Additionally, it outlines the procedures in place to notify customers about the unfolding attack narrative and its potential ramifications.
Why is QR code phishing a critical threat?
The Defender Experts team has observed that QR code campaigns are often massive and large-scale in nature. Before launching these campaigns, attackers typically conduct reconnaissance attempts to gather information on targeted users. The campaigns are then sent to large groups of people within an organization, often exceeding 1,000 users, with varying parameters across subject, sender, and body of the emails.
The identity compromises and stolen session tokens resulting from these campaigns are proportional to their large scale. In recent months, Defender Experts have observed QR code campaigns growing from 10% to 30% of total phishing campaigns. Since the campaigns do not follow a template, it can be difficult to scope and evaluate the extent of compromise. It is crucial for organizations to be aware of this trend and take steps to protect their employees from falling victim to QR code phishing attacks.
Understanding the intent of QR code phishing attacks
The QR code phishing email can have one of the below intents:
Credential theft: The majority of these campaigns are designed to redirect the user to an AiTM phishing website for session token theft. The authentication method can be single-factor authentication, where only the user’s password is compromised and the initial sign-in attempts are unsuccessful; in these scenarios, the attacker signs in later with the compromised password and bypasses multifactor authentication (MFA) through MFA fatigue attacks. Alternatively, the user can be redirected to an AiTM phishing page where the credentials, MFA parameters, and session token are compromised in real time.
Malware distribution: In these scenarios, once the user scans the QR code, malware/spyware/adware is automatically downloaded on the mobile device.
Financial theft: These campaigns use QR codes to trick the user into making a fake payment or giving away their banking credentials. The user may scan the QR code and be taken to a bogus payment gateway or a fake bank website. The attacker can then access the user’s account later and bypass the second factor authentication by contacting the user via email or phone.
How Defender Experts approach QR code phishing
In QR code phishing attempts, the targeted user scans the QR code on their personal non-managed mobile device, which falls outside the scope of the Microsoft Defender protected environment. This is one of the key challenges for detection. In addition to detections based on Image Recognition or Optical Character Recognition, a novel approach was necessary to detect the QR code phishing attempts.
Defender Experts have researched identifying patterns across the QR code phishing campaigns and malicious sign-in attempts and devised the following detection approaches:
Pre-cursor events: User activities
Suspicious Senders
Suspicious Subject
Email Clustering
User Signals
Suspicious Sign-in attempts
1. Hunting for user behavior:
This is one of the primary detections that helps Defender Experts surface suspicious sign-in attempts from QR code phishing campaigns. Although the user scans the QR code from an email on their personal mobile device, in the majority of scenarios the phishing email being accessed is recorded with the MailItemsAccessed mailbox auditing action.
The majority of the QR code campaigns have image (png/jpg/jpeg/gif) or document (pdf/doc/xls) attachments – yes, QR codes are embedded in Excel attachments too! The campaigns can also include a legitimate URL that redirects to a phishing page with a malicious QR code.
A malicious sign-in attempt with session token compromise that follows the QR code scan is always observed from non-trusted devices with a medium/high risk score for the session.
This detection approach correlates a user accessing an email with image/document attachments and a risky sign-in attempt from a non-trusted device in close temporal proximity, and validates whether the location from which the email item was accessed differs from the location of the sign-in attempt.
Advanced Hunting Query:
let successfulRiskySignIn = materialize(AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
| project Timestamp, ReportId, IPAddress, AccountUpn, AccountObjectId, SessionId, Country, State, City
);
let suspiciousSignInUsers = successfulRiskySignIn
| distinct AccountObjectId;
let suspiciousSignInIPs = successfulRiskySignIn
| distinct IPAddress;
let suspiciousSignInCities = successfulRiskySignIn
| distinct City;
CloudAppEvents
| where Timestamp > ago(1d)
| where ActionType == "MailItemsAccessed"
| where AccountObjectId in (suspiciousSignInUsers)
| where IPAddress !in (suspiciousSignInIPs)
| where City !in (suspiciousSignInCities)
| join kind=inner successfulRiskySignIn on AccountObjectId
| where AccountObjectId in (suspiciousSignInUsers)
| where (Timestamp - Timestamp1) between (-5min .. 5min)
| extend folders = RawEventData.Folders
| mv-expand folders
| extend items = folders.FolderItems
| mv-expand items
| extend InternetMessageId = tostring(items.InternetMessageId)
| project Timestamp, ReportId, IPAddress, InternetMessageId, AccountObjectId, SessionId, Country, State, City
2. Hunting for sender patterns:
The sender attributes play a key role in the detection of QR code campaigns. Since the campaigns are typically large scale in nature, 95% of the campaigns do not involve phishing emails from compromised trusted vendors. Most emails are instead sent from newly created domains or domains that are not prevalent in the organization.
Unlike typical phishing that requires a simple URL click, the attack involves multiple user actions: scanning the QR code from a mobile device and completing the authentication. To drive these actions, the attackers induce a sense of urgency by impersonating IT support, HR support, payroll, or the administrator team, or by using a display name indicating the email is sent on behalf of a known high-value target in the organization (e.g., “Lara Scott on-behalf of CEO”).
In this detection approach, we correlate emails from non-prevalent senders in the organization with impersonation intents.
Advanced Hunting Query:
let PhishingSenderDisplayNames = ()
{
pack_array("IT", "support", "Payroll", "HR", "admin", "2FA", "notification", "sign", "reminder", "consent", "workplace",
"administrator", "administration", "benefits", "employee", "update", "on behalf");
};
let suspiciousEmails = EmailEvents
| where Timestamp > ago(1d)
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailAttachmentInfo
| where Timestamp > ago(1d)
| where isempty(SenderObjectId)
| where FileType has_any ("png", "jpg", "jpeg", "bmp", "gif")
) on NetworkMessageId
| where SenderDisplayName has_any (PhishingSenderDisplayNames())
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId;
let suspiciousSenders = suspiciousEmails | distinct SenderFromDomain;
let prevalentSenders = materialize(EmailEvents
| where Timestamp between (ago(7d) .. ago(1d))
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where SenderFromDomain in (suspiciousSenders)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| distinct SenderFromDomain);
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId
Correlating suspicious emails with image attachments from a new sender with risky sign-in attempts for the recipients can also surface the QR code phishing campaigns and user compromises.
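As a rough sketch of this correlation, reusing the suspiciousEmails and prevalentSenders definitions from the query above in place of its final projection, and assuming an illustrative one-hour window between email delivery and the sign-in, a query along the following lines could surface recipients of image-bearing emails from non-prevalent senders who then produced risky sign-ins from untrusted devices:
// Sketch: correlate new-sender emails with risky sign-ins for the same recipients
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| join kind=inner (AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
) on $left.RecipientObjectId == $right.AccountObjectId
| where (Timestamp1 - Timestamp) between (0min .. 1h)
| project EmailTime = Timestamp, Subject, SenderFromDomain, RecipientObjectId, SignInTime = Timestamp1, IPAddress, Country, City, RiskLevelDuringSignIn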
3. Hunting for subject patterns:
In addition to impersonating IT and HR teams, attackers also craft the campaigns with actionable subjects (e.g., “MFA completion required,” “Digitally sign documents”). The targeted user is asked to complete the highlighted action by scanning the QR code in the email and providing credentials and an MFA token.
In most cases, these automated phishing campaigns also include a personalized element, where the user’s first name/last name/alias/email address is included in the subject. The email address of the targeted user is also embedded in the URL behind the QR code. This serves as a unique tracker for the attacker to identify emails successfully delivered and QR codes scanned.
In this detection, we track emails with suspicious keywords in subjects or personalized subjects. To detect personalized subjects, we track campaigns where the first three words or last three words of the subject are the same, but the other values are personalized/unique.
For example:
Alex, you have an undelivered voice message
Bob, you have an undelivered voice message
Charlie, you have an undelivered voice message
Your MFA update is pending, Alex
Your MFA update is pending, Bob
Your MFA update is pending, Charlie
Advanced Hunting Query:
Personalized campaigns based on the first few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstWord = tostring(words[0]), secondWord = tostring(words[1]), thirdWord = tostring(words[2]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstWord, secondWord, thirdWord
, SenderFromAddress
| where SubjectsCount >= 10
Personalized campaigns based on the last few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstLastWord = tostring(words[-1]), secondLastWord = tostring(words[-2]), thirdLastWord = tostring(words[-3]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstLastWord, secondLastWord, thirdLastWord
, SenderFromAddress
| where SubjectsCount >= 10
Campaign with suspicious keywords:
let PhishingKeywords = ()
{
pack_array("account", "alert", "bank", "billing", "card", "change", "confirmation",
"login", "password", "mfa", "authorize", "authenticate", "payment", "urgent", "verify", "blocked");
};
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| where Subject has_any (PhishingKeywords())
4. Hunting for attachment name patterns:
Based on investigations of historical QR code campaigns, Defender Experts have identified that attackers usually randomize the attachment names, meaning every email carries a differently named QR code attachment. Emails with randomly named attachments from the same sender to multiple recipients, typically more than 50, can potentially indicate a QR code phishing campaign.
Campaign with randomly named attachments:
// Placeholder scoping values: set the campaign time window and supply the non-prevalent
// sender addresses surfaced by the sender-pattern hunting query above.
let emailStartTime = ago(1d);
let emailEndTime = now();
let nonPrevalentSenders = dynamic(["<non-prevalent sender address>"]);
EmailAttachmentInfo
| where Timestamp between (emailStartTime .. emailEndTime)
| where SenderFromAddress in (nonPrevalentSenders)
| where FileType in ("png", "jpg", "jpeg", "gif", "svg")
| where isnotempty(FileName)
| extend firstFourFileName = substring(FileName, 0, 4)
| summarize RecipientsCount = dcount(RecipientEmailAddress), FirstFourFilesCount = dcount(firstFourFileName), suspiciousEmails = make_set(NetworkMessageId, 10) by SenderFromAddress
| where FirstFourFilesCount >= 10
5. Hunting for user signals/clusters
In order to craft effective large-scale QR code phishing attacks, attackers perform reconnaissance across social media to gather target user email addresses, their preferences, and much more. These campaigns are sent to 1,000+ users in the organization with luring subjects and contents based on their preferences. However, Defender Experts have observed that at least one user finds the campaign suspicious and reports the email, which generates this alert: “Email reported by user as malware or phish.”
This alert can be another starting point for hunting activity to identify the scope of the campaign and compromises. Since the campaigns are specifically crafted for each group of users, scoping based on sender/subject/filename might not be an effective approach. Microsoft Defender for Office 365 offers a heuristic-based approach based on email content as a solution to this problem. Emails with similar content that are likely from one attacker are clustered together, and the cluster ID is populated in the EmailClusterId field in the EmailEvents table.
A cluster can include all phishing attempts from the attacker against the organization so far, aggregating emails with malicious URLs, attachments, and QR codes based on their similarity. Hence, this is a powerful approach for exploring an attacker’s persistent phishing techniques and the repeatedly targeted users.
Below is a sample query on scoping a campaign from the email reported by the end user. The same scoping logic can be used on the previously discussed hunting hypotheses as well.
let suspiciousClusters = EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where NetworkMessageId in (<List of suspicious Network Message Ids from Alerts>)
| distinct EmailClusterId;
EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where EmailClusterId in (suspiciousClusters)
| summarize make_set(Subject), make_set(SenderFromDomain), dcount(RecipientObjectId), dcount(SenderDisplayName) by EmailClusterId
6. Hunting for suspicious sign-in attempts:
In addition to detecting the campaigns, it is critical that we identify the compromised identities. To surface the identities compromised by AiTM, we can utilize the below approaches.
Risky sign-in attempt from a non-managed device
Any sign-in attempt from a non-managed, non-compliant, untrusted device should be taken into consideration, and a risk score for the sign-in attempt increases the likelihood that the activity is anomalous. Monitoring these sign-in attempts can surface identity compromises.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IsManaged != 1
| where IsCompliant != 1
//Filtering only for medium and high risk sign-in
| where RiskLevelDuringSignIn in (50, 100)
| where ClientAppUsed == "Browser"
| where isempty(DeviceTrustType)
| where isnotempty(State) or isnotempty(Country) or isnotempty(City)
| where isnotempty(IPAddress)
| where isnotempty(AccountObjectId)
| where isempty(DeviceName)
| where isempty(AadDeviceId)
| project Timestamp, IPAddress, AccountObjectId, ApplicationId, SessionId, RiskLevelDuringSignIn, BrowserId
Suspicious sign-in attributes
Sign-in attempts from untrusted devices with empty user agent, operating system or anomalous BrowserId can also be an indication of identity compromises from AiTM.
Defender Experts also recommend monitoring sign-ins from known malicious IP addresses. Although the mode of delivery of the phishing campaigns differs (QR code, HTML attachment, URL), the sign-in infrastructure often remains the same. Monitoring the sign-in patterns of compromised users, and continuously scoping sign-in attempts based on the known patterns, can also surface identity compromises from AiTM.
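A minimal sketch of this kind of monitoring is below; it assumes the UserAgent and OSPlatform columns in AADSignInEventsBeta are populated in your tenant, and the IP list is a hypothetical placeholder for infrastructure identified in earlier investigations:
// Sketch: untrusted-device sign-ins with missing client attributes or from known AiTM infrastructure
let knownAiTMIPs = dynamic(["203.0.113.10", "203.0.113.11"]); // placeholder example IPs
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where isempty(UserAgent) or isempty(OSPlatform) or IPAddress in (knownAiTMIPs)
| project Timestamp, AccountUpn, AccountObjectId, IPAddress, Country, City, OSPlatform, UserAgent, SessionId, RiskLevelDuringSignIn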
Mitigations
Apply these mitigations to reduce the impact of this threat:
Educate users about the risks of QR code phishing emails.
Implement Microsoft Defender for Endpoint – Mobile Threat Defense on mobile devices used to access enterprise assets.
Enable Conditional Access policies in Microsoft Entra, especially risk-based access policies. Conditional Access policies evaluate sign-in requests using additional identity-driven signals such as user or group membership, IP address location information, and device status, and enforce controls for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant device or trusted IP address requirements, or risk-based policies with proper access control. If you are still evaluating Conditional Access, use security defaults as an initial baseline set of policies to improve identity security posture.
Implement continuous access evaluation.
Leverage Microsoft Edge to automatically identify and block malicious websites, including those used in this phishing campaign, and Microsoft Defender for Office 365 to detect and block malicious emails, links, and files.
Monitor suspicious or anomalous activities in Microsoft Entra ID Protection. Investigate sign-in attempts with suspicious characteristics (e.g., location, ISP, user agent, and use of anonymizer services).
Implement Microsoft Entra passwordless sign-in with FIDO2 security keys.
Turn on network protection in Microsoft Defender for Endpoint to block connections to malicious domains and IP addresses.
If you’re interested in learning more about our Defender Experts services, visit the following resources:
Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page
Azure Data @ Microsoft Fabric Community Conference 2024 | Data Exposed Exclusive
In this Data Exposed Exclusive, join Anna Hoffman, Bob Ward, and Jason Himmelstein as they discuss everything you need to know about the upcoming Microsoft Fabric Community Conference!
Microsoft Fabric Community Conference registration: https://aka.ms/fabcon (Enter the code DATAEXPOSED100 for a $100 savings)
Enforcement of Defender CSPM for Premium DevOps Security Capabilities
Microsoft’s Defender for Cloud will begin enforcing the Defender Cloud Security Posture Management (DCSPM) plan check for premium DevOps security value beginning March 7th, 2024. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you’ll continue to receive premium code to cloud DevOps capabilities at no additional cost. If you aren’t a Defender CSPM customer, you have until March 7th, 2024 to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7, 2024, follow the enablement documentation outlined here.
Microsoft Defender CSPM provides advanced security posture capabilities including agentless vulnerability scanning, attack path analysis, integrated data-aware security posture, code to cloud contextualization, and an intelligent cloud security graph. Pricing is dependent on cloud size, with billing based on Server, Storage account, and Database counts. There is no additional charge for DevOps resources with this enforcement.
More Information
For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see our documentation outlining feature availability.
For more information about DevOps Security in Defender for Cloud, see the overview documentation.
For more information on the code to cloud security capabilities in Defender CSPM, see how to protect your resources with Defender CSPM.
For more information on Defender CSPM pricing, see the pricing page.
Azure Verified Modules – Monthly Update Jan ’24
Azure Verified Modules: Monthly Update
Azure Verified Modules (AVM) is an initiative to consolidate and set the standard for what a good Infrastructure-as-Code module looks like. Spanning languages (Bicep, Terraform, etc.), AVM is a unified approach that provides a common code base and toolkit for our customers, our partners, and Microsoft.
AVM is a community-driven aspiration, inside and outside of Microsoft. If you are not familiar with AVM yet, check out this video on YouTube:
What Is This Series?
For Azure Verified Modules, we will be producing monthly updates in which we share the latest news and features of Azure Verified Modules, including:
Module updates
Updates to the AVM framework
Our community engagement
In some months we may also focus on a highlight module: a pattern or a workflow that the community (you!) would like to learn more about from the module owner.
AVM Module Summary
The AVM team is excited that our community has been busy building AVM modules. As of January 31st, the AVM footprint looks like this:
Bicep: 84 modules published, 35 in development
Terraform: 18 modules published, 37 in development
Bicep Resource Modules Published In January:
The full list of Bicep Resource Modules is available here: AVM Bicep Resource Index
analysis-services/server
app/container-app
cache/redis
compute/disk
compute/disk-encryption-set
compute/image
compute/proximity-placement-group
compute/virtual-machine
consumption/budget
container-registry/registry
container-service/managed-cluster
data-protection/backup-vault
databricks/access-connector
databricks/workspace
db-for-my-sql/flexible-server
health-bot/health-bot
net-app/net-app-account
network/ddos-protection-plan
network/firewall-policy
network/front-door
network/front-door-web-application-firewall-policy
network/local-network-gateway
network/nat-gateway
network/virtual-network-gateway
network/vpn-gateway
service-bus/namespace (Updates)
storage/storage-account
web/site
web/static-site
Terraform Resource Modules:
The full list of Terraform Resource Modules is available here: AVM Terraform Resource Index
authorization-roleassignment
network-azurefirewall
network-firewallpolicy
network-networkmanager
operationalinsights-workspace
Terraform Pattern Modules
The full list of Terraform Pattern Modules is available here: AVM Terraform Pattern Index
alz-management
network-virtualwan (update)
Updates and Improvements
We have also made some updates and improvements to the existing Azure Verified Modules, based on your feedback and suggestions. Some of the highlights are:
Bicep
Improved workflow optimization for module publishing to allow better IntelliSense when using the Visual Studio Code extension for Bicep.
Extended compliance tests to include AVM Bicep CI Framework files.
Automatic issue life-cycle management workflow (ref) that tracks the stability of a module and its owner
Improved pipeline handling & readability (ref)
Batch disable and enable GitHub Workflows in user forks (Bicep)
Terraform
Implemented GREPT workflow for Repository Linting and Governance (Link to Matt’s video)
OpenID Connect Integration for Terraform test validation.
MVP of a centralized module testing framework in place, utilizing Docker technologies for both local and GitHub Actions testing capabilities.
AVM General
Automated issue creation for tracking GitHub Teams alignment to specs required for AVM Modules
Further Resources
Interim guidance for DST changes announced by Palestinian authority for 2024, 2025.
The Palestinian authority has decided to delay the start of Daylight Saving Time (DST) in 2024 and 2025. The Ministry of Communications and Information Technology of the Palestinian authority has conveyed this decision in an article dated January 30, 2024.
This moves the DST entry date further away from the month of Ramadan and the Eid ul Fitr holiday, which marks the end of Ramadan.
The Palestinian authority also announced that the 2025 DST date will be delayed by one week.
The impact of this change is as follows:
Clocks will be set forward 1 hour on Saturday, April 20, 2024, from 02:00 (2 am) to 03:00 (3 am) local time.
Clocks will be set back 1 hour on Saturday, October 26, 2024, from 02:00 (2 am), to 01:00 (1 am) local time.
The following platforms will receive an update to support this time zone change as part of the March 2024 non-security preview update or the April 2024 security update:
Windows Server 23H2
Windows 11, version 22H2 and version 23H2
Windows 11, version 21H2
Windows 10, version 22H2; Windows 10, version 21H2
Windows Server 2022
Windows 10 Enterprise LTSC 2019; Windows Server 2019
Windows 10 Enterprise LTSC 2016; Windows Server 2016
Windows 10 Enterprise 2015 LTSB
Windows Server 2012
Windows Server 2008 SP2
Windows 8.1
Windows 7.0 SP1
For additional information, please review our official policy page and How Windows manages time zone changes.
Deep Dive of Microsoft-managed Conditional Access Policies in Microsoft Entra ID
This blog was originally published on the Entra ID blog on 2/6.
In November 2023 at Microsoft Ignite, we announced Microsoft-managed policies and the auto-rollout of multifactor authentication (MFA)-related Conditional Access policies in customer tenants. Since then, we’ve rolled out report-only policies for over 500,000 tenants. These policies are part of our Secure Future Initiative, which includes key engineering advances to improve security for customers against cyberthreats that we anticipate will increase over time.
This follow-up blog will dive deeper into these policies to provide you with a comprehensive understanding of what they entail and how they function.
Multifactor authentication for admins accessing Microsoft admin portals
Admin accounts with elevated privileges are more likely to be attacked, so enforcing MFA for these roles protects these privileged administrative functions. This policy covers 14 admin roles that we consider to be highly privileged, requiring administrators to perform multifactor authentication when signing into Microsoft admin portals. This policy targets Microsoft Entra ID P1 and P2 tenants, where security defaults aren’t enabled.
Multifactor authentication for per-user multifactor authentication users
Per-user MFA is when users are enabled individually and are required to perform multifactor authentication each time they sign in (with some exceptions, such as when they sign in from trusted IP addresses or when the remember MFA on trusted devices feature is turned on). For customers who are licensed for Entra ID P1, Conditional Access offers a better admin experience with many additional features, including user group and application targeting, more conditions such as risk- and device-based, integration with authentication strengths, session controls and report-only mode. This can help you be more targeted in requiring MFA, lowering end user friction while maintaining security posture.
This policy covers users with per-user MFA. These users are targeted by Conditional Access and are now required to perform multifactor authentication for all cloud apps. It aids organizations’ transition to Conditional Access seamlessly, ensuring no disruption to end user experiences while maintaining a high level of security.
This policy targets licensed users with Entra ID P1 and P2, where the security defaults policy isn’t enabled and there are fewer than 500 per-user MFA enabled/enforced users. There will be no change to the end user experience due to this policy.
Multifactor authentication and reauthentication for risky sign-ins
This policy will help your organization achieve the Optimal level for Risk Assessments in the NIST Zero Trust Maturity Model because it provides a key layer of added security assurance that triggers only when we detect high-risk sign-ins. “High-risk sign-in” means there is a very high probability that a given authentication request isn’t the authorized identity owner and could indicate brute force, password spray, or token replay attacks. By dynamically responding to sign-in risk, this policy disrupts active attacks in real-time while remaining invisible to most users, particularly those who don’t have high sign-in risk. When Identity Protection detects an attack, your users will be prompted to self-remediate with MFA and reauthenticate to Entra ID, which will reset the compromised session.
This policy covers all users in Entra ID P2 tenants, where security defaults aren’t enabled, all active users are already registered for MFA, and there are enough licenses for each user. As with all policies, ensure you exclude any break-glass or service accounts to avoid locking yourself out.
Microsoft-managed Conditional Access policies have been created in all eligible tenants in Report-only mode. These policies are suggestions from Microsoft that organizations can adapt and use for their own environment. Administrators can view and review these policies in the Conditional Access policies blade. To enhance the policies, administrators are encouraged to add customizations such as excluding emergency accounts and service accounts. Once ready, the policies can be moved to the ON state. For additional customization needs, administrators have the flexibility to clone the policies and make further adjustments.
Call to Action
Don’t wait – take action now. Enable the Microsoft-managed Conditional Access policies now and/or customize the Microsoft-managed Conditional Access policies according to your organizational needs. Your proactive approach to implementing multifactor authentication policies is crucial in fortifying your organization against evolving security threats. To learn more about how to secure your resources, visit our Microsoft-managed policies documentation.
Nitika Gupta
Principal Group Product Manager, Microsoft
Learn more about Microsoft Entra:
See recent Microsoft Entra blogs
Dive into Microsoft Entra technical documentation
Learn more at Azure Active Directory (Azure AD) rename to Microsoft Entra ID
Join the conversation on the Microsoft Entra discussion space and Twitter
Learn more about Microsoft Security
School-parent Communities in Teams
We just published a new blog post about how Richard Cloudesley School uses Communities in Teams to engage with parents. Read more here:
https://insider.teams.com/blog/school-parent-communities-in-teams/
Advancing key protection in Windows using VBS
Today, we are excited to bring you the next step in key protection for Windows. Now in Windows 11 Insider Preview Build 26052 and Windows Server Insider Preview Build 26052, developers can use the Cryptography API: Next Generation (CNG) framework to help secure Windows keys with virtualization-based security (VBS). With this new capability, keys can be protected from admin-level key theft attacks with negligible effect on performance, reliability, or scale.
Now let’s explore how you can create, import, and protect your keys using VBS.
The current state of key protection in Windows
As attackers advance their techniques to steal keys and credentials, Microsoft continues to evolve capabilities to help protect valuable assets across Windows. This is crucial work as when attackers get hold of important keys, they can impersonate users and access resources without their knowledge and consent. Consider the theft of third-party encryption keys – these types of attacks may have privacy and security consequences and could compromise the availability of applications and services.
The default method of protecting keys in Windows is to store them in the memory of a local system process known as the Local Security Authority (LSA). LSA is a great option for storing keys that do not protect high-value assets or require the best performance available. While LSA helps prevent code injection and non-authorized processes from reading memory, an admin or system-level attacker can still steal keys from this memory space.
For a more secure option, the industry is moving towards hardware-based isolation, where keys are stored directly on a hardware security processor like a managed HSM (Hardware Security Module), Trusted Platform Module (TPM) or a Microsoft Pluton security processor, which help provide stronger security against tampering with and exporting keys. While hardware isolation should be used for keys wherever possible, if there are performance or scale requirements that require usage of the central processing unit (CPU) core, VBS is a robust alternative that helps offer stronger security than currently available software protection.
Introducing key protection with VBS in Windows
The security capability we’re introducing today addresses the limitations in the current software and hardware key protection mechanisms on Windows. You can now protect your keys with VBS, which uses the virtualization extension capability of the CPU to create an isolated runtime outside of the normal OS. When in use, VBS keys are isolated in a secure process, allowing key operations to occur without ever exposing the private key material outside of this space. At rest, private key material is encrypted by a TPM key which binds VBS keys to the device. Keys protected in this way cannot be dumped from process memory or exported in plain text from a user’s machine, preventing exfiltration attacks by any admin-level attacker.
VBS helps to offer a higher security bar than software isolation, with stronger performance compared to hardware-based solutions, since it is powered by the device’s CPU. While hardware keys offer strong levels of protection, VBS is helpful for services with high security, reliability, and performance requirements.
The following section will show you how to use these capabilities by creating and using VBS keys with NCrypt, which is part of the Cryptography API: Next Generation (CNG) framework.
Tutorial: Leverage the NCrypt API to create and use VBS keys
The core functionality to create and import VBS keys is as simple as passing in an additional flag into the NCrypt API.
NCryptCreatePersistedKey and NCryptImportKey accept two flags to request that VBS should be leveraged to protect the client key’s private material:
NCRYPT_REQUIRE_VBS_FLAG: Indicates a key must be protected with VBS. The operation will fail if VBS is not available.
NCRYPT_PREFER_VBS_FLAG: Indicates a key should be protected with VBS. The operation will generate a software-isolated key if VBS is not available.
When it comes to creating VBS keys, the standard CNG encryption algorithms and key lengths for software keys are supported.
Ephemeral and per-boot keys
By default, NCryptCreatePersistedKey and NCryptImportKey create a cross-boot persisted key that is stored on disk and persists across reboot cycles.
Calling NCryptCreatePersistedKey with pszKeyName == NULL creates an ephemeral key rather than a persisted key, and its lifetime is managed by the client process. Ephemeral keys are not written to disk and live in secure memory. An additional flag can be passed in along with the above VBS flags to indicate that a per-boot key should be used to help protect the client key rather than the default cross-boot key.
NCRYPT_USE_PER_BOOT_KEY_FLAG: Instructs VBS to help protect the client key with a per-boot key that is stored on disk but can’t be reused across boot cycles.
Example: Creating a key with virtualization-based security
The following sample code shows how to create a 2048-bit VBS key with the RSA algorithm:
void
CreatePersistedKeyGuardKey(
    void
    )
{
    SECURITY_STATUS status;
    NCRYPT_PROV_HANDLE hProv = 0;
    NCRYPT_KEY_HANDLE hKey = 0;
    DWORD dwKeySize = 2048;

    // Open the Microsoft Software Key Storage Provider.
    status = NCryptOpenStorageProvider(&hProv, MS_KEY_STORAGE_PROVIDER, 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptOpenStorageProvider failed with %x\n", status);
        goto clean;
    }

    // Create an RSA key named "MyKeyName" and request VBS protection for it.
    status = NCryptCreatePersistedKey(hProv, &hKey, NCRYPT_RSA_ALGORITHM, L"MyKeyName", 0, NCRYPT_USE_VIRTUAL_ISOLATION_FLAG);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptCreatePersistedKey failed with %x\n", status);
        goto clean;
    }

    // Set the key length to 2048 bits before finalizing the key.
    status = NCryptSetProperty(hKey, NCRYPT_LENGTH_PROPERTY, (PBYTE)&dwKeySize, sizeof(DWORD), 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptSetProperty failed with %x\n", status);
        goto clean;
    }

    status = NCryptFinalizeKey(hKey, 0);
    if (status != ERROR_SUCCESS)
    {
        wprintf(L"NCryptFinalizeKey failed with %x\n", status);
        goto clean;
    }

    wprintf(L"Created a persisted Key Guard key!\n");

clean:

    if (hKey)
    {
        NCryptFreeObject(hKey);
    }

    if (hProv)
    {
        NCryptFreeObject(hProv);
    }
}
Using VBS keys
Beyond stricter key export policies, a VBS key can be treated like any other Cryptography API: Next Generation (CNG) key when it comes to API usage, so developers can refer to the NCrypt API here. This applies to use cases like signing and encryption.
Try protecting your keys with VBS today
This feature is now in preview and accessible via the Windows Insider Program for both client (Windows 11 Insider Preview Build 26052) and Server (Windows Server Insider Preview Build 26052). The following requirements must be met:
VBS enabled
VBS also has several hardware requirements to run, including Hyper-V (Windows hypervisor), 64-bit architecture, and IOMMU support. See the full list of VBS hardware requirements.
TPM enabled: For bare-metal environments, TPM 2.0 is required. For VM environments, vTPM (Virtual TPM) is supported.
UEFI with Secure Boot enabled
Having trouble?
Enable event log to investigate errors:
Search “Event Viewer” in the start menu
On the left panel open Applications and Services Logs > Microsoft > Windows > Crypto-NCrypt
Right-click Operational and select Enable Log (it may already be enabled)
Right-click error events with Event ID 13, 14, or 15 and Task Category “VBS Key Isolation Operation”
We recommend sending any suggestions, questions, or logs through Feedback Hub under Security and Privacy > VBS Key Protection.
You may also reach out to VBSkeyprotection@microsoft.com with questions.
What’s next?
Stay on the lookout for further announcements to support key protection with VBS, and we’ll continue updating our documentation and support guidelines accordingly. We hope that you’ll be able to leverage this security capability to help protect your keys on Windows.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Enable Chat History on Azure AI Studio with Azure Cosmos DB
Azure AI Studio offers a feature that allows you to enable chat history for your web app users. This feature provides your users with access to their previous queries and responses, allowing them to easily reference past conversations. Check out the blog below for the full details on how to enable it today!
Benefits of enabling chat history
With Azure AI Studio, developers can build a chatbot with cutting-edge models that draws on your own data for informed and custom responses to customers’ questions. In addition, you can incorporate multimodality – enabling your app to see, hear, and speak by pairing Azure OpenAI Service with Speech and Vision models.
Streamline customer support: Chat history serves as a powerful ally for streamlining customer support services. By referencing past chat logs, support teams gain the ability to quickly find solutions for customers. This enhances the efficiency of issue resolution while enabling support agents to manage request volumes effectively leading to improved customer satisfaction.
Data Analytics: Analyzing past interactions provides valuable insights into user behavior, preferences, and recurring issues. Armed with this data, you can make informed decisions to optimize user experiences, tailor content, and refine your application’s performance. The analytics derived from chat history pave the way for data-driven strategies, ensuring your application evolves in tune with user needs and expectations.
Product Enhancements: By studying past interactions, you gain a comprehensive view of user feedback, pain points, and preferences. This user-centric insight becomes a compass for product enhancement. Whether it’s refining features, addressing common concerns, or identifying opportunities for innovation, chat history becomes a valuable resource in the iterative process of improving your product for end-users.
How to enable chat history?
To enable chat history, deploy or redeploy your model as a web app using Azure AI Studio. Once completed, activate chat history by clicking the dedicated enablement button within the Azure AI Studio interface. With chat history enabled, users gain control over their interaction.
In the top right corner, they can show or hide their chat history. When displayed, users can rename or delete conversations, giving full control of the chat history experience to users. Conversations are automatically ordered from newest to oldest, simplifying navigation. Each conversation is named based on the initial query, making it easy for users to locate and reference past interactions.
Enabling chat history in Azure AI Studio can easily provide a valuable resource for your web app users, allowing them to easily reference past conversations and queries.
Important! Please note that enabling chat history with Azure Cosmos DB will incur additional charges for the storage used.
About Azure AI Advantage Offer
About Azure Cosmos DB
Azure Cosmos DB is a fully managed, serverless NoSQL database for high-performance applications of any size or scale. It is a multi-tenant, distributed, shared-nothing, horizontally scalable database that provides planet-scale NoSQL capabilities. It offers APIs for Apache Cassandra, MongoDB, Gremlin, Tables, and Core (SQL).
Get started
Azure Cosmos DB Docs
Check us out on YouTube
Follow us on X (Twitter)
About Azure AI Studio
Azure AI Studio is a trusted and inclusive platform that empowers developers of all abilities and preferences to innovate with AI and shape the future. Seamlessly explore, build, test, deploy, and manage AI innovations at scale. Integrate cutting-edge AI tools and models, prompt orchestration, app evaluation, model fine-tuning, and responsible AI practices. Directly from Azure AI Studio, interact with your projects in a code-first environment using the Azure AI SDK and Azure AI CLI.
Build with Azure AI Studio
Learn more about Azure AI Studio
Watch the Demo!
Azure AI Studio Documentation
Microsoft Learn: Intro to Azure AI Studio
Enabling Chat History Microsoft Docs
The Philosophy of the Federal Cyber Data Lake (CDL): A Thought Leadership Approach
Pursuant to Section 8 of Executive Order (EO) 14028, “Improving the Nation’s Cybersecurity”, Federal Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs) aim to comply with the U.S. Office of Management and Budget (OMB) Memorandum 21-31, which centers on system logs for services both within authorization boundaries and deployed on Cloud Service Offerings (CSOs). This memorandum not only instructs Federal agencies to provide clear guidelines for service providers but also offers comprehensive recommendations on logging, retention, and management to increase the Government’s visibility before, during and after a cybersecurity incident. Additionally, OMB Memorandum 22-09, “Moving the U.S. Government Toward Zero Trust Cybersecurity Principles”, references M-21-31 in its Section 3.
While planning to address and execute these requirements, Federal CIOs and CISOs should explore the use of a Cyber Data Lake (CDL). A CDL is a capability to assimilate and house vast quantities of security data, whether in its raw form or as derivatives of original logs. Thanks to its adaptable, scalable design, a CDL can encompass data of any nature, be it structured, semi-structured, or unstructured, all without compromising quality. This article probes into the philosophy behind the Federal CDL, exploring topics such as:
The Importance of CDL for Agency Missions and Business
Strategy and Approach
CDL Infrastructure
Application of CDL
The Importance of CDL for Agency Missions and Business
The overall reduction in both capital and operational expenditures for hardware and software, combined with enhanced data management capabilities, makes CDLs an economically viable solution for organizations looking to optimize their data handling and security strategies. CDLs are cost-effective due to their ability to consolidate various data types and sources into a single platform, eliminating the need for multiple, specialized data management tools. This consolidation reduces infrastructure and maintenance costs significantly. CDLs also adapt easily to increasing data volumes, allowing for scalable storage solutions without the need for expensive infrastructure upgrades. By enabling advanced analytics and efficient data processing, they reduce the time and resources needed for data analysis, further cutting operational costs. Additionally, improved accuracy in threat detection and reduction in false positives lead to more efficient security operations, minimizing the expenses associated with responding to erroneous alerts and increasing the speed of detection and remediation.
However, CDLs are not without challenges. As technological advancements and the big data paradigm evolve, the complexity of network, enterprise, and system architecture escalates. This complexity is further exacerbated by the integration of tools from various vendors into Federal ecosystem, managed by diverse internal and external teams. For security professionals, maintaining pace with this intricate environment and achieving real-time transparency into technological activities is becoming an uphill battle. These professionals require a dependable, almost instantaneous source that adheres to the National Institute of Standards and Technology (NIST) core functions—identify, protect, detect, respond, and recover. Such a source empowers them to strategize, prioritize, and address any anomalies or shifts in their security stance. The present challenge lies in acquiring a holistic view of security risk, especially when large agencies might deploy hundreds of applications across the US and in some cases globally. The security data logs, scattered across these applications, clouds and environments, often exhibit conflicting classifications or categorizations. Further complicating matters are logging maturity levels at different cloud deployment models, infrastructure, platform, and software.
It is vital to scrutinize any irregularities to ensure the environment is secure, aligning with zero-trust principles which advocate for a dual approach: never automatically trust and always operate under the assumption that breaches may occur. As security breaches become more frequent and advanced, malicious entities will employ machine learning to pinpoint vulnerabilities across expansive threat landscape. Artificial intelligence will leverage machine learning and large language models to further enhance organizations’ abilities to discover and adapt to changing risk environments, allowing security professionals to do more with less.
Strategy and Approach
The optimal approach to managing a CDL depends on several variables, including leadership, staff, services, governance, infrastructure, budget, maturity, and other factors spanning all agencies. It is debatable whether a centralized IT team can cater to the diverse needs and unique challenges of every agency. We are seeing a shift where departments are integrating multi-cloud infrastructure into their ecosystem to support the mission. An effective department strategy is pivotal for success, commencing with systems under the Federal Information Security Modernization Act (FISMA) and affiliated technological environments. Though there may be challenges at the departmental level in a federated setting, it often proves a more effective strategy than a checklist approach.
Regarding which logs to prioritize, there are several methods. CISA has published a guide on how to prioritize deployment: Guidance for Implementing M-21-31: Improving the Federal Government’s Investigative and Remediation Capabilities. Some might opt to begin with network-level logs, followed by enterprise and then system logs. Others might prioritize logs from high-value assets based on FISMA’s security categorization, from high to moderate to low. Some might start with systems that can provide logs most effortlessly, allowing them to accumulate best practices and insights before moving on to more intricate systems.
Efficiently performing analysis, enforcement, and operations across data repositories dispersed across multiple cloud locations in a departmental setting involves adopting a range of strategies. This includes data integration and aggregation, cross-cloud compatibility, API-based connectivity, metadata management, cloud orchestration, data virtualization, and the use of cloud-agnostic tools to ensure seamless data interaction. Security and compliance should be maintained consistently, while monitoring, analytics, machine learning, and AI tools can enhance visibility and automate processes. Cost optimization and ongoing evaluation are crucial, as is investing in training and skill development. By implementing these strategies, departments can effectively manage their multi-cloud infrastructure, ensuring data is accessible, secure, and cost-effective, while also leveraging advanced technologies for analysis and operations.
CDL Infrastructure
One of the significant challenges is determining how a CDL aligns with an agency’s structure. The decision between a centralized, federated, or hybrid approach arises, with cost considerations being paramount. Ingesting logs in their original form into a centralized CDL comes with its own set of challenges, including accuracy, privacy, cost, and ownership. Employing a formatting tool can lead to substantial cost savings in the extract, transform, and load (ETL) process. Several agencies have experienced cost reductions of up to 90% and significant data size reductions by incorporating formatting in tables, which can be reorganized as needed during the investigation phase. A federated approach means the logs remain in place, analyses are conducted locally, and the results are then forwarded to a centralized CDL for further evaluation and dissemination.
For larger and more complex agencies, a multi-tier CDL might be suitable. By implementing data collection rules (DCRs), data can be categorized during the collection process, with department-specific information directed to the respective department’s CDL, while still ensuring that high-value and timely logs are forwarded to a centralized CDL at the agency level, prioritizing privileged accounts. Each operating division or bureau could establish its own CDL, reporting up to the agency’s headquarters’ CDL. The agency’s Office of Inspector General (OIG) or a statistical component of a department may need to create its own independent CDL for independence purposes. This agency HQ CDL would then report to DHS. In contrast, smaller agencies might only need a single CDL. This could integrate with the existing Cloud Log Aggregation Warehouse (CLAW), a CISA-deployed architecture for collecting and aggregating security telemetry data from agencies using commercial CSP services, and align with the National Cybersecurity Protection System (NCPS) Cloud Interface Reference Architecture. This program ensures security data from cloud-based traffic is captured and analyzed, and enables CISA analysts to maintain situational awareness and provide support to agencies.
If data is consolidated in a central monolithic repository, stringent data stewardship is crucial, especially concerning data segmentation, access controls, and classification. Data segmentation provides granular access control based on a need-to-know approach, with mechanisms such as encryption, authorization, access audits, firewalls, and tagging. If constructed correctly, this can eliminate the need for separate CDL infrastructures for independent organizations. This should be compatible with role-based user access schemes, segment data based on sensitivity or criticality, and meet Federal authentication standards. This supports Zero Trust initiatives in Federal agencies and aligns with Federal cybersecurity regulations, data privacy laws, and current TLS encryption standards. Data must also adhere to retention standards outlined in OMB 21-31 Appendix C and the latest National Archives and Records Administration (NARA) publications, and comply with Data Loss Prevention requirements, covering data at rest, in transit, and at endpoints, in line with NIST 800-53 Revision 5.
In certain scenarios, data might require reclassification or recategorization based on its need-to-know status. Agencies must consider storage capabilities, ensuring they have a scalable, redundant and highly available storage system that can handle vast amounts of varied data, from structured to unstructured formats. Other considerations include interoperability, migrating an existing enterprise CDL to another platform, integrating with legacy systems, and supporting multi-cloud enterprise architectures that source data from a range of CSPs and physical locations. When considering data portability, the ease of transferring data between different platforms or services is crucial. This necessitates storing data in widely recognized formats and ensuring it remains accessible. Moreover, the administrative efforts involved in segmenting and classifying the data should also be considered.
Beyond cost and feasibility, the CDL model also provides the opportunity for CIOs and CISOs to achieve data dominance with their security and log data. Data dominance allows them to gather data quickly and securely and reduces processing time, which shortens the time to respond. That faster response, the strategic goal of any security implementation, is only possible with the appropriate platform and infrastructure, bringing organizations closer to real-time situational awareness.
The Application of CDL
With a solid strategy in place, it’s time to delve into the application of a CDL. Questions arise about its operation, how to make it actionable, its placement relative to the Security Operations Center (SOC), and potential integrations with agency Governance, Risk Management, and Compliance (GRC) tools and other monitoring systems. A mature security program needs a comprehensive, real-time view of an agency’s security posture, encompassing SOC activities and the agency’s governance, risk management, and compliance tasks. The CDL should interface seamlessly with existing or future Security Orchestration, Automation, and Response (SOAR) and Endpoint Detection and Response (EDR) tools, as well as ticketing systems.
CDLs facilitate the sharing of analyses within their agencies, as well as with other Federal entities like the Department of Homeland Security (DHS), the Cybersecurity and Infrastructure Security Agency (CISA), Federal law enforcement agencies, and intelligence agencies. Moreover, CDLs can bridge the gaps in a Federal security program, interlinking entities such as the SOC, GRC tools, and other security monitoring capabilities. At the highest levels of maturity, the CDL will leverage Network Operations Center (NOC) data and potentially even administrative information such as employee leave schedules. The benefit of modernizing the CDL lies in eliminating the requirement to segregate data before ingestion. Data is no longer categorized as security-specific or operations-specific. Instead, it is centralized into a single location, allowing CDL tools and models to assess the data’s significance. Monolithic technology stacks are effective when all workloads are in the same cloud environment. However, in a multi-cloud infrastructure, this approach becomes challenging. With workloads spread across different clouds, selecting one as a central hub incurs egress costs to transfer log data between clouds. Departments are exploring options to store data in the cloud where it’s generated, while also considering whether Cloud Service Providers (CSPs) offer tools for analysis, visibility, machine learning, and artificial intelligence.
The next step is for agencies to send actionable information to security personnel regarding potential incidents and provide mission owners with the intelligence necessary to enhance efficiency. Additionally, this approach eliminates the creation of separate silos for security data, mission data, financial information, and operations data. This integration extends to other Federal security initiatives such as Continuous Diagnostics and Mitigation (CDM), Authority to Operate (ATO), Trusted Internet Connection (TIC), and the Federal Risk and Authorization Management Program (FedRAMP).
It’s also pivotal to determine if the CDL aligns with the MITRE ATT&CK Framework, which can significantly assist in incident response. MITRE ATT&CK® is a public knowledge base outlining adversary tactics and techniques based on observed events. The knowledge base aids in developing specific threat models and methodologies across various sectors.
Lastly, to gauge the CDL’s applicability, one might consider creating a test case. Given the vast amount of log data — since logs are perpetual — this presents an ideal scenario for machine learning. Achieving real-time visibility can be challenging with the multiple layers of log aggregation, but timely insights might be within reach. For more resources from Microsoft Federal Security, please visit https://aka.ms/FedCyber.
Stay Connected
Connect with the Public Sector community to keep the conversation going, exchange tips and tricks, and join community events. Click “Join” to become a member and follow or subscribe to the Public Sector Blog space to get the most recent updates and news directly from the product teams.
Microsoft Tech Community – Latest Blogs –Read More
Creating Azure Container Apps using Azure Python SDK
The Azure Python SDK, also known as the Azure SDK for Python, is a set of libraries and packages that allow developers to interact with Microsoft Azure services using the Python programming language. It simplifies the process of integrating Python applications with Azure services by providing a set of high-level abstractions and APIs. With the SDK, developers can programmatically manage and interact with Azure resources, such as virtual machines, storage accounts, databases, and other cloud services.
To use the Azure Python SDK, developers typically install the required Python packages using a package manager like pip. They can then import the relevant modules in their Python code and use the provided classes and methods to interact with Azure services.
For Azure Container Apps specifically, Microsoft provides comprehensive documentation and samples to help developers get started with the Azure Python SDK.
In this blog, we will be looking at how to create Container Apps using Azure Python SDK.
Getting Started
Prerequisites
It is assumed here that you already have an existing Azure Subscription, Resource Group, Container App Environment, and Container Registry available. We will be using a Windows machine with Python version > 3.7 installed to run the file.
As an example, we will create an Azure Container App, test it, and then delete it via the Azure Python SDK. To run the file, we will use the Azure CLI. This has been tested with AZ CLI version 2.56.
Package Installation
Install the packages that will be used for managing the resources. The Azure Identity package is needed almost every time; we will use the Azure Container Apps management package along with it.
pip install azure-identity
pip install azure-mgmt-appcontainers
Authentication
There are two options for authenticating: via Subscription ID or via Service Principal. In this example, we will use the Subscription ID to authenticate to Azure.
You can specify the Subscription ID as an environment variable or use it directly in the code. Both examples are provided below.
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient
import os
sub_id = os.getenv("AZURE_SUBSCRIPTION_ID")
client = ContainerAppsAPIClient(credential=DefaultAzureCredential(), subscription_id=sub_id)
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient
client = ContainerAppsAPIClient(credential=DefaultAzureCredential(), subscription_id="<YOUR_SUBSCRIPTION_ID>")
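For the Service Principal option mentioned above, the azure-identity package also provides ClientSecretCredential, which can replace DefaultAzureCredential. The following is a minimal sketch, not part of the original walkthrough; the tenant ID, client ID, and client secret are placeholders for an existing app registration and should ideally come from environment variables or a vault rather than being hardcoded.
from azure.identity import ClientSecretCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

# Placeholder values for an existing app registration (hypothetical)
credential = ClientSecretCredential(
    tenant_id="<YOUR_TENANT_ID>",
    client_id="<YOUR_CLIENT_ID>",
    client_secret="<YOUR_CLIENT_SECRET>",
)
client = ContainerAppsAPIClient(credential=credential, subscription_id="<YOUR_SUBSCRIPTION_ID>")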
Python File
We will be using the following file for the management tasks specified above. I am naming this file containerapp.py.
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient


def main():
    client = ContainerAppsAPIClient(
        credential=DefaultAzureCredential(),
        subscription_id="4db72a57-a748-41c7-aabc-1f7a153960cf",
    )

    # Create the Container App and wait for the operation to complete
    response = client.container_apps.begin_create_or_update(
        resource_group_name="defaultrg",
        container_app_name="containerapp-test",
        container_app_envelope={
            "location": "East US 2",
            "properties": {
                "configuration": {
                    "ingress": {
                        "external": True,
                        "targetPort": 80,
                        "transport": "http",
                        "stickySessions": {"affinity": "none"},
                    }
                },
                "environmentId": "/subscriptions/4db72a57-a748-41c7-aabc-1f7a153960cf/resourceGroups/defaultrg/providers/Microsoft.App/managedEnvironments/defaultcaenv",
                "template": {
                    "containers": [
                        {
                            "image": "docker.io/nginx:latest",
                            "name": "testapp4",
                            "resources": {"cpu": 0.25, "memory": ".5Gi"},
                        }
                    ]
                },
            },
        },
    ).result()
    print(response)

    # Delete the Container App once testing is done
    client.container_apps.begin_delete(
        resource_group_name="defaultrg",
        container_app_name="containerapp-test",
    ).result()


if __name__ == "__main__":
    main()
In the above file, we are using a public repository (Docker Hub) as our image source. If you want to use your private Azure Container Registry (ACR) as the image source, the template section must include the authentication configuration.
"template": {
    "containers": [
        {
            "image": "nginx:latest",
            "name": "containerapp-test",
            "resources": {
                "cpu": 0.25,
                "memory": ".5Gi"
            },
            "registries": {
                "server": "https://<YOUR_ACR_NAME>.azurecr.io",
                "username": "<YOUR_ACR_USERNAME>",
                "passwordSecretRef": "acr-password"
            }
        }
    ],
    "secrets": [
        {
            "name": "acr-password",
            "value": "<YOUR_ACR_PASSWORD>"
        }
    ]
}
The above configuration assumes that there is an image called “nginx” with the tag “latest” in your ACR, and that the ACR has admin credentials enabled.
After editing the Python management file, we can run it simply with the following command:
python containerapp.py
On a successful run, the result will be printed in JSON format on the CLI.
Troubleshooting
If an error occurs, restarting the Azure CLI can sometimes help. Below are some common scenarios that we usually see while working with the SDK.
InvalidAuthenticationTokenTenant
This error indicates that the access token is from the wrong issuer and must match one of the tenants associated with the subscription. It is usually seen when the Subscription ID in the file does not match the account you are logged in with. Logging in again with the correct account may help (az logout & az login).
InvalidParameterValueInContainerTemplate
This error points to two possible issues: an invalid or missing image, or a problem with authentication. Check for any typo in the ‘registryPassword‘. Apart from that, if you are using an external public registry like Docker Hub, make sure the full repository URL is specified in the ‘image’ parameter. When using ACR, make sure that only the image and the tag are specified as its value.
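When scripting these operations, wrapping the SDK calls in a try/except block can make such errors easier to read. Below is a minimal sketch, not part of the original walkthrough, that catches the HttpResponseError raised by azure-core; the subscription, resource group, and app names are placeholders.
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential
from azure.mgmt.appcontainers import ContainerAppsAPIClient

client = ContainerAppsAPIClient(
    credential=DefaultAzureCredential(),
    subscription_id="<YOUR_SUBSCRIPTION_ID>",
)

try:
    # Placeholder names; reuse the envelope from containerapp.py for a create call
    client.container_apps.begin_delete(
        resource_group_name="<YOUR_RESOURCE_GROUP>",
        container_app_name="<YOUR_CONTAINER_APP>",
    ).result()
except HttpResponseError as err:
    # err.error.code holds values such as InvalidAuthenticationTokenTenant
    code = err.error.code if err.error else "Unknown"
    print(f"Request failed: {code} - {err.message}")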
Microsoft Tech Community – Latest Blogs –Read More
ZoomIt v8.01
Microsoft Tech Community – Latest Blogs –Read More
Nominations are now open for this year’s Microsoft Partner of the Year Awards!
Celebrated annually, these awards recognize the incredible impact that Microsoft partners are delivering to customers and celebrate the outstanding successes and innovations across Solution Areas, industries, and key areas of impact, with a focus on strategic initiatives and technologies. Partners of all types, sizes, and geographies are encouraged to self-nominate. This is an opportunity for partners to be recognized on a global scale for their innovative solutions built using Microsoft technologies.
In addition to recognizing partners for the impact in our award categories, we also recognize partners from over 100 countries/regions around the world as part of the Country/Region Partner of the Year Awards. In 2024, we’re excited to offer additional opportunities to recognize partner impact through new awards – read our blog to learn more and download the official guidelines for specific eligibility requirements.
Visit the Microsoft Partner of the Year Awards page to see the full list of awards and to submit your nomination in advance of the April 3, 2024, deadline. To ensure you create a strong entry, we encourage you to explore the provided resources and expert advice on the nomination process. We look forward to receiving another amazing set of nominations this year and are excited to celebrate another round of incredible partner innovations!
Read more on the Partner Blog
Microsoft Tech Community – Latest Blogs –Read More
Become a Microsoft Defender Vulnerability Management Ninja
Do you want to become a ninja for Microsoft Defender Vulnerability Management? We can help you get there! We have collected the content into multiple modules, and we will keep updating this training on a regular basis.
In addition, we offer a knowledge check based on the training material! Since there’s a lot of content, the goal of the knowledge check is to help ensure understanding of the key concepts that were covered. Lastly, a fun certificate is issued at the end of the training. Disclaimer: this is not an official Microsoft certification and only acts as a way of recognizing your participation in this training content.
Module 1 – Getting started
What is Microsoft Defender Vulnerability Management
Prerequisites & permissions
Supported operating systems, platforms and capabilities
Compare Defender Vulnerability Management plans and capabilities
Interactive Guide – Reduce organizational risk with Microsoft Defender Vulnerability Management
Defender Vulnerability Management trial
Defender Vulnerability Management add-on trial
Defender Vulnerability Management standalone trial
Frequently asked questions
What’s new in Public Preview
Module 2 – Portal Orientation
Onboard to Defender Vulnerability Management
Dashboard overview
Device inventory
Software inventory
Browser extensions assessment
Certificate inventory
Firmware and hardware assessment
Authenticated scan
Module 3 – Prioritization
Vulnerabilities in my organization
Exposure score
Microsoft Secure Score for Devices
Assign device value
Security recommendation
Mitigate zero-day vulnerabilities
Module 4 – Remediation
Remediate vulnerabilities
Request Remediation
Create and view exceptions for security recommendations
View remediation activities
Block vulnerable applications
Module 5 – Posture and Compliance
Microsoft Secure Score for Devices
Security baselines assessment
Module 6 – Data access
Hunt for exposed devices
Vulnerable devices report
Device health reporting in Defender for Endpoint
Monthly security summary reporting in Defender for Endpoint
APIs
Export assessment methods and properties per device
Export secure configuration assessment per device
Export software inventory assessment per device
Build your own custom reports
Are you ready for the Knowledge check?
Once you’ve finished the training and passed the knowledge check, please click here to request your certificate (you’ll see it in your inbox within 3-5 business days.)
Microsoft Tech Community – Latest Blogs –Read More
Firewall considerations for gMSA on Azure Kubernetes Service
This week I spent some time helping a customer with a gMSA environment in which they were running into issues deploying their app. The issues started when they were trying to figure out why a Kerberos ticket was not being issued for the Windows pod with gMSA configured in AKS. I decided to write this blog post to list some of the firewall considerations for different scenarios in which security rules might block the authentication process.
gMSA and its moving parts
To use gMSA on AKS, you must understand that there are many moving parts in play. First, your Kubernetes cluster on AKS is composed of both Linux and Windows nodes. Your nodes will all be part of a virtual network, but only the Windows nodes will try to reach the Domain Controller (DC).
The DC itself might be in another virtual network, in the same virtual network, or even outside of Azure. Then you have the Azure Key Vault (AKV) on which the secret (username and password) is securely stored. Your AKV should only be available to the proper Windows nodes, no one else.
The problem, though, comes when you have Windows nodes on AKS and DCs running on different networks or even different sites, and you need to open the proper ports between the Windows nodes and the Active Directory DC.
Ports to open for Active Directory and gMSA
We have had documentation on which ports to open for Active Directory for a while. That is relatively well known and can be leveraged here.
The thing to understand is that when using gMSA on AKS, not all of these ports need to be opened, and allowing unnecessary traffic needlessly exposes you to threats. For gMSA, there’s no computer or user account being used interactively, so we can compile the following list:
Protocol and port – Purpose
TCP and UDP 53 – DNS
TCP and UDP 88 – Kerberos
TCP 139 – NetLogon
TCP and UDP 389 – LDAP
TCP 636 – LDAP SSL
Keep in mind this list of ports does not take into consideration ports that your application might need to query AD or perform any other action with the DC. You might need to check for those with the application owner.
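If your Windows node pool and the DCs sit behind network security groups, part of this list can be expressed as NSG rules programmatically. The snippet below is a minimal sketch using the azure-mgmt-network Python package, with hypothetical resource group, NSG, and CIDR values; it only adds the Kerberos rule and is not a complete ruleset, so adapt it to your own environment.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

# Placeholder subscription and resource names for illustration only
network_client = NetworkManagementClient(DefaultAzureCredential(), "<YOUR_SUBSCRIPTION_ID>")

kerberos_rule = SecurityRule(
    protocol="*",  # Kerberos uses both TCP and UDP on port 88
    source_address_prefix="<AKS_WINDOWS_NODE_SUBNET_CIDR>",
    source_port_range="*",
    destination_address_prefix="<DC_SUBNET_CIDR>",
    destination_port_range="88",
    access="Allow",
    direction="Outbound",
    priority=200,
)

network_client.security_rules.begin_create_or_update(
    resource_group_name="<YOUR_RESOURCE_GROUP>",
    network_security_group_name="<YOUR_NSG_NAME>",
    security_rule_name="Allow-Kerberos-To-DC",
    security_rule_parameters=kerberos_rule,
).result()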
Domain Controllers in Azure
You might mitigate a lot of firewall issues by simply adding one (or more) DC to Azure as a VM. By doing that, you have two things that play in your favor:
You keep the authentication process within Azure. Your Windows pods and nodes don’t need to reach back to an on-premises environment – unless the DC(s) in Azure is down.
You have a better understanding of ports to open between NSGs in Azure rather than traffic between workloads on Azure and DCs on-premises.
On the other hand, you must consider that the DCs in Azure do need to replicate with the DCs on-premises. However, this is a preferred scenario because you know which machines the DCs are, versus workload machines that might scale out or new workloads/clusters that might be added in the future. At the end of the day, the scope for opening ports is smaller, which minimizes exposure. Please refer to the documentation to understand the ports for AD replication as well.
Hopefully this will help you fix any issues you might be having with gMSA caused by blocked traffic. Keep in mind the ports listed above might not be the full list you need to open, but they are the minimal set of ports and traffic for proper authentication. As always, let us know in the comments what your thoughts are and whether you have a different scenario.
Microsoft Tech Community – Latest Blogs –Read More
ADX Continuous Export to Delta Table – Preview
We’re excited to announce that continuous export to Delta table is now available in Preview.
Continuous export in ADX allows you to export data from Kusto to an external table with a periodically run query. The results are stored in the external table, which defines the destination, such as Azure Blob Storage, and the schema of the exported data. This process guarantees that all records are exported “exactly once”, with some exceptions. Continuous export previously supported CSV, TSV, JSON, and Parquet formats.
Starting today, you can continuously export to a delta table.
To define continuous export to a delta table:
Create an external delta table, as described in Create and alter delta external tables on Azure Storage.
(.create | .alter | .create-or-alter) external table TableName [(Schema)] kind = delta (StorageConnectionString) [with (Property [, …])]
Define continuous export to this table using the commands described in Create or alter continuous export.
.create-or-alter continuous-export continuousExportName [over (T1, T2)] to table externalTableName [with (propertyName = propertyValue [, …])] <| query
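As a concrete illustration, these management commands can also be issued from Python with the azure-kusto-data package. The sketch below is a minimal, hypothetical example and not part of the official walkthrough: the cluster URI, database, table, and export names are placeholders, and it assumes the external delta table from step 1 already exists.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

# Hypothetical cluster URI; authentication reuses your Azure CLI login
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://<YOUR_CLUSTER>.<REGION>.kusto.windows.net"
)
client = KustoClient(kcsb)

# Step 2: define a continuous export that runs every 10 minutes and writes
# the query results to the external delta table created in step 1
command = """.create-or-alter continuous-export MyDeltaExport
  over (MyEventsTable)
  to table MyDeltaExternalTable
  with (intervalBetweenRuns=10m)
<| MyEventsTable"""

client.execute_mgmt("<YOUR_DATABASE>", command)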
A few things to note:
If the schema of the delta table isn’t provided when defining the external table, Kusto will try to infer it automatically based on the delta table defined in the target storage container.
If the schema of the delta table is provided when defining the external table and there is no delta table defined in the target storage container, continuous export will create a delta table during the first export.
The schema of the delta table must be in sync with the continuous export query. If the underlying delta table changes, the export might start failing with unexpected behavior.
Delta table partitioning is not supported today.
Read more: Continuous data export – Azure Data Explorer & Real-Time Analytics | Microsoft Learn
As always, we’d love to hear your feedback and comments.
Microsoft Tech Community – Latest Blogs –Read More
AI for Developers
The era of AI is here, and today’s developer needs the skills and tools to build intelligent apps. This month, we’re exploring resources to help developers modernize their applications and get started with AI. Join a Hack Together event, complete a Cloud Skills Challenge, work through guided tutorials, and register for upcoming events. These resources will help you build intelligent chat apps, extend Microsoft Copilot or create a custom copilot, learn about Microsoft Fabric, and much more.
Cloud Skills Challenge: Build Intelligent Apps
Join a Cloud Skills Challenge to compete against peers, showcase your talents, and learn new skills. Combine AI, cloud-scaled data, and cloud-native app development to create intelligent apps. Join a challenge today.
Hack Together: The AI Chat App Hack
It’s not too late to join the AI Chat App Hack! This Hack Together event (January 29 – February 12) offers a playground for experimenting with RAG chat apps and a chance to learn from Microsoft experts.
Azure Cosmos DB Conf Call for Proposals
Want to give a presentation at the Azure Cosmos DB Conference 2024? Submit proposals for presentations on AI integration, innovative use cases, and other topics emphasizing practical insights and hands-on experiences. Submit by February 15, 2024.
Hack Together: The Microsoft Fabric Global AI Hack
Join the Microsoft Fabric Global AI Hack February 19 – March 1 for hands-on learning and find out why Microsoft Fabric is the data platform of choice for AI.
Official Collection: Learn how to build intelligent apps with .NET
Explore a collection of Microsoft Learn modules, videos, and samples on GitHub that will help you build intelligent apps with .NET.
Microsoft Fabric Community Conference
Register for the first annual Microsoft Fabric Community Conference—a live, in-person event taking place March 26 – 28 in Las Vegas. Immerse yourself in data and AI, get hands-on experience with the latest technologies, and connect with other experts.
Playwright Testing and GitHub Actions tutorial: How to run Playwright tests on every code commit
Set up continuous, end-to-end testing for your web apps with Microsoft Playwright and GitHub actions. Watch this tutorial to see how you can run tests on every code commit and validate that your app works across different browsers and operating systems.
The future of collaboration and AI
Build the next era of AI apps with the Teams AI Library, now generally available. Combined with Azure OpenAI Service, you have everything you need to build your own AI apps and copilots. Learn more about extending your app to the Copilot ecosystem.
Azure Cosmos DB Conf 2024
Sign up for Azure Cosmos DB Conf, a free virtual developer event. Tune into the live show on April 16 to learn why Azure Cosmos DB is the leading database for AI and modern app development. Then explore more sessions on demand.
POSETTE Call for Presentations
Every great event starts with great speakers. Do you have Postgres tips, tricks, stories, or expertise to share? Submit your presentation proposals to be considered for POSETTE (formerly Citus Con), a free, virtual developer event organized by the Postgres team at Microsoft.
Build and modernize AI apps with new solution accelerators
Build intelligent apps on Azure with new tools that bring top use cases to life. Explore demos, GitHub repos, and Hackathon content to help you get started building AI-powered apps, such as a copilot using your own data.
New Azure AI Advantage offer
There’s a new Azure AI Advantage offer that lets Azure AI and GitHub Copilot customers save when using Azure Cosmos DB.
Build a production RAG chat using Azure AI Studio and Prompt Flow
Learn how to build a production-level RAG app for a customer support agent – and integrate it with your web-based product catalog. Streamline your end-to-end app development from prompt engineering to LLMOps with prompt flow in Azure AI Studio.
Train a machine learning model and debug it with the Responsible AI dashboard
Ready to build a machine learning model or integrate one into your app? Learn how to debug your model to assess it for Responsible AI practices using the Azure Responsible AI Dashboard.
How to Convert Audio to .WAV for Speech Service Using MoviePy
Azure Speech Service requires audio files to adhere to specific standards. Find out how to use MoviePy to easily convert your audio files to make them compatible with Azure Speech Service.
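One way to do such a conversion with MoviePy (a sketch assuming MoviePy 1.x, not necessarily the exact approach in the linked post; file names are placeholders) is to load the source audio and re-encode it as 16 kHz, 16-bit mono PCM WAV, a format Azure Speech Service accepts:
from moviepy.editor import AudioFileClip

# Placeholder file names; any input format ffmpeg can read will work
clip = AudioFileClip("meeting_recording.mp3")
clip.write_audiofile(
    "meeting_recording.wav",
    fps=16000,                    # 16 kHz sample rate
    nbytes=2,                     # 16-bit samples
    codec="pcm_s16le",            # uncompressed PCM WAV
    ffmpeg_params=["-ac", "1"],   # downmix to mono
)
clip.close()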
Build it with AI video series
Ready to get started with AI? Check out the Build it with AI video series from Microsoft Reactor. Deepen your engagement, grow your AI-driven solutions, and start building your business on AI technology.
How to build a custom copilot using Azure AI Studio and Microsoft Copilot Studio
Want to build your own copilot? Explore options in the Microsoft ecosystem for building a copilot. This blog post looks into low code tools and out-of-the-box features. A follow-up post will focus on code-heavy and extensible options.
Build an AI Powered Image App
Use AI image technologies to build an AI-powered image web app. A new Microsoft Learn challenge module steps you through a bite-sized project to give you a taste of the latest tools.
Microsoft JDConf 2024
Get ready for JDConf 2024—a free virtual event for Java developers. Explore the latest in tooling, architecture, cloud integration, frameworks, and AI. It all happens online March 27-28. Learn more and register now.
Step-by-step guide: Build a recommender full stack app using OpenAI and Azure SQL
Check out this step-by-step guide for creating an intelligent web app with Azure OpenAI Service. This blog post shows you how to create a recommender full stack app with OpenAI and Azure SQL.
Official collection: AI Kick-off Projects
Put your AI skills to test and start building innovative solutions. This collection of AI Challenge Projects provides modules that will teach you how to build various intelligent solutions, such as a minigame and a speech translator.
Register now: Microsoft Fabric Community Conference
Join us at the first ever Microsoft Fabric Community Conference—a live, in-person event. Discover how Microsoft data and AI services accelerate innovation and prepare you for the era of AI. Use discount code MSCUST to save $100.
Microsoft Tech Community – Latest Blogs –Read More
ICYMI | Microsoft 365 Blog: Introducing the new Microsoft 365 Document Collaboration Partner Program
If you’re an independent software vendor (ISV) who provides a cloud communication and collaboration platform, you may want to offer customers a collaboration experience inside and outside meetings. That’s why we are excited to introduce the Microsoft 365 Document Collaboration Partner Program (MDCPP), a new opportunity for eligible platform providers to integrate Microsoft 365 apps into their platforms. Whether it’s a presentation, a spreadsheet, or a document, the program can enable users to share, edit, and coauthor, without switching between apps or losing context.
Continue reading in our Microsoft 365 blog
Microsoft Tech Community – Latest Blogs –Read More