Tag Archives: microsoft
Windows containers in Kubernetes: Automating nodepool management with Calico’s Windows HPC Support
Hello! We would like to feature our partners from Tigera, the company behind Project Calico, who teamed up with us to co-author this blog on Host Process Containers with Calico. The blog was co-authored by:
Dhiraj Sehgal and Reza Ramezanpour
As the landscape of containerized applications evolves, enterprises are increasingly integrating Windows containers into their Kubernetes workflows.
These days, with the help of cloud services such as Microsoft Azure Kubernetes Service, anyone can build and operate a Kubernetes environment with ease. However, a lot of fine-tuning and automation happens in the background to prepare a production-ready environment. For example, networking is a huge part of any cloud-native environment, and every aspect of your business in the cloud depends on it.
Project Calico is a networking and security solution for bare-metal and cloud environments that offers great flexibility. In this blog, we will focus on how the new release of Calico leverages a new feature of Windows containers, Host Process Containers (HPC), to reduce its footprint in your cloud environment. On top of that, we will look at how HPC support makes life easier for DevOps administrators by offering more control over the host machine in a Windows environment.
The challenge of manual nodepool management
One of the biggest challenges is managing Kubernetes clusters in an unmanaged or on-premises deployment. In a cloud environment like AKS (Azure Kubernetes Service), the cloud provider takes care of many aspects of managing your Kubernetes cluster, making it a seamless and hassle-free experience. However, in a customized environment where you have control over the node pools, the responsibility of managing and configuring the cluster falls on your shoulders. This can be daunting, especially if you are new to Kubernetes or have limited experience with infrastructure management.
Managing Windows nodepools in such environments can be more challenging than managing Linux ones. On Linux, privileged containers can configure host settings and integrate naturally with Kubernetes; Windows containers previously lacked this capability, requiring administrators to use scripts or manual configuration steps outside of Kubernetes. This can be time-consuming and error-prone, especially when scaling your cluster quickly. Additionally, manual nodepool management can be disruptive to application lifecycles.
HPC is the Windows equivalent of a privileged container in Linux: just like privileged containers, HPC containers have the capability to access and make modifications to the host operating system. Under the hood, Windows uses silos, which are similar to Linux namespaces and allow processes to run in an isolated environment. The rest of this post highlights how Windows HPC is used by Calico and what its benefits are.
Calico’s Windows Host Process Containers
Calico’s Windows HPC support, released in Calico Open Source 3.27, automates CNI installation and brings Calico’s capabilities to Windows nodepools. This means that Kubernetes administrators can easily install Calico in their environment without having to manually install and configure it on each node, just as with Linux-based containers.
Calico’s Windows HPC support works by running Calico as a host process container on each node. HPC containers are a special type of container with access to the host’s filesystem, which allows Calico to install and configure itself on each node without requiring manual intervention from the Kubernetes administrator.
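To illustrate what a host process container looks like to Kubernetes, here is a minimal HostProcess pod spec. This is a generic sketch based on the Kubernetes Windows HostProcess API, not Calico's actual manifest; the image name and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hpc-example
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                      # run as a Host Process Container
      runAsUserName: "NT AUTHORITY\\SYSTEM"  # host identity the processes run as
  hostNetwork: true                          # HostProcess pods must use the host network
  containers:
  - name: hpc-example
    image: example.azurecr.io/hpc-tool:latest  # placeholder image
    command: ["powershell.exe", "-Command", "Get-HnsNetwork"]
  nodeSelector:
    kubernetes.io/os: windows
```

Because `hostProcess: true` removes the filesystem and process isolation of a normal Windows container, such a pod can write CNI binaries and configuration directly onto the node, which is exactly what Calico's installer takes advantage of.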
Benefits of automating nodepool management
Automating node pool management with Calico’s Windows HPC support provides a number of benefits for Kubernetes administrators, including:
Reduced operational overhead: Automating nodepool management eliminates the need for Kubernetes administrators to manually install and configure Calico on each node. This frees up their time to focus on other tasks, such as managing Windows container-based applications.
Improved application performance and reliability: By automating node pool management, Kubernetes administrators can reduce the risk of disruptions to application lifecycles. This is because Calico can be installed and configured on new nodes without requiring any downtime for existing applications.
Increased agility and responsiveness to changing business needs: Automating node pool management makes it easier for Kubernetes administrators to scale their clusters up or down as needed. This can help businesses to respond more quickly to changing customer demand and other business needs.
Consistency between Windows and Linux GitOps practices.
How to enable Calico using Windows Host Process container support
For this part, we are going to assume that you have a hybrid Kubernetes cluster in your environment that supports HPC.
HPC support is available in Kubernetes 1.22 and above, and it also requires containerd 1.6 or later. If you would like to know more about these requirements, click here.
When your cluster is up and running, install the latest Tigera operator:
Use the following installation resource to install Calico for your Windows environment using the HPC feature:
kubectl create -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    windowsDataplane: HNS
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLAN
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
In environments where Calico is used for IP address management, you need to disable IP address sharing by using the following command:
kubectl patch ipamconfigurations default --type merge --patch='{"spec": {"strictAffinity": true}}'
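For reference, after the patch the IPAM configuration resource should look roughly like the following. This is a sketch based on Calico's projectcalico.org/v3 API; fields other than strictAffinity are omitted:

```yaml
apiVersion: projectcalico.org/v3
kind: IPAMConfiguration
metadata:
  name: default
spec:
  strictAffinity: true  # prevents nodes from borrowing IPs from other nodes' address blocks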
Conclusion
To sum up, Windows nodes in non-cloud-provider environments used to be hard to install and configure because Windows lacked privileged containers. However, with HPC now generally available in Kubernetes, users can create containers that automate node configuration by accessing the host filesystem.
Calico has leveraged this technology to provide a Kubernetes-native way to install and manage networking in your cluster.
This means that the management of Windows nodes in a Kubernetes cluster is now fully automated, eliminating the need for administrators to manually configure nodes or containers.
Overall, the adoption of HPC in Kubernetes has transformed the way CNI solutions are installed and managed on Windows nodes, providing a more streamlined and automated approach that enhances the scalability, reliability, and ease of use of Kubernetes clusters.
Please look out for a coming blog covering Zero Trust with Tigera Calico.
Microsoft Tech Community – Latest Blogs –Read More
Final Reminder: Outlook REST API v2.0 and beta endpoints decommissioning
As we work to ensure better security, reliability, and performance for our customers, and as we announced in our previous blog post in September 2023, we are decommissioning the Outlook REST v2.0 and beta endpoints starting March 31, 2024. After this date, we will start progressively shutting off the endpoints until they become completely unavailable.
This means that any application that is still using these endpoints will stop working at some point after March 31, 2024 (except for Outlook Add-Ins as also communicated before). We strongly recommend that you migrate your applications to the Microsoft Graph API as soon as possible to avoid any disruption. Please refer to https://aka.ms/FromOutlookRestToGraph for guidance.
We continue to track the use of these endpoints and will inform the affected tenants through a Message Center post before we fully disable the endpoints. However, we urge you to migrate your applications as soon as possible.
The Microsoft 365 Team
Microsoft Learn AI Skills Challenge Pitch Winner: Watch Out
The Microsoft Learn AI Cloud Skills Challenge held in July wrapped up an incredible learning journey with the AI Pitch Challenge: a showcase of innovation where passionate learners brought their visions to life through the power of AI. These creators shared how they would harness Microsoft’s AI technology to craft solutions for the future in a 3-minute video pitch. Out of many entries, five outstanding winners emerged, each with a unique and compelling vision.
This series of blog posts spotlights each creator sharing the transformative potential of their ideas.
Hello! I’m Ahmet Dedeler, a 16-year-old high school junior from Turkey, and I’m eager to share with you not just my latest project, “Watch Out,” but also my journey in the tech world. My adventure began with a simple curiosity about coding. Python and JavaScript were my initial gateways, but they quickly became much more than just programming languages. They were the tools that helped me understand the power of technology in solving real-world issues.
From Hackathons to Hosting One
My enthusiasm for coding swiftly led me to the world of hackathons. These weren’t just competitions; they were platforms where I could test my skills, innovate, and learn from peers. Winning a bunch of hackathons was a thrilling experience, each victory not just an achievement but a stepping stone to something greater.
This journey through numerous hackathons sparked an idea – why not host my own? Thus, “Boost Hacks” was born. It was a leap from participant to organizer, from learner to leader. The event was a massive success, with 800 participants, 85 innovative projects, and a staggering $180,000 in prizes. This wasn’t just about organizing an event; it was about creating a space for like-minded individuals to collaborate, innovate, and push the boundaries of technology.
Unveiling “Watch Out”: A Vision for Safer Communities
“Watch Out” is born from a desire to enhance community safety through the power of AI. It’s an AI-driven system that uses Computer Vision to detect and alert people about potential safety hazards in their surroundings – from fallen trees to damaged sidewalks.
How “Watch Out” Works
The system operates by analyzing live street footage, continuously scanning for anomalies or potential dangers. When it detects a hazard, it immediately notifies local authorities and emergency services, ensuring quick action and a safer environment for everyone.
The Inspiration Behind the Project
The idea for “Watch Out” came from observing everyday community challenges. I wanted to create a solution that not only leverages technology but also actively involves the community in promoting safety.
The Tech Behind the Vision
Developing “Watch Out” involved several Microsoft AI technologies. The core of the project is Microsoft’s Custom Vision, a tool that enabled me to train an AI model to recognize various safety hazards with high precision.
Favorite Microsoft AI Technology
Among all the technologies I explored, Microsoft’s Custom Vision stood out. Its user-friendly interface and powerful capabilities made it not just a tool for development, but a learning experience that was both challenging and rewarding.
Looking Ahead: My Future Vision and Aspirations
Looking towards the future, my goal is to blend my coding skills with my enthusiasm for meaningful projects. “Watch Out” is a stepping stone into a world where technology serves humanity. I am excited about refining this project and exploring new technological frontiers. My aspiration is to create solutions that leave a lasting, positive impact on society.
Join me in this journey of innovation and discovery, where we’re not just coding for the sake of technology, but for building a smarter, safer, and more connected world. My story is one of a young mind’s passion for technology and a heart for community service, and I believe this is just the beginning.
Feeling inspired? The Microsoft Learn AI Skills Challenge may have ended but the learning never stops! Get started with an AI Learning Path and find a new Microsoft Learn Cloud Skills Challenge to join. Transform your innovative ideas into reality with Azure credits through the Founders Hub. And for the students who dream of making an impact, the Imagine Cup is currently underway!
Benefits of moving to Azure Monitor SCOM managed instance
In this blog, let’s highlight the cost-benefit of moving from your existing SCOM on-prem to Azure Monitor SCOM MI.
If you are using System Center Operations Manager (SCOM) to monitor your on-premises and hybrid cloud environment, you might be wondering whether you should migrate to Azure Monitor SCOM managed instance (SCOM MI) or keep your SCOM on-premises deployment. In this blog, we will compare the two options in terms of cost benefits (up to 44% savings when fully migrated to SCOM MI) and help you make an informed decision based on your specific needs and goals.
What is Azure Monitor SCOM managed instance?
Azure Monitor SCOM managed instance is a cloud-based service that provides the same functionality as SCOM on-premises, but without the hassle of managing and maintaining the infrastructure. You can use SCOM MI to monitor your resources on and off Azure, as well as integrate with other Azure services such as Log Analytics, Azure Managed Grafana, and Power BI. SCOM MI is fully compatible with your existing SCOM management packs and agents*, so you can migrate your existing monitoring configuration and data with minimal disruption.
What are the cost benefits of Azure Monitor SCOM managed instance?
Azure Monitor SCOM MI offers several cost benefits over SCOM on-premises, such as:
Reduced infrastructure & maintenance costs: You don’t need to worry about maintaining infrastructure such as server racks, network cables, electricity, cooling, physical security, or datacenter leases. Moreover, hardware infrastructure is a depreciating asset. SCOM MI runs on Azure’s scalable and reliable infrastructure, which means you only pay for what you use, and you don’t have to worry about downtime or performance issues.
You can save additionally on Azure Infrastructure with savings and reserved plans.
Reduced IT labor costs: SCOM MI is fully managed by Microsoft, which means updates, patches, scalability, and security are handled for you. Since you don’t need to retrain your staff on SCOM management packs, and the effort required to provision, patch, and scale the SCOM MI service is significantly lower, we estimate a ~40% reduction in the time (labor cost) required to maintain and operate SCOM MI.
Optimized licensing costs: You don’t need to purchase, renew, or manage any licenses for your monitoring solution. SCOM MI is offered as a PAYG model, which means you only pay a monthly fee based on the number of monitored objects and the amount of data ingested. You also get access to all the features and capabilities of Azure Monitor, which can enhance your monitoring experience and provide additional insights and value.
For more information on SCOM MI licensing, refer here.
To illustrate the cost benefits of SCOM MI, we have created a comparison table of the estimated annual costs for a typical scenario of monitoring 500 VMs. The table does not include optional SCOM MI integrations, i.e., data ingestion to Log Analytics or usage of Grafana.
Disclaimer: The table below includes representative numbers only. For accurate Azure costs, refer to Pricing Calculator | Microsoft Azure. We also assume that the migration from SCOM to SCOM MI is completed quickly (<3 months), not run as a long-term migration project.
Cost category | SCOM on-premises | Azure Monitor SCOM managed instance
Infrastructure (hardware + software) | $13,812 annually. To monitor 500 VMs, you need 2 SCOM servers with Windows OS, 1 SQL Server with Windows OS, server racks, storage disks, etc. | $27,780 (no discount); $12,586 (maximum discount)
Maintenance cost (security, lease, electricity, network, etc.) | $4,443 annually | $0 (included under infrastructure cost)
IT labor cost (administration) | $116,800 annually | $70,080 annually
Licensing | $12,625 (if all System Center products are used) to $75,747 (if only SCOM is used). A System Center license to manage 500 VMs is $75,747; if you use all SC products, the portion of the license cost attributable to SCOM is as low as $12,625. | $36,000 annually (SCOM MI is licensed at $6/VM/month)
Annual cost range | $147,680 to $210,802 | $118,666 to $133,860
Cost savings (once you move 500 VMs to SCOM MI) | — | ~20% if all SC products are used and maximum Azure discounts are applied; ~36% if only SCOM on-premises is used and no Azure discounts are applied; ~44% if only SCOM on-premises is used and maximum Azure discounts are applied
As you can see, Azure Monitor SCOM managed instance can save you up to 44% of the total costs of SCOM on-premises, considering you migrate to SCOM MI quickly. Of course, your actual costs may vary depending on your specific requirements and preferences, but the table gives you a general idea of the potential savings you can achieve by migrating to Azure Monitor SCOM managed instance. If you are interested in moving other System Center products to Azure and want to know the cost analysis, we recommend you build a Business case with Azure Migrate | Microsoft Learn.
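As a quick sanity check on the headline figures (a sketch; the dollar amounts come from the table above):

```shell
# SCOM MI licensing: $6 per VM per month for 500 VMs, for 12 months
echo $((6 * 500 * 12))   # -> 36000, the $36,000 annual licensing figure

# Maximum savings: cheapest SCOM MI total ($118,666) vs. the most
# expensive on-premises total ($210,802)
awk 'BEGIN { printf "%.0f%%\n", (1 - 118666 / 210802) * 100 }'   # prints 44%
```

This reproduces both the $36,000 annual SCOM MI licensing cost and the "up to 44%" savings claim.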
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. You should talk to your Microsoft sales representative for clarity on plausible discounts and actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 Agent (as of Feb’24).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in six cities across the globe, so there are only a few opportunities for deep learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To reconcile this divide, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics that span generative and responsible AI technologies, cloud-scale data, modern application development platforms, Azure AI services, and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery post-event, we invite you to join us in Hyderabad, Bengaluru, or one of the other four cities across the globe that’s conveniently located near you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.
IP address changes for Azure Service Bus and IP/DNS Changes for Azure Relay
What is Changing?
The infrastructure layer of Azure Relay and Service Bus is being upgraded, which will cause the IP addresses used by customer namespaces to change. For Azure Relay, the gateway DNS names are also changing.
These changes are being made as part of our continuous improvements to the platform. As previously communicated for Azure Service Bus and Azure Relay, the IP addresses of our services can change and should not be considered static. There is no added charge for this migration, nor are there any service interruptions while it takes place.
Call to Action
If you are using IP addresses in your egress firewalls to your Azure Relay or Azure Service Bus namespaces, you will need to update them to use the namespace DNS names instead.
Alternative (not recommended!)
As a final alternative, it is possible to use the new IP addresses. We highly recommend against this, as you will need to keep track of any IP address changes yourself, and your service may be interrupted.
Azure Service Bus customers
If you are using Azure Service Bus premium, we recommend using service tags, as per our recommendations described in the service documentation. Service tags will automatically be updated if anything changes in our infrastructure.
If you are on Azure Service Bus standard / basic or cannot use service tags on Azure Service Bus Premium, use the fully qualified domain names for your specific namespaces, or the wildcard “*.servicebus.windows.net” domains. These will automatically resolve to the new IP addresses.
For Azure Service Bus, as a not-recommended alternative, the IP address can be found by executing a ping command against the fully qualified domain name of your specific namespace.
Azure Relay customers
For Azure Relay, configure your firewalls with the DNS names of all the Relay gateways, which can be found by running this script. The script resolves the fully qualified domain names of all the gateways to which you need to establish a connection.
You can also use the same script to get the IP addresses of those gateways.
What’s new in Windows Autopatch: February 2024
The start of the new year brings a great opportunity for positive change, including the release of new features in Windows Autopatch. We heard your feedback! Here are some improvements made in response to your enterprise needs.
Import Update rings for Windows 10 and later in preview
Update rings allow you to specify how and when Windows as a service updates your Windows 10 or Windows 11 device with feature and quality updates. Update rings are available for Windows 10 and later. And if you’re a Windows Autopatch customer, you can now bring existing Update rings for Windows 10 and later policies into Windows Autopatch Management. For additional information, see Configure Update rings for Windows 10 and later policy in Intune.
Importing existing rings allows you to take advantage of the many capabilities of Windows Autopatch without impacting your existing Windows update schedules. Imported rings will automatically register all targeted devices into Windows Autopatch without the need to redeploy or change your existing update rings. Additionally, imported rings will be reflected in the reporting and release experience.
Learn how to import update rings for Windows 10 and later. If needed, brush up on Windows client updates, channels, and tools.
Customer defined service outcomes in preview
Have you used Windows Autopatch reports to monitor the health and activity of your deployments? The insights from the reports can help you understand if your devices are maintaining update compliance targets.
Previously, deployment success measures were based on a static schedule of 21 days. This means that Windows Autopatch aims to keep at least 95% of eligible devices on the latest Windows quality update 21 days after release.
With this enhancement, the success of Windows Autopatch deployments will be based on your defined rings. We’ll also be introducing new columns in our release blade, as well as Windows quality and feature update reporting, to show the percentage complete for quality and feature updates. Devices that are up to date will remain in the “In Progress” status in reporting until you either get the current monthly cumulative update or an alert. If an alert is received, the status will change to “Not up to date.”
To learn more, read Service level objectives.
Improved data refresh speed and reporting accuracy
Windows Autopatch reporting provides rich insights into your patch compliance status, so you can make informed choices about protecting against defects and vulnerabilities.
This release is changing the refresh cycle for Windows Autopatch reporting. The refresh cycle refers to the amount of time from when a change is made to when it’s reflected in reporting and other UX components. This time will be reduced from every 24 hours to every 30 minutes. This improvement supports the many data streams that Windows Autopatch uses to provide current update status for all devices enrolled into Windows Autopatch.
To learn more, see Windows quality update reporting.
Take your next step with Windows Autopatch
We hope these enhancements will help you keep your devices secure and up to date with less hassle and more control. Get current and stay current with automation that leads to higher security and lower costs.
The ideas behind these releases originated from conversations, input, and requests from you, our customers. We’d love to hear your feedback and suggestions on how we can continue to make Windows Autopatch even better for you. You can share your thoughts and ideas with us on our feedback hub or by joining our community forum.
If you want to learn more about Windows Autopatch:
Visit our website.
Read our documentation.
Watch our guided demos.
If you want to try Windows Autopatch for yourself, sign up for a free trial or contact us for a demo.
Thank you for choosing Windows Autopatch and stay tuned for more updates and announcements.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Security review for Microsoft Edge version 121
We are pleased to announce the security review for Microsoft Edge, version 121!
We have reviewed the new settings in Microsoft Edge version 121 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 117 security baseline continues to be our recommended configuration which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 121 introduced 11 new computer settings and 11 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
Please continue to give us feedback through the Security Baselines Discussion site or this post.
APIs in Action: Unlocking the Potential of APIs in Today’s Digital Landscape
In today’s world, APIs (Application Programming Interfaces) are essential for connecting applications and services, driving digital innovation. But with the rise of hybrid and multi-cloud setups, effective API management becomes essential for ensuring security and efficiency. That’s where APIs in Action, a virtual event dedicated to unlocking the full potential of APIs, comes in.
Join us for a full-day virtual event focused on exploring API management for integration, hybrid and multi-cloud, and AI workloads. Learn from industry experts about the latest trends and best practices shaping the API landscape. Our immersive event delves deep into APIs and API management, highlighting innovative architectures that drive business growth. Our experts will guide you through transforming existing services and making your data easily accessible to developers, both internally and externally.
Whether you’re a seasoned professional or just starting out, APIs in Action equips you with the knowledge and tools to use APIs effectively in your hybrid and multi-cloud environment. Register now and join the conversation! Experience a day filled with insightful discussions, demos, and actionable insights that will empower you to navigate the evolving landscape of API management with confidence.
Session
Abstract
Speaker(s)
The role of API Management in Azure Integration Services
A successful integration platform developed with Azure Integration Services will have API Management at the heart of the solution. In this session, we will discuss some of the common scenarios where you will find API Management used.
Mike Stephenson
API management for microservices in a hybrid and multi-cloud world
Microservices are on the cusp of becoming the dominant style of software architecture. This hands-on demonstration will show how enterprises can make the transition to API-first architectures and microservices in a hybrid, multi-cloud world.
Tom Kerkhove
Leveraging API Management for OpenAI Applications/Use Azure API Management (APIM) to manage, secure, and scale your LLM-based applications
This session navigates the intersection of APIM and OpenAI technologies, discussing how APIM enhances the deployment, security, and scalability of OpenAI-powered applications. Attendees will learn about APIM basics, OpenAI’s capabilities, integration strategies, security challenges, and real-world applications.
Elena Neroslavskaya, Chris Ayers
Azure API Management from a developer perspective
As organizations adopt an API-first mindset, the need for good management of your APIs grows. This session will explain the benefits of Azure API Management (APIM) through the eyes of a developer. What’s in it for the developer, and how can Azure APIM help maximize the potential and security of your APIs?
Toon Vanhoutte
OpenAPI now vs. the future
Discover the essential role of OpenAPI in unlocking your API’s full potential and expanding your customer base. In this session, explore how OpenAPI is integral to the AI-driven future, providing crucial insights for staying ahead in the dynamic API landscape. Elevate your strategy and position your API for success by embracing OpenAPI.
Darrel Miller
API Design First with SwaggerHub and Azure API Management
Still designing in the dark ages with interface design documents and outdated documentation? Come see how SwaggerHub and Azure API Management can enable you to utilize the API Design First methodology to create live documentation that allows architects and stakeholders to design software together.
Joël Hébert
API DevEx
The developer experience for APIs can be difficult for new API developers and can add complexity to existing API projects due to new toolchains and evolving cloud services. In this session, we will demystify the API developer experience, leveraging tools like GitHub Copilot, Azure API Center, Azure API Management, and OpenAPI extensions.
Josh Garverick
Better API Governance with Azure API Center
An API catalog brings together the different roles involved in an API program and, by promoting collaboration between them, fosters API reuse, ensures compliance, and improves developer productivity. In this session we will explore what Azure API Center is and how to integrate it into your API design workflow.
Massimo Crippa
Leverage Postman to Collaboratively Test your APIs from design to deployment and beyond
Learn firsthand how to wield Postman effectively throughout the API Lifecycle, boosting your API implementation and fortifying security from the start with the right testing strategies.
Whether you’re in the business of creating or consuming APIs, discover how Postman and Azure API Management complement each other to enhance collaboration and streamline productivity.
Sandeep Murusupalli, Garrett London
Build a warp speed time-to-market API with DAB, APIM and Azure Container Apps
In this session we will delve into how Data API builder (DAB) enables swift and secure exposure of database objects through REST or GraphQL endpoints, allowing data access from any platform, language, or device. By combining DAB with Azure Container Apps and API Management, we will build and secure a serverless data API without writing a single line of code.
Massimo Crippa
Harnessing the Power of Azure API Management: Building Robust and Secure API
In this session, which combines theoretical knowledge with real-world scenarios, we will delve into the advanced features of Azure API Management, with a focus on building robust, secure, and scalable APIs. Attendees will learn about security best practices, policy management, and how to effectively use Azure’s tools to enhance API performance and security.
Hamida Rebai
Building a resilient API landscape with Azure API Management
Cloud service failures are inevitable. When building platforms, it is crucial to handle failures seamlessly and be resilient to them. Learn how Azure API Management helps you mitigate and recover from failures by using built-in load balancing and circuit-breaking capabilities.
Tom Kerkhove
Enhance your API security posture with Microsoft Defender for APIs
Azure Defender for APIs brings security insights and ML-based detections to APIs that are exposed via Azure API Management. In this session we will see how to leverage Defender for APIs to enhance your security posture, which kind of scenarios are covered, and our learnings from observing production workloads.
Massimo Crippa
Gain Understanding of APIs and Integrations with Azure Application Insights
Use Application Insights to create a correlated, end-to-end view of integrations across APIM, Logic Apps, and Functions. Learn how to record insights, including business data, then create queries to view the data and observe it through dashboards. With Workbooks we can create meaningful, insightful custom visuals that give support and business teams the insights they want.
Dave Phelps
GitOps for API-Management
In this talk, we will present our experience with a GitOps workflow for implementing and managing API-Management within an Integration Platform for an international corporation. We will describe how we automated infrastructure and deployment for the whole platform, addressing key aspects such as governance, permissions management, testing and documentation.
Christine Robinson, Maximiliane Ott
APIOps: Transforming Azure APIM Deployments with GitOps and DevOps Methodologies
This talk offers a deep dive into the principles and practices of automating and managing APIs in Azure API Management. Attendees will gain insights into how APIOps applies the concepts of GitOps and DevOps to API deployment. By using practices from these two methodologies, APIOps can enable everyone involved in the lifecycle of API design, development, and deployment with self-service and automated tools to ensure the quality of the specifications and APIs that they’re building.
Wael Kdouh
Sysmon v15.14
Azure SQL Managed Instance – Log Space Growth Alert using Azure Runbook/PowerShell
Introduction
There are scenarios wherein customers want to monitor their transaction log space usage. Azure Monitor can track Azure SQL Managed Instance metrics such as CPU, RAM, and IOPS, but there is no built-in alert for transaction log space usage.
This blog will guide you through setting up an Azure runbook that schedules the execution of DMVs to monitor transaction log space usage and take appropriate action.
Overview
Microsoft Azure SQL Managed Instance enables a subset of dynamic management views (DMVs) to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on.
Using DMVs, we can also track log growth: query the usage percentage, compare it to a threshold value, and raise an alert.
In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE permissions.
GRANT VIEW SERVER STATE TO database_user;
Monitor log space use by using sys.dm_db_log_space_usage. This DMV returns information about the amount of log space currently used and indicates when the transaction log needs truncation.
For information about the current log file size, its maximum size, and the auto grow option for the file, you can also use the size, max_size, and growth columns for that log file in sys.database_files.
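Before wiring the check into a runbook, the alerting rule itself is simple. The following Python sketch is illustrative only (the function name and the 70% default threshold are assumptions); the percentage would come from the used_log_space_in_percent column of sys.dm_db_log_space_usage:

```python
def check_log_space(used_percent, threshold=70.0):
    """Return an alert message when used log space crosses the threshold.

    used_percent corresponds to used_log_space_in_percent from
    sys.dm_db_log_space_usage; the 70% default threshold is illustrative.
    """
    if used_percent >= threshold:
        return f"Log space usage is above the threshold. Current usage: {used_percent:.0f}%."
    return None  # within acceptable limits

print(check_log_space(82.4))  # alert message
print(check_log_space(35.0))  # None
```

The runbook below implements this same comparison in PowerShell against the live DMV output.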
Solution
The PowerShell script below can be used inside an Azure runbook to alert users about log space usage so they can take the necessary actions.
# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process
$Threshold = 70 # Change this to your desired threshold percentage
try
{
    "Logging in to Azure..."
    Connect-AzAccount -Identity
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}
$ServerName = "tcp:xxx.xx.xxx.database.windows.net,3342"
$databaseName = "AdventureWorks2017"
$Cred = Get-AutomationPSCredential -Name "xxxx"
$Query = "USE [AdventureWorks2017];"
$Query = $Query + " "
$Query = $Query + "SELECT ROUND(used_log_space_in_percent, 0) AS used_log_space_in_percent FROM sys.dm_db_log_space_usage;"
$Output = Invoke-SqlCmd -ServerInstance $ServerName -Database $databaseName -Username $Cred.UserName -Password $Cred.GetNetworkCredential().Password -Query $Query
if ($Output.used_log_space_in_percent -ge $Threshold)
{
    # Raise an alert
    $alertMessage = "Log space usage on database $databaseName is above the threshold. Current usage: $($Output.used_log_space_in_percent)%."
    Write-Output "Alert: $alertMessage"
    # Send the alert via your preferred channel, e.g. call a Logic App to send email or run DBCC commands
} else {
    Write-Output "Log space usage is within acceptable limits."
}
If log space usage exceeds the threshold, you can use any of the following options to send an alert.
Alert Options
Send email using logic apps or SMTP – https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-smtp
Azure functions – https://learn.microsoft.com/en-us/samples/azure-samples/e2e-dotnetcore-function-sendemail/azure-net-core-function-to-send-email-through-smtp-for-office-365/
Run dbcc command to shrink log growth – https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/file-space-manage?view=azuresql-mi#ShrinkSize
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please contact the Data SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide
Azure Database for MySQL – Single Server retirement – Key updates and migration tooling available
Azure Database for MySQL – Single Server is scheduled for retirement by September 16, 2024.
As part of this retirement, we stopped support for creating new Single Server instances via the Azure portal as of January 16, 2023, and beginning March 19, 2024, we’ll no longer support creating new Single Server instances via the Azure CLI. Should you still need to create Single Server instances to meet your business continuity needs, please raise an Azure support ticket. Note that you’ll still be able to create read replicas and perform restores (PITR and geo-restore) for your existing Single Server instance until the sunset date, September 16, 2024.
If you currently have an Azure Database for MySQL – Single Server production server, we’re pleased to let you know that you can migrate your Azure Database for MySQL – Single Server instance to the Azure Database for MySQL – Flexible Server service free of charge by using one of the following migration tooling options.
Azure Database for MySQL Import CLI
You can leverage the Azure Database for MySQL Import CLI (General Availability) to migrate your Azure Database for MySQL – Single Server instances to Flexible Server using snapshot backup and restore technology with a single CLI command. Based on user inputs, this functionality will provision your target Flexible Server instance, take a backup of the source server, and then restore it to the target. It copies the following properties and files from the Single Server instance to the Flexible Server instance:
Data files
Server parameters
Compatible firewall rules
Server properties such as tier, version, SKU name, storage size, location, geo-redundant backups settings, public access settings, tags, auto grow settings and backup-retention days settings
Admin username and password
In-place auto-migration
In-place auto-migration (General Availability) from Azure Database for MySQL – Single Server to Flexible Server is an in-place upgrade during a planned maintenance window for select Single Server database workloads. If you have a Single Server workload based on the Basic or General Purpose SKU with <= 20 GiB of used storage and without complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself for auto-migration by submitting your server details using this form.
Azure Database Migration Service (DMS)
Azure Database Migration Service (DMS) (General Availability) is a fully managed service designed to enable seamless online and offline migration from Azure Database for MySQL – Single Server to Flexible Server. DMS supports cross-region, cross-version, cross-resource group, and cross-subscription migrations.
Conclusion
Take advantage of one of these options to migrate your Single Server instances to Flexible Server at no cost!
For more questions on Azure Database for MySQL Single Server retirement, see our Frequently Asked Questions.
Simplifying Azure Kubernetes Service Authentication Part 2
Welcome to the second installment of our multipart series on simplifying Azure Kubernetes Service (AKS) authentication. In this article, we delve deeper into the intricacies of AKS setup, focusing on critical aspects such as deploying demo applications, configuring Cert Manager for TLS certificates (enabling HTTPS), establishing a static IP address, creating a DNS label, and laying the groundwork for robust authentication. You can find the first part here: Part 1.
Let’s dive in!
Deploy two demo applications
In the previous post we set up our AKS cluster and configured NGINX. Now we will create and deploy two sample applications. You can follow the official documentation here: Create an unmanaged ingress controller.
First create the following two YAML files that define our two applications:
aks-helloworld-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
aks-helloworld-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
Then run the following commands to deploy the applications:
kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
Now let's check the pods, services, and deployments:
List the pods and verify the STATUS is Running for both applications
kubectl get pods -n ingress-basic
List the service and notice the CLUSTER-IP assigned to each service
kubectl get service -n ingress-basic
List the deployment and notice the READY state
kubectl get deployment -n ingress-basic
Create an ingress route
We will proceed to create a Kubernetes Ingress resource YAML file, enabling us to efficiently route traffic to each of our deployed applications. As a reminder, our ingress controller has been configured to utilize NGINX, as discussed in our previous post. Consequently, we will leverage the NGINX configuration to effectively manage traffic for the following services:
EXTERNAL_IP/hello-world-one to aks-helloworld-one
EXTERNAL_IP/hello-world-two to aks-helloworld-two
EXTERNAL_IP/static to aks-helloworld-one
First create the following YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
Then create the resource with the following command:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
You will need your public IP obtained from the last post. Now visit the deployed application in the web browser by navigating to:
PUBLICIP/hello-world-two or PUBLICIP/hello-world-one
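To see why /hello-world-one/greeting reaches the backend as /greeting, note that the rewrite-target annotation /$2 substitutes the second capture group of the path regex. This stand-alone Python sketch mimics that substitution for one of the paths above (it is an illustration of the regex behavior, not NGINX's actual implementation):

```python
import re

# Second capture group (.*) is what NGINX's rewrite-target "/$2" forwards.
PATTERN = re.compile(r"^/hello-world-one(/|$)(.*)")

def rewrite(path):
    """Return the upstream path the rewrite rule would produce,
    or the path unchanged when the rule does not match."""
    m = PATTERN.match(path)
    if not m:
        return path
    return "/" + m.group(2)  # rewrite-target: /$2

print(rewrite("/hello-world-one/greeting"))  # -> /greeting
print(rewrite("/hello-world-one"))           # -> /
```

The (/|$) alternation lets the rule match both /hello-world-one and /hello-world-one/anything while keeping the prefix out of the forwarded path.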
Upload cert manager images to your ACR
We will configure the certificate manager images by importing them into our Azure Container Registry (ACR) instance. Before executing the following commands, ensure that you include the -TargetTag <your tag name> flag. Although the Microsoft documentation for using Transport Layer Security (TLS) with an ingress controller on AKS does not explicitly require this flag, including it is advisable because it lets you specify the ACR repository names, such as jetstack/cert-manager-cainjector, jetstack/cert-manager-controller, and jetstack/cert-manager-webhook. For detailed steps, refer to the official documentation here: Use TLS with an ingress controller on Azure Kubernetes Service (AKS).
Enter the following commands in PowerShell to upload the cert manager images to your ACR:
$RegistryName = "<REGISTRY_NAME>"
$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName}).ResourceGroupName
$CertManagerRegistry = "quay.io"
$CertManagerTag = "v1.8.0"
$CertManagerImageController = "jetstack/cert-manager-controller"
$CertManagerImageWebhook = "jetstack/cert-manager-webhook"
$CertManagerImageCaInjector = "jetstack/cert-manager-cainjector"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageController}:${CertManagerTag}" -TargetTag "${CertManagerImageController}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}" -TargetTag "${CertManagerImageWebhook}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}" -TargetTag "${CertManagerImageCaInjector}:${CertManagerTag}"
Create a static IP address
When configuring the NGINX ingress controller, it is worth addressing the need for a static IP address for proper routing. During the NGINX setup in the previous post, a static IP address may already have been assigned, so there may be no immediate need to allocate a new one. However, to guarantee that a static IP address is used, you can assign a fresh one to the load balancer exposed by NGINX. This additional step does no harm, but it remains discretionary; depending on your deployment scenario, it may or may not be essential.
First get the resource group name of your AKS cluster:
(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
Then run the following command to create a static IP address:
(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
You should get an IP address. Keep a note of this IP.
Set the DNS label, static IP, and health probe using Helm
Create a DNS label name that will be used to generate a FQDN for navigating to your applications. This can be any name, but it must be unique. Additionally, add the static IP address obtained from above and set the health monitoring request path. Run the following command to configure the NGINX ingress controller:
$DnsLabel = "<DNS_LABEL>"
$Namespace = "ingress-basic"
$StaticIP = "<STATIC_IP>"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx `
  --namespace $Namespace `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
  --set controller.service.loadBalancerIP=$StaticIP `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
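Once the DNS label annotation is applied, Azure generates a fully qualified domain name for the public IP. A minimal Python sketch of the naming convention is shown below; the cloudapp.azure.com suffix is the convention for the public Azure cloud (verify the exact suffix for your cloud and region), and the label and region values are placeholders:

```python
def public_ip_fqdn(dns_label, region):
    # Azure's convention for DNS labels on public IPs in the public cloud
    # (assumed here; sovereign clouds use different suffixes).
    return f"{dns_label}.{region}.cloudapp.azure.com"

print(public_ip_fqdn("demo-aks-ingress", "eastus"))
# -> demo-aks-ingress.eastus.cloudapp.azure.com
```

This FQDN is what you will point certificate issuance and your browser at in the next part of the series.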
This marks the conclusion of the second installment in our series. In the upcoming segment, we will delve further into the setup process. Specifically, we’ll configure the certificate manager, update our ingress routes, establish passwords and secrets for authentication, and prepare for the configuration of our OAuth2 proxy. Stay tuned for the next part, where we continue our journey toward a robust and secure system.
Intune moving to support Android 10 and later for user-based management methods in October 2024
We’ve heard your feedback asking to understand the plan for Intune’s support for Android operating system (OS) versions.
In October 2024 (after Google’s expected release of Android 15), Intune will revise its operating system support statement to move to supporting only Android 10 and later for user-based management methods, which include:
Android Enterprise personally owned with a work profile.
Android Enterprise corporate owned work profile.
Android Enterprise fully managed.
Android Open Source Project (AOSP) user-based.
Android Device administrator.
App protection policies.
App configuration policies for managed apps.
The following aren’t impacted by this change:
Android Enterprise dedicated devices: Will continue to be supported on Android 8 or later.
AOSP user-less: Will continue to be supported on Android 8 or later.
Microsoft Teams certified Android devices: Will be supported on versions listed in Microsoft Teams certified Android device documentation.
Microsoft Teams certified Android devices
Teams Rooms certified systems and peripherals
We plan to gradually move to only supporting the four most recent Android versions for our user-based management methods to keep enrolled devices secure. As Google continues to release new Android versions annually, we’ll stop supporting one or two older versions every October until we support only the four most recent versions. After that, we’ll end support for one version annually in October to maintain our support statement for the four latest versions.
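The end-state cadence described above, supporting only the four most recent Android releases, can be modeled with a short illustrative sketch (the four-version window is the stated goal, not the current support statement, and the version numbers are examples):

```python
def supported_versions(latest, window=4):
    """Illustrative model of an N-most-recent-versions support policy.

    With a window of 4 and Android 15 as the latest release, versions
    15 down to 12 would be in support; everything older falls out.
    """
    return list(range(latest, latest - window, -1))

print(supported_versions(15))     # -> [15, 14, 13, 12]
print(supported_versions(15, 6))  # -> [15, 14, 13, 12, 11, 10]
```

The second call mirrors the October 2024 statement (Android 10 and later); each subsequent October the window shrinks toward four.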
Impact of ending support
For user-based management methods (as listed above), Android devices running Android 9 or earlier will no longer be supported. For devices on unsupported Android OS versions:
Intune technical support will no longer be provided.
Intune will no longer be making changes to address bugs or issues.
New and existing features are not guaranteed to work.
While Intune won’t prevent enrollment or management of devices on unsupported Android OS versions, functionality isn’t guaranteed, and use isn’t recommended.
How can you prepare?
Use Intune reporting to identify which devices or users might be affected:
For devices with mobile device management (MDM), go to Devices > All devices and filter by OS.
For devices with app protection policies, go to Apps > Monitor > App Protection status and use the Platform and Platform version columns to filter.
For devices with app configuration policies, go to Apps > Monitor > App Configuration status and use the Platform and Platform version columns to filter.
Warn users that they should update their Android version:
For devices with MDM, utilize a device compliance policy for Android Enterprise, Android AOSP, or Android device administrator and set the action for noncompliance to send an email or push notification to users before marking them noncompliant.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that warns users.
Block devices from accessing corporate resources until they update their Android version:
For devices with MDM, you can use either or both of these methods:
Set enrollment restrictions to prevent enrollment on devices running older versions.
Utilize a device compliance policy to make devices noncompliant if they are running older versions.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that blocks users from app access.
For more information, see Manage operating system versions with Intune. If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Join Teams for work or school meetings with personal account
We are improving the ways to join Teams meetings and have started to roll out an improvement enabling you to join a Teams meeting organized by a work or school user with your signed-in personal account. Read more on the Teams Insider blog and join Teams Insider to try this in Teams free on Windows 11 today!
Join Teams for work or school meeting with your personal account – Teams Insider
Intelligent App Chronicles: Azure API Management as an Enterprise API Gateway
The Intelligent App Chronicles for Healthcare is a webinar series designed to provide health and life sciences companies with a comprehensive guide to building intelligent healthcare applications.
The series will cover a wide range of topics including Azure Container Services, Azure AI Services, Azure Integration Services, and innovative solutions that can accelerate your intelligent app journey. By attending these webinars, you will learn how to leverage the power of intelligent systems to build scalable and secure healthcare solutions that can transform the way you deliver care. Our hosts will be Shelly (Finch) Avery | LinkedIn and Matthew Anderson | LinkedIn.
Our next session will be on Feb 20th at 9:00 PT / 10:00 MT / 11:00 CT / 12:00 ET – Click here to Register.
Overview:
Please join us for an informative session on using Azure API Management as an enterprise API gateway to create intelligent and secure healthcare applications.
Our speaker this week is Rob McKenna, Principal Technical Specialist for Azure Apps and Innovation, who will cover topics such as:
Benefits of a centralized and shared API gateway
The steps to get your enterprise teams started
Networking considerations for regulated industries
How to ensure the internal and external availability of your APIs
How to improve developer velocity, and how to use DevOps for API management and developer experience tooling
Don’t miss this opportunity to learn from the experts and take your healthcare applications to the next level. Register now for the Intelligent App Chronicles for Healthcare webinar series here!
Thanks for reading!
Please follow the aka.ms/HLSBlog for all this great content.
Thanks for reading, Shelly Avery | Email, LinkedIn
Hunting for QR Code AiTM Phishing and User Compromise
In the dynamic landscape of adversary-in-the-middle (AiTM) attacks, the Microsoft Defender Experts team has recently observed a notable trend – QR code-themed phishing campaigns. The attackers employ deceptive QR codes to manipulate users into accessing fraudulent websites or downloading harmful content.
These attacks exploit the trust and curiosity of users who scan QR codes without verifying their source or content. Attackers can create QR codes that redirect users to phishing sites that mimic legitimate ones, such as banks, social media platforms, or online services. The targeted user scans the QR code, subsequently being redirected to a phishing page. Following user authentication, attackers steal the user’s session token, enabling them to launch various malicious activities, including Business Email Compromise attacks and data exfiltration attempts. Alternatively, attackers can create QR codes that prompt users to download malware or spyware onto their devices. These attacks can result in identity theft, financial loss, data breach, or device compromise.
This blog explains the mechanics of QR code phishing, and details how Defender Experts hunt for these phishing campaigns. Additionally, it outlines the procedures in place to notify customers about the unfolding attack narrative and its potential ramifications.
Why is QR code phishing a critical threat?
The Defender Experts team has observed that QR code campaigns are often massive and large-scale in nature. Before launching these campaigns, attackers typically conduct reconnaissance attempts to gather information on targeted users. The campaigns are then sent to large groups of people within an organization, often exceeding 1,000 users, with varying parameters across subject, sender, and body of the emails.
The identity compromises and stolen session tokens resulting from these campaigns are proportional to their large scale. In recent months, Defender Experts have observed QR code campaigns growing from 10% to 30% of total phishing campaigns. Since the campaigns do not follow a template, it can be difficult to scope and evaluate the extent of compromise. It is crucial for organizations to be aware of this trend and take steps to protect their employees from falling victim to QR code phishing attacks.
Understanding the intent of QR code phishing attacks
The QR code phishing email can have one of the below intents:
Credential theft: The majority of these campaigns are designed to redirect the user to an AiTM phishing website for session token theft. The authentication method can be single-factor authentication, where only the user’s password is compromised and the sign-in attempts are unsuccessful; in these scenarios, the attacker signs in later with the compromised password and bypasses multifactor authentication (MFA) through MFA fatigue attacks. Alternatively, the user can be redirected to an AiTM phishing page where the credentials, MFA parameters, and session token are compromised in real time.
Malware distribution: In these scenarios, once the user scans the QR code, malware/spyware/adware is automatically downloaded on the mobile device.
Financial theft: These campaigns use QR codes to trick the user into making a fake payment or giving away their banking credentials. The user may scan the QR code and be taken to a bogus payment gateway or a fake bank website. The attacker can then access the user’s account later and bypass second-factor authentication by contacting the user via email or phone.
How Defender Experts approach QR code phishing
In QR code phishing attempts, the targeted user scans the QR code on their personal non-managed mobile device, which falls outside the scope of the Microsoft Defender protected environment. This is one of the key challenges for detection. In addition to detections based on Image Recognition or Optical Character Recognition, a novel approach was necessary to detect the QR code phishing attempts.
Defender Experts have researched identifying patterns across the QR code phishing campaigns and malicious sign-in attempts and devised the following detection approaches:
Pre-cursor events: User activities
Suspicious Senders
Suspicious Subject
Email Clustering
User Signals
Suspicious Sign-in attempts
1. Hunting for user behavior:
This is one of the primary detections that helps Defender Experts surface suspicious sign-in attempts from QR code phishing campaigns. Although the user scans the QR code from an email on their personal mobile device, in the majority of scenarios the phishing email being accessed is recorded with the MailItemsAccessed mailbox auditing action.
The majority of QR code campaigns have image (png/jpg/jpeg/gif) or document (pdf/doc/xls) attachments – yes, QR codes are embedded in Excel attachments too! The campaigns can also include a legitimate URL that redirects to a phishing page with a malicious QR code.
A malicious sign-in attempt with session token compromise that follows the QR code scan is always observed from non-trusted devices with medium/high risk score for the session.
This detection approach correlates a user accessing an email with image/document attachments and a risky sign-in attempt from non-trusted devices in close proximity, and validates whether the location from which the email item was accessed differs from the location of the sign-in attempt.
Advanced Hunting Query:
let successfulRiskySignIn = materialize(AADSignInEventsBeta
| where Timestamp > ago(1d)
| where isempty(DeviceTrustType)
| where IsManaged != 1
| where IsCompliant != 1
| where RiskLevelDuringSignIn in (50, 100)
| project Timestamp, ReportId, IPAddress, AccountUpn, AccountObjectId, SessionId, Country, State, City
);
let suspiciousSignInUsers = successfulRiskySignIn
| distinct AccountObjectId;
let suspiciousSignInIPs = successfulRiskySignIn
| distinct IPAddress;
let suspiciousSignInCities = successfulRiskySignIn
| distinct City;
CloudAppEvents
| where Timestamp > ago(1d)
| where ActionType == "MailItemsAccessed"
| where AccountObjectId in (suspiciousSignInUsers)
| where IPAddress !in (suspiciousSignInIPs)
| where City !in (suspiciousSignInCities)
| join kind=inner successfulRiskySignIn on AccountObjectId
| where AccountObjectId in (suspiciousSignInUsers)
| where (Timestamp - Timestamp1) between (-5min .. 5min)
| extend folders = RawEventData.Folders
| mv-expand folders
| extend items = folders.FolderItems
| mv-expand items
| extend InternetMessageId = tostring(items.InternetMessageId)
| project Timestamp, ReportId, IPAddress, InternetMessageId, AccountObjectId, SessionId, Country, State, City
2. Hunting for sender patterns:
The sender attributes play a key role in the detection of QR code campaigns. Since the campaigns are typically large in scale, 95% of them do not involve phishing emails from compromised trusted vendors; most are sent from newly created domains or domains that are non-prevalent in the organization.
Unlike typical phishing with simple URL clicks, the attack involves multiple user actions: scanning the QR code from a mobile device and completing the authentication. To drive these actions, the attackers induce a sense of urgency by impersonating IT support, HR support, payroll, or the administrator team, or by using a display name indicating the email is sent on behalf of a known high-value target in the organization (e.g., “Lara Scott on-behalf of CEO”).
In this detection approach, we correlate email from non-prevalent senders in the organization with impersonation intents.
Advanced Hunting Query:
let PhishingSenderDisplayNames = ()
{
pack_array("IT", "support", "Payroll", "HR", "admin", "2FA", "notification", "sign", "reminder", "consent", "workplace",
"administrator", "administration", "benefits", "employee", "update", "on behalf");
};
let suspiciousEmails = EmailEvents
| where Timestamp > ago(1d)
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailAttachmentInfo
| where Timestamp > ago(1d)
| where isempty(SenderObjectId)
| where FileType has_any ("png", "jpg", "jpeg", "bmp", "gif")
) on NetworkMessageId
| where SenderDisplayName has_any (PhishingSenderDisplayNames())
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId;
let suspiciousSenders = suspiciousEmails | distinct SenderFromDomain;
let prevalentSenders = materialize(EmailEvents
| where Timestamp between (ago(7d) .. ago(1d))
| where isnotempty(RecipientObjectId)
| where isnotempty(SenderFromAddress)
| where SenderFromDomain in (suspiciousSenders)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| distinct SenderFromDomain);
suspiciousEmails
| where SenderFromDomain !in (prevalentSenders)
| project Timestamp, Subject, FileName, SenderFromDomain, RecipientObjectId, NetworkMessageId
Correlating suspicious emails with image attachments from a new sender with risky sign-in attempts for the recipients can also surface the QR code phishing campaigns and user compromises.
3. Hunting for subject patterns:
In addition to impersonating IT and HR teams, attackers also craft the campaigns with actionable subjects. (e.g., MFA completion required, Digitally sign documents). The targeted user is requested to complete the highlighted action by scanning the QR code in the email and providing credentials and MFA token.
In most cases, these automated phishing campaigns also include a personalized element, where the user’s first name/last name/alias/email address is included in the subject. The email address of the targeted user is also embedded in the URL behind the QR code. This serves as a unique tracker for the attacker to identify emails successfully delivered and QR codes scanned.
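The embedded-address tracker described above can itself be hunted for. The sketch below is an illustrative assumption, not part of the original Defender Experts guidance: it joins delivered inbound emails with their extracted URLs and flags URLs that contain the recipient's address, either in plain text or base64-encoded.

```kusto
// Sketch: surface emails whose URLs embed the recipient's address as a tracker.
// Assumes the standard EmailEvents and EmailUrlInfo advanced hunting tables.
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| join kind=inner (EmailUrlInfo | where Timestamp > ago(1d)) on NetworkMessageId
| where Url contains RecipientEmailAddress
    or Url contains base64_encode_tostring(RecipientEmailAddress)
| project Timestamp, Subject, SenderFromAddress, RecipientEmailAddress, Url, NetworkMessageId
```

Note that attackers may also URL-encode or otherwise obfuscate the address, so a match here is a strong signal but its absence is not conclusive.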
In this detection, we track emails with suspicious keywords in subjects or personalized subjects. To detect personalized subjects, we track campaigns where the first three words or last three words of the subject are the same, but the other values are personalized/unique.
For example:
Alex, you have an undelivered voice message
Bob, you have an undelivered voice message
Charlie, you have an undelivered voice message
Your MFA update is pending, Alex
Your MFA update is pending, Bob
Your MFA update is pending, Charlie
Advanced Hunting Query:
Personalized campaigns based on the first few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstWord = tostring(words[0]), secondWord = tostring(words[1]), thirdWord = tostring(words[2]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstWord, secondWord, thirdWord
, SenderFromAddress
| where SubjectsCount >= 10
Personalized campaigns based on the last few keywords:
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| extend words = split(Subject, " ")
| project firstLastWord = tostring(words[-1]), secondLastWord = tostring(words[-2]), thirdLastWord = tostring(words[-3]), Subject, SenderFromAddress, RecipientEmailAddress, NetworkMessageId
| summarize SubjectsCount = dcount(Subject), RecipientsCount = dcount(RecipientEmailAddress), suspiciousEmails = make_set(NetworkMessageId, 10) by firstLastWord, secondLastWord, thirdLastWord
, SenderFromAddress
| where SubjectsCount >= 10
Campaign with suspicious keywords:
let PhishingKeywords = ()
{
pack_array("account", "alert", "bank", "billing", "card", "change", "confirmation",
"login", "password", "mfa", "authorize", "authenticate", "payment", "urgent", "verify", "blocked");
};
EmailEvents
| where Timestamp > ago(1d)
| where EmailDirection == "Inbound"
| where DeliveryAction == "Delivered"
| where isempty(SenderObjectId)
| where Subject has_any (PhishingKeywords())
4. Hunting for attachment name patterns:
Based on historical QR code campaign investigations, Defender Experts have identified that the attachment names in these campaigns are usually randomized by the attackers, meaning every email has a differently named QR code attachment with a high level of randomization. Emails with randomly named attachments from the same sender to multiple recipients, typically more than 50, can potentially indicate a QR code phishing campaign.
Campaign with randomly named attachments:
// Note: hasNonPrevalentSenders, emailStartTime, emailEndTime, and nonPrevalentSenders
// are defined by an upstream sender-prevalence scoping step (see the query in section 2).
EmailAttachmentInfo
| where hasNonPrevalentSenders
| where Timestamp between (emailStartTime .. emailEndTime)
| where SenderFromAddress in (nonPrevalentSenders)
| where FileType in ("png", "jpg", "jpeg", "gif", "svg")
| where isnotempty(FileName)
| extend firstFourFileName = substring(FileName, 0, 4)
| summarize RecipientsCount = dcount(RecipientEmailAddress), FirstFourFilesCount = dcount(firstFourFileName), suspiciousEmails = make_set(NetworkMessageId, 10) by SenderFromAddress
| where FirstFourFilesCount >= 10
5. Hunting for user signals/clusters
In order to craft effective large-scale QR code phishing attacks, attackers perform reconnaissance across social media to gather target user email addresses, their preferences, and much more. These campaigns are sent to 1,000+ users in the organization with luring subjects and contents based on their preferences. However, Defender Experts have observed that at least one user typically finds the campaign suspicious and reports the email, which generates this alert: “Email reported by user as malware or phish.”
This alert can be another starting point for hunting activity to identify the scope of the campaign and compromises. Since the campaigns are specifically crafted for each group of users, scoping based on sender/subject/filename might not be an effective approach. As a solution to this problem, Microsoft Defender for Office 365 offers a heuristic-based approach based on email content: emails with similar content that are likely to be from one attacker are clustered together, and the cluster ID is populated in the EmailClusterId field in the EmailEvents table.
The clusters can include all phishing attempts from the attackers so far against the organization; based on similarity, a cluster can aggregate emails with malicious URLs, attachments, and QR codes as one. Hence, this is a powerful approach to explore the persistent phishing techniques of the attacker and the repeatedly targeted users.
Below is a sample query on scoping a campaign from the email reported by the end user. The same scoping logic can be used on the previously discussed hunting hypotheses as well.
let suspiciousClusters = EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where NetworkMessageId in (<List of suspicious Network Message Ids from Alerts>)
| distinct EmailClusterId;
EmailEvents
| where Timestamp > ago(7d)
| where EmailDirection == "Inbound"
| where EmailClusterId in (suspiciousClusters)
| summarize make_set(Subject), make_set(SenderFromDomain), dcount(RecipientObjectId), dcount(SenderDisplayName) by EmailClusterId
6. Hunting for suspicious sign-in attempts:
In addition to detecting the campaigns, it is critical that we identify the compromised identities. To surface the identities compromised by AiTM, we can utilize the below approaches.
Risky sign-in attempt from a non-managed device
Any sign-in attempt from a non-managed, non-compliant, untrusted device should be taken into consideration, and a risk score for the sign-in attempt increases the anomalous nature of the activity. Monitoring these sign-in attempts can surface the identity compromises.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IsManaged != 1
| where IsCompliant != 1
//Filtering only for medium and high risk sign-in
| where RiskLevelDuringSignIn in (50, 100)
| where ClientAppUsed == "Browser"
| where isempty(DeviceTrustType)
| where isnotempty(State) or isnotempty(Country) or isnotempty(City)
| where isnotempty(IPAddress)
| where isnotempty(AccountObjectId)
| where isempty(DeviceName)
| where isempty(AadDeviceId)
| project Timestamp, IPAddress, AccountObjectId, ApplicationId, SessionId, RiskLevelDuringSignIn, BrowserId
Suspicious sign-in attributes
Sign-in attempts from untrusted devices with empty user agent, operating system or anomalous BrowserId can also be an indication of identity compromises from AiTM.
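The attributes above can be translated into a hunting sketch. The query below is an assumption-based illustration, not an official Defender Experts query; it reuses the AADSignInEventsBeta fields referenced earlier in this post, plus the UserAgent and OSPlatform columns from that table's schema.

```kusto
// Sketch: browser sign-ins from untrusted devices with missing user agent or OS details.
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where isempty(DeviceTrustType)
| where IsManaged != 1 and IsCompliant != 1
| where ClientAppUsed == "Browser"
| where isempty(UserAgent) or isempty(OSPlatform)
| project Timestamp, AccountObjectId, AccountUpn, IPAddress, SessionId, UserAgent, OSPlatform, BrowserId, RiskLevelDuringSignIn
```

Empty client telemetry alone is not proof of compromise; it is most useful when correlated with the risky sign-in and mailbox-access signals discussed earlier.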
Defender Experts also recommend monitoring the sign-ins from known malicious IP addresses. Although the mode of delivery of the phishing campaigns differ (QR code, HTML attachment, URL), the sign-in infrastructure often remains the same. Monitoring the sign-in patterns of compromised users, and continuously scoping the sign-in attempts based on the known patterns can also surface the identity compromises from AiTM.
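A minimal sketch of that infrastructure-tracking recommendation follows; the IP list is a placeholder (RFC 5737 documentation addresses) that you would populate from your own incident scoping, and the query itself is an illustrative assumption rather than a published Defender Experts query.

```kusto
// Sketch: sign-ins from known-bad AiTM infrastructure gathered in prior investigations.
let knownAitmIPs = dynamic(["203.0.113.10", "198.51.100.25"]); // placeholder example IPs
AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IPAddress in (knownAitmIPs)
| project Timestamp, AccountObjectId, AccountUpn, IPAddress, SessionId, RiskLevelDuringSignIn
```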
Mitigations
Apply these mitigations to reduce the impact of this threat:
Educate users about the risks of QR code phishing emails.
Implement Microsoft Defender for Endpoint – Mobile Threat Defense on mobile devices used to access enterprise assets.
Enable Conditional Access policies in Microsoft Entra, especially risk-based access policies. Conditional Access policies evaluate sign-in requests using additional identity-driven signals like user or group membership, IP address location information, and device status, among others, and are enforced for suspicious sign-ins. Organizations can protect themselves from attacks that leverage stolen credentials by enabling policies such as compliant device requirements, trusted IP address requirements, or risk-based policies with proper access control. If you are still evaluating Conditional Access, use security defaults as an initial baseline set of policies to improve identity security posture.
Implement continuous access evaluation.
Leverage Microsoft Edge to automatically identify and block malicious websites, including those used in this phishing campaign, and Microsoft Defender for Office 365 to detect and block malicious emails, links, and files.
Monitor suspicious or anomalous activities in Microsoft Entra ID Protection. Investigate sign-in attempts with suspicious characteristics (e.g., location, ISP, user agent, and use of anonymizer services).
Implement Microsoft Entra passwordless sign-in with FIDO2 security keys.
Turn on network protection in Microsoft Defender for Endpoint to block connections to malicious domains and IP addresses.
If you’re interested in learning more about our Defender Experts services, visit the following resources:
Microsoft Defender Experts for XDR web page
Microsoft Defender Experts for XDR docs page
Microsoft Defender Experts for Hunting web page
Microsoft Defender Experts for Hunting docs page
Microsoft Tech Community – Latest Blogs –Read More
Azure Data @ Microsoft Fabric Community Conference 2024 | Data Exposed Exclusive
In this Data Exposed Exclusive, join Anna Hoffman, Bob Ward, and Jason Himmelstein as they discuss everything you need to know about the upcoming Microsoft Fabric Community Conference!
Microsoft Fabric Community Conference registration: https://aka.ms/fabcon (Enter the code DATAEXPOSED100 for a $100 savings)
Enforcement of Defender CSPM for Premium DevOps Security Capabilities
Microsoft’s Defender for Cloud will begin enforcing the Defender Cloud Security Posture Management (DCSPM) plan check for premium DevOps security value beginning March 7th, 2024. If you have the Defender CSPM plan enabled on a cloud environment (Azure, AWS, GCP) within the same tenant your DevOps connectors are created in, you’ll continue to receive premium code to cloud DevOps capabilities at no additional cost. If you aren’t a Defender CSPM customer, you have until March 7th, 2024 to enable Defender CSPM before losing access to these security features. To enable Defender CSPM on a connected cloud environment before March 7, 2024, follow the enablement documentation outlined here.
Microsoft Defender CSPM provides advanced security posture capabilities including agentless vulnerability scanning, attack path analysis, integrated data-aware security posture, code to cloud contextualization, and an intelligent cloud security graph. Pricing is dependent on cloud size, with billing based on Server, Storage account, and Database counts. There is no additional charge for DevOps resources with this enforcement.
More Information
For more information about which DevOps security features are available across both the Foundational CSPM and Defender CSPM plans, see our documentation outlining feature availability.
For more information about DevOps Security in Defender for Cloud, see the overview documentation.
For more information on the code to cloud security capabilities in Defender CSPM, see how to protect your resources with Defender CSPM.
For more information on Defender CSPM pricing, see the pricing page.
Azure Verified Modules – Monthly Update Jan ’24
Azure Verified Modules: Monthly Update
Azure Verified Modules (AVM) is an initiative to consolidate and set the standard for what a good Infrastructure-as-Code module looks like. Spanning languages (Bicep, Terraform, etc.), AVM is a unified approach that provides a common code base and a toolkit for our customers, our partners, and Microsoft.
AVM is a community-driven aspiration, inside and outside of Microsoft. If you are not familiar with AVM yet, go check out this video on YouTube:
What Is This Series?
For Azure Verified Modules, we will be producing monthly updates in which we share the latest news and features of Azure Verified Modules, including:
Module updates
Updates to the AVM framework
Our community engagement
In some months we may focus on a highlighted module: a pattern or a workflow that the community (you!) would like to know more about from the module owner.
AVM Module Summary
The AVM team is excited that our community has been busy building AVM modules. As of January 31st, the AVM footprint looks like this:
Language | Published | In development
Bicep | 84 | 35
Terraform | 18 | 37
Bicep Resource Modules Published In January:
The full list of Bicep Resource Modules is available here: AVM Bicep Resource Index
analysis-services/server
app/container-app
cache/redis
compute/disk
compute/disk-encryption-set
compute/image
compute/proximity-placement-group
compute/virtual-machine
consumption/budget
container-registry/registry
container-service/managed-cluster
data-protection/backup-vault
databricks/access-connector
databricks/workspace
db-for-my-sql/flexible-server
health-bot/health-bot
net-app/net-app-account
network/ddos-protection-plan
network/firewall-policy
network/front-door
network/front-door-web-application-firewall-policy
network/local-network-gateway
network/nat-gateway
network/virtual-network-gateway
network/vpn-gateway
service-bus/namespace (Updates)
storage/storage-account
web/site
web/static-site
Terraform Resource Modules:
The full list of Terraform Resource Modules is available here: AVM Terraform Resource Index
authorization-roleassignment
network-azurefirewall
network-firewallpolicy
network-networkmanager
operationalinsights-workspace
Terraform Pattern Modules
The full list of Terraform Pattern Modules is available here: AVM Terraform Pattern Index
alz-management
network-virtualwan (update)
Updates and Improvements
We have also made some updates and improvements to the existing Azure Verified Modules, based on your feedback and suggestions. Some of the highlights are:
Bicep
Improved workflow optimization for module publishing to allow better IntelliSense when using the Visual Studio Code extension for Bicep.
Extended compliance tests to include AVM Bicep CI Framework files.
Automatic issue life-cycle management workflow (ref) that tracks the stability of a module and its owner
Improved pipeline handling & readability (ref)
Batch disable and enable GitHub Workflows in user forks (Bicep)
Terraform
Implemented GREPT workflow for Repository Linting and Governance (Link to Matt’s video)
OpenID Connect Integration for Terraform test validation.
MVP for Centralized Module testing framework in place, utilizing docker technologies for both local and GitHub Actions testing capabilities.
AVM General
Automated issue creation for tracking GitHub Teams alignment to specs required for AVM Modules
Further Resources