Always Encrypted with secure enclaves – Intel SGX vs VBS
Always Encrypted with secure enclaves is a feature of Azure SQL Database that allows you to protect sensitive data from unauthorized access, even from database administrators. Secure enclaves are protected regions of memory within the database engine process that can perform computations on encrypted data without revealing the plaintext. When processing SQL queries, the database engine delegates computations on encrypted data to a secure enclave. The code in the enclave decrypts the data and performs computations on the plaintext. This can be done safely because the enclave has strong isolation guarantees: it is a black box to the containing database engine process and the OS, so neither database administrators nor machine administrators can see the data inside the enclave.
By leveraging secure enclaves, Always Encrypted can support rich confidential queries, including pattern matching, range comparisons, sorting and more. It also enables in-place cryptographic operations, such as encrypting existing data or rotating the data encryption keys.
Azure SQL Database supports two types of secure enclaves: Intel SGX enclaves and VBS enclaves. In this blog post, we will compare these two options and help you choose the best one for your use case.
What are Intel SGX enclaves and VBS enclaves?
Intel Software Guard Extensions (Intel SGX) enclaves are a hardware-based trusted execution environment technology. Intel SGX protects data actively being used in the processor and memory by placing it in a trusted execution environment (TEE) called an enclave.
Virtualization-based Security (VBS) enclaves (also known as Virtual Secure Mode, or VSM, enclaves) are a software-based technology that relies on the Windows hypervisor and doesn't require any special hardware. The hypervisor creates a logical separation between the "normal world" and the "secure world", designated by Virtual Trust Levels VTL0 and VTL1, respectively. VBS secure memory enclaves provide a means for secure computation in an otherwise untrusted environment.
What are the advantages and disadvantages of Intel SGX and VBS enclaves?
The main advantage of Intel SGX enclaves is that they provide stronger security guarantees than VBS enclaves. Intel SGX enclaves are resistant to attacks from the host operating system.
The main disadvantage of Intel SGX enclaves is their limited availability. They require specific hardware (the DC-series), which is not supported in all Azure SQL Database service tiers and regions. Let us know if you need a region to be enabled where we currently do not support DC-series. Secondly, DC-series comes at an extra cost because of the specific hardware needed, and it is limited to a maximum of 40 physical cores.
The main advantage of VBS enclaves is their wider availability: because there is no hardware dependency, VBS enclaves can run on any Azure SQL Database service tier in any region, and they come at no extra cost.
The main disadvantage of VBS enclaves is that they provide weaker security guarantees than Intel SGX enclaves. VBS enclaves help protect your data from attacks inside the VM. However, they don’t provide any protection from attacks using privileged system accounts originating from the host.
Below is a summary comparison of Intel SGX and VBS enclaves:
| | Intel Software Guard eXtensions (SGX) | Virtualization-based security (VBS) |
| --- | --- | --- |
| Hardware | Available in DC-series hardware configuration | No hardware dependency |
| Purchasing model | vCore model | DTU and vCore |
| Compute mode | Provisioned | Provisioned and serverless |
| Compute size | Up to 40 (physical) vCores | Any (up to 128 vCores) |
| Regional availability | East/West US, North/West EU, Canada Central, UK South, Southeast Asia | All Azure regions |
| Security | Protection from rogue customer DBAs. Protection from attacks originating from both guest and host OS (rogue cloud operators, malware). Attestation using Microsoft Azure Attestation. | Protection from rogue customer DBAs. Protection from attacks originating from the guest OS (rogue cloud operators, malware), but not the host OS. No attestation currently supported. |
How to choose between Intel SGX and VBS enclaves?
The choice between Intel SGX enclaves and VBS enclaves depends on your security requirements. Think about who you want to protect your data from: do you want to protect it only from malicious insiders, or also from the host provider? If you need the highest level of security, use Intel SGX enclaves.
The table below can help you with that decision.
| Attacker | Attack method | Always Encrypted with Intel SGX enclaves | Always Encrypted with VBS enclaves |
| --- | --- | --- | --- |
| DBAs connecting over TDS | Querying encrypted columns without access to the encryption keys | Y | Y |
| VM (guest OS) administrators | Generating a memory dump of the SQL Server process or scanning its memory | Y | Y |
| Data center/host administrators | Generating a memory dump of the host server | Y | N |

(Y = the data is protected against the attack; N = it is not.)
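The protection matrix in the table can also be expressed as a small lookup, purely as an illustration (the attacker labels and the helper function are hypothetical names for this sketch, not part of any Microsoft API):

```python
# Protection matrix from the comparison table above: True = protected.
PROTECTION = {
    "dba_over_tds":   {"sgx": True, "vbs": True},
    "guest_os_admin": {"sgx": True, "vbs": True},
    "host_admin":     {"sgx": True, "vbs": False},
}

def enclave_options(threats):
    """Return the enclave types that protect against every listed threat."""
    return [e for e in ("sgx", "vbs")
            if all(PROTECTION[t][e] for t in threats)]

print(enclave_options(["dba_over_tds", "guest_os_admin"]))  # ['sgx', 'vbs']
print(enclave_options(["host_admin"]))                      # ['sgx']
```

As the last call shows, as soon as host administrators are in your threat model, Intel SGX is the only option.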
If needed, you can always switch the enclave type by changing the SLO of the database. In general, there are no changes needed in the application if you switch from VBS to Intel SGX or the other way around.
Conclusion
Unlike Intel SGX, VBS is a software-based solution with no hardware dependency. This allows us to bring the benefits of Always Encrypted with secure enclaves to all Azure SQL Database offerings, so that you can use the feature with a compute tier (provisioned or serverless), a purchasing model (vCore or DTU), a compute size (currently, up to 128 vCores), and a region that best matches your workload requirements. And, since VBS enclaves are available in existing hardware offerings, they come with no extra cost. It is important to note that Intel SGX enclaves remain a recommended option for customers who seek the strongest level of protection, including the isolation from host OS administrators, which VBS enclaves do not provide.
Learn more
Always Encrypted with secure enclaves documentation
Getting started using Always Encrypted with secure enclaves
GitHub Demo
Data Exposed episode (video)
Microsoft Tech Community – Latest Blogs –Read More
Microsoft’s commitment to Azure IoT
There was a recent erroneous system message on Feb 14th regarding the deprecation of Azure IoT Central. The error message stated that Azure IoT Central will be deprecated on March 31st, 2027 and starting April 1, 2024, you won’t be able to create new application resources. This message is not accurate and was presented in error.
Microsoft does not communicate product retirements using system messages. When we do announce Azure product retirements, we follow our standard Azure service notification process, including a notification period of 3 years before discontinuing support. We understand the importance of product retirement information for our customers' planning and operations. Learn more about this process here: 3-Year Notification Subset – Microsoft Lifecycle | Microsoft Learn
Our goal is to provide our customers with a comprehensive, secure, and scalable IoT platform. We want to empower our customers to build and manage IoT solutions that can adapt to any scenario, across any industry, and at any scale. We see our IoT product portfolio as a key part of the adaptive cloud approach.
The adaptive cloud approach can help customers accelerate their industrial transformation journey by scaling adoption of IoT technologies. It helps unify siloed teams, distributed sites, and sprawling systems into a single operations, security, application, and data model, enabling organizations to leverage cloud-native and AI technologies to work simultaneously across hybrid, edge, and IoT. Learn more about our adaptive cloud approach here: Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Our approach is exemplified in the public preview of Azure IoT Operations, which makes it easy for customers to onboard assets and devices to flow data from physical operations to the cloud to power insights and decision making. Azure IoT Operations is designed to simplify and accelerate the development and deployment of IoT solutions, while giving you more control over your IoT devices and data. Learn more about Azure IoT Operations here: https://azure.microsoft.com/products/iot-operations/
We will continue to collaborate with our partners and customers to transform their businesses with intelligent edge and cloud solutions, taking advantage of our full portfolio of Azure IoT products.
We appreciate your trust and loyalty and look forward to continuing to serve you with our IoT platform offerings.
Partner of the Year Awards – share how you make a difference
It’s Partner of the Year Award (POTYA) season – one of the most anticipated time periods of the year for Microsoft partners. Our team leading the POTYA Social Impact category is excited to be reading partner entries of changemaking innovations and technology delivery enabling positive societal impact around the world.
I encourage partners to take this unique opportunity to tell your story, and showcase your business leadership and commitment to purpose, with impactful customer engagements focused on enabling inclusion, sustainability, and community resilience.
POTYA Social Impact category
This category honors industry and technical leaders in the areas of community response, inclusion, and sustainability. Additional consideration will be given for submissions demonstrating solution/service market availability and scalability.
The Community Response POTYA recognizes a partner organization that is providing innovative and unique services or solutions based on Microsoft technologies, helping solve challenges faced by communities and making a significant social impact during unprecedented times. We will be recognizing the contributions of partners driving response and recovery to crises impacting communities around the world, highlighting solutions and services that are driving innovation and partnerships that protect fundamental rights, uplift, and create a positive impact on communities.
The Inclusion Changemaker POTYA recognizes a partner organization that excels at providing innovative and unique services or solutions based on Microsoft technologies that help customers solve challenges of diverse representation, economic access, digital inclusion, and/or accessibility. Inclusion changemakers drive digital transformation to help enable more inclusive economic growth. Technology can unlock innovations toward a more inclusive and equitable world, leading to greater innovations for everyone, including the 1+ billion people living with disabilities.
The Sustainability Changemaker POTYA recognizes a partner organization that excels at providing innovative and unique services or solutions based on Microsoft technologies that help customers solve challenges of sustainable digital transformation. Environmental stewardship has grown in strategic importance as a significant driver of organizational and business performance as well as innovation and market value. To help drive technological innovation and industry transformation toward a more sustainable and climate stable future, we look to solutions and services that help organizations understand their impact on the climate and deliver on sustainability commitments.
If your offers serve nonprofit customers, also consider the Nonprofit POTYA (in the ‘Industry’ category). The Nonprofit Partner of the Year Award recognizes a partner organization that excels at providing innovative services or cloud solutions based on Microsoft technologies that help nonprofits tackle the world’s biggest challenges and deliver on their missions. Successful entrants will demonstrate strong growth in revenue and/or marquee customer wins.
Call for nominations
To learn more on preparing a standout entry and how to submit your POTYA nomination, visit https://aka.ms/POTYA. The application deadline is 6:00 PM Pacific Time (PT), on April 3, 2024.
Need inspiration? Revisit 2023 POTYA Social Impact Category winners here.
We look forward to celebrating your leadership and impact.
What to do when the SQL Server service fails to start automatically at OS startup
Hello, this is the SQL Server support team.
In this post, we explain how to address cases where, at OS startup, the SQL Server service fails to start because it does not respond to the start or control request within the specified time.
Symptom
When the startup type of the SQL Server service is set to Automatic, the service is started automatically at OS startup.
If the service cannot start within the 30-second service startup timeout, errors such as the following are recorded in the system event log and startup fails.
Type: Error
Source: Service Control Manager
Event ID: 7009
Description:
A timeout (30000 milliseconds) was reached while waiting for the MSSQLSERVER service to connect.
Type: Error
Source: Service Control Manager
Event ID: 7000
Description:
"The MSSQLSERVER service failed to start due to the following error:
The service did not respond to the start or control request in a timely fashion."
Cause
Reasons the automatic startup of the SQL Server service may time out include the following.
1. High CPU or disk load immediately after OS startup
At OS startup, many services start at the same time, which puts the CPU and disks under heavy load.
In this state, the SQL Server service, which performs many disk reads during startup, is particularly susceptible to the load, so its startup can take long enough that automatic startup fails.
2. Delayed communication with the domain controller at service startup
If the SQL Server service account is a domain user and communication with the domain controller has not yet been established when the service starts, the service account cannot log on, the service cannot start, and automatic startup fails.
Resolution
In such cases, changing the startup type of the SQL Server service to "Automatic (Delayed Start)" allows the service to start while avoiding the period when CPU and disk load is concentrated at boot.
If the SQL Server service never fails to start automatically at OS startup, this change is unnecessary; if automatic startup does fail, apply it as a countermeasure and confirm whether the situation improves.
Steps
1. In [Run], enter services.msc to open the Services window.
2. Right-click the "SQL Server (MSSQLSERVER)" service and select [Properties].
Note: MSSQLSERVER is the name for the default instance. Select the service for the instance you actually want to configure.
3. Under "Startup type", select [Automatic (Delayed Start)] and click [OK].
Note: By default, the SQL Server service has a dependency relationship with the SQL Server Agent service, so change the SQL Server Agent service to "Automatic (Delayed Start)" as well. Also, if any other service fails to start because it depends on the SQL Server or SQL Server Agent service, change that service to "Automatic (Delayed Start)" too.
With this setting, the target service starts two minutes after the services that start automatically at OS boot, so failures to start within the specified time can be expected to improve.
Note that an application that uses SQL Server without a configured dependency on the SQL Server service may encounter connection errors after an OS restart, because it now takes longer for the SQL Server service to become available.
Just in case, also check the startup timing, after OS boot, of applications that use SQL Server.
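On the application side, a simple retry loop around the initial connection can absorb the window in which SQL Server is not yet accepting connections after a restart. This is a minimal sketch under stated assumptions: `connect` stands in for whatever connect call your driver provides (for example, `pyodbc.connect`), and the retry counts are illustrative only.

```python
import time

def connect_with_retry(connect, retries=5, delay=5.0):
    """Call connect(), retrying with a fixed delay while SQL Server
    is still starting up after an OS restart."""
    for attempt in range(1, retries + 1):
        try:
            return connect()
        except Exception:
            if attempt == retries:
                raise  # give up after the last attempt
            time.sleep(delay)
```

With the two-minute delayed start described above, something like `retries=5, delay=30.0` would cover the full window.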
Note: In environments where the SQL Server service does not fail to start at OS startup, the above countermeasure is unnecessary.
Note: Starting with SQL Server 2022, even when [Start Mode] is displayed as [Automatic], the service actually starts in [Automatic (Delayed Start)] mode.
Start, stop, pause, resume, and restart SQL Server services - SQL Server | Microsoft Learn
Note: This article is a partially revised repost of a blog article published on MSDN/TechNet in 2020.
Encryption and Ledger in Azure SQL Database | Data Exposed
In this episode of Data Exposed, learn about the recent Azure SQL security innovations with Anna Hoffman and Pieter Vanhove.
Resources:
TDE with database-level CMK now generally available for Azure SQL Database – Microsoft Community Hub
SQL Server Management Studio improvements for Always Encrypted – Microsoft Community Hub
Ledger in Azure SQL Managed Instance now generally available – Microsoft Community Hub
View/share our latest episodes on Microsoft Learn and YouTube!
Windows containers in Kubernetes: Automating nodepool management with Calico’s Windows HPC Support
Hello, we would like to feature our partners from Tigera, the company behind Project Calico, with whom we teamed up to co-author this blog on Host Process Containers with Calico. The co-authors are Dhiraj Sehgal and Reza Ramezanpour.
As the landscape of containerized applications evolves, enterprises are increasingly integrating Windows containers into their Kubernetes workflows.
These days, with the help of cloud services such as Microsoft Azure Kubernetes Service, anyone can build and operate a Kubernetes environment with ease. However, a lot of fine-tuning and automation goes on in the background to prepare a production-ready environment. For example, networking is a huge part of the cloud-native environment, and all aspects of your business in the cloud depend on it.
Project Calico is a networking and security solution for bare metal and the cloud that offers great flexibility for such environments. In this blog, we will focus on how the new release of Calico leverages a new feature of Windows containers, Host Process Containers (HPC), to optimize the footprint in your cloud environment. On top of that, we will look at how HPC support makes the life of DevOps administrators easier by offering more control over the host machine in a Windows environment.
The challenge of manual nodepool management
One of the biggest challenges is managing Kubernetes clusters in an unmanaged or on-premises deployment. In a cloud environment like AKS (Azure Kubernetes Service), the cloud provider takes care of many aspects of managing your Kubernetes cluster, making it a seamless and hassle-free experience. However, in a customized environment where you control the node pools, the responsibility of managing and configuring the cluster falls on your shoulders. This can be daunting, especially if you are new to Kubernetes or have limited experience with infrastructure management.
Managing Windows nodepools in such environments can be more challenging than managing Linux ones. On Linux, privileged containers can configure host settings and integrate naturally with Kubernetes; Windows containers previously lacked this capability, requiring administrators to use scripts or manual configuration steps outside of Kubernetes. This can be time-consuming and error-prone, especially when scaling your cluster quickly. Additionally, manual nodepool management can be disruptive to application lifecycles.
An HPC is similar to a privileged container in Linux: just like privileged containers, HPC containers can access and modify the host operating system. Silos are similar to namespaces in Linux, allowing processes to run in an isolated environment. This blog post highlights how Windows HPC is used for Calico and what its benefits are.
Calico’s Windows Host Process Containers
Calico’s Windows HPC support, released in Calico Open Source 3.27, automates CNI installation and brings Calico’s capabilities to Windows nodepools. This means that Kubernetes administrators can easily install Calico in their environment without having to manually install and configure it on each node, just as with Linux-based containers.
Calico’s support for the Windows HPC feature works by running Calico as an HPC on each node. HPCs are a special type of container with access to the host’s filesystem, which allows Calico to install and configure itself on each node without manual intervention from the Kubernetes administrator.
Benefits of automating nodepool management
Automating node pool management with Calico’s support for Windows HPC feature provides a number of benefits for Kubernetes administrators, including:
Reduced operational overhead: Automating nodepool management eliminates the need for Kubernetes administrators to manually install and configure Calico on each node. This frees up their time to focus on other tasks, such as managing Windows container-based applications.
Improved application performance and reliability: By automating node pool management, Kubernetes administrators can reduce the risk of disruptions to application lifecycles. This is because Calico can be installed and configured on new nodes without requiring any downtime for existing applications.
Increased agility and responsiveness to changing business needs: Automating node pool management makes it easier for Kubernetes administrators to scale their clusters up or down as needed. This can help businesses to respond more quickly to changing customer demand and other business needs.
Consistency between Windows and Linux GitOps practices.
How to enable Calico using Windows Host Process container support
For this part, we are going to assume that you have a hybrid Kubernetes cluster in your environment that supports HPC.
HPC support requires Kubernetes 1.22 or later and containerd 1.6 or later. If you would like to know more about these requirements, click here.
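As a quick sanity check, the version requirements above can be verified per node with a small helper like the following (an illustrative sketch; the function name is ours, and the version strings are assumed to be in the formats `kubectl get nodes -o wide` reports, e.g. `v1.27.3` and `containerd://1.6.21`):

```python
def hpc_supported(k8s_version: str, containerd_version: str) -> bool:
    """Check the documented minimums for Windows HostProcess containers:
    Kubernetes 1.22+ and containerd 1.6+."""
    def major_minor(v: str) -> tuple:
        v = v.split("://")[-1].lstrip("v")  # strip 'containerd://' and 'v' prefixes
        parts = v.split(".")
        return (int(parts[0]), int(parts[1]))
    return (major_minor(k8s_version) >= (1, 22)
            and major_minor(containerd_version) >= (1, 6))

print(hpc_supported("v1.27.3", "containerd://1.6.21"))  # True
print(hpc_supported("v1.21.9", "containerd://1.6.21"))  # False
```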
When your cluster is up and running, install the latest Tigera operator. The manifest URL follows the Calico release layout; adjust the version tag (shown here as v3.27.0) to your target release:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
Use the following installation resource to install Calico for your Windows environment using the HPC feature:
kubectl create -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    windowsDataplane: HNS
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16
      encapsulation: VXLAN
      natOutgoing: Enabled
      nodeSelector: all()
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
EOF
In environments where Calico is used for IP address management, you need to disable IP address sharing by using the following command:
kubectl patch ipamconfigurations default --type merge --patch='{"spec": {"strictAffinity": true}}'
Conclusion
To sum up, Windows nodes in non-cloud-provider environments used to be hard to install and configure because they did not have privileged containers. However, with HPC now generally available in Kubernetes, users can create containers that automate the configuration of their nodes by accessing the host filesystem.
Calico has leveraged this technology to provide a Kubernetes-native way to install and manage networking in your cluster.
This means that the management of Windows nodes in a Kubernetes cluster is now fully automated, eliminating the need for administrators to manually configure nodes or containers.
Overall, the adoption of HPC in Kubernetes has transformed the way CNI solutions are installed and managed on Windows nodes, providing a more streamlined and automated approach that enhances the scalability, reliability, and ease of use of Kubernetes clusters.
Please look out for a coming blog covering Zero Trust with Tigera Calico.
Final Reminder: Outlook REST API v2.0 and beta endpoints decommissioning
As we work to ensure better security, reliability, and performance for our customers, and as we announced in our previous blog post in September 2023, we are decommissioning the Outlook REST v2.0 and beta endpoints starting March 31, 2024. After this date, we will start progressively shutting off the endpoints until they become completely unavailable.
This means that any application that is still using these endpoints will stop working at some point after March 31, 2024 (except for Outlook Add-Ins as also communicated before). We strongly recommend that you migrate your applications to the Microsoft Graph API as soon as possible to avoid any disruption. Please refer to https://aka.ms/FromOutlookRestToGraph for guidance.
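For many mailbox operations, the bulk of the migration amounts to changing the base URL, since resource paths such as /me/messages are largely shared between the two APIs. The sketch below is a rough illustration only (endpoint-specific differences exist, so consult the migration guidance linked above for each call):

```python
OUTLOOK_BASE = "https://outlook.office.com/api/v2.0"
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def to_graph_url(outlook_url: str) -> str:
    """Rewrite an Outlook REST v2.0 URL into its Microsoft Graph counterpart."""
    if not outlook_url.startswith(OUTLOOK_BASE):
        raise ValueError("not an Outlook REST v2.0 URL")
    return GRAPH_BASE + outlook_url[len(OUTLOOK_BASE):]

print(to_graph_url("https://outlook.office.com/api/v2.0/me/messages"))
# https://graph.microsoft.com/v1.0/me/messages
```

Authentication also changes: Graph calls require tokens issued for the Microsoft Graph resource, not for outlook.office.com.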
We continue to track the use of these endpoints and will inform the affected tenants through a Message Center post before we fully disable the endpoints. However, we urge you to migrate your applications as soon as possible.
The Microsoft 365 Team
Microsoft Learn AI Skills Challenge Pitch Winner: Watch Out
The Microsoft Learn AI Cloud Skills Challenge held in July wrapped up an incredible learning journey with the AI Pitch Challenge: a showcase of innovation where passionate learners brought their visions to life through the power of AI. These creators shared how they would harness Microsoft’s AI technology to craft solutions for the future in a 3-minute video pitch. Out of many entries, five outstanding winners emerged, each with a unique and compelling vision.
This series of blog posts spotlights each creator sharing the transformative potential of their ideas.
Hello! I’m Ahmet Dedeler, a 16-year-old high school junior from Turkey, and I’m eager to share with you not just my latest project, “Watch Out,” but also my journey in the tech world. My adventure began with a simple curiosity about coding. Python and JavaScript were my initial gateways, but they quickly became much more than just programming languages. They were the tools that helped me understand the power of technology in solving real-world issues.
From Hackathons to Hosting One
My enthusiasm for coding swiftly led me to the world of hackathons. These weren’t just competitions; they were platforms where I could test my skills, innovate, and learn from peers. Winning a bunch of hackathons was a thrilling experience, each victory not just an achievement but a stepping stone to something greater.
This journey through numerous hackathons sparked an idea – why not host my own? Thus, “Boost Hacks” was born. It was a leap from participant to organizer, from learner to leader. The event was a massive success, with 800 participants, 85 innovative projects, and a staggering $180,000 in prizes. This wasn’t just about organizing an event; it was about creating a space for like-minded individuals to collaborate, innovate, and push the boundaries of technology.
Unveiling “Watch Out”: A Vision for Safer Communities
“Watch Out” is born from a desire to enhance community safety through the power of AI. It’s an AI-driven system that uses Computer Vision to detect and alert people about potential safety hazards in their surroundings – from fallen trees to damaged sidewalks.
How “Watch Out” Works
The system operates by analyzing live street footage, continuously scanning for anomalies or potential dangers. When it detects a hazard, it immediately notifies local authorities and emergency services, ensuring quick action and a safer environment for everyone.
The Inspiration Behind the Project
The idea for “Watch Out” came from observing everyday community challenges. I wanted to create a solution that not only leverages technology but also actively involves the community in promoting safety.
The Tech Behind the Vision
Developing “Watch Out” involved several Microsoft AI technologies. The core of the project is Microsoft’s Custom Vision, a tool that enabled me to train an AI model to recognize various safety hazards with high precision.
Favorite Microsoft AI Technology
Among all the technologies I explored, Microsoft’s Custom Vision stood out. Its user-friendly interface and powerful capabilities made it not just a tool for development, but a learning experience that was both challenging and rewarding.
Looking Ahead: My Future Vision and Aspirations
Looking towards the future, my goal is to blend my coding skills with my enthusiasm for meaningful projects. “Watch Out” is a stepping stone into a world where technology serves humanity. I am excited about refining this project and exploring new technological frontiers. My aspiration is to create solutions that leave a lasting, positive impact on society.
Join me in this journey of innovation and discovery, where we’re not just coding for the sake of technology, but for building a smarter, safer, and more connected world. My story is one of a young mind’s passion for technology and a heart for community service, and I believe this is just the beginning.
Feeling inspired? The Microsoft Learn AI Skills Challenge may have ended but the learning never stops! Get started with an AI Learning Path and find a new Microsoft Learn Cloud Skills Challenge to join. Transform your innovative ideas into reality with Azure credits through the Founders Hub. And for the students who dream of making an impact, the Imagine Cup is currently underway!
Benefits of moving to Azure Monitor SCOM managed instance
In this blog, let’s highlight the cost benefits of moving from your existing on-premises SCOM deployment to Azure Monitor SCOM managed instance (SCOM MI).
If you are using System Center Operations Manager (SCOM) to monitor on-premises and hybrid cloud environments, you might be wondering whether you should migrate to SCOM MI or keep your SCOM on-premises deployment. We will compare the two options in terms of cost benefits (up to 44% savings when fully migrated to SCOM MI) and help you make an informed decision based on your specific needs and goals.
What is Azure Monitor SCOM managed instance?
Azure Monitor SCOM managed instance is a cloud-based service that provides the same functionality as SCOM on-premises, but without the hassle of managing and maintaining the infrastructure. You can use SCOM MI to monitor your resources on and off Azure, as well as integrate with other Azure services such as Log Analytics, Azure Managed Grafana, and Power BI. SCOM MI is fully compatible with your existing SCOM management packs and agents*, so you can migrate your existing monitoring configuration and data with minimal disruption.
What are the cost benefits of Azure Monitor SCOM managed instance?
Azure Monitor SCOM MI offers several cost benefits over SCOM on-premises, such as:
Reduced infrastructure & maintenance costs: You don’t need to worry about maintaining infrastructure such as server racks, network cables, electricity, cooling, physical security, or a datacenter lease. Moreover, hardware infrastructure is a depreciating asset. SCOM MI runs on Azure’s scalable and reliable infrastructure, which means you only pay for what you use, and you don’t have to worry about downtime or performance issues.
You can save additionally on Azure Infrastructure with savings and reserved plans.
Reduced IT labor costs: SCOM MI is fully managed by Microsoft, which means you get updates, patches, scalability, and security out of the box. Since you don’t need to retrain your staff on SCOM management packs, and the effort required to provision, patch, and scale the SCOM MI service is significantly lower, we estimate a ~40% reduction in the time (labor cost) required to maintain and operate SCOM MI.
Optimized licensing costs: You don’t need to purchase, renew, or manage any licenses for your monitoring solution. SCOM MI is offered as a PAYG model, which means you only pay a monthly fee based on the number of monitored objects and the amount of data ingested. You also get access to all the features and capabilities of Azure Monitor, which can enhance your monitoring experience and provide additional insights and value.
For more information on SCOM MI licensing, refer here.
To illustrate the cost benefits of SCOM MI, we have created a comparison table of the estimated annual costs for a typical scenario of monitoring 500 VMs. The table does not include optional SCOM MI integrations, i.e., data ingestion into Log Analytics or usage of Grafana.
Disclaimer: The table below includes representative numbers only. For accurate Azure costs, refer to Pricing Calculator | Microsoft Azure. We also assume that the migration from SCOM to SCOM MI is completed quickly (<3 months), not run as a long-term migration project.
| Cost category | SCOM on-premises | Azure Monitor SCOM managed instance |
| --- | --- | --- |
| Infrastructure (hardware + software) | To monitor 500 VMs, you need 2 SCOM servers with Windows OS, 1 SQL Server with Windows OS, server racks, storage disks, etc.: $13,812 (annually) | $27,780 (no discount); $12,586 (max discount) |
| Maintenance (security, lease, electricity, network, etc.) | $4,443 (annually) | $0 (included under infrastructure cost) |
| IT labor (administration) | $116,800 (annually) | $70,080 (annually) |
| Licensing | A System Center license to manage 500 VMs is $75,747; if you use all SC products, the operating license cost attributable to SCOM is at least $12,625. $12,625 (if all SC products used); $75,747 (if only SCOM used) | SCOM MI is licensed at $6/VM/month: $36,000 (annually) |
| Annual cost range | $147,680 to $210,802 | $118,666 to $133,860 |

Cost savings once you move to SCOM MI to monitor 500 VMs (note: the percentages follow from the annual cost ranges above):
- 20% if all SC products are used and maximum Azure discounts are applied
- 36% if only SCOM on-premises is used and no Azure discounts are applied
- 44% if only SCOM on-premises is used and maximum Azure discounts are applied
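The savings percentages can be reproduced from the annual figures in the table; this is a quick back-of-the-envelope check, not an official pricing tool:

```python
# Annual totals from the table (500 VMs).
onprem_all_sc    = 13_812 + 4_443 + 116_800 + 12_625  # all SC products: 147,680
onprem_scom_only = 13_812 + 4_443 + 116_800 + 75_747  # SCOM only: 210,802
mi_no_discount   = 27_780 + 70_080 + 36_000           # 133,860
mi_max_discount  = 12_586 + 70_080 + 36_000           # 118,666

def savings(onprem, mi):
    """Percentage saved by moving from SCOM on-premises to SCOM MI."""
    return round(100 * (onprem - mi) / onprem)

print(savings(onprem_all_sc, mi_max_discount))     # 20
print(savings(onprem_scom_only, mi_no_discount))   # 36
print(savings(onprem_scom_only, mi_max_discount))  # 44
```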
As you can see, Azure Monitor SCOM managed instance can save you up to 44% of the total costs of SCOM on-premises, considering you migrate to SCOM MI quickly. Of course, your actual costs may vary depending on your specific requirements and preferences, but the table gives you a general idea of the potential savings you can achieve by migrating to Azure Monitor SCOM managed instance. If you are interested in moving other System Center products to Azure and want to know the cost analysis, we recommend you build a Business case with Azure Migrate | Microsoft Learn.
How to get started with Azure Monitor SCOM managed instance?
If you are interested in trying out Azure Monitor SCOM managed instance, you can start here. You should talk to your Microsoft sales representative for clarity on applicable discounts and actual cost savings.
If you have any questions or feedback, you can leave your comments below. We would love to hear from you and help you with your monitoring needs.
*SCOM 2022 Agent (as of Feb’24).
References
Pricing Calculator | Microsoft Azure
Microsoft System Center | Microsoft Licensing Resources
Microsoft Tech Community – Latest Blogs –Read More
Drive customer engagement with the power of AI
According to a recent IDC study commissioned by Microsoft, “For every $1 a company invests in AI, it is realizing an average return of $3.5X.” Because organizations realize a return on their AI investments within 14 months, customers are highly motivated to find partners with the necessary knowledge and skill set to deploy AI solutions today.
The Microsoft AI Partner Training Roadshow is a single-day, in-person event focused on driving customer engagement with the power of AI. The roadshow provides an exceptional opportunity to engage with Microsoft experts, hear about the latest trends in AI from Microsoft executives, and participate in technical or sales training.
Attend one of the six roadshow events
The Microsoft AI Partner Training Roadshow is scheduled in just six cities across the globe, so there are only a few opportunities for in-depth learning on Microsoft generative and responsible AI technologies, cloud-scale data, and modern application development platforms, including Azure AI services and Microsoft Copilot.
The first event will be on March 1, 2024, in Hyderabad, India, followed by a second event in Bengaluru, India, on March 19. You don’t want to miss this opportunity. Register for an event near you.
Acquire generative and responsible AI knowledge from Microsoft experts
In a recent blog, Judson Althoff outlined four major opportunities where organizations can empower AI transformation:
Enriching employee experience
Reinventing customer engagement
Reshaping business processes
Bending the curve on innovation
Microsoft is focused on developing responsible AI strategies grounded in pragmatic innovation and enabling AI transformation to meet our customers’ needs. The Microsoft AI Partner Training Roadshow provides expert-led sessions and hands-on experiences to enhance your sales, pre-sales, and technical deployment capabilities across these impact areas.
Prepare technical and sales teams for AI success
Open to our Global Systems Integrator (GSI) and System Integrator (SI) partners, the Microsoft AI Partner Training Roadshow offers learning across multiple skill levels and interests. Alongside a keynote address by a Microsoft leader, there are four distinct learning paths for individuals with technical or sales backgrounds:
Sales Excellence with Microsoft AI Services: Master skills to confidently pitch Microsoft AI solutions by diving into solution use cases, exploring responsible AI commitments, and highlighting incentives to increase customer business value.
Technical Excellence with Azure AI: Build your own “Intelligent Agent” copilot to answer customer questions on products and services: Learn to build an “Intelligent Agent” that helps users find products, user profiles, and sales order information. This interactive experience features theoretical and lab sessions that prepare your technical teams to use Azure OpenAI and Azure AI Search.
Technical Excellence with Azure AI: Build a scalable data estate with a custom copilot for conversational data interaction: In this hands-on track, learn how to create a payments and transactions solution. Key subjects explored include business rules for data governance, patch operations for data replication, and customizing copilots for conversational AI.
Technical Excellence with Microsoft 365: Deep dive into the use and deployment of Copilot for Microsoft 365: Gain a fuller understanding of Copilot for Microsoft 365 with technical sessions on architecture, deployment, security, and compliance.
Bridge skill gaps in AI
Because AI is rapidly developing, there is a growing skills gap as employees work to keep up. In fact, 52% of participants of this IDC survey report that the lack of skilled workers is their biggest barrier to implementing and scaling AI. Much of the challenge isn’t simply adopting technology but also providing ample opportunities for employees to explore and learn.
To bridge this divide, the Microsoft AI Partner Training Roadshow is committed to providing up-to-date content for participants to study during and after the event. In addition to live keynote addresses and Q&A sessions, participants will have the chance to interact with and learn from technical and sales subject matter experts on topics spanning generative and responsible AI technologies, cloud-scale data, modern application development platforms, Azure AI services, and Microsoft Copilot.
Prepare for the future
2023 introduced the world to the power of generative AI. Businesses are ready to deploy AI-based solutions as quickly as possible. The Microsoft AI Partner Training Roadshow places developers, solution architects, implementation consultants, and sales & pre-sales consultants at the forefront of AI transformation.
Because there will be no on-demand delivery after the event, we invite you to join us in Hyderabad, Bengaluru, or whichever of the other four cities across the globe is most convenient for you.
Visit the Microsoft AI Partnership Roadshow website and register today to get started.
IP address changes for Azure Service Bus and IP/DNS Changes for Azure Relay
What is Changing?
The infrastructure layer of Azure Relay and Service Bus is being upgraded, which will cause the IP addresses used by customer namespaces to change. For Azure Relay, the gateway DNS names are also changing.
These changes are being made as part of our continuous improvements to our platform. As previously communicated for Azure Service Bus and Azure Relay, the IP addresses of our services can change and should not be considered static. There is no added charge for this, nor are there any service interruptions during the migration.
Call to Action
If you are using IP addresses in your egress firewalls to your Azure Relay or Azure Service Bus namespaces, you will need to update them to use the namespace DNS names instead.
Alternative (not recommended!)
As a final alternative, it is possible to use the new IP addresses. We highly recommend against this, as you will need to keep track of any IP address changes yourself, and your service may be interrupted.
Azure Service Bus customers
If you are using Azure Service Bus premium, we recommend using service tags, as per our recommendations described in the service documentation. Service tags will automatically be updated if anything changes in our infrastructure.
If you are on Azure Service Bus standard / basic or cannot use service tags on Azure Service Bus Premium, use the fully qualified domain names for your specific namespaces, or the wildcard "*.servicebus.windows.net" domains. These will automatically resolve to the new IP addresses.
For Azure Service Bus, as an unrecommended alternative, you can find the IP address by executing a ping command against the fully qualified domain name of your specific namespace.
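The same lookup can be done programmatically; here is a minimal Python sketch (the namespace name is hypothetical, and remember these addresses can change at any time, which is exactly why FQDN-based rules are recommended):

```python
import socket

def resolve_namespace_ips(fqdn: str, port: int = 443) -> list[str]:
    """Return the IPv4 addresses a namespace FQDN currently resolves to.

    These addresses are not stable; use them for inspection only and keep
    firewall rules based on DNS names or service tags instead."""
    infos = socket.getaddrinfo(fqdn, port, family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # Each entry is (family, type, proto, canonname, sockaddr); dedupe the IPs.
    return sorted({info[4][0] for info in infos})

# Hypothetical namespace name:
# print(resolve_namespace_ips("mynamespace.servicebus.windows.net"))
```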
Azure Relay customers
For Azure Relay, configure your firewalls with the DNS names of all the Relay gateways, which can be found by running this script. The script resolves the fully qualified domain names of all the gateways to which you need to establish a connection.
You can also use the same script to get the IP addresses of all those gateways.
What’s new in Windows Autopatch: February 2024
The start of the new year brings a great opportunity for positive change, including the release of new features in Windows Autopatch. We heard your feedback! Here are some improvements made in response to your enterprise needs.
Import Update rings for Windows 10 and later in preview
Update rings allow you to specify how and when Windows as a service updates your Windows 10 or Windows 11 device with feature and quality updates. Update rings are available for Windows 10 and later. And if you’re a Windows Autopatch customer, you can now bring existing Update rings for Windows 10 and later policies into Windows Autopatch Management. For additional information, see Configure Update rings for Windows 10 and later policy in Intune.
Importing existing rings allows you to take advantage of the many capabilities of Windows Autopatch without impacting your existing Windows update schedules. Imported rings will automatically register all targeted devices into Windows Autopatch without the need to redeploy or change your existing update rings. Additionally, imported rings will be reflected in the reporting and release experience.
Learn how to import update rings for Windows 10 and later. If needed, brush up on Windows client updates, channels, and tools.
Customer defined service outcomes in preview
Have you used Windows Autopatch reports to monitor the health and activity of your deployments? The insights from the reports can help you understand if your devices are maintaining update compliance targets.
Previously, deployment success measures were based on a static schedule of 21 days. This means that Windows Autopatch aims to keep at least 95% of eligible devices on the latest Windows quality update 21 days after release.
With this enhancement, the success of Windows Autopatch deployments will be based on your defined rings. We’ll also be introducing new columns in our release blade, as well as in Windows quality and feature update reporting, to show the percentage complete for quality and feature updates. Devices will remain in the “In Progress” status in reporting until they either receive the current monthly cumulative update or generate an alert. If an alert is received, the status will change to “Not up to date.”
To learn more, read Service level objectives.
Improved data refresh speed and reporting accuracy
Windows Autopatch reporting provides rich insights into your patch compliance status, so you can make informed choices about protecting against defects and vulnerabilities.
This release is changing the refresh cycle for Windows Autopatch reporting. The refresh cycle refers to the amount of time from when a change is made to when it’s reflected in reporting and other UX components. This time will be reduced from every 24 hours to every 30 minutes. This improvement supports the many data streams that Windows Autopatch uses to provide current update status for all devices enrolled into Windows Autopatch.
To learn more, see Windows quality update reporting.
Take your next step with Windows Autopatch
We hope these enhancements will help you keep your devices secure and up to date with less hassle and more control. Get current and stay current with automation that leads to higher security and lower costs.
The ideas behind these releases originated from conversations, input, and requests from you, our customers. We’d love to hear your feedback and suggestions on how we can continue to make Windows Autopatch even better for you. You can share your thoughts and ideas with us on our feedback hub or by joining our community forum.
If you want to learn more about Windows Autopatch:
Visit our website.
Read our documentation.
Watch our guided demos.
If you want to try Windows Autopatch for yourself, sign up for a free trial or contact us for a demo.
Thank you for choosing Windows Autopatch and stay tuned for more updates and announcements.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Security review for Microsoft Edge version 121
We are pleased to announce the security review for Microsoft Edge, version 121!
We have reviewed the new settings in Microsoft Edge version 121 and determined that there are no additional security settings that require enforcement. The Microsoft Edge version 117 security baseline continues to be our recommended configuration which can be downloaded from the Microsoft Security Compliance Toolkit.
Microsoft Edge version 121 introduced 11 new computer settings and 11 new user settings. We have included a spreadsheet listing the new settings in the release to make it easier for you to find them.
As a friendly reminder, all available settings for Microsoft Edge are documented here, and all available settings for Microsoft Edge Update are documented here.
Please continue to give us feedback through the Security Baselines Discussion site or this post.
APIs in Action: Unlocking the Potential of APIs in Today’s Digital Landscape
In today’s world, APIs (Application Programming Interfaces) are essential for connecting applications and services, driving digital innovation. But with the rise of hybrid and multi-cloud setups, effective API management becomes essential for ensuring security and efficiency. That’s where APIs in Action, a virtual event dedicated to unlocking the full potential of APIs, comes in.
Join us for a full-day virtual event focused on exploring API management for integration, hybrid and multi-cloud, and AI workloads. Learn from industry experts about the latest trends and best practices shaping the API landscape. Our immersive event delves deep into APIs and API management, highlighting innovative architectures that drive business growth. Our experts will guide you through transforming existing services and making your data easily accessible to developers, both internally and externally.
Whether you’re a seasoned professional or just starting out, APIs in Action equips you with the knowledge and tools to use APIs effectively in your hybrid and multi-cloud environment. Register now and join the conversation! Experience a day filled with insightful discussions, demos, and actionable insights that will empower you to navigate the evolving landscape of API management with confidence.
Session
Abstract
Speaker(s)
The role of API Management in Azure Integration Services
A successful integration platform developed with Azure Integration Services will find API Management at the heart of your solution. In this session we will discuss some of the common scenarios where you will find API Management used.
Mike Stephenson
API management for microservices in a hybrid and multi-cloud world
Microservices are on the cusp of becoming the dominant style of software architecture. This hands-on demonstration will show how enterprises can make the transition to API-first architectures and microservices in a hybrid, multi-cloud world.
Tom Kerkhove
Leveraging API Management for OpenAI applications: use Azure API Management (APIM) to manage, secure, and scale your LLM-based applications
This session navigates the intersection of APIM and OpenAI technologies, discussing how APIM enhances the deployment, security, and scalability of OpenAI-powered applications. Attendees will learn about APIM basics, OpenAI’s capabilities, integration strategies, security challenges, and real-world applications.
Elena Neroslavskaya, Chris Ayers
Azure API Management from a developer perspective
As organizations adopt an API-first mindset, the need for a good management of your APIs grows. This session will explain the benefits of Azure API Management (APIM) through the eyes of a developer. What’s in it for the developer and how can Azure APIM help to maximize the potential and security of your APIs?
Toon Vanhoutte
OpenAPI now vs. the future
Discover the essential role of OpenAPI in unlocking your API’s full potential and expanding your customer base. In this session, explore how OpenAPI is integral to the AI-driven future, providing crucial insights for staying ahead in the dynamic API landscape. Elevate your strategy and position your API for success by embracing OpenAPI.
Darrel Miller
API Design First with SwaggerHub and Azure API Management
Still designing in the dark ages with interface design documents and outdated documentation? Come see how SwaggerHub and Azure API Management can enable you to utilize the API Design First methodology to create live documentation that allows architects and stakeholders to design software together.
Joël Hébert
API DevEx
The developer experience for APIs can be difficult for new API developers and can add complexity to existing API projects due to new toolchains and evolving cloud services. In this session, we will demystify the API developer experience, leveraging tools like GitHub Copilot, Azure API Center, Azure API Management, and OpenAPI extensions.
Josh Garverick
Better API Governance with Azure API Center
An API catalog brings together the different roles involved in an API program and, by promoting collaboration between them, fosters API reuse, ensures compliance, and improves developer productivity. In this session we will explore what Azure API Center is and how to integrate it into your API design workflow.
Massimo Crippa
Leverage Postman to Collaboratively Test your APIs from design to deployment and beyond
Learn firsthand how to wield Postman effectively throughout the API Lifecycle, boosting your API implementation and fortifying security from the start with the right testing strategies.
Whether you’re in the business of creating or consuming APIs, discover how Postman and Azure API Management complement each other to enhance collaboration and streamline productivity.
Sandeep Murusupalli, Garrett London
Build a warp speed time-to-market API with DAB, APIM and Azure Container Apps
In this session we will delve into how Data API builder (DAB) enables swift and secure exposure of database objects through REST or GraphQL endpoints, allowing data access from any platform, language, or device. By combining DAB with Azure Container Apps and API Management, we will build and secure a serverless data API without writing a single line of code.
Massimo Crippa
Harnessing the Power of Azure API Management: Building Robust and Secure API
In this session, which combines theoretical knowledge with real-world scenarios, we will delve into the advanced features of Azure API Management, with a focus on building robust, secure, and scalable APIs. Attendees will learn about security best practices, policy management, and how to effectively use Azure’s tools to enhance API performance and security.
Hamida Rebai
Building a resilient API landscape with Azure API Management
Cloud service failure is inevitable. When building platforms, it is crucial to ensure that you handle failures seamlessly and are resilient to them. Learn how Azure API Management helps you mitigate and recover from failures by using built-in load balancing and circuit-breaker capabilities.
Tom Kerkhove
Enhance your API security posture with Microsoft Defender for APIs
Azure Defender for APIs brings security insights and ML-based detections to APIs that are exposed via Azure API Management. In this session we will see how to leverage Defender for APIs to enhance your security posture, which kind of scenarios are covered, and our learnings from observing production workloads.
Massimo Crippa
Gain Understanding of APIs and Integrations with Azure Application Insights
Use Application Insights to create a correlated, end to end view of integrations across APIM, Logic Apps and Functions. Learn how to record insights, including business data, then create queries to view the data and observe through dashboards. Through Workbooks we can create meaningful, insightful custom visuals allowing support and business teams to gain the insights they want.
Dave Phelps
GitOps for API-Management
In this talk, we will present our experience with a GitOps workflow for implementing and managing API-Management within an Integration Platform for an international corporation. We will describe how we automated infrastructure and deployment for the whole platform, addressing key aspects such as governance, permissions management, testing and documentation.
Christine Robinson, Maximiliane Ott
APIOps: Transforming Azure APIM Deployments with GitOps and DevOps Methodologies
This talk offers a deep dive into the principles and practices of automating and managing APIs in Azure API Management. Attendees will gain insights into how APIOps applies the concepts of GitOps and DevOps to API deployment. By using practices from these two methodologies, APIOps can enable everyone involved in the lifecycle of API design, development, and deployment with self-service and automated tools to ensure the quality of the specifications and APIs that they’re building.
Wael Kdouh
Sysmon v15.14
Azure SQL Managed Instance – Log Space Growth Alert using Azure Runbook/PowerShell
Introduction
There are scenarios wherein customers want to monitor their transaction log space usage. Options are available to monitor Azure SQL Managed Instance metrics such as CPU, RAM, and IOPS using Azure Monitor, but there is no built-in alert for transaction log space usage.
This blog will guide you through setting up an Azure runbook and scheduling the execution of DMVs to monitor transaction log space usage and take appropriate action.
Overview
Microsoft Azure SQL Managed Instance enables a subset of dynamic management views (DMVs) to diagnose performance problems, which might be caused by blocked or long-running queries, resource bottlenecks, poor query plans, and so on.
Using DMVs, we can also track log space growth: find the usage as a percentage, compare it to a threshold value, and raise an alert.
In Azure SQL Managed Instance, querying a dynamic management view requires VIEW SERVER STATE permissions.
GRANT VIEW SERVER STATE TO database_user;
Monitor log space use by using sys.dm_db_log_space_usage. This DMV returns information about the amount of log space currently used and indicates when the transaction log needs truncation.
For information about the current log file size, its maximum size, and the auto grow option for the file, you can also use the size, max_size, and growth columns for that log file in sys.database_files.
Solution
The PowerShell script below can be used inside an Azure runbook, and alerts can be created to notify the user about the log space used so they can take necessary action.
# Ensures you do not inherit an AzContext in your runbook
Disable-AzContextAutosave -Scope Process

$Threshold = 70 # Change this to your desired threshold percentage

try
{
    "Logging in to Azure..."
    Connect-AzAccount -Identity
}
catch {
    Write-Error -Message $_.Exception
    throw $_.Exception
}

$ServerName = "tcp:xxx.xx.xxx.database.windows.net,3342"
$databaseName = "AdventureWorks2017"
$Cred = Get-AutomationPSCredential -Name "xxxx"

# Query the percentage of transaction log space currently in use
$Query = "SELECT ROUND(used_log_space_in_percent, 0) AS used_log_space_in_percent FROM sys.dm_db_log_space_usage;"

$Output = Invoke-SqlCmd -ServerInstance $ServerName -Database $databaseName -Username $Cred.UserName -Password $Cred.GetNetworkCredential().Password -Query $Query

if ($Output.used_log_space_in_percent -ge $Threshold)
{
    # Raise an alert; a subexpression is needed so the property expands inside the string
    $alertMessage = "Log space usage on database $databaseName is above the threshold. Current usage: $($Output.used_log_space_in_percent)%."
    Write-Output "Alert: $alertMessage"
    # Send the alert via your preferred channel, e.g., call a Logic App to send email, run DBCC commands, etc.
} else {
    Write-Output "Log space usage is within acceptable limits."
}
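The decision logic in the runbook is simple and portable. For illustration, here is an equivalent sketch in Python (the database name and usage values are placeholders), which could be useful if you prefer to run the check from, say, an Azure Function:

```python
def check_log_space(used_percent: float, threshold: float = 70.0,
                    database: str = "AdventureWorks2017") -> str:
    """Mirror the runbook's decision: alert when usage reaches the threshold.

    Uses >= to match the PowerShell script's -ge (inclusive) comparison."""
    if used_percent >= threshold:
        return (f"Alert: Log space usage on database {database} is above the "
                f"threshold. Current usage: {used_percent:.0f}%.")
    return "Log space usage is within acceptable limits."

print(check_log_space(82))  # above the 70% default threshold
print(check_log_space(35))  # comfortably below it
```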
There are different options you can use to send an alert when log space exceeds its limit, as listed below.
Alert Options
Send email using logic apps or SMTP – https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-smtp
Azure functions – https://learn.microsoft.com/en-us/samples/azure-samples/e2e-dotnetcore-function-sendemail/azure-net-core-function-to-send-email-through-smtp-for-office-365/
Run dbcc command to shrink log growth – https://learn.microsoft.com/en-us/azure/azure-sql/managed-instance/file-space-manage?view=azuresql-mi#ShrinkSize
Feedback and suggestions
If you have feedback or suggestions for improving this data migration asset, please contact the Data SQL Ninja Engineering Team (datasqlninja@microsoft.com). Thanks for your support!
Note: For additional information about migrating various source databases to Azure, see the Azure Database Migration Guide
Azure Database for MySQL – Single Server retirement – Key updates and migration tooling available
Azure Database for MySQL – Single Server is scheduled for retirement by September 16, 2024.
As part of this retirement, we stopped support for creating new Single Server instances via the Azure portal as of January 16, 2023, and beginning March 19, 2024, we’ll no longer support creating new Single Server instances via the Azure CLI. Should you still need to create Single Server instances to meet your business continuity needs, please raise an Azure support ticket. Note that you’ll still be able to create read replicas and perform restores (PITR and geo-restore) for your existing Single Server instance until the sunset date, September 16, 2024.
If you currently have an Azure Database for MySQL – Single Server production server, we’re pleased to let you know that you can migrate your Azure Database for MySQL – Single Server instance to the Azure Database for MySQL – Flexible Server service free of charge by using one of the following migration tooling options.
Azure Database for MySQL Import CLI
You can leverage the Azure Database for MySQL Import CLI (General Availability) to migrate your Azure Database for MySQL – Single Server instances to Flexible Server using snapshot backup and restore technology with a single CLI command. Based on user inputs, this functionality will provision your target Flexible Server instance, take a backup of the source server, and then restore it to the target. It copies the following properties and files from the Single Server instance to the Flexible Server instance:
Data files
Server parameters
Compatible firewall rules
Server properties such as tier, version, SKU name, storage size, location, geo-redundant backups settings, public access settings, tags, auto grow settings and backup-retention days settings
Admin username and password
In-place auto-migration
In-place auto-migration (General Availability) from Azure Database for MySQL – Single Server to Flexible Server is an in-place upgrade during a planned maintenance window for select Single Server database workloads. If you have a Single Server workload based on the Basic or General Purpose SKU with <= 20 GiB of used storage and without complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself for auto-migration by submitting your server details using this form.
Azure Database Migration Service (DMS)
Azure Database Migration Service (DMS) (General Availability) is a fully managed service designed to enable seamless online and offline migration from Azure Database for MySQL – Single Server to Flexible Server. DMS supports cross-region, cross-version, cross-resource group, and cross-subscription migrations.
Conclusion
Take advantage of one of these options to migrate your Single Server instances to Flexible Server at no cost!
For more questions on Azure Database for MySQL Single Server retirement, see our Frequently Asked Questions.
Simplifying Azure Kubernetes Service Authentication Part 2
Welcome to the second installment of our multipart series on simplifying Azure Kubernetes Service (AKS) authentication. In this article, we delve deeper into the intricacies of AKS setup, focusing on critical aspects such as deploying demo applications, configuring Cert Manager for TLS certificates (enabling HTTPS), establishing a static IP address, creating a DNS label, and laying the groundwork for robust authentication. You can find the first part here: Part 1.
Let’s dive in!
Deploy two demo applications
In the previous post we set up our AKS cluster and configured NGINX. Now we will create two sample applications and deploy them. You can follow the official documentation here: Create an unmanaged ingress controller.
First create the following two YAML files that define our two applications:
aks-helloworld-one.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one
aks-helloworld-two.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two
Then run the following commands to deploy the applications:
kubectl apply -f aks-helloworld-one.yaml --namespace ingress-basic
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-basic
Now let's check the pods, services, and deployments:
List the pods and verify the STATUS is Running for both applications
kubectl get pods -n ingress-basic
List the service and notice the CLUSTER-IP assigned to each service
kubectl get service -n ingress-basic
List the deployment and notice the READY state
kubectl get deployment -n ingress-basic
Create an ingress route
We will proceed to create a Kubernetes Ingress resource YAML file, enabling us to efficiently route traffic to each of our deployed applications. As a reminder, our ingress controller has been configured to utilize NGINX, as discussed in our previous post. Consequently, we will leverage the NGINX configuration to effectively manage traffic for the following services:
EXTERNAL_IP/hello-world-one to aks-helloworld-one
EXTERNAL_IP/hello-world-two to aks-helloworld-two
EXTERNAL_IP/static to aks-helloworld-one
First create the following YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
Then create the resource with the following command:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
You will need your public IP obtained from the last post. Now visit the deployed application in the web browser by navigating to:
PUBLICIP/hello-world-two or PUBLICIP/hello-world-one
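The routing works because each path regex captures everything after the prefix, and rewrite-target: /$2 (or /static/$2) substitutes the second capture group back into the rewritten path. Here's a quick illustrative sketch of that substitution using Python's re module (this only mimics the annotation behavior for these three rules; it is not NGINX's actual rewrite engine):

```python
import re

# (pattern, rewrite target) pairs mirroring the ingress rules above.
# \2 plays the role of $2 in the rewrite-target annotation.
rules = [
    (r"/hello-world-one(/|$)(.*)", r"/\2"),    # rewrite-target: /$2
    (r"/hello-world-two(/|$)(.*)", r"/\2"),    # rewrite-target: /$2
    (r"/static(/|$)(.*)", r"/static/\2"),      # rewrite-target: /static/$2
]

def rewrite(path: str) -> str:
    """Return the rewritten path for the first matching rule, else the path unchanged."""
    for pattern, target in rules:
        m = re.match(pattern, path)
        if m:
            return m.expand(target)
    return path

print(rewrite("/hello-world-one/index.html"))  # /index.html
print(rewrite("/static/style.css"))            # /static/style.css
```

So a request to PUBLICIP/hello-world-one/index.html reaches aks-helloworld-one as /index.html, which is why the applications serve their content correctly from under a path prefix.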
Upload cert manager images to your ACR
Next, we'll make the certificate manager images available by importing them into our Azure Container Registry (ACR) instance. Before running the commands below, make sure to include the -TargetTag flag. The Microsoft documentation for using Transport Layer Security (TLS) with an ingress controller on AKS doesn't explicitly require it, but including it lets you specify the ACR repository names, such as jetstack/cert-manager-cainjector, jetstack/cert-manager-controller, and jetstack/cert-manager-webhook. For detailed steps, refer to the official documentation: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
Enter the following commands in PowerShell to upload the cert manager images to your ACR:
$RegistryName = "<REGISTRY_NAME>"
$ResourceGroup = (Get-AzContainerRegistry | Where-Object {$_.name -eq $RegistryName} ).ResourceGroupName
$CertManagerRegistry = "quay.io"
$CertManagerTag = "v1.8.0"
$CertManagerImageController = "jetstack/cert-manager-controller"
$CertManagerImageWebhook = "jetstack/cert-manager-webhook"
$CertManagerImageCaInjector = "jetstack/cert-manager-cainjector"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageController}:${CertManagerTag}" -TargetTag "${CertManagerImageController}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageWebhook}:${CertManagerTag}" -TargetTag "${CertManagerImageWebhook}:${CertManagerTag}"
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName $RegistryName -SourceRegistryUri $CertManagerRegistry -SourceImage "${CertManagerImageCaInjector}:${CertManagerTag}" -TargetTag "${CertManagerImageCaInjector}:${CertManagerTag}"
Create a static IP address
A note on static IP addresses: the NGINX ingress controller needs a stable IP for routing. In my case, the NGINX setup from the previous post already appeared to have a static IP address assigned, so allocating a new one may not be strictly necessary. However, to be certain you are using a static IP address, you can assign a fresh one to the load balancer exposed by NGINX. This extra step does no harm, but it is optional and may or may not be needed depending on your deployment.
First get the resource group name of your AKS cluster:
(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
Then run the following command to create a static IP address:
(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
You should get an IP address. Keep a note of this IP.
Set the DNS label, static IP, and health probe using Helm
Create a DNS label name that will be used to generate an FQDN for navigating to your applications. This can be any name, but it must be unique. Additionally, add the static IP address obtained above and set the health monitoring request path. Run the following command to configure the NGINX ingress controller:
$DnsLabel = "<DNS_LABEL>"
$Namespace = "ingress-basic"
$StaticIP = "<STATIC_IP>"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx `
  --namespace $Namespace `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
  --set controller.service.loadBalancerIP=$StaticIP `
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
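One detail worth calling out: annotation keys like service.beta.kubernetes.io/azure-dns-label-name contain dots, and helm's --set syntax treats unescaped dots as map nesting, so the dots inside the key itself must be escaped as \. to keep the annotation name as a single key. A simplified Python sketch of this parsing rule (illustrative only, not helm's actual implementation):

```python
import re

def parse_set_key(key: str) -> list[str]:
    """Split a helm-style --set key on unescaped dots (simplified sketch)."""
    parts = re.split(r'(?<!\\)\.', key)          # split only where the dot is not preceded by \
    return [p.replace('\\.', '.') for p in parts]  # unescape the remaining \. sequences

# Unescaped dots nest the value many map levels deep (wrong for annotations):
print(parse_set_key('controller.service.annotations.service.beta.kubernetes.io/azure-dns-label-name'))
# Escaped dots keep the full annotation name as one key (what we want):
print(parse_set_key(r'controller.service.annotations.service\.beta\.kubernetes\.io/azure-dns-label-name'))
```

Without the escaping, helm would set a value under annotations.service.beta... instead of the single annotation key the Azure load balancer integration looks for.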
This marks the conclusion of the second installment in our series. In the upcoming segment, we will delve further into the setup process. Specifically, we’ll configure the certificate manager, update our ingress routes, establish passwords and secrets for authentication, and prepare for the configuration of our OAuth2 proxy. Stay tuned for the next part, where we continue our journey toward a robust and secure system.
Microsoft Tech Community – Latest Blogs –Read More
Intune moving to support Android 10 and later for user-based management methods in October 2024
We’ve heard your feedback asking to understand the plan for Intune’s support for Android operating system (OS) versions.
In October 2024 (after Google’s expected release of Android 15), Intune will revise its operating system support statement to move to supporting only Android 10 and later for user-based management methods, which include:
Android Enterprise personally owned with a work profile.
Android Enterprise corporate owned work profile.
Android Enterprise fully managed.
Android Open Source Project (AOSP) user-based.
Android Device administrator.
App protection policies.
App configuration policies for managed apps.
The following aren’t impacted by this change:
Android Enterprise dedicated devices: Will continue to be supported on Android 8 or later.
AOSP user-less: Will continue to be supported on Android 8 or later.
Microsoft Teams certified Android devices: Will be supported on versions listed in Microsoft Teams certified Android device documentation.
Microsoft Teams certified Android devices
Teams Rooms certified systems and peripherals
We plan to gradually move to only supporting the four most recent Android versions for our user-based management methods to keep enrolled devices secure. As Google continues to release new Android versions annually, we’ll stop supporting one or two older versions every October until we support only the four most recent versions. After that, we’ll end support for one version annually in October to maintain our support statement for the four latest versions.
Impact of ending support
For user-based management methods (as listed above), Android devices running Android 9 or earlier will no longer be supported. For devices on unsupported Android OS versions:
Intune technical support will no longer be provided.
Intune will no longer be making changes to address bugs or issues.
New and existing features are not guaranteed to work.
While Intune won’t prevent enrollment or management of devices on unsupported Android OS versions, functionality isn’t guaranteed, and use isn’t recommended.
How can you prepare?
Use Intune reporting to identify which devices or users might be affected:
For devices with mobile device management (MDM), go to Devices > All devices and filter by OS.
For devices with app protection policies, go to Apps > Monitor > App Protection status and use the Platform and Platform version columns to filter.
For devices with app configuration policies, go to Apps > Monitor > App Configuration status and use the Platform and Platform version columns to filter.
Warn users that they should update their Android version:
For devices with MDM, utilize a device compliance policy for Android Enterprise, Android AOSP, or Android device administrator and set the action for noncompliance to send an email or push notification to users before marking them noncompliant.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that warns users.
Block devices from accessing corporate resources until they update their Android version:
For devices with MDM, you can use either or both of these methods:
Set enrollment restrictions to prevent enrollment on devices running older versions.
Utilize a device compliance policy to make devices noncompliant if they are running older versions.
For devices with app protection policies, create an app protection policy and configure conditional launch with a min OS version requirement that blocks users from app access.
For more information, see Manage operating system versions with Intune. If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Join Teams for work or school meetings with personal account
We are improving the ways to join Teams meetings and have started to roll out an improvement enabling you to join a Teams meeting organized by a work or school user with your signed-in personal account. Read more on the Teams Insider blog and join Teams Insider to try this in Teams free on Windows 11 today!
Join Teams for work or school meeting with your personal account – Teams Insider