Month: March 2024
Networking errors when enabling SQL Server by Azure Arc
When onboarding your SQL Server instance to Azure Arc, there are some networking prerequisites that need to be met. The prerequisites are documented in detail here: Prerequisites – SQL Server enabled by Azure Arc | Microsoft Learn.
Some common errors when the prerequisites are not met are:
SSL Errors:
System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
—> System.IO.IOException: Unable to read data from the transport connection:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection
failed because connected host has failed to respond.. —> System.Net.Sockets.SocketException (10060)A connection attempt failed
because the connected party did not properly respond after a period of time, or established connection failed because connected host
has failed to respond
The SSL connection could not be established, see inner exception.
—> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the
remote host.. —> System.Net.Sockets.SocketException (10054 An existing connection was forcibly closed by the remote host.
— End of inner exception stack trace —
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)
at System.Net.Security.SslStream.<FillHandshakeBufferAsync>g__InternalFillHandshakeBufferAsync|189_0[TIOAdapter](TIOAdapter adap, ValueTask`1 task, Int32 minSize)
SSL Error Causes:
1. Check whether the URLs listed in the prerequisites are blocked in your environment. Connectivity to the URLs listed in Troubleshoot connectivity to data processing service and telemetry endpoints – SQL Server enabled by Azure Arc | Microsoft Learn is necessary for onboarding to succeed.
2. Check whether there are any proxies or firewalls in the network path. Proxies and firewalls can perform TLS inspection, which can interfere with SSL/TLS connections. If a browser, curl, or openssl shows a certificate issued by third-party software that performs TLS inspection (or similar), and the client does not trust the firewall's certificates, TLS/SSL failures can result.
Some tools you can use to verify connectivity and collect traces:
1. Test-NetConnection should succeed.
Example (testing the data processing service endpoint):
Test-NetConnection -ComputerName san-af-yourregion-prod.azurewebsites.net -Port 443
2. Invoke-WebRequest should succeed.
Example:
Invoke-WebRequest -Uri https://san-af-yourregion-prod.azurewebsites.net
3. If the connectivity tests fail, review your network configuration for blocked URLs, proxies, firewalls, or TLS inspection as described above. Network tracing tools can also help narrow down networking configuration issues; see How to collect a network trace | Microsoft Learn.
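If you suspect TLS inspection, one quick check is to open a TLS connection from the affected machine and look at the certificate the client actually receives. The following is a minimal C# sketch (the endpoint name is the same regional placeholder used above; the code is illustrative and not part of the official tooling). If the issuer shown belongs to your proxy or firewall vendor rather than the expected public certificate authority, TLS inspection is likely in the path.
using System;
using System.Net.Security;
using System.Net.Sockets;
using System.Security.Cryptography.X509Certificates;
class TlsProbe
{
    static void Main()
    {
        // Replace yourregion with your Azure region, as in the examples above.
        string host = "san-af-yourregion-prod.azurewebsites.net";
        using (var client = new TcpClient(host, 443))
        using (var ssl = new SslStream(client.GetStream(), false,
            // Accept any certificate so we can inspect it even if validation would fail.
            (sender, cert, chain, errors) => true))
        {
            ssl.AuthenticateAsClient(host);
            var remote = new X509Certificate2(ssl.RemoteCertificate);
            Console.WriteLine($"Negotiated protocol: {ssl.SslProtocol}");
            Console.WriteLine($"Subject: {remote.Subject}");
            // An issuer from your proxy/firewall vendor suggests TLS inspection is in play.
            Console.WriteLine($"Issuer: {remote.Issuer}");
        }
    }
}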
Java client changes to support TLS 1.3 with Azure Service Bus and Azure Event Hubs
Microsoft is looking to enable TLS 1.3 for Azure Service Bus and Azure Event Hubs. We found, however, that there is a problem with some clients that use both Java and our AMQP or JMS interfaces. Java clients that use Apache Proton-J with a version older than proton-j-0.31.0 along with Java 11+ can't support TLS 1.3. The Proton-J library is used in AMQP and JMS implementations. The nature of the problem lies in the handshake and is thus not detectable by our application layer. This means that we can't detect and work around the issue from the service side. To avoid this problem, customers need to update any instances of Apache Proton-J older than proton-j-0.31.0. The Proton-J issue is tracked at https://issues.apache.org/jira/browse/PROTON-1972.
Required action
Proton-J may also be pulled in by a dependent library rather than used directly by your code. To help you determine whether you have an incompatibility with TLS 1.3, we have enabled the West Central US region with TLS 1.3 support for AMQP traffic. To test whether you have a compatibility issue:
First, evaluate whether you are using AMQP or JMS.
Second, determine whether you are using Java 11+ with your client code.
Third, if you are using AMQP or JMS and also Java 11+, create a namespace in West Central US and attempt to connect to it with your code.
If your client fails to connect, find where you are using Proton-J and update it to proton-j-0.31.0 or later.
Timeline
As already noted, this only affects AMQP and JMS traffic. It does not affect web service or Kafka traffic, which already have TLS 1.3 enabled. We are going to enable TLS 1.3 for AMQP and JMS on October 31, 2024. That is also the day that TLS 1.0 and TLS 1.1 are being removed from Azure Event Hubs and Azure Service Bus. Please take action as soon as possible to avoid any interruption to your service when we enable TLS 1.3.
Windows GPUs for AKS
Today we are happy to announce the public preview of GPU support for Windows on AKS! This feature gives customers the option to run GPU compute-intensive workloads on Windows node pools. A few examples of workloads that benefit from GPU-backed nodes are video encoding, machine learning, and large simulations. Through this release we hope to increase the parity between Windows and Linux on AKS.
What is it?
GPU support is delivered by enabling Windows node pools in AKS to run GPU workloads. This release supports all AKS Windows OS SKU releases. For GPU support, NVIDIA's CUDA and GRID drivers are available, and the current architecture installs a specific GPU driver for each VM size.
Prerequisites/High-level Callouts for Enabling GPU Support
Workload and driver compatibility are essential when deploying Windows nodes with GPU support. Please verify that the workload is compatible with the driver installed for the VM size.
VM Size | Driver Type
NC series | CUDA
NV, ND | GRID
Required For Setup
Kubernetes version 1.29.0 or greater is required for set up.
Updating an existing Windows node pool to GPU isn’t supported.
For AKS node pools, we recommend a minimum size of Standard_NC6s_v3.
The NVv4 series (based on AMD GPUs) isn't supported on AKS.
Optional: Opting Out of Automatic Driver Installation
Customers can opt out of automatic driver installation by using the --skip-gpu-driver-install flag.
In Closing
To get started, you can follow the detailed step-by-step guide here.
We would love to hear your feedback and suggestions on this new feature. Thank you for using Windows on AKS. We hope you enjoy using GPU supported nodes.
New Azure NC H100 v5 VMs Optimized for Generative AI and HPC workloads are now Generally Available
Azure NC H100 v5 virtual machines (VMs) are an excellent platform for executing diverse AI and High-Performance Computing (HPC) workloads. These workloads demand substantial computational power, large capacity of high-performance memory, and advanced GPU acceleration. In addition to AI, the Azure NC H100 v5 VMs are particularly well-suited for extreme modelling and simulation demands in the following science and mathematics disciplines: Computational Fluid Dynamics (CFD), Molecular Dynamics, Quantum Chemistry, Weather Forecasting and Climate Modeling, and Financial Analytics.
The AI landscape is constantly expanding and evolving, moving at a dizzying pace. Generative AI technology has played a pivotal role, enabling a diverse array of use cases. These range from powering AI assistants, chatbots, and search engines to facilitating creative content generation. As Generative AI applications expand at incredible speed, the fundamental language models that empower them will expand as well to include both Small Language Models (SLMs) and Large Language Models (LLMs). In addition, Artificial Narrow Intelligence (ANI) models will continue to evolve, focusing on more precise predictions rather than the creation of novel data, further enhancing their use cases. Their applications include tasks such as image classification, object detection, and broader natural language processing.
At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. Leveraging the robust capabilities and scalability of Microsoft Azure, we offer computational tools that empower organizations of all sizes, regardless of their resources. Azure NC H100 v5 VMs are yet another computational tool, made generally available today, that will do just that.
Here are some examples of what our customers are doing with our existing NC-series VMs and planning with the power of Azure NC H100 v5 GPU Virtual Machines at their fingertips:
Snorkel AI is a Microsoft for Startups Pegasus partner that helps enterprises move AI projects from prototype to production. A founding member of the Stanford Center for Research on Foundation Models, Snorkel AI is grounded in years of academic research and endeavors to remain at the forefront of new scholarship in data-centric AI and foundation models.
“Snorkel’s recent top tier ranking on the AlpacaEval 2.0 LLM leaderboard would not have been possible without the Microsoft for Startups Pegasus Program. Access to SoTA NVIDIA A100s via a seamless Azure experience has empowered us to drive cutting-edge research in programmatic alignment/DPO in a quick & efficient manner. For example, Azure AI Infrastructure VMs allow our research team to run quick experiments from small projects to large-scale distributed jobs reliably and with full monitoring mechanisms. Designing research projects on Azure’s next generation NC H100 v5-series powered by NVIDIA H100 NVL PCIe GPU will help our researchers deliver value for our customers and the OSS community even faster.”- Hoang Tran, Senior Research Scientist, Snorkel AI
Northflank is a self-service developer platform that automates and unifies deployment of any workload, on any cloud, at any scale.
“Northflank’s customers want to build scale-out apps on top of a sizeable number of Azure NC H100 v5 series VMs that feature the NVIDIA H100 GPUs running in Kubernetes clusters, while keeping the self-service experience that Northflank’s developer platform provides. With NC H100 v5, Azure is the fastest way for us to help those customers ship apps on scale-out GPU infrastructure.” – Will Stewart, CEO & Co-Founder @ Northflank
Introducing the new NC H100 v5 series virtual machine, now generally available
Today, we are excited to announce that Azure NC H100 v5 Virtual Machines are now generally available. The NC H100 v5-series virtual machine (VM) is a cutting-edge addition to the Azure GPU virtual machines family, designed for mid-range AI model training, generative AI inferencing, and HPC simulation workloads. This series combines the power of NVIDIA H100 NVL GPUs with 4th-generation AMD EPYC™ Genoa processors.
The NC H100 v5-series offers two classes of virtual machines, ranging from one to two NVIDIA H100 94GB NVL Tensor Core GPUs. It is more cost-effective than ever before, while still giving customers the options and flexibility they need for their workloads. We can’t wait to see what you’ll build, analyze, and discover with the new Azure NC H100 v5 platform.
For AI inference workloads, customers will experience between 1.6x and 1.9x the inference performance on the single-GPU size, depending on the type of workload. The NC H100 v5 VMs offer significant performance improvements over previous generations of Azure VMs in the NC series. The H100 NVL PCIe GPUs provide up to 2x the compute performance, 2x the memory bandwidth, and 17% larger HBM GPU memory capacity per VM compared to the A100 GPUs. The H100 NVL PCIe GPUs support PCIe Gen5, which provides the highest communication speeds (128GB/s bi-directional) between the host processor and the GPU. This reduces the latency and overhead of data transfer and enables faster and more scalable AI and HPC applications.
The NC H100 v5-series VMs empower your AI and HPC workloads, providing the performance and flexibility you need. Whether you’re training models, running inferencing tasks, or developing cutting-edge applications, these VMs have you covered. Explore the future of AI with the NC H100 v5-series on Azure!
Learn more
Microsoft AI
Azure AI portfolio
Azure AI infrastructure
high-performance computing (HPC) in Azure
Azure HPC optimized OS images.
Azure GPU virtual machines
NC H100 v5-series VM
Optimize GPU compute costs: Pause your VMs to save!
We’re excited to announce that in April, Azure will be offering customers the ability to optimize GPU compute costs by enabling hibernation on Virtual Machines (VMs). With this feature, users can hibernate their VMs, pausing compute usage while preserving in-memory states. During hibernation, customers will only incur costs for storage and networking resources, significantly reducing compute expenses. When needed, VMs can be resumed effortlessly, allowing applications and processes to seamlessly pick up from their last state.
Use cases
Specifically for GPU Virtual Machines, hibernation offers compelling use cases and is an effective cost management strategy particularly in two key scenarios:
Optimizing GPU workstations: Pause GPU VMs during off-hours to conserve resources and resume seamlessly when needed, without the need to reopen applications.
Efficient Workflows for long running VMs: For long-running GPU-intensive tasks, hibernating after prewarming tasks ensures quick start-up times and efficient use of GPU resources.
Hibernation availability for GPU VMs
Hibernation will be available for preview on NVv4 and NVadsA10v5 GPU VM series, but larger sizes in the NVadsA10v5 series will not be supported during preview. Supported sizes include:
Customers will be able to leverage the hibernation feature using the Azure Portal, Azure CLI or PowerShell options on the supported virtual machines. In addition, customers can also take advantage of this feature through Azure Virtual Desktop and Citrix Desktop as a Service (DaaS) on their various offerings.
Azure Virtual Desktop
Azure Virtual Desktop provides a flexible cloud-based virtual desktop infrastructure (VDI) platform for securely delivering virtual desktops and remote apps. With GPU hibernation (preview) being supported, customers will be able to seamlessly integrate hibernation into their GPU-based virtual desktop environments, unlocking additional cost-saving opportunities. Learn more about creating personal desktop scaling plan.
Citrix Desktop as a Service (DaaS) for Azure
Microsoft and Citrix have been collaborating for decades to provide technology solutions that streamline IT operations and optimize costs. Citrix DaaS offers desktop and app virtualization solutions leveraging Azure Virtual Desktop platform capabilities. With GPU hibernation (preview) support, Citrix DaaS users can efficiently manage GPU VMs, enabling significant cost savings without compromising performance or user experience.
Getting started with hibernation
Hibernation is currently available for General Purpose Intel and AMD VM sizes in all public regions. Both Linux and Windows operating systems are supported. Learn more about the hibernate feature here. For more details on how to get started with hibernation, refer to the product documentation.
Azure Event Hubs IP address changes
Lesson Learned #481: Query Performance Analysis Tips
When working with databases, high resource usage or a query reporting a timeout could indicate that the statistics of the tables involved in the query are not up to date, that we might be missing indexes, or that there is excessive blocking; these are among the most common causes of performance loss. For this reason, I would like to cover in this article elements that can help us determine what might be happening with our query.
CommandTimeout and Obtaining Statistics
The inception point of our exploration is the creation of a stored procedure, sp_AnalyzeQueryStatistics. This procedure is designed to take a SQL query as input, specified through the @SQLQuery parameter, and dissect it to unveil the underlying schema and tables it interacts with.
Crafting sp_AnalyzeQueryStatistics: The core functionality of this procedure leverages the sys.dm_exec_describe_first_result_set DMV. This invaluable tool provides a window into the query's anatomy, pinpointing the schema and tables entwined in its execution path.
CREATE PROCEDURE sp_AnalyzeQueryStatistics
@SQLQuery NVARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @TableNames TABLE (
SourceSchema NVARCHAR(128),
TableName NVARCHAR(128)
);
INSERT INTO @TableNames (SourceSchema, TableName)
SELECT DISTINCT
source_schema AS SourceSchema,
source_table AS TableName
FROM
sys.dm_exec_describe_first_result_set(@SQLQuery, NULL, 1) sp
WHERE sp.error_number IS NULL AND NOT sp.source_table is NULL
SELECT
t.TableName,
s.name AS StatisticName,
STATS_DATE(s.object_id, s.stats_id) AS LastUpdated,
sp.rows,
sp.rows_sampled,
sp.modification_counter
FROM
@TableNames AS t
INNER JOIN
sys.stats AS s ON s.object_id = OBJECT_ID(QUOTENAME(t.SourceSchema) + '.' + QUOTENAME(t.TableName))
CROSS APPLY
sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp;
END;
Diving Deeper with Table Statistics:
Identification is just the precursor; the crux lies in scrutinizing the statistics of these identified tables. By employing sys.stats and sys.dm_db_stats_properties, we delve into the statistical realm of each table, gleaning insights into data distribution, sampling rates, and the freshness of the statistics.
Informed Decision-Making:
This statistical audit empowers us with the knowledge to make data-driven decisions. Should the rows_sampled value significantly deviate from the total rows, or the statistics' last update be, for example, two months old, it's a clear call for action, whether that means updating the statistics or reevaluating index strategies.
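For example, once a table with stale statistics has been identified, the application could refresh them before retrying the query. The sketch below is illustrative only (the method name and the 20% staleness threshold are assumptions, and it reuses the same usings and connection handling as the full implementation that follows); it simply issues UPDATE STATISTICS for a schema and table returned by sp_AnalyzeQueryStatistics.
private static async Task UpdateStatisticsIfStaleAsync(SqlConnection connection, string schemaName, string tableName, long rows, long modificationCounter)
{
    // Illustrative threshold: refresh when more than 20% of the rows changed since the last update.
    if (rows > 0 && modificationCounter > rows * 0.2)
    {
        // Object names cannot be parameterized, so bracket-quote them before building the statement.
        string target = $"[{schemaName.Replace("]", "]]")}].[{tableName.Replace("]", "]]")}]";
        using (SqlCommand command = new SqlCommand($"UPDATE STATISTICS {target}", connection))
        {
            command.CommandTimeout = 120;
            await command.ExecuteNonQueryAsync();
            Console.WriteLine($"Statistics updated for {target}.");
        }
    }
}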
C# Implementation:
using System;
using System.Diagnostics;
using System.Data;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace HighCPU
{
class Program
{
private static string ConnectionString = "Server=tcp:myservername.database.windows.net,1433;User Id=MyUser;Password=MyPassword;Initial Catalog=MyDb;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
private static string Query = "SELECT * FROM [MSxyzTest].[_x_y_z_MS_HighDATAIOBlocks] ORDER BY NEWID() DESC";
static async Task Main(string[] args)
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Failed to establish a database connection.");
return;
}
await ExecuteQueryWithRetriesAsync(connection, 5, 1000, 100000,2,true);
connection.Close();
}
private static async Task<SqlConnection> EstablishConnectionWithRetriesAsync(int maxRetries, int initialDelay)
{
SqlConnection connection = null;
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
connection = new SqlConnection(ConnectionString);
await connection.OpenAsync();
Console.WriteLine("Connection established successfully.");
return connection;
}
catch (SqlException ex)
{
Console.WriteLine($"Failed to establish connection: {ex.Message}. Attempt {attempt} of {maxRetries}.");
if (attempt == maxRetries)
{
Console.WriteLine("Maximum number of connection attempts reached. The application will terminate.");
return null;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next connection attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
return null;
}
private static async Task ExecuteQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout, Boolean bReviewQuery = false)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout*attempt);
try
{
using (SqlCommand command = new SqlCommand(Query, connection))
{
command.CommandTimeout = CommandSQLTimeout*attempt;
Stopwatch stopwatch = Stopwatch.StartNew();
await command.ExecuteNonQueryAsync(cts.Token);
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
if (bReviewQuery)
{ await ReviewQuery(); }
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
private static async Task ReviewQuery()
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Review Query - Failed to establish a database connection.");
return;
}
await ReviewQueryWithRetriesAsync(connection, 5, 1000, 10000, 15);
connection.Close();
}
private static async Task ReviewQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout, Boolean bReviewQuery = false)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout * attempt);
try
{
using (SqlCommand command = new SqlCommand("sp_AnalyzeQueryStatistics", connection))
{
command.CommandTimeout = CommandSQLTimeout * attempt;
command.CommandType = CommandType.StoredProcedure;
Stopwatch stopwatch = Stopwatch.StartNew();
command.Parameters.Add(new SqlParameter("@SQLQuery", SqlDbType.NVarChar, -1));
command.Parameters["@SQLQuery"].Value = Query;
using (SqlDataReader reader = await command.ExecuteReaderAsync())
{
while (await reader.ReadAsync())
{
Console.WriteLine("TableName: " + reader["TableName"].ToString());
Console.WriteLine("StatisticName: " + reader["StatisticName"].ToString());
Console.WriteLine("LastUpdated: " + reader["LastUpdated"].ToString());
Console.WriteLine("Rows: " + reader["Rows"].ToString());
Console.WriteLine("RowsSampled: " + reader["Rows_Sampled"].ToString());
Console.WriteLine("ModificationCounter: " + reader["Modification_Counter"].ToString());
Console.WriteLine("-----------------------------------");
}
}
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
if (bReviewQuery)
{ }
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
}
}
The Bigger Picture
This initial foray into CommandTimeout and statistics is merely the tip of the iceberg. It sets the stage for a broader discourse on query performance, where each element—from indexes to execution plans—plays a crucial role. Our series aims to arm you with the knowledge and tools to not just react to performance issues but to anticipate and mitigate them proactively, ensuring your databases are not just operational but optimized for efficiency and resilience.
Stay tuned as we continue to peel back the layers of SQL performance tuning, offering insights, strategies, and practical advice to elevate your database management game.
Lesson Learned #480:Navigating Query Cancellations with Azure SQL Database
In a recent support case, our customer faced an intriguing issue where a query execution in a .NET application was unexpectedly canceled during asynchronous operations against Azure SQL Database. This experience highlighted the nuances of handling query cancellations, which could stem from either a CommandTimeout or a CancellationToken. Through this concise article, I aim to elucidate these two cancellation scenarios, alongside strategies for managing SQL errors, ensuring connection resilience through retries, and measuring query execution time. The accompanying code serves as a practical guide, demonstrating how to adjust timeouts dynamically in an attempt to successfully complete a query, should it face cancellation due to timeout constraints. This narrative not only shares a real-world scenario but also provides actionable insights for developers looking to fortify their .NET applications interacting with Azure SQL Database.
Introduction:
Understanding and managing query cancellations in asynchronous database operations are critical for maintaining the performance and reliability of .NET applications. This article stems from a real-world support scenario where a customer encountered unexpected query cancellations while interacting with Azure SQL Database. The issue brings to light the importance of distinguishing between cancellations caused by CommandTimeout and those triggered by CancellationToken, each requiring a distinct approach to error handling and application logic.
Cancellations: CommandTimeout vs. CancellationToken:
In asynchronous database operations, two primary types of cancellations can occur: one due to the command’s execution time exceeding the CommandTimeout limit, and the other due to a CancellationToken being invoked. Understanding the difference is crucial, as each scenario demands specific error handling strategies. A CommandTimeout cancellation typically indicates that the query is taking longer than expected, possibly due to database performance issues or query complexity. On the other hand, a cancellation triggered by a CancellationToken may be due to application logic deciding to abort the operation, often in response to user actions or to maintain application responsiveness.
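To make the distinction concrete, here is a minimal sketch (assuming an open SqlConnection named connection and an async context, with arbitrary timeout values, as in the full example later in this post): CommandTimeout is configured on the SqlCommand and surfaces as a SqlException with error number -2, while the CancellationToken passed to ExecuteNonQueryAsync surfaces as an OperationCanceledException (typically a TaskCanceledException).
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30))) // CancellationToken route
using (var command = new SqlCommand("waitfor delay '00:00:20'", connection))
{
    command.CommandTimeout = 15; // CommandTimeout route, in seconds
    try
    {
        await command.ExecuteNonQueryAsync(cts.Token);
    }
    catch (SqlException ex) when (ex.Number == -2)
    {
        // Server-side execution exceeded CommandTimeout.
    }
    catch (OperationCanceledException)
    {
        // The CancellationToken fired first.
    }
}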
Error Handling and Connection Resilience:
Errors during query execution, such as syntax errors or references to non-existent database objects, necessitate immediate attention and are not suitable for retry logic. The application must distinguish these errors from transient faults, where retry logic with exponential backoff can be beneficial. Moreover, connection resilience is paramount, and implementing a retry mechanism for establishing database connections ensures that transient network issues do not disrupt application functionality.
Measuring Query Execution Time:
Gauging the execution time of queries is instrumental in identifying performance bottlenecks and optimizing database interactions. The example code demonstrates using a Stopwatch to measure and log the duration of query execution, providing valuable insights for performance tuning.
Adaptive Timeout Strategy:
The code snippet illustrates an adaptive approach to handling query cancellations due to timeouts. By dynamically adjusting the CommandTimeout and CancellationToken timeout values upon encountering a timeout-related cancellation, the application attempts to afford the query additional time to complete in subsequent retries, where feasible.
Conclusion:
The intersection of CommandTimeout, CancellationToken, error handling, and connection resilience forms the crux of robust database interaction logic in .NET applications. This article, inspired by a real-world support case, sheds light on these critical aspects, offering a pragmatic code example that developers can adapt to enhance the reliability and performance of their applications when working with Azure SQL Database. The nuanced understanding and strategic handling of query cancellations, as discussed, are pivotal in crafting responsive and resilient .NET database applications.
Example C# code:
using System;
using System.Diagnostics;
using System.Data;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace CancellationToken
{
class Program
{
private static string ConnectionString = "Server=tcp:servername.database.windows.net,1433;User Id=MyUser;Password=MyPassword;Initial Catalog=MyDB;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
private static string Query = "waitfor delay '00:00:20'";
static async Task Main(string[] args)
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Failed to establish a database connection.");
return;
}
await ExecuteQueryWithRetriesAsync(connection, 5, 1000, 30000,15);
connection.Close();
}
private static async Task<SqlConnection> EstablishConnectionWithRetriesAsync(int maxRetries, int initialDelay)
{
SqlConnection connection = null;
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
connection = new SqlConnection(ConnectionString);
await connection.OpenAsync();
Console.WriteLine("Connection established successfully.");
return connection;
}
catch (SqlException ex)
{
Console.WriteLine($"Failed to establish connection: {ex.Message}. Attempt {attempt} of {maxRetries}.");
if (attempt == maxRetries)
{
Console.WriteLine("Maximum number of connection attempts reached. The application will terminate.");
return null;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next connection attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
return null;
}
private static async Task ExecuteQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout*attempt); // Set CancellationToken timeout
try
{
using (SqlCommand command = new SqlCommand(Query, connection))
{
command.CommandTimeout = CommandSQLTimeout*attempt;
Stopwatch stopwatch = Stopwatch.StartNew();
await command.ExecuteNonQueryAsync(cts.Token);
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
}
}
Tests and Results:
In the course of addressing the query cancellation issue, we conducted a series of tests to understand the behavior under different scenarios and the corresponding exceptions thrown by the .NET application. Here are the findings:
Cancellation Prior to Query Execution:
Scenario: The cancellation occurs before the query gets a chance to execute, potentially due to reasons such as application overload or a preemptive cancellation policy.
Exception Thrown: TaskCanceledException
Internal Error Message: “A task was canceled.”
Explanation: This exception is thrown when the operation is canceled through a CancellationToken, indicating that the asynchronous task was canceled before it could begin executing the SQL command. It reflects the application’s decision to abort the operation, often to maintain responsiveness or manage workload.
Cancellation Due to CommandTimeout:
Scenario: The cancellation is triggered by reaching the CommandTimeout of SqlCommand, indicating that the query’s execution duration exceeded the specified timeout limit.
Exception Thrown: SqlException with an error number of -2
Internal Error Message: “Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.”
Explanation: This exception occurs when the query execution time surpasses the CommandTimeout value, prompting SQL Server to halt the operation. It suggests that the query may be too complex, the server is under heavy load, or there are network latency issues.
Cancellation Before CommandTimeout is Reached:
Scenario: The cancellation happens before the CommandTimeout duration is met, not due to the CommandTimeout setting but possibly due to an explicit cancellation request or an unforeseen severe error during execution.
Exception Thrown: General Exception (or a more specific exception depending on the context)
Internal Error Message: “A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user.”
Explanation: This exception indicates an abrupt termination of the command, potentially due to an external cancellation signal or a critical error that necessitates aborting the command. Unlike the TaskCanceledException, this may not always originate from a CancellationToken and can indicate more severe issues with the command or the connection.
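To log these outcomes consistently, a small helper can map each exception to the scenario it represents. This is an illustrative sketch based on the observations above, not part of the original support case code:
private static string ClassifyCancellation(Exception ex)
{
    switch (ex)
    {
        case SqlException sqlEx when sqlEx.Number == -2:
            return "CommandTimeout expired before the query completed.";
        case TaskCanceledException _:
            return "CancellationToken canceled the task before or during execution.";
        case SqlException sqlEx:
            return $"SQL error {sqlEx.Number}: severe command or connection error, not a timeout.";
        default:
            return "Unexpected exception; treat as non-retriable.";
    }
}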
Support for Iterated and Salted Hash Password Verifiers in SQL Server 2022 CU12
Introduction
We all know that as security threats evolve, we must update our defenses to mitigate newer threats. Over the last few months, some customers have asked us to strengthen the way we secure passwords in SQL Server. The most often-cited reference by customers is to comply with NIST SP 800-63b.
Currently supported versions of SQL Server and Azure SQL DB use a SHA-512 hash with a 32-bit random and unique salt. It is statistically infeasible for an attacker to deduce the password knowing just the hash and the salt. It is considerably easier for an attacker to hunt for insecure storage of database connection strings that contain database credentials than it is to break the password verifier (also called a password authenticator) used by SQL Server and Azure SQL DB. But that’s a discussion for another day and is the main reason we highly recommend using Entra ID authentication rather than using uid/pwd-based connections because Entra ID authentication manages credentials and supports access policies.
Considering NIST SP 800-63b, we have added an iterator to the password verifier algorithm. Rather than just storing the SHA-512 hash of the password:
H = SHA512(salt, password)
In this update, we create an iterated hash:
H0 = SHA512(salt, 0, password);
for n=1 to 100,000
Hn = SHA512(salt, Hn-1, password)
NIST SP 800-63b requires a minimum of 10,000 iterations, but we chose an iteration count of 100,000 because the Microsoft SDL minimum is 100,000. The iteration means that an attacker attempting a brute-force attack using the password verifier is slowed down by a factor of 100,000. We want to stress that there are no known weaknesses in the current SQL Server authenticator implementation; we are adding this update at the request of finance customers to help achieve their compliance goals.
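Conceptually, the iterated construction can be expressed in a few lines of C#. This is purely an illustration of the scheme described above (the byte layout, string encoding, and helper names are assumptions); it is not SQL Server's actual implementation:
using System;
using System.Security.Cryptography;
using System.Text;
static byte[] IteratedHash(byte[] salt, string password, int iterations = 100000)
{
    byte[] pwd = Encoding.Unicode.GetBytes(password);
    using (var sha = SHA512.Create())
    {
        // H0 = SHA512(salt, 0, password)
        byte[] h = sha.ComputeHash(Concat(salt, new byte[] { 0 }, pwd));
        // Hn = SHA512(salt, Hn-1, password), repeated 100,000 times
        for (int n = 1; n <= iterations; n++)
        {
            h = sha.ComputeHash(Concat(salt, h, pwd));
        }
        return h;
    }
}
static byte[] Concat(params byte[][] parts)
{
    int length = 0;
    foreach (var p in parts) length += p.Length;
    var buffer = new byte[length];
    int offset = 0;
    foreach (var p in parts)
    {
        Buffer.BlockCopy(p, 0, buffer, offset, p.Length);
        offset += p.Length;
    }
    return buffer;
}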
Specifics
Let’s look at some details.
This feature is off by default, and it is only available in SQL Server 2022 CU12 and later. There are no plans to back-port this to older versions of the database engine.
Next, and this is critically important. This changes the on-disk format for the password verifier. This has serious repercussions because we cannot go back to the older version unless you reset users’ passwords or perform a full restore. This update only changes a password verifier to the new algorithm on password change. The database engine cannot update the password verifier automatically because the engine does not know the password.
So how do you enable this if you want to test it out?
First, make sure you’re using a non-production database. The code is production quality, but we want to limit its use until we get more coverage.
Second, make a backup of the database and test that it restores. Remember, there is no going back to the old authenticator other than a password change or a full restore. Reread this point one more time.
Third, using an account that is a sysadmin, you can enable this functionality using the following:
DBCC TRACEON (4671, -1)
DBCC TRACEON (4671)
You can check for this trace flag with:
DBCC TRACESTATUS(4671)
Below is a sample flow testing the functionality.
CREATE LOGIN JimBob WITH PASSWORD = '<insert strong pwd>'
DBCC TRACEON (4671,-1)
DBCC TRACEON (4671)
DBCC TRACESTATUS(4671)
CREATE LOGIN MaryJane WITH PASSWORD = '<insert strong pwd>'
GO
SELECT name, password_hash
FROM sys.sql_logins
WHERE name NOT LIKE '##MS%'
You will see output like this:
sa 0x020097E55B1EC90563A023D6785F4ADC–snip–33A34CB391510CE532B
JimBob 0x0200B378592D3BCFF9B2CD667380D66D–snip–78619048510C10C342E
MaryJane 0x0300D1FB26002DEE6615D02BD9F9F425–snip–0A9F559F3D16EF61A84
Notice the first byte of the hash for JimBob and sa is 0x02, this is the algorithm version number. If you change the password for a login after enabling the new algorithm, the version number changes to 0x03 and this is what we see for MaryJane. The use of a version number at the start of the hash output is a fantastic example of cryptographic agility. Always Encrypted in SQL Server and Azure SQL DB also supports cryptographic agility, as does Microsoft Office. Crypto-agility is a security best practice that allows applications to adopt new cryptographic algorithms over time without breaking existing applications. You can learn more about the topic here. This is especially important as we face a world of post-quantum cryptography.
The data size on disk is the same for v2 and v3 password verifiers because the salt and resulting hash are the same size.
Now, if we update JimBob’s password:
ALTER LOGIN JimBob WITH PASSWORD = '<insert strong pwd>'
We will see his password authenticator is something like this:
JimBob 0x0300A269192D2BCEF9A20016729FDEAD–snip–0AFFF124197B1065718
The new v3 algorithm is now applied to his account.
If you want to go back to using the old v2 algorithm, then you can turn the flag off:
DBCC TRACEOFF (4671,-1)
DBCC TRACEOFF (4671)
DBCC TRACESTATUS(4671)
Turning the flag off does not change any existing password authenticators: JimBob and MaryJane can still log in, and the SQL engine will use the v3 algorithm for them. However, if either of the two users changes their password, it will revert to v2 because the feature is turned off.
Miscellany
If you want to see what a password hash looks like, you can use PWDENCRYPT and you can compare a given hash with the hash of an existing password with PWDCOMPARE. The documentation indicates that HASHBYTES should be used rather than PWDENCRYPT, but this is not correct as HASHBYTES is a general-purpose hash function and not password-specific.
Finally, as noted, to change a password either to roll forward to v3 or roll back to v2, you can use ALTER LOGIN xxxx WITH PASSWORD = 'yyyy', where xxxx is the login name and yyyy is the password, and you can add MUST_CHANGE if needed.
One last and crucial point, if you undo the update then you will have to change all passwords that use the v3 algorithm as the engine before this update cannot deal with the v3 data format.
Summary
Listening to our customers, we are excited to add this new security feature to SQL Server 2022 CU12 and later. If you want to test this, please understand the serious implications of making this change; the updated algorithm changes the on-disk format and there is no going back other than a full database restore or resetting the password for affected logins after turning off the traceflag.
We’d love to hear your feedback.
A big thanks to Steven Gott for all his hard work making this happen!
Maximizing your role as a Viva Glint Administrator
Are you a Viva Glint administrator looking for information on how to build your survey program? Are you looking for People Science best practices or to learn from fellow administrators such as yourself? As an administrator, you have access to a wealth of resources to help you take full advantage of everything Viva Glint has to offer.
Check out this page for more information on:
Deploying Viva Glint
Documentation – Find technical articles and guidance to help you through your Viva Glint journey
Badging – Complete our recommended learning paths and modules to earn a digital badge and showcase your achievements
Ask the Experts sessions – Join our monthly live sessions to learn best practices and get your questions answered about the product
Learning with us
Events – Attend our Viva Glint events that cover a range of topics sourced from our customer feedback
Blogs – Stay up-to-date with thought leadership, newsletters, and program launches
Connecting with us
Newsletter – Register for this recurring email that highlights new announcements, features, and Viva Glint updates
Product council – Be a part of a community that provides our team with feedback on how we can improve our products and services
Connecting with others
Learning Circles – Participate in these diverse groups to share knowledge, experiences, and challenges with fellow Viva Glint customers
Cohorts – Join these groups to be connected to like-minded Viva Glint customers and hear topic-based best practices from our Viva Glint team (Coming Soon)
Community – Engage with the Viva Community to ask questions, share ideas, and learn best practices through this online forum
Be sure to leverage these resources to make the most of your role as an administrator and empower your organization to achieve more with Viva Glint!
Get Verifiable Credentials
A new Account Verification will be required to access Partner Center programs. All participants in the Windows Hardware Program on Partner Center will be required to complete Verifiable Credentials or they will be blocked from accessing the Hardware Program. Verifiable Credentials (VCs) is an open standard for digital credentials.
The Primary Contact for your account will be the initial individual required to obtain Verifiable Credentials. We ask that you take action now to confirm the Primary Contact information on your account in Partner Center is accurate and current.
Please note that the information you provide to obtain Verifiable Credentials will be verified by Au10tix Ltd, an industry leader engaged to support this effort. You will not need to share the information with Microsoft.
To complete your Verifiable Credential, you will need to install the Microsoft Authenticator app on your mobile device and follow the instructions. Microsoft Authenticator will store your Verifiable Credential. You will need a current and valid government ID in the physical form. The name on the government ID must match the Partner Center Primary Contact. For security purposes there are fixed time limits in the steps to obtain Verifiable Credentials. You may need thirty (30) minutes to complete the steps.
When you receive the Verifiable Credentials request from Partner Center, we urge you to complete the steps as soon as possible to avoid losing access to the Hardware Program. If you have any questions or issues, please reach out to our support team. For details on how to contact support, see Get support for Partner Center dashboard issues – Windows drivers | Microsoft Learn.
Thank you for your cooperation.
Lesson Learned #479:Loading Data from Parquet to Azure SQL Database using C# and SqlBulkCopy
In the realm of big data and cloud computing, efficiently managing and transferring data between different platforms and formats is paramount. Azure SQL Database, a fully managed relational database service by Microsoft, offers robust capabilities for handling large volumes of data. However, when it comes to importing data from Parquet files, a popular columnar storage format, Azure SQL Database’s native BULK INSERT command does not directly support this format. This article presents a practical solution using a C# console application to bridge this gap, leveraging the Microsoft.Data.SqlClient.SqlBulkCopy class for high-performance bulk data loading.
Understanding Parquet and Its Significance
Parquet is an open-source, columnar storage file format optimized for use with big data processing frameworks. Its design is particularly beneficial for complex nested data structures and efficient data compression and encoding schemes, making it a favored choice for data warehousing and analytical processing tasks.
The Challenge with Direct Data Loading
Azure SQL Database's BULK INSERT command is a powerful tool for importing large volumes of data quickly. However, it does not natively support the Parquet format, so Parquet data cannot be loaded with BULK INSERT directly.
A C# Solution: Bridging the Gap
To overcome this limitation, we can develop a C# console application that reads Parquet files, processes the data, and utilizes SqlBulkCopy for efficient data transfer to Azure SQL Database. This approach offers flexibility and control over the data loading process, making it suitable for a wide range of data integration scenarios.
Step 1: Setting Up the Environment
Before diving into the code, ensure your development environment is set up with the following:
.NET Core or .NET Framework compatible with Microsoft.Data.SqlClient.
Microsoft.Data.SqlClient package installed in your project.
Parquet.Net package to facilitate Parquet file reading.
Step 2: Create the target table in Azure SQL Database.
create table parquet (id int, city varchar(30))
Step 3: Create Parquet File in C#
The following C# code creates a Parquet file with two columns, id and city.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using Parquet;
using Parquet.Data;
using Parquet.Schema;
namespace ManageParquet
{
class ClsWriteParquetFile
{
public async Task CreateFile(string filePath)
{
var schema = new ParquetSchema( new DataField<int>("id"), new DataField<string>("city"));
var idColumn = new DataColumn( schema.DataFields[0], new int[] { 1, 2 });
var cityColumn = new DataColumn( schema.DataFields[1], new string[] { "L", "D" });
using (Stream fileStream = System.IO.File.OpenWrite(filePath))
{
using (ParquetWriter parquetWriter = await ParquetWriter.CreateAsync(schema, fileStream))
{
parquetWriter.CompressionMethod = CompressionMethod.Gzip;
parquetWriter.CompressionLevel = System.IO.Compression.CompressionLevel.Optimal;
// create a new row group in the file
using (ParquetRowGroupWriter groupWriter = parquetWriter.CreateRowGroup())
{
await groupWriter.WriteColumnAsync(idColumn);
await groupWriter.WriteColumnAsync(cityColumn);
}
}
}
}
}
}
Step 4: Reading Parquet Files in C#
The first part of the solution involves reading the Parquet file. We leverage the ParquetReader class to access the data stored in the Parquet format.
In the following source code you will find two methods: the first method only reads the Parquet file, and the second reads the data and saves it into a table in Azure SQL Database.
using Parquet;
using Parquet.Data;
using Parquet.Schema;
using System;
using System.Data;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace ManageParquet
{
class ClsReadParquetFile
{
public async Task ReadFile(string filePath)
{
ParquetReader parquetReader = await ParquetReader.CreateAsync(filePath);//, options);
ParquetSchema schema = parquetReader.Schema;
Console.WriteLine("Schema Parquet file:");
foreach (var field in schema.Fields)
{
Console.WriteLine($"{field.Name}");
}
for (int i = 0; i < parquetReader.RowGroupCount; i++)
{
using (ParquetRowGroupReader groupReader = parquetReader.OpenRowGroupReader(i))
{
foreach (DataField field in schema.GetDataFields())
{
Parquet.Data.DataColumn column = await groupReader.ReadColumnAsync(field);
Console.WriteLine($"Column Data of '{field.Name}':");
foreach (var value in column.Data)
{
Console.WriteLine(value);
}
}
}
}
}
public async Task ReadFileLoadSQL(string filePath)
{
ParquetReader parquetReader = await ParquetReader.CreateAsync(filePath);//, options);
//ParquetSchema schema = parquetReader.Schema;
var schema = new ParquetSchema(new DataField<int>("id"), new DataField<string>("city"));
DataTable dataTable = new DataTable();
dataTable.Columns.Add("id", typeof(int));
dataTable.Columns.Add("city", typeof(string));
for (int i = 0; i < parquetReader.RowGroupCount; i++)
{
using (ParquetRowGroupReader groupReader = parquetReader.OpenRowGroupReader(i))
{
var idColumn = await groupReader.ReadColumnAsync(schema.DataFields[0]);
var cityColumn = await groupReader.ReadColumnAsync(schema.DataFields[1]);
for (int j = 0; j < idColumn.Data.Length; j++)
{
var row = dataTable.NewRow();
row["id"] = idColumn.Data.GetValue(j);
row["city"] = cityColumn.Data.GetValue(j);
dataTable.Rows.Add(row);
}
}
}
using (SqlConnection dbConnection = new SqlConnection("Server=tcp:servername.database.windows.net,1433;User Id=MyUser;Password=MyPassword!;Initial Catalog=MyDb;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest"))
{
await dbConnection.OpenAsync();
using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
{
s.DestinationTableName = "Parquet";
foreach (System.Data.DataColumn column in dataTable.Columns)
{
s.ColumnMappings.Add(column.ColumnName, column.ColumnName);
}
await s.WriteToServerAsync(dataTable);
}
}
}
}
}
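For completeness, a small console entry point wiring the two classes together might look like the following. The file path is hypothetical, and this usage sketch is not part of the original post:
using System.Threading.Tasks;
namespace ManageParquet
{
    class Program
    {
        static async Task Main(string[] args)
        {
            string filePath = @"C:\temp\cities.parquet"; // hypothetical path
            var writer = new ClsWriteParquetFile();
            await writer.CreateFile(filePath); // Step 3: create the sample Parquet file
            var reader = new ClsReadParquetFile();
            await reader.ReadFile(filePath); // Step 4: print the schema and column data
            await reader.ReadFileLoadSQL(filePath); // Step 4: bulk copy the rows into the Parquet table
        }
    }
}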
Conclusion
This C# console application demonstrates an effective workaround for loading data from Parquet files into Azure SQL Database, circumventing the limitations of the BULK INSERT command. By leveraging the .NET ecosystem and the powerful SqlBulkCopy class, developers can facilitate seamless data integration processes, enhancing the interoperability between different data storage formats and Azure SQL Database.
Announcing the 3-year retirement of Windows Server 2022 on Azure Kubernetes Service
Windows Server 2025 and the Windows Server Annual Channel offer a comprehensive array of enhanced features, heightened security measures, and improved overall performance, and with image portability customers can now run Windows Server 2022 based containers on these new versions. To maximize the experience for customers, Windows Server 2025/Annual Channel will not only provide the most efficient versions of Windows Server yet, but also streamline the upgrade process. In pursuit of an enhanced user experience and an unwavering commitment to safety and reliability, we will be retiring Windows Server 2022 on Azure Kubernetes Service (AKS) in three years' time.
What does this mean for me?
Windows Server 2022 will be retiring on AKS in March 2027. You should prepare to upgrade to a supported Windows Server version before March 2027.
How can I upgrade my Windows nodepools?
You can follow the Windows Server OS migration process outlined in the AKS documentation to upgrade to Windows Server 2025 or Annual Channel when they're released on AKS. Portability is a key feature available from Windows Server 2025/Annual Channel onwards: the host and container image no longer need to be upgraded in tandem, and older images can now run on newer hosts (for example, running a Windows Server 2022 image on a Windows Server 2025 host).
Kubernetes version 1.34 will be the final version where Windows Server 2022 is supported on AKS. When Kubernetes version 1.34 is at the end of life on AKS, Windows Server 2022 will no longer be supported. Upgrades to Kubernetes 1.35 on AKS will be blocked if there are any remaining Windows Server 2022 node pools in the cluster.
Windows Server 2025 on AKS will offer numerous advantages and enhancements. At a high level, Windows Server 2025 introduces enhanced performance and reliability and improved networking support, including density improvements. Learn more about Windows Server 2025 from our recent announcements at Containers – Microsoft Community Hub.
Our commitment centers on customer satisfaction and success, guiding our efforts to provide ample resources and time for upgrading to our premier operating system. Our aim is to simplify the upgrade process, enabling customers to fully leverage the benefits of Windows Server 2025/Annual Channel.
Upcoming preview of Microsoft Office LTSC 2024
Microsoft 365 offers the cloud-backed apps, security, and storage that customers worldwide rely on to achieve more in a connected world – and lays a foundation for leveraging generative AI to go even further. However, we know that some customers have niche yet important scenarios that require a truly long-term servicing channel: regulated devices that cannot accept feature updates for years at a time, process control devices on the manufacturing floor that are not connected to the internet, and specialty systems like medical testing equipment that run embedded apps that must stay locked in time. For these special cases, Microsoft continues to offer and support the Office Long-Term Servicing Channel (LTSC). Today we are pleased to announce that the commercial preview of the next Office LTSC release – Office LTSC 2024 – will begin next month, with general availability to follow later this year.
About this release
Like earlier perpetual versions of Office, Office LTSC 2024 will include only a subset of the value found in Microsoft 365 Apps, building on the features included in past releases. New features for Office LTSC 2024 include new meeting creation options and search enhancements in Outlook; dozens of new Excel features and functions, including Dynamic Charts and Arrays; and improved performance, security, and accessibility. Office LTSC 2024 will not ship with Microsoft Publisher, which is being retired, or with the Microsoft Teams app, which is available to download separately.
While Office LTSC 2024 offers many significant improvements over the previous Office LTSC release, as an on-premises product it will not offer the cloud-based capabilities of Microsoft 365 Apps, like real-time collaboration; AI-driven automation in Word, Excel, and PowerPoint; or cloud-backed security and compliance capabilities that give added confidence in a hybrid world. And with device-based licensing and extended offline access, Microsoft 365 offers deployment options for scenarios like computer labs and submarines that require something other than a user-based, always-online solution. Microsoft 365 (or Office 365) is also required to subscribe to Microsoft Copilot for Microsoft 365; as a disconnected product, Office LTSC does not qualify.
As with previous releases, Office LTSC 2024 will still be a device-based “perpetual” license, supported for five years under the Fixed Lifecycle Policy, in parallel with Windows 11 LTSC, which will also launch later this year. And because we know that many customers deploy Office LTSC on only a subset of their devices, we will continue to support the deployment of both Office LTSC and Microsoft 365 Apps to different machines within the same organization using a common set of deployment tools.
Office LTSC is a specialty product that Microsoft has committed to maintaining for use in exceptional circumstances, and the 2024 release provides substantial new feature value for those scenarios. To support continued innovation in this niche space, Microsoft will increase the price of Office LTSC Professional Plus, Office LTSC Standard, Office LTSC Embedded, and the individual apps by up to 10% at the time of general availability. And, because we are asked at the time of release if there will be another one, I can confirm our commitment to another release in the future.
We will provide additional information about the next version of on-premises Visio and Project in the coming months.
Office 2024 for consumers
We are also planning to release a new version of on-premises Office for consumers later this year: Office 2024. Office 2024 will also be supported for five years with the traditional “one-time purchase” model. We do not plan to change the price for these products at the time of the release. We will announce more details about new features included in Office 2024 closer to general availability.
Embracing the future of work
The future of work in an AI-powered world is on the cloud. In most customer scenarios, Microsoft 365 offers the most secure, productive, and cost-effective solution, and positions customers to unlock the transformative power of AI with Microsoft Copilot. Especially as we approach the end of support for Office 2016 and Office 2019 in October 2025, we encourage customers still using these solutions to transition to a cloud subscription that suits their needs as a small business or a larger organization. And for scenarios where that is not possible – where a disconnected, locked-in-time solution is required – this new release reflects our commitment to supporting that need.
FAQ
Q: Will the next version of Office have a Mac version?
A: Yes, the next version of Office will have both Windows and Mac versions for both commercial and consumer customers.
Q: Will the next version of Office be supported on Windows 10?
A: Yes, Office LTSC 2024 will be supported on Windows 10 and Windows 10 LTSC devices (with the exception of Arm devices, which will require Windows 11).
Q: Will the next version support both 32- and 64-bit?
A: Yes, the next version of Office will ship in both 32- and 64-bit versions.
Microsoft Tech Community – Latest Blogs –Read More
Cumulative Update #12 for SQL Server 2022 RTM
The 12th cumulative update release for SQL Server 2022 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates.
To learn more about the release or servicing model, please visit:
CU12 KB Article: https://learn.microsoft.com/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate12
Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog for more details on Modern Servicing Model for SQL Server.
Microsoft® SQL Server® 2022 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?id=105013
Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates
Microsoft Tech Community – Latest Blogs –Read More
Known issue: iOS/iPadOS ADE users incorrectly redirected to Intune Company Portal website
We recently identified an issue where users with iOS/iPadOS devices enrolling with Automated Device Enrollment (ADE) are unable to sign in to the Intune Company Portal app for iOS/iPadOS. Instead, they’re incorrectly prompted to navigate to the Intune Company Portal website to enroll their device, and they are unable to complete enrollment. This issue occurs for newly enrolled ADE users who are targeted with an "Account driven user enrollment" or "Web based device enrollment" enrollment profile when just in time (JIT) registration has not been set up.
While we’re actively working on resolving this issue, we always recommend that organizations using iOS/iPadOS ADE set up JIT registration for their devices for the best and most secure user experience. For more information, review Set up just in time registration.
Stay tuned to this blog for updates on the fix! If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
Microsoft Tech Community – Latest Blogs –Read More
How to monitor the performance of your on-prem & multi-cloud SQL Servers w/ Azure Arc | Data Exposed
Learn how to use Azure Arc to monitor key performance metrics for your SQL Servers located in your data center, at the edge, or even in other public clouds.
Resources:
https://aka.ms/ArcSQLMonitoring
https://aka.ms/ArcDocs
View/share our latest episodes on Microsoft Learn and YouTube!
Microsoft Tech Community – Latest Blogs –Read More
Last chance to nominate for POTYA!
This is your chance to be recognized as part of the Microsoft Partner of the Year Awards! Nominate before the April 3 deadline.
Celebrated annually, these awards recognize the incredible impact that Microsoft partners are delivering to customers and celebrate the outstanding successes and innovations across Solution Areas, industries, and key areas of impact, with a focus on strategic initiatives and technologies. Partners of all types, sizes, and geographies are encouraged to self-nominate. This is an opportunity for partners to be recognized on a global scale for their innovative solutions built using Microsoft technologies.
In addition to recognizing partners for the impact in our award categories, we also recognize partners from over 100 countries/regions around the world as part of the Country/Region Partner of the Year Awards. In 2024, we’re excited to offer additional opportunities to recognize partner impact through new awards – read our blog to learn more and download the official guidelines for specific eligibility requirements.
Find resources on how to write a great entry, FAQs, and more on the Partner of the Year Awards website.
Nominate here!
Microsoft Tech Community – Latest Blogs –Read More
Mastering Azure Cost Optimization – A Comprehensive Guide
Introduction
Hi folks! My name is Felipe Binotto, Cloud Solution Architect, based in Australia.
I understand, and you probably do as well, that cost savings in the cloud is a very hot topic for any organization.
Believe it or not, there is a huge number of people (including me) doing their best to help you get the best value for your money. This is imperative for us.
The plan here is to highlight the most commonly used cost-savings artifacts as well as the most effective cost-savings actions you can take.
I will give you some personal tips, but most of the content already exists and therefore my objective is to have a consolidated article where you can find the best and latest (as of March 2024) cost optimization content.
The content comes from teams ranging from product groups to Cloud Solution Architects, covering everything from the perspective of the people actively working on our products to that of the people deploying and implementing them in the field.
Key Strategies for Azure Cost Optimization
There are a huge number of areas where cost optimization can be applied, ranging from active remediation to simply understanding and visualizing your spending.
I don’t intend to cover every single possible way to achieve cost savings, but here is what I will cover:
Hybrid Benefit (both Windows and Linux)
Reservations (for several resource types)
Savings Plan
Idle Resources
SKUs
Resizing
Logs
Workbooks
Dashboards
FinOps
In my experience, if you invest some time and take a good look at the above list, you will be able to achieve good savings and be able to invest those savings in more innovative (or maybe security) initiatives in your company.
Hybrid Benefit
Azure Hybrid Benefit is a cost-saving feature that allows you to use your existing on-premises Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses to run workloads in the cloud at a reduced cost. This benefit now also extends to Linux (RHEL and SLES) and Arc-enabled AKS clusters. You can also leverage Hybrid Benefit on Azure Stack HCI. It’s an effective way to reduce costs for organizations that have already invested in Microsoft or Linux licenses and are looking to migrate or extend their infrastructure to Azure.
@arthurclares has a nice blog post on this.
For information on Hybrid Benefit for Arc-enabled AKS clusters, check this page.
For more information on Hybrid Benefit for Azure Stack HCI, check this page.
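As a concrete illustration, here is a minimal sketch of applying the Windows Server Hybrid Benefit to an existing VM with the Az PowerShell module; the resource group and VM names are placeholders, and you should confirm your license eligibility before applying the benefit.
# Requires the Az module and an authenticated session (Connect-AzAccount).
$vm = Get-AzVM -ResourceGroupName "rg-demo" -Name "vm-demo"
# "Windows_Server" applies the Hybrid Benefit; set it back to "None" to return to pay-as-you-go licensing.
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "rg-demo" -VM $vm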
Reservations and Savings Plans
Azure Reservations help you save money by committing to one-year or three-year plans for multiple products. This includes virtual machines, SQL databases, AVS, and other Azure resources. By pre-paying, you can secure significant discounts over the pay-as-you-go rates. Reservations are ideal for predictable workloads where you have a clear understanding of your long-term resource needs.
The Azure Savings Plan is a flexible pricing model that offers lower prices on Azure resources in exchange for committing to a consistent amount of usage (measured in $/hour) for a one or three-year period. Unlike Reservations, which apply to specific resources, the Savings Plan applies to usage across any eligible services, providing more flexibility in how you use Azure while still benefiting from cost savings.
@BrandonWilson already has a nice blog post on this.
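To see why commitments pay off for steady workloads, here is an illustrative back-of-the-envelope calculation in PowerShell; the hourly rates are made up, so substitute the actual pricing for your own SKU and region.
# Hypothetical rates: pay-as-you-go vs. the effective rate under a one-year commitment.
$paygPerHour      = 0.20
$committedPerHour = 0.13
$hoursPerYear     = 8760
$paygYearly      = $paygPerHour * $hoursPerYear
$committedYearly = $committedPerHour * $hoursPerYear
$savingsPercent  = [math]::Round((1 - $committedYearly / $paygYearly) * 100, 1)
"PAYG {0:N0}/yr vs committed {1:N0}/yr -> {2}% savings" -f $paygYearly, $committedYearly, $savingsPercent
Keep in mind that the commitment bills whether or not the resource runs, so this only works in your favor for workloads with predictable, sustained usage.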
Idle Resources
Identifying and managing idle resources is a straightforward way to optimize costs. Resources such as unused virtual machines, excess storage accounts, or idle database instances can incur costs without providing value. Implementing monitoring and automation to shut down or scale back these resources during off-peak hours can lead to significant savings.
@anortoll has already blogged about this and the post has workbooks that can be used to locate those idle resources.
@Dolev_Shor has also made a similar workbook available here.
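If you want a quick starting point before diving into the workbooks, here is a minimal, read-only sketch using the Az module that surfaces a few common classes of idle resources; review the results carefully before deallocating or deleting anything.
# Managed disks that are not attached to any VM (still billed for provisioned storage).
Get-AzDisk | Where-Object { $_.DiskState -eq 'Unattached' } | Select-Object Name, ResourceGroupName, DiskSizeGB
# Public IP addresses that are not associated with any resource.
Get-AzPublicIpAddress | Where-Object { -not $_.IpConfiguration } | Select-Object Name, ResourceGroupName
# VMs that are stopped but not deallocated, so compute charges continue to accrue.
Get-AzVM -Status | Where-Object { $_.PowerState -eq 'VM stopped' } | Select-Object Name, ResourceGroupName, PowerState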
SKUs
Selecting the right SKUs for your Azure resources is crucial for cost optimization. Azure offers a variety of SKUs for services like storage, databases, and compute, each with different pricing and capabilities. Choosing the most appropriate SKU for your workload requirements can optimize performance and cost efficiency.
A classic example is the oversizing of virtual machines. However, there are other resources to consider too. For instance, you may deploy an Azure Firewall with the Premium SKU, but do you really need premium features such as TLS inspection or intrusion detection? Another example is App Service plans, which on the v3 SKU can be cheaper and enable you to buy reservations.
Here is a blog post by Diana Gao on VM right sizing.
Here is another blog post by Werner Hall on right sizing of Azure SQL Managed Instances.
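As a simple sketch of the VM case, the Az module can list the sizes available to a VM and apply a smaller one; the names and target size below are placeholders, and you should validate utilization metrics before resizing, because the operation can restart the VM.
# Sizes available for this VM on its current hardware cluster.
Get-AzVMSize -ResourceGroupName "rg-demo" -VMName "vm-demo" | Select-Object Name, NumberOfCores, MemoryInMB
# Apply the new size (the VM may be restarted as part of the operation).
$vm = Get-AzVM -ResourceGroupName "rg-demo" -Name "vm-demo"
$vm.HardwareProfile.VmSize = "Standard_D2s_v5"
Update-AzVM -ResourceGroupName "rg-demo" -VM $vm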
Logs
Analyzing logs can provide insights into resource utilization and operational patterns, helping identify opportunities for cost savings. For instance, log analysis can reveal underutilized resources that can be downsized, or inefficient application patterns that can be optimized. Azure offers various tools for log analysis, such as Azure Monitor and Log Analytics, to aid in this process.
However, logs can also be a source of high cloud spending. The key is to understand what you need to log, which types of logs you need, and strategies to minimize the cost of storing those logs.
For example, when you are ingesting logs in Log Analytics, depending on the log you are ingesting, you can configure certain tables in the Log Analytics workspace to use Basic Logs. Data in these tables has a significantly reduced ingestion charge and a limited retention period.
There’s a charge to search against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts.
Depending on the amount of data being ingested, you could also leverage commitment tiers in your workspace to save as much as 30 percent compared to the PAYG price.
Furthermore, you could also structure your logs to be archived after a period of time and save on costs.
Refer to this page to learn more about these options and to this page to learn more about best practices.
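As a sketch of the Basic Logs configuration, and assuming the table cmdlets in the Az.OperationalInsights module are available in your environment, switching a supported high-volume table to the Basic plan might look like this; the resource group, workspace, and table names are placeholders, and only certain tables support the Basic plan.
# Check the current plan and retention for the table.
Get-AzOperationalInsightsTable -ResourceGroupName "rg-demo" -WorkspaceName "law-demo" -TableName "ContainerLogV2"
# Move the table to the Basic Logs plan to reduce ingestion cost (with limited retention and query capabilities).
Update-AzOperationalInsightsTable -ResourceGroupName "rg-demo" -WorkspaceName "law-demo" -TableName "ContainerLogV2" -Plan "Basic"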
Workbooks
Azure Workbooks provide a customizable, interactive way to visualize and analyze your Azure resources and their metrics. By creating workbooks, you can gain insights into your spending patterns, resource utilization, and operational health. This can help identify inefficiencies and areas where cost optimizations can be applied.
Many workbooks are available. A few examples are:
Azure Advisor workbook
Azure Orphan Resources workbook
Azure Hybrid Benefit workbook
Dashboards
Azure Dashboards offer a unified view of your resources and their metrics, allowing for real-time monitoring of your Azure environment. Custom dashboards can be configured to focus on cost-related metrics, providing a clear overview of your spending, highlighting trends, and pinpointing areas where optimizations can be made.
You can make your own dashboards, but a few are already available, and you can customize them to your needs.
@sairashaik has made two very useful dashboards available here and here.
Another one which has been available for a long time is the Cost Management Power BI App for Enterprise Agreements.
FinOps
FinOps, or Financial Operations, is a cloud financial management practice aimed at maximizing business value by bringing financial accountability to the variable spend model of the cloud. It involves understanding and controlling cloud costs through practices like allocating costs to specific teams or projects, budgeting, and forecasting. Implementing FinOps practices helps organizations make more informed decisions about their cloud spend, ensuring alignment with business goals.
Learn how to implement your own FinOps hub leveraging the FinOps Toolkit.
Additional Resources
The following are some additional resources which provide valuable information for your Cost Optimization journey.
Advisor Cost Optimization Workbook
Cost Optimization Design Principles
Conclusion
Optimizing costs in Azure is a multifaceted endeavor that requires a strategic approach and a deep understanding of the available tools and features. By leveraging Azure’s Hybrid Benefit for both Windows and Linux, making smart use of Reservations for various resources, adopting Savings Plans, and diligently managing Idle Resources, businesses can achieve substantial cost savings.
Additionally, the careful selection of SKUs, appropriate resizing of resources, thorough analysis of logs, and effective use of Workbooks and Dashboards can further enhance cost efficiency. Lastly, embracing FinOps principles ensures that cost management is not just an IT concern but a shared responsibility across the organization, aligning cloud spending with business value. Together, these strategies form a robust framework for achieving cost optimization in Azure, enabling businesses to maximize their cloud investments and drive greater efficiency and innovation.
As always, I hope this was informative to you and thanks for reading.
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Microsoft Tech Community – Latest Blogs –Read More