Category Archives: Microsoft
Lesson Learned #481: Query Performance Analysis Tips
When working with databases, high resource usage or a query timing out often points to one of the most common causes of performance loss: out-of-date statistics on the tables involved in the query, missing indexes, or excessive blocking. For this reason, this article collects elements that can help us determine what might be happening with our query.
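Before drilling into statistics, a quick way to rule excessive blocking in or out is to look at what is currently blocked while the slow query is running. The following is a minimal sketch that relies only on built-in DMVs; adapt it to your environment:
-- List requests that are currently blocked by another session,
-- together with the statement they are running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;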
CommandTimeout and Obtaining Statistics
The inception point of our exploration is the creation of a stored procedure, sp_AnalyzeQueryStatistics. This procedure is designed to take a SQL query as input, specified through the @SQLQuery parameter, and dissect it to unveil the underlying schema and tables it interacts with.
Crafting sp_AnalyzeQueryStatistics: The core functionality of this procedure leverages the sys.dm_exec_describe_first_result_set dynamic management function. This invaluable tool provides a window into the query's anatomy, pinpointing the schema and tables entwined in its execution path.
CREATE PROCEDURE sp_AnalyzeQueryStatistics
@SQLQuery NVARCHAR(MAX)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @TableNames TABLE (
SourceSchema NVARCHAR(128),
TableName NVARCHAR(128)
);
INSERT INTO @TableNames (SourceSchema, TableName)
SELECT DISTINCT
source_schema AS SourceSchema,
source_table AS TableName
FROM
sys.dm_exec_describe_first_result_set(@SQLQuery, NULL, 1) sp
WHERE sp.error_number IS NULL AND sp.source_table IS NOT NULL
SELECT
t.TableName,
s.name AS StatisticName,
STATS_DATE(s.object_id, s.stats_id) AS LastUpdated,
sp.rows,
sp.rows_sampled,
sp.modification_counter
FROM
@TableNames AS t
INNER JOIN
sys.stats AS s ON s.object_id = OBJECT_ID(QUOTENAME(t.SourceSchema) + '.' + QUOTENAME(t.TableName))
CROSS APPLY
sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp;
END;
Diving Deeper with Table Statistics:
Identification is just the precursor; the crux lies in scrutinizing the statistics of these identified tables. By employing sys.stats and sys.dm_db_stats_properties, we delve into the statistical realm of each table, gleaning insights into data distribution, sampling rates, and the freshness of the statistics.
Informed Decision-Making:
This statistical audit empowers us with the knowledge to make data-driven decisions. Should rows_sampled deviate significantly from the total row count, or the statistics' last update date be months old (for example, two months), it's a clarion call for action—be it updating the statistics or reevaluating index strategies.
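If the audit flags stale or poorly sampled statistics, a targeted refresh is usually the first remediation step. The snippet below is a minimal sketch; dbo.MyTable is a placeholder for whichever table sp_AnalyzeQueryStatistics reported:
-- Refresh all statistics on the flagged table with a full scan
-- (dbo.MyTable is a placeholder name).
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
-- Or refresh every out-of-date statistic in the database using the default sampling
EXEC sp_updatestats;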
C# Implementation:
using System;
using System.Diagnostics;
using System.Data;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace HighCPU
{
class Program
{
private static string ConnectionString = "Server=tcp:myservername.database.windows.net,1433;User Id=MyUser;Password=MyPassword;Initial Catalog=MyDb;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
private static string Query = "SELECT * FROM [MSxyzTest].[_x_y_z_MS_HighDATAIOBlocks] ORDER BY NEWID() DESC";
static async Task Main(string[] args)
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Failed to establish a database connection.");
return;
}
await ExecuteQueryWithRetriesAsync(connection, 5, 1000, 100000,2,true);
connection.Close();
}
private static async Task<SqlConnection> EstablishConnectionWithRetriesAsync(int maxRetries, int initialDelay)
{
SqlConnection connection = null;
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
connection = new SqlConnection(ConnectionString);
await connection.OpenAsync();
Console.WriteLine("Connection established successfully.");
return connection;
}
catch (SqlException ex)
{
Console.WriteLine($"Failed to establish connection: {ex.Message}. Attempt {attempt} of {maxRetries}.");
if (attempt == maxRetries)
{
Console.WriteLine("Maximum number of connection attempts reached. The application will terminate.");
return null;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next connection attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
return null;
}
private static async Task ExecuteQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout, Boolean bReviewQuery = false)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout*attempt);
try
{
using (SqlCommand command = new SqlCommand(Query, connection))
{
command.CommandTimeout = CommandSQLTimeout*attempt;
Stopwatch stopwatch = Stopwatch.StartNew();
await command.ExecuteNonQueryAsync(cts.Token);
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
if (bReviewQuery)
{ await ReviewQuery(); }
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
private static async Task ReviewQuery()
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Review Query - Failed to establish a database connection.");
return;
}
await ReviewQueryWithRetriesAsync(connection, 5, 1000, 10000, 15);
connection.Close();
}
private static async Task ReviewQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout, Boolean bReviewQuery = false)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout * attempt);
try
{
using (SqlCommand command = new SqlCommand("sp_AnalyzeQueryStatistics", connection))
{
command.CommandTimeout = CommandSQLTimeout * attempt;
command.CommandType = CommandType.StoredProcedure;
Stopwatch stopwatch = Stopwatch.StartNew();
command.Parameters.Add(new SqlParameter("@SQLQuery", SqlDbType.NVarChar, -1));
command.Parameters["@SQLQuery"].Value = Query;
using (SqlDataReader reader = await command.ExecuteReaderAsync())
{
while (await reader.ReadAsync())
{
Console.WriteLine("TableName: " + reader["TableName"].ToString());
Console.WriteLine("StatisticName: " + reader["StatisticName"].ToString());
Console.WriteLine("LastUpdated: " + reader["LastUpdated"].ToString());
Console.WriteLine("Rows: " + reader["Rows"].ToString());
Console.WriteLine("RowsSampled: " + reader["Rows_Sampled"].ToString());
Console.WriteLine("ModificationCounter: " + reader["Modification_Counter"].ToString());
Console.WriteLine("-----------------------------------");
}
}
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
if (bReviewQuery)
{ }
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
}
}
The Bigger Picture
This initial foray into CommandTimeout and statistics is merely the tip of the iceberg. It sets the stage for a broader discourse on query performance, where each element—from indexes to execution plans—plays a crucial role. Our series aims to arm you with the knowledge and tools to not just react to performance issues but to anticipate and mitigate them proactively, ensuring your databases are not just operational but optimized for efficiency and resilience.
Stay tuned as we continue to peel back the layers of SQL performance tuning, offering insights, strategies, and practical advice to elevate your database management game.
Lesson Learned #480: Navigating Query Cancellations with Azure SQL Database
In a recent support case, our customer faced an intriguing issue where a query execution in a .NET application was unexpectedly canceled during asynchronous operations against Azure SQL Database. This experience highlighted the nuances of handling query cancellations, which could stem from either a CommandTimeout or a CancellationToken. Through this concise article, I aim to elucidate these two cancellation scenarios, alongside strategies for managing SQL errors, ensuring connection resilience through retries, and measuring query execution time. The accompanying code serves as a practical guide, demonstrating how to adjust timeouts dynamically in an attempt to successfully complete a query, should it face cancellation due to timeout constraints. This narrative not only shares a real-world scenario but also provides actionable insights for developers looking to fortify their .NET applications interacting with Azure SQL Database.
Introduction:
Understanding and managing query cancellations in asynchronous database operations are critical for maintaining the performance and reliability of .NET applications. This article stems from a real-world support scenario where a customer encountered unexpected query cancellations while interacting with Azure SQL Database. The issue brings to light the importance of distinguishing between cancellations caused by CommandTimeout and those triggered by CancellationToken, each requiring a distinct approach to error handling and application logic.
Cancellations: CommandTimeout vs. CancellationToken:
In asynchronous database operations, two primary types of cancellations can occur: one due to the command’s execution time exceeding the CommandTimeout limit, and the other due to a CancellationToken being invoked. Understanding the difference is crucial, as each scenario demands specific error handling strategies. A CommandTimeout cancellation typically indicates that the query is taking longer than expected, possibly due to database performance issues or query complexity. On the other hand, a cancellation triggered by a CancellationToken may be due to application logic deciding to abort the operation, often in response to user actions or to maintain application responsiveness.
Error Handling and Connection Resilience:
Errors during query execution, such as syntax errors or references to non-existent database objects, necessitate immediate attention and are not suitable for retry logic. The application must distinguish these errors from transient faults, where retry logic with exponential backoff can be beneficial. Moreover, connection resilience is paramount, and implementing a retry mechanism for establishing database connections ensures that transient network issues do not disrupt application functionality.
Measuring Query Execution Time:
Gauging the execution time of queries is instrumental in identifying performance bottlenecks and optimizing database interactions. The example code demonstrates using a Stopwatch to measure and log the duration of query execution, providing valuable insights for performance tuning.
Adaptive Timeout Strategy:
The code snippet illustrates an adaptive approach to handling query cancellations due to timeouts. By dynamically adjusting the CommandTimeout and CancellationToken timeout values upon encountering a timeout-related cancellation, the application attempts to afford the query additional time to complete in subsequent retries, where feasible.
Conclusion:
The intersection of CommandTimeout, CancellationToken, error handling, and connection resilience forms the crux of robust database interaction logic in .NET applications. This article, inspired by a real-world support case, sheds light on these critical aspects, offering a pragmatic code example that developers can adapt to enhance the reliability and performance of their applications when working with Azure SQL Database. The nuanced understanding and strategic handling of query cancellations, as discussed, are pivotal in crafting responsive and resilient .NET database applications.
Example C# code:
using System;
using System.Diagnostics;
using System.Data;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace CancellationToken
{
class Program
{
private static string ConnectionString = "Server=tcp:servername.database.windows.net,1433;User Id=MyUser;Password=MyPassword;Initial Catalog=MyDB;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest";
private static string Query = "waitfor delay '00:00:20'";
static async Task Main(string[] args)
{
SqlConnection connection = await EstablishConnectionWithRetriesAsync(3, 2000);
if (connection == null)
{
Console.WriteLine("Failed to establish a database connection.");
return;
}
await ExecuteQueryWithRetriesAsync(connection, 5, 1000, 30000,15);
connection.Close();
}
private static async Task<SqlConnection> EstablishConnectionWithRetriesAsync(int maxRetries, int initialDelay)
{
SqlConnection connection = null;
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
try
{
connection = new SqlConnection(ConnectionString);
await connection.OpenAsync();
Console.WriteLine("Connection established successfully.");
return connection;
}
catch (SqlException ex)
{
Console.WriteLine($"Failed to establish connection: {ex.Message}. Attempt {attempt} of {maxRetries}.");
if (attempt == maxRetries)
{
Console.WriteLine("Maximum number of connection attempts reached. The application will terminate.");
return null;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next connection attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
return null;
}
private static async Task ExecuteQueryWithRetriesAsync(SqlConnection connection, int maxRetries, int initialDelay, int CancellationTokenTimeout, int CommandSQLTimeout)
{
int retryDelay = initialDelay;
for (int attempt = 1; attempt <= maxRetries; attempt++)
{
using (var cts = new CancellationTokenSource())
{
cts.CancelAfter(CancellationTokenTimeout*attempt); // Set CancellationToken timeout
try
{
using (SqlCommand command = new SqlCommand(Query, connection))
{
command.CommandTimeout = CommandSQLTimeout*attempt;
Stopwatch stopwatch = Stopwatch.StartNew();
await command.ExecuteNonQueryAsync(cts.Token);
stopwatch.Stop();
Console.WriteLine($"Query executed successfully in {stopwatch.ElapsedMilliseconds} milliseconds.");
return;
}
}
catch (TaskCanceledException)
{
Console.WriteLine($"Query execution was canceled by the CancellationToken. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == -2)
{
Console.WriteLine($"Query execution was canceled due to CommandTimeout. Attempt {attempt} of {maxRetries}.");
}
catch (SqlException ex) when (ex.Number == 207 || ex.Number == 208 || ex.Number == 2627)
{
Console.WriteLine($"SQL error preventing retries: {ex.Message}");
return;
}
catch (Exception ex)
{
Console.WriteLine($"An exception occurred: {ex.Message}");
return;
}
Console.WriteLine($"Waiting {retryDelay / 1000} seconds before the next query attempt...");
await Task.Delay(retryDelay);
retryDelay *= 2;
}
}
}
}
}
Tests and Results:
In the course of addressing the query cancellation issue, we conducted a series of tests to understand the behavior under different scenarios and the corresponding exceptions thrown by the .NET application. Here are the findings:
Cancellation Prior to Query Execution:
Scenario: The cancellation occurs before the query gets a chance to execute, potentially due to reasons such as application overload or a preemptive cancellation policy.
Exception Thrown: TaskCanceledException
Internal Error Message: “A task was canceled.”
Explanation: This exception is thrown when the operation is canceled through a CancellationToken, indicating that the asynchronous task was canceled before it could begin executing the SQL command. It reflects the application’s decision to abort the operation, often to maintain responsiveness or manage workload.
Cancellation Due to CommandTimeout:
Scenario: The cancellation is triggered by reaching the CommandTimeout of SqlCommand, indicating that the query’s execution duration exceeded the specified timeout limit.
Exception Thrown: SqlException with an error number of -2
Internal Error Message: “Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.”
Explanation: This exception occurs when the query execution time surpasses the CommandTimeout value, prompting SQL Server to halt the operation. It suggests that the query may be too complex, the server is under heavy load, or there are network latency issues.
Cancellation Before CommandTimeout is Reached:
Scenario: The cancellation happens before the CommandTimeout duration is met, not due to the CommandTimeout setting but possibly due to an explicit cancellation request or an unforeseen severe error during execution.
Exception Thrown: General Exception (or a more specific exception depending on the context)
Internal Error Message: “A severe error occurred on the current command. The results, if any, should be discarded.\r\nOperation cancelled by user.”
Explanation: This exception indicates an abrupt termination of the command, potentially due to an external cancellation signal or a critical error that necessitates aborting the command. Unlike the TaskCanceledException, this may not always originate from a CancellationToken and can indicate more severe issues with the command or the connection.
Support for Iterated and Salted Hash Password Verifiers in SQL Server 2022 CU12
Introduction
We all know that as security threats evolve, we must update our defenses to mitigate newer threats. Over the last few months, some customers have asked us to strengthen the way we secure passwords in SQL Server. The most often-cited reference by customers is to comply with NIST SP 800-63b.
Currently supported versions of SQL Server and Azure SQL DB use a SHA-512 hash with a 32-bit random and unique salt. It is statistically infeasible for an attacker to deduce the password knowing just the hash and the salt. It is considerably easier for an attacker to hunt for insecure storage of database connection strings that contain database credentials than it is to break the password verifier (also called a password authenticator) used by SQL Server and Azure SQL DB. But that’s a discussion for another day and is the main reason we highly recommend using Entra ID authentication rather than using uid/pwd-based connections because Entra ID authentication manages credentials and supports access policies.
Considering NIST SP 800-63b, we have added an iterator to the password verifier algorithm. Rather than just storing the SHA-512 hash of the password:
H = SHA512(salt, password)
In this update, we create an iterated hash:
H0 = SHA512(salt, 0, password);
for n=1 to 100,000
Hn = SHA512(salt, Hn-1, password)
NIST SP 800-63b requires a minimum of 10,000 iterations, but we chose an iteration count of 100,000 because the Microsoft SDL minimum is 100,000. The iteration means that an attacker attempting a brute-force attack using the password verifier is slowed down by a factor of 100,000. We want to stress that there are no known weaknesses in the current SQL Server authenticator implementation; we are adding this update at the request of finance customers to help achieve their compliance goals.
Specifics
Let’s look at some details.
This feature is off by default, and it is only available in SQL Server 2022 CU12 and later. There are no plans to back-port this to older versions of the database engine.
Next, and this is critically important: this update changes the on-disk format of the password verifier. This has serious repercussions because we cannot go back to the older version unless you reset users' passwords or perform a full restore. This update only changes a password verifier to the new algorithm on password change. The database engine cannot update the password verifier automatically because the engine does not know the password.
So how do you enable this if you want to test it out?
First, make sure you’re using a non-production database. The code is production quality, but we want to limit its use until we get more coverage.
Second, make a backup of the database and test that it restores. Remember, there is no going back to the old authenticator other than a password change or a full restore. Reread this point one more time.
Third, using an account that is a sysadmin, you can enable this functionality using the following:
DBCC TRACEON (4671, -1)
DBCC TRACEON (4671)
You can check for this trace flag with:
DBCC TRACESTATUS(4671)
Below is a sample flow testing the functionality.
CREATE LOGIN JimBob WITH PASSWORD = '<insert strong pwd>'
DBCC TRACEON (4671,-1)
DBCC TRACEON (4671)
DBCC TRACESTATUS(4671)
CREATE LOGIN MaryJane WITH PASSWORD = '<insert strong pwd>'
GO
SELECT name, password_hash
FROM sys.sql_logins
WHERE name NOT LIKE '##MS%'
You will see output like this:
sa 0x020097E55B1EC90563A023D6785F4ADC–snip–33A34CB391510CE532B
JimBob 0x0200B378592D3BCFF9B2CD667380D66D–snip–78619048510C10C342E
MaryJane 0x0300D1FB26002DEE6615D02BD9F9F425–snip–0A9F559F3D16EF61A84
Notice the first byte of the hash for JimBob and sa is 0x02, this is the algorithm version number. If you change the password for a login after enabling the new algorithm, the version number changes to 0x03 and this is what we see for MaryJane. The use of a version number at the start of the hash output is a fantastic example of cryptographic agility. Always Encrypted in SQL Server and Azure SQL DB also supports cryptographic agility, as does Microsoft Office. Crypto-agility is a security best practice that allows applications to adopt new cryptographic algorithms over time without breaking existing applications. You can learn more about the topic here. This is especially important as we face a world of post-quantum cryptography.
The data size on disk is the same for v2 and v3 password verifiers because the salt and resulting hash are the same size.
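Because the version number is stored in the leading bytes of password_hash, you can take a quick inventory of which logins still use the older verifier. The query below is a small sketch based on the 0x0200/0x0300 prefixes shown above:
-- List SQL logins and the version prefix of their password verifier
-- (0x0200 = current v2 verifier, 0x0300 = new iterated v3 verifier).
SELECT name,
       CONVERT(varchar(10), CONVERT(varbinary(2), password_hash), 1) AS verifier_version
FROM sys.sql_logins
WHERE name NOT LIKE '##MS%';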
Now, if we update JimBob’s password:
ALTER LOGIN JimBob WITH PASSWORD = '<insert strong pwd>'
We will see his password authenticator is something like this:
JimBob 0x0300A269192D2BCEF9A20016729FDEAD–snip–0AFFF124197B1065718
The new v3 algorithm is now applied to his account.
If you want to go back to using the old v2 algorithm, then you can turn the flag off:
DBCC TRACEOFF (4671,-1)
DBCC TRACEOFF (4671)
DBCC TRACESTATUS(4671)
However, this does not change any password authenticators. JimBob and MaryJane can still login and the SQL engine will use the v3 algorithm. However, if either of our two users changes their passwords, they will revert to v2 because the feature is turned off.
Miscellany
If you want to see what a password hash looks like, you can use PWDENCRYPT and you can compare a given hash with the hash of an existing password with PWDCOMPARE. The documentation indicates that HASHBYTES should be used rather than PWDENCRYPT, but this is not correct as HASHBYTES is a general-purpose hash function and not password-specific.
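As a quick illustration (a sketch only; the password literals are placeholders), you can generate a verifier with PWDENCRYPT and check a candidate password against a stored hash with PWDCOMPARE:
-- Generate a password verifier for an arbitrary string
SELECT PWDENCRYPT(N'<insert strong pwd>') AS sample_hash;
-- Check whether a candidate password matches the stored verifier for a login
SELECT name,
       PWDCOMPARE(N'<insert strong pwd>', password_hash) AS is_match
FROM sys.sql_logins
WHERE name = 'JimBob';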
Finally, as noted, to change a password either to roll forward to v3 or roll back to v2, you can use ALTER LOGIN xxxx WITH PASSWORD = 'yyyy', where xxxx is the login name and yyyy is the password, and you can add MUST_CHANGE if needed.
One last and crucial point: if you undo the update, you will have to change all passwords that use the v3 algorithm, as the engine before this update cannot handle the v3 data format.
Summary
Listening to our customers, we are excited to add this new security feature to SQL Server 2022 CU12 and later. If you want to test this, please understand the serious implications of making this change; the updated algorithm changes the on-disk format and there is no going back other than a full database restore or resetting the password for affected logins after turning off the traceflag.
We’d love to hear your feedback.
A big thanks to Steven Gott for all his hard work making this happen!
Maximizing your role as a Viva Glint Administrator
Are you a Viva Glint administrator looking for information on how to build your survey program? Are you looking for People Science best practices or to learn from fellow administrators such as yourself? As an administrator, you have access to a wealth of resources to help you take full advantage of everything Viva Glint has to offer.
Check out this page for more information on:
Deploying Viva Glint
Documentation – Find technical articles and guidance to help you through your Viva Glint journey
Badging – Complete our recommended learning paths and modules to earn a digital badge and showcase your achievements
Ask the Experts sessions – Join our monthly live sessions to learn best practices and get your questions answered about the product
Learning with us
Events – Attend our Viva Glint events that cover a range of topics sourced from our customer feedback
Blogs – Stay up-to-date with thought leadership, newsletters, and program launches
Connecting with us
Newsletter – Register for this recurring email that highlights new announcements, features, and Viva Glint updates
Product council – Be a part of a community that provides our team with feedback on how we can improve our products and services
Connecting with others
Learning Circles – Participate in these diverse groups to share knowledge, experiences, and challenges with fellow Viva Glint customers
Cohorts – Join these groups to be connected to like-minded Viva Glint customers and hear topic-based best practices from our Viva Glint team (Coming Soon)
Community – Engage with the Viva Community to ask questions, share ideas, and learn best practices through this online forum
Be sure to leverage these resources to make the most of your role as an administrator and empower your organization to achieve more with Viva Glint!
Get Verifiable Credentials
A new Account Verification will be required to access Partner Center programs. All participants in the Windows Hardware Program on Partner Center will be required to complete Verifiable Credentials or they will be blocked from accessing the Hardware Program. Verifiable Credentials (VCs) is an open standard for digital credentials.
The Primary Contact for your account will be the initial individual required to obtain Verifiable Credentials. We ask that you take action now to confirm the Primary Contact information on your account in Partner Center is accurate and current.
Please note the information you provide to obtain Verifiable Credentials will be verified by Au10tix Ltd, an industry leader engaged to support this effort. You will not need to share the information with Microsoft.
To complete your Verifiable Credential, you will need to install the Microsoft Authenticator app on your mobile device and follow the instructions. Microsoft Authenticator will store your Verifiable Credential. You will need a current and valid government ID in the physical form. The name on the government ID must match the Partner Center Primary Contact. For security purposes there are fixed time limits in the steps to obtain Verifiable Credentials. You may need thirty (30) minutes to complete the steps.
When you receive the Verifiable Credentials request from Partner Center, we urge you to complete the steps as soon as possible to avoid losing access to the Hardware Program. If you have any questions or issues, please reach out to our support team. For details on how to contact support, see Get support for Partner Center dashboard issues – Windows drivers | Microsoft Learn.
Thank you for your cooperation.
Lesson Learned #479: Loading Data from Parquet to Azure SQL Database using C# and SqlBulkCopy
In the realm of big data and cloud computing, efficiently managing and transferring data between different platforms and formats is paramount. Azure SQL Database, a fully managed relational database service by Microsoft, offers robust capabilities for handling large volumes of data. However, when it comes to importing data from Parquet files, a popular columnar storage format, Azure SQL Database’s native BULK INSERT command does not directly support this format. This article presents a practical solution using a C# console application to bridge this gap, leveraging the Microsoft.Data.SqlClient.SqlBulkCopy class for high-performance bulk data loading.
Understanding Parquet and Its Significance
Parquet is an open-source, columnar storage file format optimized for use with big data processing frameworks. Its design is particularly beneficial for complex nested data structures and efficient data compression and encoding schemes, making it a favored choice for data warehousing and analytical processing tasks.
The Challenge with Direct Data Loading
Azure SQL Database's BULK INSERT command is a powerful tool for importing large volumes of data quickly; however, it does not natively support the Parquet format, which is why a different approach is needed.
A C# Solution: Bridging the Gap
To overcome this limitation, we can develop a C# console application that reads Parquet files, processes the data, and utilizes SqlBulkCopy for efficient data transfer to Azure SQL Database. This approach offers flexibility and control over the data loading process, making it suitable for a wide range of data integration scenarios.
Step 1: Setting Up the Environment
Before diving into the code, ensure your development environment is set up with the following:
.NET Core or .NET Framework compatible with Microsoft.Data.SqlClient.
Microsoft.Data.SqlClient package installed in your project.
Parquet.Net package to facilitate Parquet file reading.
Step 2: Create the target table in Azure SQL Database.
create table parquet (id int, city varchar(30))
Step 3: Create Parquet File in C#
The following C# code creates a Parquet file with two columns, id and city.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using Parquet;
using Parquet.Data;
using Parquet.Schema;
namespace ManageParquet
{
class ClsWriteParquetFile
{
public async Task CreateFile(string filePath)
{
var schema = new ParquetSchema( new DataField<int>("id"), new DataField<string>("city"));
var idColumn = new DataColumn( schema.DataFields[0], new int[] { 1, 2 });
var cityColumn = new DataColumn( schema.DataFields[1], new string[] { "L", "D" });
using (Stream fileStream = System.IO.File.OpenWrite(filePath))
{
using (ParquetWriter parquetWriter = await ParquetWriter.CreateAsync(schema, fileStream))
{
parquetWriter.CompressionMethod = CompressionMethod.Gzip;
parquetWriter.CompressionLevel = System.IO.Compression.CompressionLevel.Optimal;
// create a new row group in the file
using (ParquetRowGroupWriter groupWriter = parquetWriter.CreateRowGroup())
{
// Await the writes so the columns are flushed before the row group is disposed
await groupWriter.WriteColumnAsync(idColumn);
await groupWriter.WriteColumnAsync(cityColumn);
}
}
}
}
}
}
Step 4: Reading Parquet Files in C#
The first part of the solution involves reading the Parquet file. We leverage the ParquetReader class to access the data stored in the Parquet format.
In the following source code, you will find two methods. The first method only reads the Parquet file, and the second reads the data and saves it to a table in Azure SQL Database.
using Parquet;
using Parquet.Data;
using Parquet.Schema;
using System;
using System.Data;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
namespace ManageParquet
{
class ClsReadParquetFile
{
public async Task ReadFile(string filePath)
{
ParquetReader parquetReader = await ParquetReader.CreateAsync(filePath);//, options);
ParquetSchema schema = parquetReader.Schema;
Console.WriteLine("Schema Parquet file:");
foreach (var field in schema.Fields)
{
Console.WriteLine($"{field.Name}");
}
for (int i = 0; i < parquetReader.RowGroupCount; i++)
{
using (ParquetRowGroupReader groupReader = parquetReader.OpenRowGroupReader(i))
{
foreach (DataField field in schema.GetDataFields())
{
Parquet.Data.DataColumn column = await groupReader.ReadColumnAsync(field);
Console.WriteLine($"Column Data of '{field.Name}':");
foreach (var value in column.Data)
{
Console.WriteLine(value);
}
}
}
}
}
public async Task ReadFileLoadSQL(string filePath)
{
ParquetReader parquetReader = await ParquetReader.CreateAsync(filePath);//, options);
//ParquetSchema schema = parquetReader.Schema;
var schema = new ParquetSchema(new DataField<int>("id"), new DataField<string>("city"));
DataTable dataTable = new DataTable();
dataTable.Columns.Add("id", typeof(int));
dataTable.Columns.Add("city", typeof(string));
for (int i = 0; i < parquetReader.RowGroupCount; i++)
{
using (ParquetRowGroupReader groupReader = parquetReader.OpenRowGroupReader(i))
{
// Read the actual column data from the row group instead of hard-coded values
var idColumn = await groupReader.ReadColumnAsync(schema.DataFields[0]);
var cityColumn = await groupReader.ReadColumnAsync(schema.DataFields[1]);
for (int j = 0; j < idColumn.Data.Length; j++)
{
var row = dataTable.NewRow();
row["id"] = idColumn.Data.GetValue(j);
row["city"] = cityColumn.Data.GetValue(j);
dataTable.Rows.Add(row);
}
}
}
using (SqlConnection dbConnection = new SqlConnection("Server=tcp:servername.database.windows.net,1433;User Id=MyUser;Password=MyPassword!;Initial Catalog=MyDb;Persist Security Info=False;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;Pooling=true;Max Pool size=100;Min Pool Size=1;ConnectRetryCount=3;ConnectRetryInterval=10;Application Name=ConnTest"))
{
await dbConnection.OpenAsync();
using (SqlBulkCopy s = new SqlBulkCopy(dbConnection))
{
s.DestinationTableName = "Parquet";
foreach (System.Data.DataColumn column in dataTable.Columns)
{
s.ColumnMappings.Add(column.ColumnName, column.ColumnName);
}
await s.WriteToServerAsync(dataTable);
}
}
}
}
}
Conclusion
This C# console application demonstrates an effective workaround for loading data from Parquet files into Azure SQL Database, circumventing the limitations of the BULK INSERT command. By leveraging the .NET ecosystem and the powerful SqlBulkCopy class, developers can facilitate seamless data integration processes, enhancing the interoperability between different data storage formats and Azure SQL Database.
Announcing the 3-year retirement of Windows Server 2022 on Azure Kubernetes Service
Windows Server 2025 and the Windows Server Annual Channel offer a comprehensive array of enhanced features, heightened security measures, and improved overall performance, and with image portability customers can now run Windows Server 2022 based containers on these new versions. To maximize the experience for customers, Windows Server 2025/Annual Channel will not only provide the most efficient versions of Windows Server yet but also streamline the upgrade process. In pursuit of an enhanced user experience and an unwavering commitment to safety and reliability, we will be retiring Windows Server 2022 on Azure Kubernetes Service (AKS) in three years' time.
What does this mean for me?
Windows Server 2022 will be retiring on AKS in March 2027. You should prepare to upgrade to a supported Windows Server version before March 2027.
How can I upgrade my Windows nodepools?
You can follow the Windows Server OS migration process outlined in the AKS documentation to upgrade to Windows Server 2025 or Annual Channel when they're released on AKS. Portability is a key feature available from Windows Server 2025/Annual Channel onwards: the host and container image no longer need to be upgraded in tandem, so older images can now run on newer hosts (for example, a Windows Server 2022 image on a Windows Server 2025 host).
Kubernetes version 1.34 will be the final version where Windows Server 2022 is supported on AKS. When Kubernetes version 1.34 is at the end of life on AKS, Windows Server 2022 will no longer be supported. Upgrades to Kubernetes 1.35 on AKS will be blocked if there are any remaining Windows Server 2022 node pools in the cluster.
Windows Server 2025 on AKS will offer numerous advantages and enhancements. At a high level, Windows Server 2025 introduces enhanced performance and reliability and improved networking support, including density improvements. Learn more about Windows Server 2025 from our recent announcements at Containers – Microsoft Community Hub.
Our commitment centers on customer satisfaction and success, guiding our efforts to provide ample resources and time for upgrading to our premier operating system. Our aim is to simplify the upgrade process, enabling customers to fully leverage the benefits of Windows Server 2025/Annual Channel.
Upcoming preview of Microsoft Office LTSC 2024
Microsoft 365 offers the cloud-backed apps, security, and storage that customers worldwide rely on to achieve more in a connected world – and lays a foundation for leveraging generative AI to go even further. However, we know that some customers have niche yet important scenarios that require a truly long-term servicing channel: regulated devices that cannot accept feature updates for years at a time, process control devices on the manufacturing floor that are not connected to the internet, and specialty systems like medical testing equipment that run embedded apps that must stay locked in time. For these special cases, Microsoft continues to offer and support the Office Long-Term Servicing Channel (LTSC). Today we are pleased to announce that the commercial preview of the next Office LTSC release – Office LTSC 2024 – will begin next month, with general availability to follow later this year.
About this release
Like earlier perpetual versions of Office, Office LTSC 2024 will include only a subset of the value found in Microsoft 365 Apps, building on the features included in past releases. New features for Office LTSC 2024 include new meeting creation options and search enhancements in Outlook; dozens of new Excel features and functions, including Dynamic Charts and Arrays; and improved performance, security, and accessibility. Office LTSC 2024 will not ship with Microsoft Publisher, which is being retired, or with the Microsoft Teams app, which is available to download separately.
While Office LTSC 2024 offers many significant improvements over the previous Office LTSC release, as an on-premises product it will not offer the cloud-based capabilities of Microsoft 365 Apps, like real-time collaboration; AI-driven automation in Word, Excel, and PowerPoint; or cloud-backed security and compliance capabilities that give added confidence in a hybrid world. And with device-based licensing and extended offline access, Microsoft 365 offers deployment options for scenarios like computer labs and submarines that require something other than a user-based, always-online solution. Microsoft 365 (or Office 365) is also required to subscribe to Microsoft Copilot for Microsoft 365; as a disconnected product, Office LTSC does not qualify.
As with previous releases, Office LTSC 2024 will still be a device-based “perpetual” license, supported for five years under the Fixed Lifecycle Policy, in parallel with Windows 11 LTSC, which will also launch later this year. And because we know that many customers deploy Office LTSC on only a subset of their devices, we will continue to support the deployment of both Office LTSC and Microsoft 365 Apps to different machines within the same organization using a common set of deployment tools.
Office LTSC is a specialty product that Microsoft has committed to maintaining for use in exceptional circumstances, and the 2024 release provides substantial new feature value for those scenarios. To support continued innovation in this niche space, Microsoft will increase the price of Office LTSC Professional Plus, Office LTSC Standard, Office LTSC Embedded, and the individual apps by up to 10% at the time of general availability. And, because we are asked at the time of release if there will be another one, I can confirm our commitment to another release in the future.
We will provide additional information about the next version of on-premises Visio and Project in the coming months.
Office 2024 for consumers
We are also planning to release a new version of on-premises Office for consumers later this year: Office 2024. Office 2024 will also be supported for five years with the traditional “one-time purchase” model. We do not plan to change the price for these products at the time of the release. We will announce more details about new features included in Office 2024 closer to general availability.
Embracing the future of work
The future of work in an AI-powered world is on the cloud. In most customer scenarios, Microsoft 365 offers the most secure, productive, and cost-effective solution, and positions customers to unlock the transformative power of AI with Microsoft Copilot. Especially as we approach the end of support for Office 2016 and Office 2019 in October 2025, we encourage customers still using these solutions to transition to a cloud subscription that suits their needs as a small business or a larger organization. And for scenarios where that is not possible – where a disconnected, locked-in-time solution is required – this new release reflects our commitment to supporting that need.
FAQ
Q: Will the next version of Office have a Mac version?
A: Yes, the next version of Office will have both Windows and Mac versions for both commercial and consumer.
Q: Will the next version of Office be supported on Windows 10?
A: Yes, Office LTSC 2024 will be supported on Windows 10 and Windows 10 LTSC devices (with the exception of Arm devices, which will require Windows 11).
Q: Will the next version support both 32- and 64-bit?
A: Yes, the next version of Office will ship both 32- and 64-bit versions.
Cumulative Update #12 for SQL Server 2022 RTM
The 12th cumulative update release for SQL Server 2022 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download Cumulative updates.
To learn more about the release or servicing model, please visit:
CU12 KB Article: https://learn.microsoft.com/troubleshoot/sql/releases/sqlserver-2022/cumulativeupdate12
Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog for more details on Modern Servicing Model for SQL Server.
Microsoft® SQL Server® 2022 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?id=105013
Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates
Known issue: iOS/iPadOS ADE users incorrectly redirected to Intune Company Portal website
We recently identified an issue where users with iOS/iPadOS devices enrolling with Automated Device Enrollment (ADE) are unable to sign in to the Intune Company Portal app for iOS/iPadOS. Instead, they’re incorrectly prompted to navigate to the Intune Company Portal website to enroll their device, and they are unable to complete enrollment. This issue occurs for newly enrolled ADE users that are targeted with an “Account driven user enrollment” or “Web based device enrollment” enrollment profile and just in time (JIT) registration has not been set up.
While we’re actively working on resolving this issue, we always recommend organizations using iOS/iPadOS ADE to have JIT registration set up for their devices for the best and most secure user experience. For more information review: Set up just in time registration.
Stay tuned to this blog for updates on the fix! If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam.
How to monitor the performance of your on-prem & multi-cloud SQL Servers w/ Azure Arc | Data Exposed
Learn how to use Azure Arc to monitor key performance metrics for your SQL Servers located in your data center, at the edge, or even in other public clouds.
Resources:
https://aka.ms/ArcSQLMonitoring
https://aka.ms/ArcDocs
View/share our latest episodes on Microsoft Learn and YouTube!
Last chance to nominate for POTYA!
This is your chance to be recognized as part of the Microsoft Partner of the Year Awards! Nominate before the April 3 deadline.
Celebrated annually, these awards recognize the incredible impact that Microsoft partners are delivering to customers and celebrate the outstanding successes and innovations across Solution Areas, industries, and key areas of impact, with a focus on strategic initiatives and technologies. Partners of all types, sizes, and geographies are encouraged to self-nominate. This is an opportunity for partners to be recognized on a global scale for their innovative solutions built using Microsoft technologies.
In addition to recognizing partners for the impact in our award categories, we also recognize partners from over 100 countries/regions around the world as part of the Country/Region Partner of the Year Awards. In 2024, we’re excited to offer additional opportunities to recognize partner impact through new awards – read our blog to learn more and download the official guidelines for specific eligibility requirements.
Find resources on how to write a great entry, FAQs, and more on the Partner of the Year Awards website.
Nominate here!
Mastering Azure Cost Optimization – A Comprehensive Guide
Introduction
Hi folks! My name is Felipe Binotto, Cloud Solution Architect, based in Australia.
I understand, and you probably do as well, that cost savings in the cloud are a very hot topic for any organization.
Believe it or not, there is a huge number of people (including me) doing their best to allow you to get the best value for your money. This is imperative for us.
The plan here is to highlight the most commonly used cost savings artifacts as well as the most effective actions you can take to reduce your costs.
I will give you some personal tips, but most of the content already exists and therefore my objective is to have a consolidated article where you can find the best and latest (as of March 2024) cost optimization content.
The content comes from teams ranging from Product Groups to Cloud Solution Architects, covering everything from the ins and outs shared by the people actively working on our products to the experience of those deploying and implementing them in the field.
Key Strategies for Azure Cost Optimization
There are a huge number of areas where cost optimization can be applied to. These range from doing some type of remediation to just understanding and having ways to visualize your spendings.
I don’t intend to cover every single possible way to achieve cost savings but here is what I will cover:
Hybrid Benefit (both Windows and Linux)
Reservations (for several resource types)
Savings Plan
Idle Resources
SKUs
Resizing
Logs
Workbooks
Dashboards
FinOps
In my experience, if you invest some time and take a good look at the above list, you will be able to achieve good savings and be able to invest those savings in more innovative (or maybe security) initiatives in your company.
Hybrid Benefit
Azure Hybrid Benefit is a cost-saving feature that allows you to use your existing on-premises Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses to run workloads in the cloud at a reduced cost. This benefit now also extends to Linux (RHEL and SLES) and Arc-enabled AKS clusters. You can also leverage Hybrid Benefit on Azure Stack HCI. It’s an effective way to reduce costs for organizations that have already invested in Microsoft or Linux licenses and are looking to migrate or extend their infrastructure to Azure.
@arthurclares has a nice blog post on this.
For information on Hybrid Benefit for Arc-enabled AKS clusters, check this page.
For more information on Hybrid Benefit for Azure Stack HCI, check this page.
Reservations and Savings Plans
Azure Reservations help you save money by committing to one-year or three-year plans for multiple products. This includes virtual machines, SQL databases, AVS, and other Azure resources. By pre-paying, you can secure significant discounts over the pay-as-you-go rates. Reservations are ideal for predictable workloads where you have a clear understanding of your long-term resource needs.
The Azure Savings Plan is a flexible pricing model that offers lower prices on Azure resources in exchange for committing to a consistent amount of usage (measured in $/hour) for a one or three-year period. Unlike Reservations, which apply to specific resources, the Savings Plan applies to usage across any eligible services, providing more flexibility in how you use Azure while still benefiting from cost savings.
@BrandonWilson already has a nice blog post on this.
Idle Resources
Identifying and managing idle resources is a straightforward way to optimize costs. Resources such as unused virtual machines, excess storage accounts, or idle database instances can incur costs without providing value. Implementing monitoring and automation to shut down or scale back these resources during off-peak hours can lead to significant savings.
@anortoll has already blogged about this and the post has workbooks that can be used to locate those idle resources.
An additional resource from @Dolev_Shor is available here.
SKUs
Selecting the SKUs for your Azure resources is crucial for cost optimization. Azure offers a variety of SKUs for services like storage, databases, and compute, each with different pricing and capabilities. Choosing the most appropriate SKU for your workload requirements can optimize performance and cost efficiency.
A classic example is the oversizing of Virtual Machines. However, there are other resources to consider too. For instance, you deploy an Azure Firewall with the Premium SKU, but do you really need premium features such as TLS inspection or intrusion detection? Another example is App Service plans, which in the v3 SKU can be cheaper and enable you to buy Reservations.
Here is a blog post by Diana Gao on VM right sizing.
Here is another blog post by Werner Hall on right sizing of Azure SQL Managed Instances.
Logs
Analyzing logs can provide insights into resource utilization and operational patterns, helping identify opportunities for cost savings. For instance, log analysis can reveal underutilized resources that can be downsized, or inefficient application patterns that can be optimized. Azure offers various tools for log analysis, such as Azure Monitor and Log Analytics, to aid in this process.
However, logs can also be the source of high cloud spending. The key is to understand what you need to log, what type of logs you need and strategies to minimize the cost of storing those logs.
For example, when you are ingesting logs in Log Analytics, depending on the log you are ingesting, you can configure certain tables in the Log Analytics workspace to use Basic Logs. Data in these tables has a significantly reduced ingestion charge and a limited retention period.
There’s a charge to search against these tables. Basic Logs are intended for high-volume verbose logs you use for debugging, troubleshooting, and auditing, but not for analytics and alerts.
Depending on the amount of data being ingested, you could also leverage commitment tiers in your workspace to save as much as 30 percent compared to the PAYG price.
Furthermore, you could also structure your logs to be archived after a period of time and save on costs.
Refer to this page to learn more about these options and to this page to learn more about best practices.
Workbooks
Azure Workbooks provide a customizable, interactive way to visualize and analyze your Azure resources and their metrics. By creating workbooks, you can gain insights into your spending patterns, resource utilization, and operational health. This can help identify inefficiencies and areas where cost optimizations can be applied.
Many workbooks are available. A few examples are:
Azure Advisor workbook
Azure Orphan Resources workbook
Azure Hybrid Benefit workbook
Dashboards
Azure Dashboards offer a unified view of your resources and their metrics, allowing for real-time monitoring of your Azure environment. Custom dashboards can be configured to focus on cost-related metrics, providing a clear overview of your spending, highlighting trends, and pinpointing areas where optimizations can be made.
You can make your own dashboards, but a few are already available, and you can customize them to your needs.
@sairashaik made two very useful dashboards available here and here.
Another one which has been available for a long time is the Cost Management Power BI App for Enterprise Agreements.
FinOps
FinOps, or Financial Operations, is a cloud financial management practice aimed at maximizing business value by bringing financial accountability to the variable spend model of the cloud. It involves understanding and controlling cloud costs through practices like allocating costs to specific teams or projects, budgeting, and forecasting. Implementing FinOps practices helps organizations make more informed decisions about their cloud spend, ensuring alignment with business goals.
Learn how to implement your own FinOps hub leveraging the FinOps Toolkit.
Additional Resources
The following are some additional resources which provide valuable information for your Cost Optimization journey.
Advisor Cost Optimization Workbook
Cost Optimization Design Principles
Conclusion
Optimizing costs in Azure is a multifaceted endeavor that requires a strategic approach and a deep understanding of the available tools and features. By leveraging Azure’s Hybrid Benefit for both Windows and Linux, making smart use of Reservations for various resources, adopting Savings Plans, and diligently managing Idle Resources, businesses can achieve substantial cost savings.
Additionally, the careful selection of SKUs, appropriate resizing of resources, thorough analysis of logs, and effective use of Workbooks and Dashboards can further enhance cost efficiency. Lastly, embracing FinOps principles ensures that cost management is not just an IT concern but a shared responsibility across the organization, aligning cloud spending with business value. Together, these strategies form a robust framework for achieving cost optimization in Azure, enabling businesses to maximize their cloud investments and drive greater efficiency and innovation.
As always, I hope this was informative to you and thanks for reading.
Disclaimer
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.
Partner Blog | Microsoft Copilot for Security generally available on April 1
By Julie Sanford, Vice President, Partner GTM, Programs & Experiences
As malicious actors continue to intensify their use of AI, security professionals must also incorporate AI into their solutions to counter the increasing threat. To support our partners and customers in securing their businesses, we are excited to announce the general availability of Microsoft Copilot for Security in all commerce channels, including CSP, on April 1, 2024. This new Copilot provides Microsoft partners with a powerful resource to safeguard their organizations, while improving the security services and solutions they offer.
Copilot for Security is the first generative AI security product designed to defend organizations at the speed and scale of AI. This announcement continues our AI momentum following the recent general availability of Copilot for Microsoft 365, Copilot for Finance, Copilot for Sales, Copilot for Service, and Copilot for the education market.
We have also added new Copilot for Microsoft 365 resources to help you deliver more value, adoption, and seat growth with customers. Read our blog for these updates.
Copilot for Security is designed to complement, rather than replace, human skills. Our partners bring their experience, skills, and established methods for dealing with vulnerabilities. This new tool enables them to apply that expertise to services and offerings in a way that AI solutions without human insight cannot match.
Continue reading here
Microsoft Secure Tech Accelerator: Securing AI– RSVP now
Join us April 3rd for another Microsoft Secure Tech Accelerator! We will be joined by members of our product engineering and customer adoption teams to help you explore, expand, and improve the way you secure and implement AI.
In this edition of Microsoft Secure Tech Accelerator, we are focusing on Microsoft Copilot for Security, Securing AI, and Exposure Management. We want to help you make sure that the way you implement your AI tools is secure. We’ll also cover some of the newly available solutions in the Microsoft Security suite that make securing your AI easier.
As always, the focus of this series is on your questions! In addition to open Q&A with our product experts, we will kick off each session with a brief demo to get everyone warmed up and excited to engage.
How do I attend?
Choose a session name below and add any (or all!) of them to your calendar. Then, click RSVP to event and post your questions in the Comments anytime! We’ll note if we answer your question in the live stream and follow up in the chat with a reply as well.
Can’t find the option to RSVP? No worries, sign in on the Tech Community first.
Afraid to miss out due to scheduling or time zone conflicts? We got you! Every AMA will be recorded and available on demand the same day.
Agenda: April 3, 2024
7:00 a.m. PT – Copilot for Security: Customize your Copilot (deep dive + AMA)
8:00 a.m. PT – Secure AI applications using Microsoft Defender for Cloud Apps (deep dive)
8:30 a.m. PT – Transform your defense: Microsoft Security Exposure Management (deep dive + AMA)
More ways to engage
Join the Microsoft Management SCI Community to engage more with our product team.
What’s new in Defender: How Copilot for Security can transform your SOC
Today at Secure, we announced that Microsoft Copilot for Security will be generally available on April 1. Copilot equips security teams with purpose-built capabilities at every stage of the security lifecycle, embedded right into the unified security operations platform in the Defender portal. Early users of Copilot for Security have already seen significant, measurable results when it is integrated into their SOC, transforming their operations and boosting their defense and posture against both ongoing and emerging threats. Read on to learn about the capabilities reaching GA on April 1, embedded in the Defender portal for Defender XDR and Microsoft Sentinel data, and how early-access customers are already seeing their value.
Prevent breaches with dynamic threat insight
Copilot for Security leverages the rich portfolio of Microsoft Security products to produce enriched insights for security analysts in the context of their workflow. At GA, you will be able to use Copilot for Security with Microsoft Defender Threat Intelligence and Threat Analytics in the Defender portal to tap into high-fidelity threat intelligence on threat actors, tooling, and infrastructure, and to easily discover and summarize recommendations specific to your environment’s risk profile, all using natural language. These insights can help security teams improve their security posture by prioritizing threats and managing exposures proactively against adversaries, keeping their organizations protected from potential breaches.
Identify and prioritize with built-in context
“Copilot for Security is allowing us to re-envision security operations. It will be critical in helping us close the talent gap.” Greg Petersen, Sr. Director – Security Technology & Operations, Avanade
Automation of common manual tasks with Copilot frees up analyst time and allows them to focus on more complex and urgent demands. For example, analysts need to understand the attack story and impact to determine next steps, and this often requires time and effort to collect and understand all of the relevant details. To make this task faster and easier, Copilot’s incident summaries, with AI-powered data processing and contextualization, make this content readily available, saving significant triage time. Complementing Microsoft Defender XDR’s unique ability to correlate incidents from a variety of workloads, Copilot’s incident summary provides the attack story and potential impact directly in the incident page. At GA, asset summaries become available for use in investigation. The first of these is a device summary, where Copilot provides highlights about the device based on all cross-workload information available in Defender XDR, as well as other device data integrated from Intune. This further improves efficiency during triage and enables analysts to more quickly assess and prioritize incidents, leading to faster response.
As part of incident investigation and response, analysts often reach out to employees to get more information about unusual activity on their devices or to communicate about an incident or a limitation in access. New at GA, Copilot now makes this faster by generating tailored messages with all the details an employee would need and enabling analysts to send those messages through Microsoft Teams or Outlook – directly from the portal. Copilot links directly to many tasks that would normally require going to another view or product – another example of added efficiency for security teams.
During early access, 97%* of security professionals reported they would make consistent use of Copilot capabilities in their day-to-day workflows.
Accelerate full resolution for every incident
“Copilot for Security can democratize security to the end user. It is no longer just with the subject matter expert. The average analyst training time used to be a couple of months, and that can reduce drastically if you’re using Copilot.” Chandan Pani, Chief Information Security Officer, LTIMindtree
During an incident, every second counts. With additional Copilot capabilities, like guided response and automated incident reports, analysts of all levels can move an average of 22% faster* and accelerate time to resolution.
Guided response, provided by Copilot during incident investigation and response in the Defender portal, helps analysts determine what to do next based on the specific incident at hand.
Example recommendations include:
Triaging an incident with a recommended classification and threat category
Steps to take to contain an incident, such as suspending a compromised account
Investigation actions, such as finding all emails that were part of a phishing campaign
How to remediate an incident, such as resetting a user’s password
Action recommendations are provided with links to the next steps, which can be taken directly in the Copilot window, reducing time spent switching views.
After successfully closing out an incident, analysts often spend time drafting reports for peers and leadership to provide a summary of the attack and remediation steps taken. Using Copilot, an incident report is easily generated with the click of a button, instantly delivering a high-quality summary ready to share or save for documentation. For GA, exporting the report to a detailed formatted PDF is now available, making for a great executive-shareable report.
Elevate analysts with intelligent assistance
“Copilot for Security allows us to quickly analyze Python and PowerShell scripts. This means that staff with less experience can quickly analyze scripts, saving valuable time in the cybersecurity area where time is so important.” Mark Marshall, Assistant Chief Information Officer, Peel District School Board
Security teams are made up of individuals with a variety of different skillsets and levels of experience, and as demands and requirements change, up-leveling becomes critical. It can take time and expertise to learn how to effectively manage hunting jobs or analyze malicious scripts, which many organizations simply don’t have. Copilot makes expert tasks significantly simpler, reducing the time spent onboarding new recruits and training analysts while driving faster results.
For example, Copilot assists less experienced analysts with hunting during an investigation in the Defender portal. An analyst can now create KQL queries simply by using natural language – for example, just asking for “all devices that logged on in the last hour”. The user can then choose to run the generated query or have Copilot execute it automatically. Copilot can also recommend the best filters to apply after results are surfaced or suggest common next steps. Security teams see significant benefits with this, as more senior analysts are now able to delegate threat hunting projects to newer or less experienced employees.
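As a rough illustration of what a generated hunting query might look like (a hand-written sketch, not Copilot’s actual output), a prompt such as “all devices that logged on in the last hour” could translate into advanced hunting KQL over the DeviceLogonEvents table:
DeviceLogonEvents
| where Timestamp > ago(1h)                                        // only logon events from the last hour
| summarize Logons = count(), LastLogon = max(Timestamp) by DeviceName
| order by LastLogon desc
The analyst can review the generated query, adjust the time window or add filters, and then run it from the advanced hunting page.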
Another task commonly reserved for more experienced analysts is reverse engineering PowerShell, Python, or other scripts, which are often used in HumOR (human-operated ransomware) and other attacks, and not every team has this expertise. Copilot’s script analysis feature gives security teams the ability to examine these scripts easily, without needing any prior knowledge of how to do so. This feature is also built into the investigation process, with a button prompting a user to “analyze with Copilot” anytime an alert contains a script. The resulting analysis is a line-by-line explanation of what the script is trying to do, with excerpts from the script for each explained section. With this, an analyst can quickly tell whether a script is potentially harmful. New at GA, these capabilities extend to suspicious file analysis as well (executable or other), delivering details about the file’s internal characteristics and behavior and an easy way to assess maliciousness.
Interested in getting started with Copilot for Security?
The pace of innovation in AI is moving at lightning speed and we expect many more security teams to see significant benefits of the technology with the general availability of Copilot for Security. To learn more about Microsoft Copilot for Security, click here or contact your Microsoft sales representative.
Learn more about Copilot skills for Defender XDR announced at early access: Operationalizing Microsoft Security Copilot to Reinvent SOC Productivity
*Microsoft Copilot for Security randomized controlled trial (RCT) with experienced security analysts conducted by Microsoft Office of the Chief Economist, January 2024.
Certification
Hi! I work in LE and I will be teaching some of our staff Detentions 101. As a perk of taking the class I would love to have those who take the class be Sharepoint “Certified” through Microsoft. Is that possible?
Azure Container Apps Managed Certificates now in General Availability (GA)!
Managed Certificates on Azure Container Apps will allow you to create certificates free of charge for custom domains added to your container app. The service will also manage the life cycle of these certificates and auto-renew them when they’re close to expiring.
To learn more, see Azure Container Apps managed certificate documentation.
Azure at KubeCon Europe 2024 | Paris, France – March 19-22
Note: Brendan Burns’ “Welcome to KubeCon EU 2024” blog post will be live on March 19 at aka.ms/kubeconblog. Please check back at that time.
Are you as excited as we are for KubeCon + CloudNativeCon Europe 2024? We can’t wait and hope you’ll join us for some awesome Microsoft Azure KubeCon + CloudNativeCon related events and activities happening in Paris March 18-22!
Azure Kubernetes Service (AKS) Essentials Day (March 18): New for this KubeCon + CloudNativeCon, we’ve added an in-person, hands-on, introductory workshop for those just getting started with AKS. The full-day event will be in Paris on March 18. Registration is required for this free event and space is limited. Learn more and register.
Azure Day with Kubernetes (March 19): Join our Microsoft experts in-person in Paris on Tuesday, March 19 from 9am to 5pm for an exclusive opportunity to learn best practices for building cloud-native and intelligent apps with Kubernetes on Azure. Registration is required for this free event and space is limited. Learn more and register.
KubeCon + CloudNativeCon (March 20-22):
Don’t miss the Microsoft keynote on Wednesday, March 20 at 9:40am to learn how to Build an Open Source Platform for AI/ML.
Check out sessions by Microsoft engineers on diverse topics including Notary project, what’s new in containerd 2.0, strategies for efficient LLM deployments, OpenTelemetry, Confidential Containers, Network Policy, OPA, special purpose operating systems, and more!
Brendan Burns, Kubernetes co-founder and Microsoft CVP, will share his thoughts on the latest developments and key Microsoft announcements related to cloud-native intelligent application development in his KubeCon + CloudNativeCon Europe 2024 blog on March 19th.
And of course, swing by our Microsoft Azure booth #G1 from March 20th to 22nd! We’ll have short sessions and demos on all things cloud native and AI, an Xbox Forza racing competition with a chance to win some cool prizes, and some sweet swag. Don’t forget to pick up your copy of Brendan Burns’ latest Kubernetes Best Practices book when you visit the Microsoft booth!
We look forward to seeing you in Paris!
– Microsoft Azure team
Sync Up Episode 09: Creating a New Future with OneDrive
Sync Up Episode 9 is now available on all your favorite podcast apps! This month, Arvind Mishra and I are talking with Liz Scoble and Libby McCormick about the power of Create.Microsoft.com and how we’re bringing that power into the OneDrive experience! Along the way, we learn a little more about ourselves, about TPS reports, and much more!
Show: https://aka.ms/SyncUp | Apple Podcasts: https://aka.ms/SyncUp/Apple | Spotify: https://aka.ms/SyncUp/Spotify | RSS: https://aka.ms/SyncUp/RSS
As always, we hope you enjoyed this episode! Let us know what you think in the comments below!