Category Archives: Microsoft
Instant File Initialization for the transaction log | SQL Server 2022 Hidden Gems | Data Exposed
Next in the SQL Server 2022 hidden gems series, you’ll learn about Instant File Initialization (IFI) behavior for transaction log file growth, which now works even with TDE enabled and does not require a special privilege.
Resources:
What’s new in SQL Server 2022 – SQL Server | Microsoft Learn
View/share our latest episodes on Microsoft Learn and YouTube!
MGDC for SharePoint FAQ: How do I process Deltas?
This is a follow-up to the blog post about delta datasets. If you haven’t read it yet, take a look at MGDC for SharePoint FAQ: How can I use Delta State Datasets?
Our team received some follow-up questions on this, so I thought it would make sense to write a little more and clarify things.
First of all, based on some conversations with Copilot, the basic SQL code for merging a delta would be something like this:
-- Start a transaction
BEGIN TRANSACTION;

-- Assuming the Users table has a primary key constraint on user_id
-- and the UserChanges table has a foreign key constraint on user_id referencing Users

-- First, delete the users that have operation = 'Deleted' in UserChanges
DELETE FROM Users
WHERE user_id IN
    (SELECT user_id
     FROM UserChanges
     WHERE operation = 'Deleted');

-- Next, update the users that have operation = 'Updated' in UserChanges
UPDATE U
SET user_name = UC.user_name,
    user_age = UC.user_age
FROM Users U
JOIN UserChanges UC ON U.user_id = UC.user_id
WHERE UC.operation = 'Updated';

-- Finally, insert the users that have operation = 'Created' in UserChanges
INSERT INTO Users (user_id, user_name, user_age)
SELECT user_id, user_name, user_age
FROM UserChanges
WHERE operation = 'Created';

-- Commit the transaction
COMMIT TRANSACTION;
Note that the column names used (shown here as user_id, user_name and user_age) need to be updated for each dataset, but the structure will be the same.
I also asked Copilot to translate this SQL code to PySpark, and it suggested the code below (with a few minor manual touches):
# Import SparkSession and functions
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Create SparkSession
spark = SparkSession.builder.appName("Delta dataset").getOrCreate()

# Assuming the Users and UserChanges tables are already loaded as DataFrames
users = spark.table("Users")
user_changes = spark.table("UserChanges")

# First, delete the users that have operation = 'Deleted' in UserChanges
users = users.join(user_changes.filter(user_changes.operation == "Deleted"), "user_id", "left_anti")

# Next, update the users that have operation = 'Updated' in UserChanges
users = (users.join(user_changes.filter(user_changes.operation == "Updated"), "user_id", "left_outer")
              .select(F.coalesce(user_changes.user_name, users.user_name).alias("user_name"),
                      F.coalesce(user_changes.user_age, users.user_age).alias("user_age"),
                      users.user_id))

# Finally, insert the users that have operation = 'Created' in UserChanges
users = users.union(user_changes.filter(user_changes.operation == "Created")
                    .select("user_name", "user_age", "user_id"))
After that, there’s the question of how to run this in Azure Data Factory or Azure Synapse.
I would suggest going with Azure Synapse. You could get some inspiration from the template that we published at https://go.microsoft.com/fwlink/?linkid=2207816. It includes examples of how to get the data and run a notebook to produce a new dataset.
Another good resource is this guide on “How to transform data by running a Synapse Notebook”. The link is at https://learn.microsoft.com/en-us/azure/data-factory/transform-data-synapse-notebook.
The most notable part missing from the code above is how to read the data from ADLS Gen2. There is a Stack Overflow article on how to bring data in and out of ADLS Gen2 using Linked Services, and a Microsoft Learn article specifically on that topic at https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.
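As an illustration (not part of the original post), here is a minimal PySpark sketch of reading the full and delta datasets from ADLS Gen2 and writing the merged result back. The storage account, container, and folder names are hypothetical placeholders, and it assumes the datasets are delivered as JSON files that your Synapse workspace is allowed to access.

# Hypothetical ADLS Gen2 locations -- replace with your own storage account,
# container, and dataset folders
base = "abfss://mgdc@mystorageaccount.dfs.core.windows.net/sharepoint"

# Read the previously merged full dataset and the new delta dataset
users = spark.read.json(f"{base}/Users_full")
user_changes = spark.read.json(f"{base}/Users_delta")

# ... apply the delete/update/insert logic shown above ...

# Write the merged result back as the new full dataset
users.write.mode("overwrite").json(f"{base}/Users_merged")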
That’s it! For more general information about MGDC for SharePoint, visit the main blog at Links about SharePoint on MGDC.
Advancing Trust, Transparency, and Control with Copilot for Microsoft 365
Hello, Microsoft Tech Community! I’m excited to share some important updates about Copilot for Microsoft 365. As you may recall from TJ’s blog post on February 29, we’ve been working hard to enhance your experience with Copilot. Today, I’d like to highlight some key updates that will benefit our customers, as outlined in Paul Lorimer’s blog: Announcing the expansion of Microsoft’s data residency capabilities | Microsoft 365 Blog
Paul’s post delves into the expansion of our data residency capabilities. We understand that data control is paramount in today’s digital landscape. That’s why we’re ensuring that your interaction data with Copilot for Microsoft 365 (for eligible customers) will be stored in the location specified by your Microsoft 365 data residency settings. This is a significant step forward in our commitment to providing a secure and compliant environment for our enterprise and highly regulated customers that have particularly stringent requirements for how their data is stored.
But we’re not stopping there. Our vision is to democratize AI, making it accessible and beneficial for everyone. As we continue to innovate and enhance Copilot, our guiding principles remain the same: Trust, Transparency, and Control. These principles have always been at the heart of Microsoft 365, and they continue to shape our approach to Copilot. Stay tuned for more updates as we continue to evolve Copilot for Microsoft 365.
Please reply with your questions and share your experiences and needs as you explore Copilot and our data residency options.
More resources to support your Copilot and AI journey:
Five tips for prompting AI: How we’re communicating better at Microsoft with Microsoft Copilot
Microsoft 365 – How Microsoft 365 Delivers Trustworthy AI (2024-01).docx
Data, Privacy, and Security for Microsoft Copilot for Microsoft 365
Microsoft Purview data security and compliance protections for Microsoft Copilot
Microsoft Copilot Privacy and Protections
Apply principles of Zero Trust to Microsoft Copilot for Microsoft 365
Learn about retention for Microsoft Copilot for Microsoft 365
Improved Next.js support (Preview) for Azure Static Web Apps
Next.js is a popular framework for building modern web applications with React, making it a common workload to deploy to Azure Static Web Apps’ optimized hosting for frontend web applications. We are excited to announce improved support for Next.js on Azure Static Web Apps (preview), increasing compatibility with recent Next.js features and supporting the newest Next.js versions, so you can deploy and run your Next.js applications with full feature access on Azure.
What’s new?
As we continue to iterate on our Next.js support during our preview, we’ve made fundamental improvements to ensure feature compatibility with the most recent and future versions of Next.js. Such improvements include support for the new React Server Components model in Next.js as well as hosting Next.js backend functions on dedicated App Service instances to ensure feature compatibility.
Support for Next.js 14, React Server Components, Server Actions, Server Side Rendering
With the introduction of the App Directory, React Server Components and Server Actions, it’s now possible to build full-stack components, where individual components exist server-side and have access to sensitive resources such as databases or APIs, providing a more integrated full-stack developer experience.
For instance, a single component can now contain both queries and mutations with database access, facilitating the componentization of code.
// Server Component
export default function Page() {
  // handle queries, accessing databases or APIs securely here

  // Server Action
  async function create() {
    'use server'
    // handle mutations, accessing databases or APIs securely here
    // ...
  }

  return (
    // ...
  )
}
These features, including recent server-side developments for Next.js, are now better supported on Azure Static Web Apps. The pages directory, which is still supported in Next.js 14, also continues to work on Azure Static Web Apps.
Increased size limits of Next.js deployments
Previously, Next.js deployments were limited to 100 MB due to the hosting restrictions of managed functions. Now you can deploy Next.js applications up to 250 MB in size to Azure Static Web Apps, and statically exported Next.js sites support the regular Static Web Apps quotas.
Partial support for staticwebapps.config.json
With the improved support for Next.js sites, the `staticwebapp.config.json` file, which configures the way your site is hosted by Azure Static Web Apps, is now partially supported. While app-level configuration should still be done in `next.config.js` to configure the Next.js server, the `staticwebapp.config.json` file can still be used to restrict routes to roles and to configure headers, routes, redirects, and other settings.
Support for Azure Static Web Apps authentication and authorization
Azure Static Web Apps provides built-in authentication and role-based authorization. You can now use this built-in authentication with Next.js sites to secure them. The client principal containing the information of the authenticated user can be accessed from the request headers and used to perform server-side operations within API routes, Server Actions or React Server Components. The following code snippet indicates how the user headers can be accessed from within a Next.js codebase.
import { headers } from 'next/headers'

export default async function Home() {
  const headersList = headers();
  const clientPrincipal = JSON.parse(atob(headersList.get('x-ms-client-principal') || ''))

  return (
    <main>
      {clientPrincipal.userId}
    </main>
  );
}
Next.js backend functions hosted on a dedicated App Service plan
To ensure full feature compatibility with current and future Next.js releases, Next.js workloads on Azure Static Web Apps are uniquely hosted leveraging App Service. Static content for Next.js workloads will continue to be hosted on Azure Static Web Apps’ globally distributed content store, and Next.js backend functions are hosted by Azure Static Web Apps on a dedicated App Service plan. This enables improved support for Next.js features while retaining Static Web Apps’ existing pricing plans.
How can you activate these improvements?
These improvements have been rolled out to all regions of Azure Static Web Apps, and will take effect on your next deployment of a Next.js workload to your Static Web Apps resource.
Get started with Next.js on Azure Static Web Apps
We hope you find these improvements useful and helpful for your Next.js development. We are always looking for feedback and suggestions, and are actively reading and engaging in our community GitHub.
Get started deploying Next.js sites on Azure Static Web Apps for free!
Share feedback for Next.js in the Azure Static Web Apps GitHub repository
Follow and tag our Twitter account for Azure Static Web Apps
Accessibility in Microsoft 365 Core Apps
“Accessibility is not a bolt on. It’s something that must be built in to every product we make so that our products work for everyone. Only then will we empower every person and every organization on the planet to achieve more. This is the inclusive culture we aspire to create” – Satya Nadella
Our journey in accessible technology is grounded in a shared conviction at Microsoft. As product makers, we believe in the obligation to build technology that truly empowers people, fostering an inherently equitable experience.
In this pursuit, we’ve embraced a mindset we call “shift left,” incorporating accessibility at every stage and right from the inception of designing and building our products.
Reflecting on my tenure, I’ve had the privilege of contributing to some of the world’s most impactful technologies, particularly with Office and Windows. Traditionally, these products were well established with years of development and only later received our focused attention on accessibility.
However, Copilot presented a rare opportunity for us to incorporate accessibility right from the inception of the product and therefore “shift left” the entire design and development process. What’s more, AI technologies like Copilot brought a unique opportunity to reshape how humans interact with computers in a way that makes the experience MORE equitable and transformative for all.
Now, integrated into our Microsoft 365 Core Apps, Copilot brings forth exciting capabilities. Our goal is to bridge the gap between your interaction with technology and how you express your ideas, making the user experience more inclusive and empowering for all.
At Copilot’s core lies a commitment to equity, underscoring our ongoing dedication to fostering a technology landscape that truly serves every individual, ensuring that no one is left behind.
Equity at Copilot’s Core
We are actively shifting left in our product development by making accessibility a core part of Copilot’s design and functionality. Copilot is designed to work well with assistive technologies, such as screen readers, magnifiers, contrast themes, and voice input, and to provide a seamless and intuitive user experience.
But in addition to that, Copilot is a tool designed to be accessible itself.
In this process, we have collaborated and co-innovated with a diverse set of customers, more than 600 members of Microsoft’s employee disability community, and partners in research and design, relying on the commitment of engineers and product makers to listen and be accountable.
To illustrate how Copilot can enhance accessibility, I want to share with you some highlights from engaging with participants who had early access to Copilot:
Drafting emails: Copilot can help create first drafts in a matter of minutes. This can be especially helpful for people who have limited mobility and find typing challenging. You can generate different versions of an email with different levels of formality and detail, and ask Copilot to check your calendar and suggest a meeting time, all with a few clicks.
Using voice commands: With Copilot, you can create entire PowerPoint presentations just using your voice. Just tell Copilot about what you want to create, and Copilot can generate relevant graphics and notes for slides.
These examples demonstrate how Copilot can save users time and effort, as well as help them express their ideas and communicate their expertise more effectively. They also show how Copilot can adapt to their preferences and needs and provide them with a supportive partner that can enhance their communication and productivity.
In addition to these areas of feedback, we also conducted a deep dive study with members of the neurodiverse community. The neurodiverse community (which makes up 10-20% of the world) faces common challenges that we can all relate to, but their lives are disproportionately affected by them. Examples include planning, focus, procrastination, communication, reading ease and comprehension, being overly literal, and fatigue.
For the neurodiverse community, our study showed that Copilot can be a powerful ally, offering assistance in overcoming these challenges. It serves as a facilitator for thought organization, acts as a springboard for writing tasks, aids in surmounting task initiation barriers, and assists in processing extensive information found in documents, messages, or emails.
Members of the community reported Copilot helping their communication effectiveness by distilling action items from team meetings and documents, generating summaries, adjusting the tone and context of their content, and bridging communication gaps.
As one of the participants in the study said, “For me, Copilot itself is accessibility. Having Copilot is like putting on glasses after I’ve been squinting my entire career. It is equity and I think as a neurodivergent individual, I can’t imagine going back.”
Making Accessible Content with Ease
On our journey to create products that are truly inclusive, we’re also empowering document authors to shift left and build better authoring habits by catching accessibility errors early in the doc creation process. Ensuring that your content is comprehensible to all individuals, irrespective of their visual abilities or preferences, is a crucial component of accessibility. To assist you in this endeavor, we have created the Accessibility Assistant, a robust tool that can detect and resolve potential problems in your documents, emails, and presentations. You can access the Accessibility Assistant from the Review tab in Word, Outlook, and PowerPoint.
New features of the Accessibility Assistant include the following highlights:
In-canvas notifications for readability: This feature notifies you of common accessibility hurdles, such as text color not meeting the Web Content Accessibility Guidelines (WCAG) color contrast ratio or images lacking descriptions. You can use the inclusive color picker to choose an appropriate color from the suggested options and use the automatically generated image descriptions to provide alt text, making it easier to create accessible content.
Quick fix card for multiple issues: This feature allows you to fix several issues of the same type with fewer clicks. For example, you can change the color of all the text that has low contrast in your document.
Per-slide toggle for PowerPoint: This feature enables you to view and fix the accessibility issues for each slide individually, instead of seeing them by categories. This can help you focus on your own slides and collaborate with others more easily.
These capabilities are designed to help you create accessible content faster and more easily, and to ensure that everyone can access and enjoy your work. The Accessibility Assistant for Word Desktop has started rolling out to Insider Beta Channel users running Version 2012 (Build 17425.2000) or later. The feature will be available for Outlook Desktop in the Insider Beta Channel by April 2024, followed by release to PowerPoint Desktop this summer.
Our Commitment
At Microsoft, we believe that everyone has something valuable to offer, and that diversity of perspectives and experiences can enrich our products and services. That’s why we are committed to empowering everyone to achieve more, fostering an inherently equitable experience. Copilot is one of the ways that we are fulfilling this commitment, by providing a supportive partner that can help you with common challenges, enhance your communication, and bridge the gap between your interaction with technology and how you express your ideas.
But we also know that we are not done yet. We are still on a journey of understanding how AI and LLMs will continue to evolve and make the world a more equitable place. We are constantly learning from our customers, partners, and the disability community, and we are always looking for ways to improve our accessibility features and functionality. We welcome your feedback and suggestions on how we can make Copilot better for you and for everyone.
To learn more about Copilot and how to get started, please visit the Copilot website or the Copilot support page. To learn more about accessibility at Microsoft and how to access our accessibility features, please visit the Microsoft Accessibility website or the Disability Answer Desk. And to share your feedback or suggestions on Copilot, please use the feedback button (thumbs up or down).
Together, we can make the world a more equitable place for everyone.
Windows 11 Plans to Expand CLAT Support
Thank you everyone who responded to our recent IPv6 migration survey! We want you to know that we are committed to improving your IPv6 journey and these data are helpful in shaping our future plans.
To that end, just a quick update: we are committing to expanding our CLAT support to include non-cellular network interfaces in a future version of Windows 11. This will include discovery using the relevant parts of the RFC 7050 (ipv4only.arpa DNS query), RFC 8781 (PREF64 option in RAs), and RFC 8925 (DHCP Option 108) standards. Once we have functionality available for you to test in Windows Insider builds, we will let you know.
We are looking forward to continuing to provide support for your platform networking needs!
Optimize your Azure costs
Author introduction
Hi, I am Saira Shaik, Principal Customer Success Account Manager at Microsoft India.
This article provides guidance for customers who want to optimize their Azure costs. It covers tools and resources that help you save money, understand and forecast your costs, cost-optimize your workloads, and control your spending.
Explore tools and resources to help you save
Find out about the tools, offers, and guidance designed to help you manage and optimize your Azure costs. Learn how to understand and forecast your bill, optimize workload costs, and control your spending.
8 ways to optimize costs
1. Shut down unused resources.
Identify idle virtual machines (VMs), ExpressRoute circuits, and other resources with Azure Advisor. Get recommendations on which resources to shut down and see how much you would save.
Useful Links
Reduce service costs using Azure Advisor – Azure Advisor | Microsoft Learn
2. Right-size underused resources
Find underutilized resources with Azure Advisor—and get recommendations on how to reduce your spend by reconfiguring or consolidating them.
Useful Links
Reduce service costs using Azure Advisor – Azure Advisor | Microsoft Learn
3. Add an Azure savings plan for compute for dynamic workloads
Save up to 65 percent off pay-as-you-go pricing when you commit to spend a fixed hourly amount on compute services for one or three years.
Useful Links
Azure Savings Plan Savings – youtube.com/playlist?list=PLlrxD0HtieHjd-zn7u09YoGJY18ZrN1Hq
Introduction to Azure savings plan for compute (youtube.com)
Understanding your Azure savings plan recommendations (youtube.com)
How Azure savings plan is applied to a customer’s compute environment (youtube.com)
Azure Savings Plan for Compute | Microsoft Azure
4. Reserve instances for consistent workloads
Get a discount of up to 72 percent over pay-as-you-go pricing on Azure services when you prepay for a one- or three-year term with reservation pricing.
Useful Links
Reservations | Microsoft Azure
Advisor Clinic: Lower costs with Azure Virtual Machine reservations (youtube.com)
Model virtual machine costs with the Azure Cost Estimator Power BI Template (youtube.com)
5. Take advantage of the Azure Hybrid Benefit
AWS is up to five times more expensive than Azure for Windows Server and SQL Server. Save when you migrate your on-premises workloads to Azure.
Useful Links
Azure Hybrid Benefit—hybrid cloud | Microsoft Azure
Reduce costs and increase SQL license utilization using Azure Hybrid Benefit (youtube.com)
Managing and Optimizing Your Azure Hybrid Benefit Usage (With Tools!) – Microsoft Community Hub
6. Configure autoscaling
Save by dynamically allocating and de-allocating resources to match your performance needs.
Useful Links
Autoscaling guidance – Best practices for cloud applications | Microsoft Learn
7. Choose the right Azure compute service
Azure offers many ways to host your code. Operate more cost efficiently by selecting the right compute service for your application.
Useful Links
Choose an Azure compute service – Azure Architecture Center | Microsoft Learn
Armchair Architects: Exploring the relationship between Cost and Architecture (youtube.com)
8. Set up budgets and allocate costs to teams and projects
Create and manage budgets for the Azure services you use or subscribe to—and monitor your organization’s cloud spending—with Microsoft Cost Management.
Useful Links
Tutorial – Create and manage budgets – Microsoft Cost Management | Microsoft Learn
The Cloud Clinic: Use tagging and cost management tools to keep your org accountable (youtube.com)
Understand and forecast your costs
Monitor and analyze your Azure bill with Microsoft Cost Management. Set budgets and allocate spending to your teams and projects.
Estimate the costs for your next Azure projects using the Azure pricing calculator and the Total Cost of Ownership (TCO) calculator.
Successfully build your cloud business case with key financial and technical guidance from Azure.
Useful Links
FinOps toolkit – Kick start your FinOps efforts (microsoft.github.io)
Azure Savings Dashboard – Microsoft Community Hub
Azure Cost Management Dashboard – Microsoft Community Hub
Cost optimize your workloads
Follow your Azure Advisor best practice recommendations for cost savings.
Review your workload architecture for cost optimization using the Microsoft Azure Well-Architected Review assessment and the Microsoft Azure Well-Architected Framework design documentation, as well as the Well-Architected Cost Optimization implementation described at Customer Offerings: Well-Architected Cost Optimization Implementation – Microsoft Community Hub.
Save with Azure offers and licensing terms such as the Azure Hybrid Benefit, paying in advance for predictable workloads with reservations, Azure Spot Virtual Machines, Azure savings plan for compute, and Azure dev/test pricing.
Control your costs
Mitigate cloud spending risks by implementing cost management governance best practices at your company using the Microsoft Cloud Adoption Framework for Azure.
Implement cost controls and guardrails for your environment with Azure Policy.
Azure SQL MI premium-series memory optimized hw is now available in all regions with up to 40 vCores
Recently, we announced a number of Azure SQL Managed Instance improvements in the Business Critical tier. In this article, we would like to highlight that premium-series memory optimized hardware is now available in all Azure regions, with up to 40 vCores!
What is new?
Having the latest and greatest hardware generation available in the Azure SQL Managed Instance Business Critical service tier can be crucial for critical customer workloads. Until recently, the premium-series memory optimized hardware generation was available only in a subset of Azure regions. Now you can have a SQL MI Business Critical instance with premium-series memory optimized hardware in any Azure region, with up to 40 vCores.
This means that the new state for premium-series memory optimized hardware availability is:
Up to 40 vCores: available in every Azure region.
48, 56, 64, 80, 96 and 128 vCore options: for now, available in a subset of Azure regions.
Improve performance of your database workload with more memory per vCore
Increasing memory can improve the performance of applications and databases by reducing the need to read from disk and instead storing more data in memory, which is faster to access. You might want to consider upgrading to memory-optimized premium-series for several reasons:
Buffering and Caching: More memory can be utilized for caching frequently accessed data or buffering I/O operations, leading to faster response times and improved overall system performance.
Handling Larger Datasets: If the user is dealing with larger datasets or increasing workload demands, more memory can accommodate the additional data and processing requirements without experiencing slowdowns or performance bottlenecks.
Concurrency and Scalability: Higher memory capacity can support more concurrent users or processes, allowing the system to handle increased workload and scale effectively without sacrificing performance.
Complex Queries and Analytics: Memory-intensive operations such as complex queries, data analytics, and reporting often benefit from having more memory available to store intermediate results and perform calculations efficiently.
In-Memory Processing: Some databases and applications offer in-memory processing capabilities, where data is stored and manipulated entirely in memory for faster processing. Increasing memory allows for more data to be processed in-memory, resulting in faster query execution and data manipulation.
How to upgrade your instance to premium-series memory optimized hardware
You can scale your existing managed instance from the Azure portal, PowerShell, the Azure CLI, or ARM templates. You can also use ‘online scaling’ with minimal downtime. See Scale resources – Azure SQL Database & Azure SQL Managed Instance | Microsoft Learn.
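As an illustration (not from the original article), here is a minimal Python sketch of requesting that scale operation with the azure-mgmt-sql SDK. The resource names are placeholders, and the sku name and family strings ("BC_G8IM" and "G8IM" for Business Critical premium-series memory optimized) are assumptions that you should verify against the ARM reference for your subscription before running.

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import ManagedInstanceUpdate, Sku

subscription_id = "<subscription-id>"        # placeholder values
resource_group = "my-resource-group"
instance_name = "my-managed-instance"

client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# Request Business Critical, premium-series memory optimized hardware, 40 vCores.
# The sku name/family values are assumptions -- confirm them for your region first.
poller = client.managed_instances.begin_update(
    resource_group_name=resource_group,
    managed_instance_name=instance_name,
    parameters=ManagedInstanceUpdate(
        sku=Sku(name="BC_G8IM", tier="BusinessCritical", family="G8IM"),
        v_cores=40,
    ),
)
instance = poller.result()  # long-running operation; online scaling minimizes downtime
print(instance.sku.name, instance.v_cores)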
Summary
More memory for a managed instance can lead to improved performance, scalability, and efficiency in handling larger workloads, complex operations, and data processing tasks. This improvement in Azure SQL Managed Instance Business Critical makes premium-series memory optimized hardware available in all regions, up to 40 vCores.
If you’re still new to Azure SQL Managed Instance, now is a great time to get started and take Azure SQL Managed Instance for a spin!
Next steps:
Learn more about the latest innovation in Azure SQL Managed Instance.
Try SQL MI free of charge for the first 12 months.
Learn about AI and Microsoft Copilot for Security with Learn Live
Want to learn more about Generative AI and Microsoft Copilot?
Microsoft is launching a Learn Live Series called “Getting Started with Microsoft Copilot for Security.” This weekly online seminar series will run from March 19th through April 9th and will review skill development resources and discuss topics related to AI and Copilot for Security.
Hosts Edward Walton, Andrea Fisher, and Rod Trent will guide you through four topics, each with a corresponding Microsoft Learn module, designed to help anyone interested in getting users ready for Microsoft Copilot for Security.
Fundamentals of Generative AI
March 19th 12:00 pm – 1:30 pm PDT
In this session, you will explore the way in which large language models (LLMs) enable AI applications and services to generate original content based on natural language input. You will also learn how generative AI enables the creation of AI-powered copilots that can assist humans in creative tasks. In this episode, you will:
Learn about the kinds of solutions AI can make possible and considerations for responsible AI practices
Understand generative AI’s place in the development of artificial intelligence
Understand large language models and their role in intelligent applications
Describe how Azure OpenAI supports intelligent application creation
Describe examples of copilots and good prompts
Fundamentals of Responsible Generative AI
March 27th 12:00 pm – 1:30 pm PDT
Generative AI enables amazing creative solutions but must be implemented responsibly to minimize the risk of harmful content generation. In this episode, you will:
Describe an overall process for responsible generative AI solution development
Identify and prioritize potential harms relevant to a generative AI solution
Measure the presence of harms in a generative AI solution
Mitigate harms in a generative AI solution
Prepare to deploy and operate a generative AI solution responsibly
Get started with Microsoft Security Copilot
April 2nd 12:00 pm – 1:30 pm PDT
Get acquainted with Microsoft Copilot for Security. You will be introduced to some basic terminology, how Microsoft Copilot for Security processes prompts, the elements of an effective prompt, and how to enable the solution. In this episode, you will:
Describe what Microsoft Copilot for Security is.
Describe the terminology of Microsoft Copilot for Security.
Describe how Microsoft Copilot for Security processes prompt requests.
Describe the elements of an effective prompt
Describe how to enable your Microsoft Copilot for Security solution.
Describe the core features of Microsoft Security Copilot
April 9th 12:00 pm – 1:30 pm PDT
Microsoft Copilot for Security has a rich set of features. Learn about available plugins that enable integration with various data sources, promptbooks, the ways you can export and share information from Copilot for Security, and much more. In this episode, you will:
Describe the features available in the standalone experience.
Describe the services to which Copilot for Security can integrate.
Describe the embedded experience
Jump-start your Copilot for Security journey and join us for the Learn Live series starting on Tuesday, March 19th!
Announcing the Public Preview of Change Actor
Change Analysis
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Audit, troubleshoot, and govern at scale
Changes should be available in under five minutes and are queryable for fourteen days. In addition, this support includes the ability to craft charts and pin results to Azure dashboards based on specific change queries.
What’s new: Actor Functionality
This added functionality is in public preview.
Who made the change: either an AppId (a client or an Azure service) or the email ID of the user, e.g. changedBy: elizabeth@contoso.com
With which client the change was made, e.g. clientType: portal
What operation was called: see Azure resource provider operations | Microsoft Learn
Try it out
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Sample Queries
Here is the documentation on how to query resourcechanges and resourcecontainerchanges in Azure Resource Graph: Get resource changes – Azure Resource Graph | Microsoft Learn
Summarization of who and which client were used to make resource changes in the last 7 days ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, changedByType, clientType
| summarize count() by changedBy, changeType, clientType
| order by count_ desc
Summarization of who and what operations were used to make resource changes ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| project changeType, changedBy, operation
| summarize count() by changedBy, operation
| order by count_ desc
List resource container (resource group, subscription, and management group) changes, including who made the change, which client was used, and which operation was called, ordered by the time of the change
resourcecontainerchanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation=tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| project changeTime, changeType, changedBy, changedByType, clientType, operation, targetResourceId
| order by changeTime desc
FAQ
How do I use Change Analysis?
Change Analysis can be used by querying the resourcechanges or resourcecontainerchanges tables in Azure Resource Graph, such as with Azure Resource Graph Explorer in the Azure portal or through the Azure Resource Graph APIs. More information can be found here: Get resource changes – Azure Resource Graph | Microsoft Learn.
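As an illustration (not part of the original post), here is a minimal Python sketch of running one of the change queries through the Azure Resource Graph SDK (azure-mgmt-resourcegraph). The subscription ID is a hypothetical placeholder.

from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

# Authenticate and create the Resource Graph client
client = ResourceGraphClient(DefaultAzureCredential())

# Who changed what, and with which client, over the last 7 days
query = """
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
         changedBy = tostring(properties.changeAttributes.changedBy),
         clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| summarize count() by changedBy, clientType
| order by count_ desc
"""

response = client.resources(
    QueryRequest(subscriptions=["<subscription-id>"], query=query)
)
for row in response.data:
    print(row)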
What does unknown mean?
Unknown is displayed when the change happened on a client that is unrecognized.
Why are some of the changedBy values unspecified?
Some resources in the resourcechanges table are not yet fully covered by the change actor functionality. This can happen when a resource has been affected by a system change, or when the resource provider still needs to send us the Who/How information. Unspecified is displayed when the resource is missing changedByType values, and it can appear for either Creates or Updates. You may also see an increase in Unspecified values for these resource types:
virtualmachines
virtualmachinescalesets
publicipaddresses
disks
networkinterfaces
What resources are included?
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Questions and Feedback
If you have any questions or want to provide direct input, you can reach out to us at argchange@microsoft.com.
Share Product feedback and ideas with us at Azure Governance · Community
Load Test Emulation for Azure Database for MySQL – Flexible Server using mysqlslap
Guidance for using mysqlslap to simulate client load and measure performance
Introduction
mysqlslap is a diagnostic program included with the MySQL server distribution that you can use to emulate client load on a MySQL server and report the timing of each stage. It works as if multiple clients were accessing the server simultaneously.
In this post, I’ll show you how to use mysqlslap to perform load test emulation for Azure Database for MySQL – Flexible Server, a fully managed and scalable MySQL service on Azure. I’ll install mysqlslap, configure the connection parameters, run different types of tests, and then analyze the results.
Prerequisites
Before you begin, ensure that the following prerequisites are in place:
An instance of Azure Database for MySQL – Flexible Server. To create one, follow the guidance in this tutorial: Quickstart: Create with Azure portal – Azure Database for MySQL – Flexible Server.
A MySQL client installation that includes mysqlslap; you can download it from MySQL :: MySQL Community Downloads.
A test database and table on your Azure Database for MySQL Flexible Server instance. Use the following queries to create a test database and table with one million dummy records:
mysql> CREATE DATABASE loadtestdb;
mysql> USE loadtestdb;
mysql> CREATE TABLE loadtesttable (
           ID INT PRIMARY KEY AUTO_INCREMENT,
           Name VARCHAR(255),
           Age INT,
           Salary DECIMAL(10, 2),
           Department VARCHAR(50),
           City VARCHAR(100),
           Country VARCHAR(100)
       );
mysql> INSERT INTO loadtesttable (Name, Age, Salary, Department, City, Country)
       SELECT
           CONCAT(CHAR(FLOOR(RAND() * 26) + 65), 'Person', n),
           FLOOR(RAND() * 100) + 18,
           ROUND(RAND() * 10000000, 2),
           CASE WHEN RAND() < 0.5 THEN 'IT' ELSE 'Sales' END,
           CASE WHEN RAND() < 0.5 THEN 'New York' ELSE 'Los Angeles' END,
           CASE WHEN RAND() < 0.5 THEN 'USA' ELSE 'Canada' END
       FROM (
           SELECT
               a.N + b.N * 10 + c.N * 100 + d.N * 1000 + e.N * 10000 + f.N * 100000 + 1 AS n
           FROM
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a,
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b,
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) c,
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) d,
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) e,
               (SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) f
       ) AS Numbers;
Installing mysqlslap
Instructions for installing mysqlslap on a computer running Windows or Linux appear in the following sections.
Note: If you’re working in the Azure Cloud Shell, mysqlslap is included automatically.
Windows
To install mysqlslap on a computer running Windows, download the MySQL Installer, run it, and then follow the wizard. mysqlslap will be installed in the folder C:\Program Files\MySQL\MySQL Server 8.1\bin (assuming the installation is on C:). Alternatively, you can download the MySQL ZIP Archive and extract mysqlslap from mysql-8.0.36-winx64.zip\mysql-8.0.36-winx64\bin.
Linux
To install mysqlslap on a computer running Linux, install the MySQL client package (which includes mysqlslap) by running the following command:
sudo apt update
sudo apt install mysql-client
mysqlslap --version
Configuring the connection parameters
To connect to the Azure Database for MySQL – Flexible Server instance, run the following command:
mysqlslap --host=myserver.mysql.database.azure.com --user=myuser --password=mypassword --port=3306 --ssl-mode=REQUIRED
With this command, you can specify the following parameters:
--host: The host name or IP address of your server. You can find it on the Azure portal under the Overview section of your server.
--port: The port number of your server. The default is 3306.
--user: The username to log in to your server. You can use the admin user that you created when you provisioned your server, or any other user that has access to the test database.
--password: The password to log in to your server. You will be prompted to enter it when you run mysqlslap.
--ssl-mode: The SSL mode to use for the connection. You can use REQUIRED, VERIFY_CA, or VERIFY_IDENTITY. The default is REQUIRED. For more information about SSL modes, see MySQL 8.0: Configuring MySQL to Use Encrypted Connections.
Running different types of tests
The mysqlslap test process includes three stages:
Create schema, table, and optionally any stored programs or data to use for the test. This stage uses a single client connection.
Run the load test. This stage can use many client connections.
Clean up (disconnect, drop table if specified). This stage uses a single client connection.
Use mysqlslap to run different types of tests, such as concurrency tests, stress tests, or benchmark tests. To specify the test parameters, consider the following options:
--concurrency: Specifies the number of simultaneous client connections. You can provide a single value or a comma-separated list of values. For example, --concurrency=10 means 10 threads, and --concurrency=10,20,30 means three tests with 10, 20, and 30 threads respectively.
--iterations: Defines the number of times the benchmark test should be repeated. The default is 1.
--number-of-queries: The number of queries to run per thread. The default is 0, which means unlimited.
--query: Specifies the SQL query to be executed during the test. You can provide a single query or multiple queries. For example, --query="SELECT * FROM testtable" runs a simple SELECT query.
--create-schema: The name of the database to use for the test. The default is the mysqlslap database.
--create: The statement to create the test table. You can provide a single statement or multiple statements. For example, --create="CREATE TABLE testtable (id INT)" creates a simple test table.
--delimiter: Specifies a different delimiter, which enables you to specify statements that span multiple lines or place multiple statements on a single line.
--auto-generate-sql: A flag to indicate whether to generate random queries for the test. The default is FALSE. If you set it to TRUE, you can use the following options to control the query generation.
--auto-generate-sql-add-autoincrement: A flag to indicate whether to add an AUTO_INCREMENT column to the test table. The default is FALSE.
--auto-generate-sql-execute-number: The number of queries to generate and execute per thread. The default is 10.
--auto-generate-sql-load-type: The type of queries to generate. You can use MIXED, UPDATE, WRITE, or READ. The default is MIXED.
--auto-generate-sql-unique-query-number: The number of unique queries to generate. The default is 10.
--auto-generate-sql-unique-write-number: The number of unique queries to generate for write load. The default is 10.
You can also use the --engine option to specify the storage engine to use for the test table. The default is InnoDB. For more information about mysqlslap options, see MySQL 8.0 Reference Manual :: 6.5.8 mysqlslap.
Before running a test with mysqlslap, be sure to use an empty user database or the default mysqlslap database when using the --create or --auto-generate-sql-* options. If the --create or --auto-generate-sql-* option is given, mysqlslap drops the schema at the end of the test run, which means that any existing data in that database will be lost.
Some examples showing how to run different types of tests using mysqlslap follow.
To run a concurrency test with 10, 20, and 30 threads, each executing 100 queries 10 times, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=10,20,30 --iterations=10 --number-of-queries=100 --query="SELECT ID, Name, Age, Salary, Department, City, Country FROM loadtesttable WHERE Name LIKE 'A%' AND Age BETWEEN 30 AND 40;" --create-schema=loadtestdb --verbose
To run a stress test with 50 threads, repeating the query over 25 iterations, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=50 --iterations=25 --query="SELECT ID, Name, Age, Salary, Department, City, Country FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY Department ORDER BY Salary DESC) AS R FROM loadtesttable) AS ranked WHERE R <= 5;" --create-schema=loadtestdb --verbose
To run a benchmark test with 10 threads, each executing 1000 randomly generated queries with a mixed load type, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=10 --iterations=1 --number-of-queries=1000 --auto-generate-sql --auto-generate-sql-load-type=MIXED --verbose
Analyzing the results
After running a test, mysqlslap displays the results, which include the following:
Average number of seconds to run all queries: The average time it took to run all the queries per thread.
Minimum number of seconds to run all queries: The minimum time it took to run all the queries per thread.
Maximum number of seconds to run all queries: The maximum time it took to run all the queries per thread.
Number of clients running queries: The number of threads that simulated the client load.
Average number of queries per client: The average number of queries that each thread executed.
You can use the --silent option to suppress the verbose output and display only the results. You can also use the --csv option to format the results as comma-separated values, which can easily be imported into a spreadsheet or a database for further analysis.
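As a rough illustration (not part of the original post), here is a Python sketch of loading mysqlslap --csv output with pandas for further analysis. The file name is a placeholder, and the column names reflect the usual field order of the CSV output; check them against your own results before relying on them.

import pandas as pd

# Assumed column order for mysqlslap --csv output (no header row):
# engine, test mode, average, minimum, maximum, clients, queries per client
columns = ["engine", "mode", "avg_seconds", "min_seconds",
           "max_seconds", "clients", "queries_per_client"]

results = pd.read_csv("mysqlslap_results.csv", header=None, names=columns)

# Compare the average runtime across concurrency levels
print(results.groupby("clients")["avg_seconds"].mean())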
An example of the results from a concurrency test with 10, 20, and 30 threads, each executing 100 queries 10 times, follows:
Benchmark
Average number of seconds to run all queries: 2.024 seconds
Minimum number of seconds to run all queries: 2.003 seconds
Maximum number of seconds to run all queries: 2.041 seconds
Number of clients running queries: 10
Average number of queries per client: 10
Benchmark
Average number of seconds to run all queries: 2.070 seconds
Minimum number of seconds to run all queries: 2.022 seconds
Maximum number of seconds to run all queries: 2.228 seconds
Number of clients running queries: 20
Average number of queries per client: 5
Benchmark
Average number of seconds to run all queries: 1.885 seconds
Minimum number of seconds to run all queries: 1.849 seconds
Maximum number of seconds to run all queries: 2.021 seconds
Number of clients running queries: 30
Average number of queries per client: 3
You can use the results to compare the performance of an Azure Database for MySQL – Flexible Server instance under different load scenarios, and to identify any potential bottlenecks or issues. You can also use the results to tune the server configuration, such as the number of connections, the buffer pool size, the query cache size, or the index statistics.
Best practices
When you’re using mysqlslap to perform load test emulation for an Azure Database for MySQL – Flexible Server instance, consider the following best practices.
Before using the mysqlslap utility in your production environment, test it thoroughly in your lowest environment against a test database.
Define benchmarking scenarios that closely resemble your production environment.
Use realistic datasets and queries representative of your actual workload.
Adjust benchmarking parameters such as concurrency, iterations, and query complexity to match your workload characteristics.
Test different combinations of parameters to understand their impact on performance.
Analyze results carefully and consider multiple metrics for performance evaluation.
Monitor system resources (CPU, memory, disk I/O) during benchmark tests to identify any resource bottlenecks.
Repeat benchmark tests multiple times to validate results and ensure consistency.
Conclusion
In this post, I’ve described how to use mysqlslap to perform load test emulation for an Azure Database for MySQL – Flexible Server instance. I’ve described how to install mysqlslap, configure the connection parameters, run different types of tests, and analyze the results. Be sure to use mysqlslap to simulate client load and measure the performance of your MySQL flexible server, as well as to optimize your server configuration and query performance.
If you have any questions about the detail provided above, please leave a comment below or email us at AskAzureDBforMySQL@service.microsoft.com. Thank you!
References
For more information about using mysqlslap, see the MySQL documentation at https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html.
Partner Blog | Accelerating our collective progress with women-led innovation.
Women are a powerful force, and the businesses they lead contribute to a more inclusive world. This International Women’s Day and Women’s History Month in the United States, we celebrate the strides the world is making towards gender equality, while acknowledging there is much more work to be done.
The United Nations recognizes that gender equality is a fundamental human right—the basis for a peaceful, prosperous, and sustainable world. According to the Global Gender Gap Report 2023, women’s global economic participation and representation in STEM (science, technology, engineering, and mathematics) and in senior leadership positions are declining. Closing the global gender gap is critical to sustainable economic growth, increasing access to new markets and opportunities for more people. When we invest in women, we can accelerate progress.
Continue reading here
Securing the Clouds: Achieving a Unified Security Stance and threat-based approach to Use Cases
Note: this is the second of a four-part blog series that explores the complexities of securing multiple clouds and the limitations of traditional Security Information and Event Management (SIEM) tools.
In the first post, we discussed the importance of adopting a multi-cloud approach to observability, centralizing in a single SIEM all the events generated by your infrastructure to enable a more comprehensive analysis of potential security incidents by correlating events independently of their origin. We also hinted at the complexity of such an endeavor.
You can read the first post here: Securing the Clouds: Navigating Multi-Cloud Security with Advanced SIEM Strategies – Microsoft Community Hub
In this new post, we focus on a different topic: the importance of adopting a threat-based approach. Along the way, we discuss how this can be achieved and provide a few practical ideas you can apply to your own scenarios.
The Threat-Based Approach
The threat-based approach to creating use cases consists of identifying potential attacks on the system, considering each cloud environment and on-premises environment, and then how they interrelate and interact. You then derive attack use cases, which drive the definition of the logic to identify those attacks and trigger remediation activities. Those potential attacks are also known as threats or, more precisely, as threat events.
The threat-based approach is not the only possibility; in fact, the most common approaches are vulnerability-based. With those, the focus is on identifying vulnerabilities, like the infamous Log4Shell, and consequently on indicators that may reveal attacks in progress.
Vulnerability-Based Approaches and How They Compare
Vulnerability-driven approaches have various shortcomings. For instance, they tend to grow their detection capabilities in response to known vulnerabilities and as a reaction to successful attacks; they are therefore designed to prevent those specific attacks from recurring. Conversely, threat-driven approaches are based on an understanding of how attacks may happen and are designed to detect them independently of the presence of specific vulnerabilities. We have chosen to adopt a threat-driven approach to improve detection and response capabilities and to reduce the impact of security incidents.
Another advantage of the threat-driven approach is that it is often independent of the existence of specific vulnerabilities. For example, you can analyze the possibility of an attacker injecting code into its requests, leveraging some vulnerability to execute that code, without referring to a specific vulnerability. This allows you to design detection mechanisms that are vulnerability independent; therefore, they would apply to many similar vulnerabilities, including as-yet-undisclosed zero-day vulnerabilities.
A threat-driven approach is a proactive and strategic way of analyzing a complex infrastructure to identify how it could be attacked. It is more effective than the corresponding vulnerability-based approaches because it takes into account the exploitability of the vulnerabilities. In other words, you determine what a malicious actor can actually do to compromise your system, which allows you to set aside vulnerabilities that do not matter because they are hardly exploitable.
How to Apply the Threat-Based Approach
To adopt a threat-driven approach, we started with the following steps:
Analyze the organization threat landscape focusing on factors like geographical location, industry, externally exposed services, and potentially much more.
Leverage the organization’s threat intelligence to gain a broader understanding of the threat landscape and of the specific threats and attacks that target the organization.
Identify and prioritize the most impactful threats and attack vectors that target the organization’s assets, operations, and objectives, using your telemetry.
Assess and understand the capabilities, tactics, techniques, and procedures of the threat actors and their motivations and goals.
Monitor and evaluate the effectiveness and performance of the security controls and countermeasures and adjust them over time as the threat landscape evolves.
To apply a threat-driven approach, organizations need to incorporate threat analysis and threat intelligence across their systems development and operational processes.
A Practical Example of Threat-Based Approach
Now that we have understood what the threat-driven approach is, it is time to get a few ideas about how you can implement it in your organization.
Consider a multi-cloud organization as an example. It is not uncommon for such organizations to adopt identity and security solutions from various vendors. This complicates integration and raises the risk of supply chain attacks, due to the lack of comprehensive visibility and the increased complexity. To address this situation, it is best to adopt a structured approach based on the following steps:
Get a good understanding of the various environments, focusing your attention on their components and on how they interact with each other.
Perform a risk analysis on those environments to identify the threats and the monitoring capabilities included with each service, and to identify any gaps to be covered with additional events. This could be done by applying some lightweight threat modeling.
Evaluate the typical attack scenarios seen by the organization towards the systems in scope and ensure that they are represented within the threat analysis. This may include an analysis of the specific threat landscape for the organization.
The “threat modeling” specified in the second point is a security process to understand security threats to a system, determine risks from those threats, and establish appropriate mitigations. There are various ways to perform threat modeling, and all of them are well represented by the Threat Modeling Manifesto. Microsoft has developed one of the first threat modeling processes, called Microsoft STRIDE Threat Modeling. This approach has evolved over the years and is continuously evolving. If you want to learn more, please go to Microsoft Security Development Lifecycle Threat Modelling.
Creating the Use Cases
The three steps above identify the threats to the system. This represents the first phase of our journey. The next one consists of the definition of Use Cases. You will often have a Use Case for each threat, but this is not an absolute rule: in various situations you will want to cover multiple threats with a single Use Case. Nevertheless, each Use Case describes the associated threats as a single story and identifies the events necessary to detect such attacks and where they can be found. The Use Case typically also contains the definition of the actions that can be performed to control the risk, and the conditions under which they are triggered. Those actions can be both manual and automated.
Responding to Attacks
It is very important to define automated activities that are executed when a potential attack is detected. This allows you to respond to attacks faster than you could if you rely only on manual response.
Time is a critical factor in the realm of cybersecurity. Let’s delve into why swift responses matter:
When a cyberattack occurs, a rapid response allows organizations to identify and contain the breach promptly. By doing so, they prevent the attack from spreading or escalating into a larger incident. This quick action minimizes the damage inflicted on systems, data, and operations.
Fast response enables organizations to restore normal operations swiftly. By minimizing downtime, they mitigate financial losses and maintain business continuity.
The cybersecurity landscape involves a constant race between defenders and attackers. Cybercriminals leverage evolving tools, tactics, and procedures, including zero-day exploits. While it takes an independent cybercriminal around 9.5 hours to gain illicit access to a target, defenders must act even faster to thwart such attempts. See The Importance Of Time And Speed In Cybersecurity (forbes.com).
While automation plays a major role in reducing the time required to respond to attacks, manual remediation activities are still essential. The problem is that automated actions cannot be too drastic: it's fine to block an IP address or disable an account that appears to be attacking the organization, but you will not want an automated procedure that takes down the whole infrastructure, even if you detect a possible data exfiltration.
Ultimately, the production of the Use Cases is instrumental in creating the rules in your SIEM system to detect attacks and in configuring your SOAR system to respond to them automatically.
Conclusions
Complex Multi-Cloud environments represent a significant challenge when you must create a monitoring infrastructure. Yet, achieving a comprehensive view of what happens in your organization is more important than ever. A structured approach like the Threat-Based Approach described in this post may help you to conquer complexity and get the results your organization needs.
Nevertheless, implementing a Threat-Based Approach is not the end. Organizations face new attacks every day. New software is acquired, extending the organization's attack surface. And new vulnerabilities are regularly found. For these reasons, it is essential to adopt a continuous improvement approach: the threat assessment must be regularly reiterated, new Use Cases must be created, and the existing rules in the SIEM and SOAR must be updated accordingly. If you do so, your organization will gradually but surely improve its security posture, and the monitoring infrastructure will eventually become one of the main tools to guarantee your business's security.
Future posts in this series will cover the following topics:
How Microsoft has implemented its security solutions across Azure, Oracle, AWS, and on-premises environments, thus enabling a unified and comprehensive defense against threats, for any enterprise
Key benefits and outcome examples for some of our multi-cloud security projects, including improved detection capabilities, enhanced visibility across the enterprise, efficiency, and cost savings.
Microsoft Tech Community – Latest Blogs –Read More
Frequently Asked Questions about TLS and Cipher Suite configuration
Disclaimer: Microsoft does not endorse the products listed in this article. They are provided for informational purposes and their listing does not constitute an endorsement. We do not guarantee the quality, safety, or effectiveness of listed products and disclaim liability for any related issues. Users should exercise their own judgment, conduct research, and seek professional advice before purchasing or using any listed products.
Disclaimer: This article contains content generated by Microsoft Copilot.
What versions of Windows support TLS 1.3?
Starting with Windows Server 2022, TLS 1.3 is supported by default in all versions. The protocol is not available in down-level OS versions.
What Linux distros will not support TLS 1.3?
Most modern Linux distributions have support for TLS 1.3. TLS 1.3 is a significant improvement in security and performance over earlier versions of TLS, and it’s widely adopted in modern web servers and clients. However, the specific versions of Linux and software components that support TLS 1.3 can vary, and it’s essential to keep your software up-to-date to benefit from the latest security features.
To ensure TLS 1.3 support, consider the following factors:
**Linux Kernel:** Most modern Linux kernels have support for TLS 1.3. Kernel support is essential for low-level network encryption. Ensure that your Linux distribution is running a reasonably recent kernel.
**OpenSSL or OpenSSL-Compatible Libraries:** TLS 1.3 support is primarily dependent on the version of OpenSSL or other TLS libraries in use. OpenSSL 1.1.1 and later versions generally provide support for TLS 1.3. However, the specific version available may depend on your Linux distribution and the software you’re using.
**Web Servers and Applications:** The web servers and applications you run on your Linux system need to be configured to enable TLS 1.3. Popular web servers like Apache, Nginx, and others have been updated to support TLS 1.3 in newer versions. Ensure that you are using an updated version of your web server software and have TLS 1.3 enabled in its configuration.
**Client Software:** If you are using Linux as a client to connect to servers over TLS, your client software (e.g., web browsers, email clients) should support TLS 1.3. Most modern web browsers and email clients on Linux have added support for TLS 1.3.
**Distribution Updates:** Regularly update your Linux distribution to receive security updates and new software versions, including those with TLS 1.3 support. Each Linux distribution may have different release schedules and package versions.
Since the state of software support can change over time, it’s crucial to check the specific versions and configurations of the software components you are using on your Linux system to determine their TLS 1.3 compatibility. Generally, using up-to-date software and keeping your Linux system patched with the latest security updates will ensure that you have the best support for TLS 1.3 and other security features.
How do I remove my dependency on legacy TLS encryption?
At a high level, resolving legacy TLS encryption issues requires understanding your TLS 1.0 and TLS 1.1 dependencies, upgrading to TLS 1.2+ compliant OS versions, updating applications, and testing.
Given the length of time TLS 1.0 has been supported by the software industry, it is highly recommended that any TLS 1.0 deprecation plan include the following:
Code analysis to find/fix hardcoded instances of TLS 1.0 or older security protocols.
Network endpoint scanning and traffic analysis to identify operating systems using TLS 1.0 or older protocols.
Full regression testing through your entire application stack with TLS 1.0 disabled.
Migration of legacy operating systems and development libraries/frameworks to versions capable of negotiating TLS 1.2 by default.
Compatibility testing across operating systems used by your business to identify any TLS 1.2 support issues (a quick scripted probe is sketched after this list).
Coordination with your own business partners and customers to notify them of your move to deprecate TLS 1.0.
Understanding which clients may no longer be able to connect to your servers once TLS 1.0 is disabled.
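As part of the compatibility testing above, a small script can show which TLS protocol versions a given endpoint will negotiate. The sketch below is a minimal example using only Python's standard library; the host name is a placeholder, and on systems where the local OpenSSL build disallows TLS 1.0/1.1, those probes may fail because of local policy rather than the server's configuration.

```python
# Minimal sketch: probe which TLS protocol versions a server will negotiate.
# The host/port below are placeholders; substitute your own endpoint.
import socket
import ssl

HOST, PORT = "www.example.com", 443

VERSIONS = {
    "TLS 1.0": ssl.TLSVersion.TLSv1,
    "TLS 1.1": ssl.TLSVersion.TLSv1_1,
    "TLS 1.2": ssl.TLSVersion.TLSv1_2,
    "TLS 1.3": ssl.TLSVersion.TLSv1_3,
}

for name, version in VERSIONS.items():
    context = ssl.create_default_context()
    # Pin both ends of the allowed range so only this one version can be used.
    context.minimum_version = version
    context.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{name}: negotiated {tls.version()}, cipher {tls.cipher()[0]}")
    except (ssl.SSLError, OSError) as exc:
        print(f"{name}: not negotiated ({exc.__class__.__name__})")
```

Running this against your own endpoints before and after disabling TLS 1.0/1.1 gives a quick sanity check that the change had the intended effect.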
How do I configure protocols and cipher suites for Apache?
Configuring cipher suites and protocols for the Apache web server involves modifying the server’s SSL/TLS settings in its configuration file. This process can help you enhance the security and compatibility of your web server. Here are the steps to configure cipher suites and protocols for Apache:
**Backup Configuration Files:**
Before making any changes, it’s essential to create backups of your Apache configuration files to ensure you can revert if something goes wrong. Common configuration files include `httpd.conf` or `apache2.conf`, and the SSL/TLS configuration file, often named something like `ssl.conf`.
**Edit SSL/TLS Configuration:**
Open the SSL/TLS configuration file for your Apache server using a text editor. The location of this file can vary depending on your Linux distribution and Apache version. Common locations include `/etc/httpd/conf.d/ssl.conf`, `/etc/apache2/sites-available/default-ssl.conf`, or similar. You may need root or superuser privileges to edit this file.
Example command to open the file in a text editor:
```
sudo nano /etc/httpd/conf.d/ssl.conf
```
**Specify Protocol Versions:**
To configure the allowed SSL/TLS protocols, you can use the `SSLProtocol` directive. For example, to allow only TLS 1.2 and TLS 1.3, you can add the following line to your configuration:
```
SSLProtocol -all +TLSv1.2 +TLSv1.3
```
This configuration disables every protocol (including SSLv3, TLS 1.0, and TLS 1.1) and then enables only TLS 1.2 and TLS 1.3.
**Specify Cipher Suites:**
To configure the allowed cipher suites, use the `SSLCipherSuite` directive. You can specify a list of cipher suites that you want to enable. Ensure that you use secure and modern cipher suites. For example:
```
SSLCipherSuite TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256
```
This example includes cipher suites that offer strong security and forward secrecy.
**Save and Close the Configuration File**
Save your changes and exit the text editor.
**Test Configuration**
Before you restart Apache, it’s a good practice to test your configuration for syntax errors. You can use the following command:
```
apachectl configtest
```
If you receive a “Syntax OK” message, your configuration is valid.
**Restart Apache:**
Finally, restart the Apache web server to apply the changes:
```
sudo systemctl restart apache2 # On systemd-based systems
```
```
sudo service apache2 restart # On non-systemd systems
```
Your Apache web server should now be configured to use the specified SSL/TLS protocols and cipher suites. Remember that keeping your SSL/TLS configuration up to date and secure is crucial for the overall security of your web server. Be sure to monitor security advisories and best practices for SSL/TLS configuration regularly.
How do I configure protocols and cipher suites for nginx?
To configure cipher suites and protocols for the Nginx web server, you’ll need to modify its SSL/TLS settings in the server block configuration. This process allows you to enhance the security and compatibility of your web server. Here are the steps to configure cipher suites and protocols for Nginx:
**Backup Configuration Files:**
Before making any changes, create backups of your Nginx configuration files to ensure you can revert if needed. Common configuration files include `nginx.conf`, `sites-available/default`, or a custom server block file.
**Edit the Nginx Configuration File:**
Open the Nginx configuration file in a text editor. The location of the main configuration file varies depending on your Linux distribution and Nginx version. Common locations include `/etc/nginx/nginx.conf`, `/etc/nginx/sites-available/default`, or a custom configuration file within `/etc/nginx/conf.d/`.
Example command to open the file in a text editor:
```bash
sudo nano /etc/nginx/nginx.conf
```
**Specify Protocol Versions:**
To configure the allowed SSL/TLS protocols, you can use the `ssl_protocols` directive in your `server` block or `http` block. For example, to allow only TLS 1.2 and TLS 1.3, add the following line:
```nginx
ssl_protocols TLSv1.2 TLSv1.3;
```
This configuration enables only TLS 1.2 and TLS 1.3; SSLv3, TLS 1.0, and TLS 1.1 are not offered.
**Specify Cipher Suites:**
To configure the allowed cipher suites, use the `ssl_ciphers` directive. Specify a list of cipher suites that you want to enable. Ensure that you use secure and modern cipher suites. For example:
```nginx
ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256';
```
This example includes cipher suites that offer strong security and forward secrecy.
**Save and Close the Configuration File:**
Save your changes and exit the text editor.
**Test Configuration:**
Before you reload Nginx to apply the changes, test your configuration for syntax errors:
```bash
sudo nginx -t
```
If you receive a “syntax is okay” message, your configuration is valid.
**Reload Nginx:**
Finally, reload Nginx to apply the new SSL/TLS settings:
```bash
sudo systemctl reload nginx # On systemd-based systems
```
```bash
sudo service nginx reload # On non-systemd systems
```
Your Nginx web server should now be configured to use the specified SSL/TLS protocols and cipher suites. Ensure that you stay updated with best practices and security advisories for SSL/TLS configurations to maintain the security of your web server.
What open-source tools can be used to test client connections?
There are several open-source tools available to test client connections for TLS (Transport Layer Security) connections, either for troubleshooting or security auditing purposes. Here are some popular ones:
Nmap
Nmap, a powerful network scanning tool, can be used to test TLS/SSL configurations and identify supported cipher suites on a server. Here are a couple of ways you can utilize Nmap for testing TLS client connections:
Using the ssl-enum-ciphers Script:
Nmap includes a script called ssl-enum-ciphers, which assesses the cipher suites supported by a server and rates them based on cryptographic strength.
It performs multiple connections using SSLv3, TLS 1.0, TLS 1.1, and TLS 1.2.
To check the supported ciphers on a specific server (e.g., Bing), run the following command:
nmap --script ssl-enum-ciphers -p 443 www.bing.com
The output will provide information about the supported ciphers and their strengths.
Checking for Weak Ciphers:
If you specifically want to identify weak ciphers, you can use the following command:
nmap --script ssl-enum-ciphers -p 443 yoursite.com | grep weak
This command will highlight any weak ciphers detected during the scan.
Remember that Nmap is a versatile tool, and its ssl-enum-ciphers script can help you assess the security of your TLS connections.
SSLyze
SSLyze is a powerful Python tool designed to analyze the SSL configuration of a server by connecting to it. It helps organizations and testers identify misconfigurations affecting their SSL servers. Here’s how you can use SSLyze to assess TLS connections:
Basic Scan with sslyze:
To perform a basic scan of a website’s HTTPS configuration, run the following command, replacing example.com with the domain you want to scan:
sslyze --regular example.com
This command will display information about the protocol version, cipher suites, certificate chain, and more.
Specific Scan Commands:
You can use various scan commands to test specific aspects of TLS connections:
--sslv3: Test for SSL 3.0 support.
--tlsv1: Test for TLS 1.0 support.
--early_data: Test for TLS 1.3 early data support.
--sslv2: Test for SSL 2.0 support.
Scanning Live Services:
You can point SSLyze at any live SSL/TLS-enabled service on any port. It checks for weak ciphers and known cryptographic vulnerabilities (such as Heartbleed).
Remember to adjust the scan parameters based on your specific requirements.
testssl.sh
testssl.sh is a powerful open-source command-line tool that allows you to check TLS/SSL encryption on various services. Here are some features and instructions for using it:
Installation:
You can install testssl.sh by cloning its Git repository:
git clone --depth 1 https://github.com/drwetter/testssl.sh.git
cd testssl.sh
Make sure you have bash (usually preinstalled on most Linux distributions) and a newer version of OpenSSL (1.1.1 recommended) for effective usage.
Basic Usage:
To test a website’s HTTPS configuration, simply run:
./testssl.sh https://www.bing.com/
To test STARTTLS-enabled protocols (e.g., SMTP, FTP, IMAP, etc.), use the -t option:
./testssl.sh -t smtp mail.example.com:25
Additional Options:
Parallel Testing:
By default, mass tests are done in serial mode. To enable parallel testing, use the --parallel flag:
./testssl.sh --parallel
Custom OpenSSL Path:
If you want to use an alternative OpenSSL program, specify its path using the --openssl flag:
./testssl.sh --parallel --sneaky --openssl /path/to/your/openssl
Logging:
To keep logs for later analysis, use the --log (store the log file in the current directory) or --logfile (specify the log file location) options:
./testssl.sh --parallel --sneaky --logging
Disable DNS Lookup:
To speed up tests, disable DNS lookup using the -n flag:
./testssl.sh -n --parallel --sneaky --logging
Single Checks:
You can run specific checks for protocols, server defaults, headers, vulnerabilities, and more. For example:
To check each local cipher remotely, use the -e flag.
To omit some checks and make the test faster, include the --fast flag.
To test TLS/SSL protocols (including SPDY/HTTP2), use the -p option.
To view the server’s default picks and certificate, use the -S option.
To see the server’s preferred protocol and cipher, use the -P flag.
Remember that testssl.sh provides comprehensive testing capabilities, including support for mass testing and logging.
TLS-Attacker
TLS-Attacker is a powerful Java-based framework designed for analyzing TLS libraries. It serves as both a manual testing tool for TLS clients and servers and a software library for more advanced tools. Here’s how you can use it:
Compilation and Installation:
To get started, ensure you have Java and Maven installed. On Ubuntu, you can install Maven using:
sudo apt-get install maven
TLS-Attacker currently requires Java JDK 11 to run. Once you have the correct Java version, clone the TLS-Attacker repository:
git clone https://github.com/tls-attacker/TLS-Attacker.git
cd TLS-Attacker
mvn clean install
The resulting JAR files will be placed in the “apps” folder. If you want to use TLS-Attacker as a dependency, include it in your pom.xml like this:
<dependency>
    <groupId>de.rub.nds.tls.attacker</groupId>
    <artifactId>tls-attacker</artifactId>
    <version>5.2.1</version>
    <type>pom</type>
</dependency>
Running TLS-Attacker:
You can run TLS-Attacker as a client or server:
As a client:
cd apps
java -jar TLS-Client.jar -connect [host:port]
As a server:
java -jar TLS-Server.jar -port [port]
TLS-Attacker also ships with example attacks on TLS, demonstrating how easy it is to implement attacks using the framework:
java -jar Attacks.jar [Attack] -connect [host:port]
Although the example applications are powerful, TLS-Attacker truly shines when used as a programming library.
Customization and Testing:
You can define custom TLS protocol flows and test them against your TLS library.
TLS-Attacker allows you to send arbitrary protocol messages in any order to the TLS peer and modify them using a provided interface.
Remember that TLS-Attacker is primarily a research tool intended for TLS developers and pentesters. It doesn’t have a GUI or green/red lights—just raw power for analyzing TLS connections!
ssldump
ssldump is a versatile SSL/TLS network protocol analyzer that can help you examine, decrypt, and decode SSL-encrypted packet streams. Here’s how you can use it for testing TLS connections:
Capture the Target Traffic:
First, capture a packet trace containing the SSL traffic you want to examine. You can use the tcpdump utility to capture the traffic.
To write the captured packets to a file for examination with ssldump, use the -w option followed by the name of the file where the data should be stored.
Specify the interface or VLAN from which traffic is to be captured using the -i option.
Use appropriate tcpdump filters to include only the traffic you want to examine.
Examine the SSL Handshake and Record Messages:
When you run ssldump on the captured data, it identifies TCP connections and interprets them as SSL/TLS traffic.
It decodes SSL/TLS records and displays them in text format.
You’ll see details about the SSL handshake, including the key exchange.
Example commands (capture with tcpdump, then decode the capture with ssldump):
tcpdump -i en0 -w captured_traffic.pcap port 443
ssldump -r captured_traffic.pcap
Decrypt Application Data (If Possible):
If you have the private key used to encrypt the connections, ssldump may also decrypt the connections and display the application data traffic.
Keep in mind that ssldump cannot decrypt traffic for which the handshake (including the key exchange) was not seen during the capture.
Remember to follow best practices when capturing SSL conversations for examination. For more information, refer to the official documentation.
sslscan
sslscan is a handy open-source tool that tests SSL/TLS-enabled services to discover supported cipher suites. It’s particularly useful for determining whether your configuration has enabled or disabled specific ciphers or TLS versions. Here’s how you can use it:
Installation:
If you’re using Ubuntu, you can install sslscan using the following command:
sudo apt-get install sslscan
Basic Usage:
To scan a server and list the supported algorithms and protocols, simply point sslscan at the server you want to test. For example:
sslscan example.com
The output will highlight various aspects, including SSLv2 and SSLv3 ciphers, CBC ciphers on SSLv3 (to detect POODLE vulnerability), 3DES and RC4 ciphers, and more.
Additional Options:
You can customize the scan by using various options:
--targets=<file>: Specify a file containing a list of hosts to check.
--show-certificate: Display certificate information.
--failed: Show rejected ciphers.
Remember that sslscan provides valuable insights into your SSL/TLS configuration.
curl
You can use curl to test TLS connections. Here are some useful commands and tips:
Testing Different TLS Versions:
To test different TLS versions, you can use the following options with curl:
--tlsv1.0: Test TLS 1.0
--tlsv1.1: Test TLS 1.1
--tlsv1.2: Test TLS 1.2
--tlsv1.3: Test TLS 1.3
For example, to test TLS 1.2, use:
curl --tlsv1.2 https://example.com
Replace example.com with the URL you want to test.
Debugging SSL Handshake:
While curl can provide some information, openssl is a better tool for checking and debugging SSL.
To troubleshoot client certificate negotiation, use:
openssl s_client -connect www.example.com:443 -prexit
This command will show acceptable client certificate CA names and a list of CA certificates from the server.
Checking Certificate Information:
To see certificate information, use:
curl -iv https://example.com
However, for detailed TLS handshake troubleshooting, prefer openssl s_client instead of curl. Use options like -msg, -debug, and -status for more insights.
Remember that curl can be handy for quick checks, but for in-depth analysis, openssl provides more comprehensive details about SSL/TLS connections.
OpenSSL
OpenSSL is a versatile tool that allows you to test and verify TLS/SSL connections. Here are some useful commands and examples:
Testing TLS Versions:
To specify the TLS version for testing, use the appropriate flag with openssl s_client. For instance:
To test TLS 1.3, run:
openssl s_client -connect example.com:443 -tls1_3
Other supported SSL and TLS version flags include -tls1_2, -tls1_1, -tls1, -ssl2, and -ssl3.
Checking Certificate Information:
To see detailed certificate information, use:
openssl s_client -connect your.domain.io:443
For more in-depth analysis, consider using openssl instead of curl. Options like -msg, -debug, and -status provide additional insights.
Upgrading Plain Text Connections:
You can upgrade a plain text connection to an encrypted (TLS or SSL) connection using the -starttls option. For example:
openssl s_client -connect mail.example.com:25 -starttls smtp
This command checks and verifies secure connections, making it a valuable diagnostic tool for SSL servers.
Remember, openssl s_client is your go-to for testing and diagnosing SSL/TLS connections.
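If you prefer to script these checks rather than read s_client output interactively, a minimal Python sketch along the following lines can connect and summarize the negotiated protocol, cipher, and leaf certificate. It uses only the standard library; the host name is a placeholder.

```python
# Minimal sketch: fetch and summarize a server's leaf certificate.
# The host below is a placeholder; adjust for your environment.
import socket
import ssl

HOST, PORT = "www.example.com", 443

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()  # parsed dict, available because verification is enabled
        print("Negotiated:", tls.version(), tls.cipher()[0])
        print("Subject:   ", dict(x[0] for x in cert["subject"]))
        print("Issuer:    ", dict(x[0] for x in cert["issuer"]))
        print("Not after: ", cert["notAfter"])
```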
Can you use WireShark to inspect the TLS connections?
Yes. Wireshark can capture TLS traffic, decode the handshake messages, and, if you supply the session keys or the server's RSA private key, decrypt the application data as well. Here is a typical workflow:
Capture the Traffic:
Start Wireshark and select the network interface you want to capture traffic from.
Click the Start button (usually a green shark fin icon) to begin capturing packets.
Browse to a website or perform any action that involves TLS communication (e.g., visiting an HTTPS website).
Filter for TLS Traffic:
In the packet list, you’ll see various packets. To focus on TLS traffic, apply a display filter:
Click on the Display Filter field (located at the top of the Wireshark window).
Type tls or ssl and press Enter.
Wireshark will now display only packets related to TLS/SSL.
Inspect TLS Handshake and Records:
Look for packets with the TLS Handshake Protocol (such as Client Hello, Server Hello, Certificate Exchange, Key Exchange, and Finished messages).
Expand these packets to view details about the handshake process, including supported cipher suites, certificate information, and key exchange.
You can also examine the Application Data packets to see encrypted data being exchanged after the handshake.
Decryption (Optional):
If you have access to the pre-master secret or an RSA private key, you can decrypt the TLS traffic:
Go to Edit → Preferences.
Open the Protocols tree and select TLS.
Configure the (Pre)-Master-Secret log filename or provide the RSA private key.
Wireshark will use this information to decrypt the TLS packets.
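One practical note on the decryption step: for TLS 1.3, and for TLS 1.2 with forward-secret key exchanges, an RSA private key is not sufficient, so the (Pre)-Master-Secret log is the usual route. Browsers such as Chrome and Firefox write this log when the SSLKEYLOGFILE environment variable is set, and if the client is your own Python code you can produce the same log yourself. A minimal sketch, with placeholder path and URL:

```python
# Minimal sketch: write a (Pre)-Master-Secret log that Wireshark can use to
# decrypt the TLS traffic generated by this client. Path and URL are placeholders.
import ssl
import urllib.request

context = ssl.create_default_context()
context.keylog_filename = "/tmp/tls-keys.log"  # point Wireshark's TLS preferences at this file

with urllib.request.urlopen("https://www.example.com/", context=context) as response:
    print(response.status, len(response.read()), "bytes received")
```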
Tool references
gnutls-cli(1) – Linux manual page (man7.org)
Testing TLS/SSL configuration using Nmap – Web Penetration Testing with Kali Linux – Third Edition [Book] (oreilly.com)
Testing SSL ports using nmap and check for weak ciphers | Global Security and Marketing Solutions (gss-portal.com)
How to use sslyze to assess your web server HTTPS TLS? – Full Security Engineer
Overview of packet tracing with the ssldump utility (f5.com)
Curl – Test TLS and HTTP versions – Kerry Cordero
openssl s_client commands and examples – Mister PKI
TLS – Wireshark Wiki
GitHub – drwetter/testssl.sh: Testing TLS/SSL encryption anywhere on any port
GitHub – tls-attacker/TLS-Attacker: TLS-Attacker is a Java-based framework for analyzing TLS libraries. It can be used to manually test TLS clients and servers or as a software library for more advanced tools.
GitHub – rbsec/sslscan: sslscan tests SSL/TLS enabled services to discover supported cipher suites
Other references
Restricting TLS 1.2 Ciphersuites in Windows using PowerShell
Solving the TLS 1.0 Problem, 2nd Edition
Support for legacy TLS protocols and cipher suites in Azure Offerings
Microsoft Tech Community – Latest Blogs –Read More
Optimizing Azure OpenAI: A Guide to Limits, Quotas, and Best Practices
This blog focuses on good practices for monitoring Azure Open AI limits and quotas. With the growing interest in and application of Generative AI, Open AI models have emerged as pioneers in this transformative era. To maintain consistent and predictable performance for all users, these models impose certain limits and quotas. For Independent Software Vendors (ISVs) and Digital Natives utilizing these models, understanding these limits and establishing efficient monitoring strategies is paramount to ensure a good experience for the end-users of their products and services. This blog seeks to provide a comprehensive understanding of these monitoring strategies, thereby enabling ISVs and Digital Natives to optimally leverage AI technologies for their respective customer bases.
Understanding Limits and Quotas
Azure OpenAI's quota feature enables assignment of rate limits to your deployments, up to a global limit called your "quota". Quota is assigned to your subscription on a per-region, per-model basis in units of **Tokens-per-Minute** (TPM). Your subscription is onboarded with a default quota for most models.
Refer to this document for default TPM values. You can allocate TPM among deployments until reaching quota. If you exceed a model’s TPM limit in a region, you can reassign quota among deployments or request a quota increase. Alternatively, if viable, consider creating a deployment in a new Azure region in the same geography as the existing one.
For example, with a 240,000 TPM quota for GPT-35-Turbo in East US, you could create one deployment of 240K TPM, two of 120K TPM each, or multiple deployments adding up to less than 240K TPM in that region.
TPM rate limits are based on the maximum number of tokens **estimated** to be processed when the request is received. This is different from the token count used for billing, which is computed after all processing is completed. Azure OpenAI calculates a max processed-token count per request using:
– Prompt text and count
– The max_tokens setting
– The best_of setting
This estimated count is added to a running token count of all requests, which resets every minute. A 429 response code is returned once the TPM rate limit is reached within the minute.
A **Requests-Per-Minute** (RPM) rate limit is also enforced. It is set proportionally to the TPM assignment at a ratio of 6 RPM per 1000 TPM. If requests aren’t evenly distributed over a minute, a 429 response may be received. Azure OpenAI Service evaluates incoming requests’ rate over a short period, typically 1 or 10 seconds, and issues a 429 response if requests surpass the RPM limit. For example, if the service monitors with a 1-second interval, a 600-RPM deployment would be throttled if more than 10 requests are received per second.
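Because throttling is based on this estimated count rather than the final billed count, it can help to estimate a request's size yourself before sending it. The sketch below is a rough approximation that mirrors the factors listed above (prompt size, max_tokens, best_of) using the open-source tiktoken tokenizer; the encoding name, quota figure, and prompt are illustrative assumptions, not service-defined values.

```python
# Minimal sketch: estimate the tokens a request may count against your TPM quota.
# Encoding name, TPM quota, and max_tokens are illustrative assumptions.
import tiktoken

TPM_QUOTA = 240_000  # example tokens-per-minute assignment for a deployment
encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by the GPT-3.5/GPT-4 family

def estimated_request_tokens(prompt: str, max_tokens: int, best_of: int = 1) -> int:
    """Rough estimate: prompt tokens plus the requested completion budget."""
    return len(encoding.encode(prompt)) + max_tokens * best_of

prompt = "Summarize the main benefits of provisioned throughput in two sentences."
estimate = estimated_request_tokens(prompt, max_tokens=256)
print(f"Estimated tokens for this request: {estimate}")
print(f"Share of a {TPM_QUOTA:,} TPM quota: {estimate / TPM_QUOTA:.2%}")
```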
In addition to the standard quota, there is also a provisioned throughput capability, or PTU. It is useful to think of the standard quota as a serverless mode, where your requests are served from a pool of resources and no capacity is reserved for you, hence the overall latency could vary. In contrast, with provisioned throughput, you specify the amount of throughput you require for your application. The service then provisions the necessary compute and ensures it is ready for you. This gives you more predictable performance and a stable max latency. For high-throughput workloads this may provide cost savings versus token-based consumption. At the time of writing, provisioned throughput units are not available by default. For more details, contact your Microsoft account team.
There is also a limit of 30 Azure OpenAI resource instances per region. For an exhaustive and up-to-date list of quotas and limits please check this document. It is important to plan ahead on how you will manage and segregate tenant data and traffic in order to ensure reliable performance and optimal costs. Please check the Azure Open AI service specific guidance for considerations and strategies pertinent to multitenant solutions.
Choosing between tokens-per-minute and provisioned throughput models
To choose effectively between TPM and PTU you need to understand that there are minimum PTUs per deployment required. If your current usage is above the requirement and expected to grow, it might be more economically feasible to purchase provisioned capacity. In high token usage scenarios, this provides a lower per token price and stable max latency. It is important to understand that with PTUs, you are isolated and protected from the noisy neighbor problem of a SaaS application with shared resources. However, you can still experience higher than average latency caused by other factors, such as the total load you send to the service, length of the prompt and response, etc.
The table below shows the minimum PTUs per model type and their approximate relation to TPM:
[Image: Minimum PTU per model and TPM equivalent]
Source: https://github.com/Azure/aoai-apim
Effective Monitoring Techniques
Now that we understand better the limits and quotas of the service, let’s discuss how to effectively monitor usage and set up alerts to be notified and take action when you reach the limits and quotas assigned.
Azure OpenAI service has metrics and logs available as part of the Azure Monitor capabilities. Metrics are available out of the box, at no additional cost. By default, a history of 30 days is kept. If you need to keep these metrics for longer, or route to a different destination, you can do so by enabling it in the Diagnostic settings.
Metrics are grouped into four categories:
– HTTP Requests dimensions: Model Name, Model Version, Deployment, Status Code, Stream Type, and Operation.
– Tokens-Based Usage: Active tokens, Generated Completions Tokens, Processed Inference and Prompt Tokens.
– PTU Utilization dimensions: Model Name, Model Version, Deployment, and Stream Type.
– Fine-tuning: Training Hours by Deployment and Training Hours by Model Name.
Additionally, each API response header contains the RateLimit-Global-Remaining and RateLimit-Global-Reset. And the response body contains a usage section with the prompt tokens, completion tokens, and total tokens values that shows the billing tokens per request.
The available logs in Azure OpenAI are Audit logs, Request and Response logs, and Trace Logs. Once you enable these through the Diagnostic settings, you can send these to a Log Analytics workspace, Storage account, Event Hub, or a partner solution. Keep in mind that using diagnostic settings and sending data to Azure Monitor Logs has other costs associated with it. For more information, see Azure Monitor Logs cost calculations and options.
My colleagues created an Azure Monitor Workbook that serves as a great baseline to start monitoring your Azure Open AI service logs and metrics.
Optimization Recommendations
Use LLMs for what they are best at – natural language understanding and fluent language generation. This means understanding that LLMs are, at their core, models that predict the most likely next token; just because you could use an LLM for a task doesn't necessarily make it the most optimal tool for that task.
1. Always start with: can this be done in code? Are there existing libraries, tools, or patterns that can perform the task? If yes, use those. They will probably be more performant and cost less.
Examples: use Azure AI Language service for key phrase extraction instead of the LLM; use standard libraries to do math operations, data aggregation, etc.
2. Control the size of the input prompt (e.g. set a limit on the user input field; in RAG, depending on scenario, restrict the number of relevant chunks sent to the LLM) and completion (with max_tokens and best_of).
3. Call the GPT models as few times as possible. Ensure you gather all the data you need to generate an optimal response, and only then call the model.
4. Use the cheapest model that gets the task done. This could mean using GPT 3.5 instead of GPT 4 for tasks where the cheapest model performs at an acceptable level.
Prevention and Response Strategies for Limit Exceeding
Here are some best practices and strategies to avoid rate limiting errors in a tokens per minute, i.e. Pay-As-You-Go model:
– Use minimum feasible values for max_tokens and best_of in your scenario. For instance, don't set a high max_tokens value if you expect small responses.
– Manage your quota to allocate more TPM to high-traffic deployments and less to those with limited needs.
– Avoid sharp changes in the workload. Increase the workload gradually.
– Test different load increase patterns.
– Check the size of prompts against the model limits before sending the request to the Azure Open AI service. For example, for GPT-4 (8k), a max request token limit of 8,192 is supported. If your prompt is 10K in size, then this will fail, and also any subsequent retries would fail as well, consuming your quota.
– Retrying with exponential backoff: in practice this means performing a short sleep when a rate limit error is hit, then retrying the unsuccessful request. If the request is still unsuccessful, the sleep length is increased and the process is repeated. Note that unsuccessful requests still contribute to your per-minute limit, so continuously resending a request won't work. This strategy is useful for real-time requests from users (a minimal sketch follows this list).
– Batching requests: if you're hitting the limit on requests per minute but have headroom on tokens per minute, you can increase your throughput by batching multiple tasks into each request. This allows you to process more tokens per minute, especially with the smaller models.
– When handling batch processing, maximizing throughput matters more than latency, and proactively adding a delay between batch requests can help. For example, if your rate limit is 20 requests per minute, add a delay of 3–6 seconds to each request. This helps you operate near the rate limit ceiling without hitting it and incurring wasted requests.
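As a minimal illustration of the retry-with-backoff strategy above, the sketch below calls an Azure OpenAI chat completions deployment over plain HTTP and backs off on 429 responses. The resource name, deployment name, API version, and key handling are placeholders to adapt to your environment; this is a sketch, not a hardened client.

```python
# Minimal sketch of retry with exponential backoff on 429 (rate limit) responses.
# Endpoint, deployment, api-version, and key handling are placeholders.
import os
import time
import requests

ENDPOINT = "https://<your-resource>.openai.azure.com"
DEPLOYMENT = "<your-deployment>"
API_VERSION = "<api-version>"  # use a currently supported version from the docs
URL = f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
HEADERS = {"api-key": os.environ["AZURE_OPENAI_KEY"], "Content-Type": "application/json"}

def chat_with_backoff(messages, max_tokens=256, max_retries=5):
    delay = 1.0
    for _ in range(max_retries):
        response = requests.post(URL, headers=HEADERS,
                                 json={"messages": messages, "max_tokens": max_tokens})
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor Retry-After when present, otherwise back off exponentially.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
        delay *= 2
    raise RuntimeError("Rate limited on every attempt; consider lowering the request rate.")

result = chat_with_backoff([{"role": "user", "content": "Say hello in five words."}])
print(result["choices"][0]["message"]["content"])
```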
For more details on these strategies and an example of a parallel processing script, please see this notebook and documentation from Azure Open AI.
If your workload is particularly sensitive to latency and cannot tolerate latency spikes, you can consider implementing a mechanism that checks the latency of Azure Open AI in different Azure regions and sends requests to the region with the smallest latency. You can group regions into geographies, like Americas, EMEA and Asia, and perform these checks on a per-geography basis. This should also account for any compliance regulation and data residency requirements. For a more detailed walkthrough of this strategy, please check this blog.
In Azure, API Management (APIM) service can help you implement some of these best practices and strategies. APIM supports queueing, rate throttling, error handling, managing user quotas, as well as distributing requests to different Azure Open AI instances, potentially located in different regions to implement the pattern described above.
Conclusion
In conclusion, understanding the limits, quotas, and optimization techniques for Azure Open AI is crucial for effectively utilizing the service and achieving optimal performance and cost efficiency. By carefully monitoring usage, setting up alerts, and implementing prevention and response strategies for limit exceeding, you can ensure reliable performance and avoid unnecessary disruptions.
The insights and recommendations provided in this document serve as a valuable guide to help you make informed decisions and optimize your Azure Open AI use-cases. By following these best practices, such as leveraging existing libraries and tools, controlling input prompt size, minimizing API calls, and using the most cost-effective models, you can maximize the value and efficiency of your AI applications.
Remember to plan ahead, allocate resources wisely, and continuously monitor and adjust your usage based on the metrics and logs available through Azure Monitor. By doing so, you can proactively address any potential issues, avoid rate limiting errors, and deliver a seamless and responsive experience to your users.
Microsoft Tech Community – Latest Blogs –Read More
Records are not getting updated/deleted in Search Index despite enabling Track Deletions in SQL DB
Symptom:
The count of records in the indexer and the index did not align even after activating the change detection policy. Even with record deletions, the entries persisted in the Index Search Explorer.
To enable incremental indexing, configure the “dataChangeDetectionPolicy” property within your data source definition. This setting informs the indexer about the specific change tracking mechanism employed by your table or view.
For Azure SQL indexers, you can choose the change detection policy below:
“SqlIntegratedChangeTrackingPolicy” (applicable to tables exclusively)
Using "SqlIntegratedChangeTrackingPolicy" is recommended for its efficiency and its ability to identify deleted rows.
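For reference, a data source definition carrying this policy can be created through the Azure AI Search REST API. The sketch below shows the general shape in Python; the service name, admin key, connection string, table name, and api-version are placeholders, so verify the exact values against the documentation linked at the end of this post.

```python
# Minimal sketch: define an Azure SQL data source with integrated change tracking
# via the Azure AI Search REST API. Names, key, and api-version are placeholders.
import requests

SERVICE = "https://<your-search-service>.search.windows.net"
API_VERSION = "<api-version>"  # use a currently supported version
HEADERS = {"api-key": "<admin-key>", "Content-Type": "application/json"}

datasource = {
    "name": "my-sql-datasource",
    "type": "azuresql",
    "credentials": {"connectionString": "<azure-sql-connection-string>"},
    "container": {"name": "<TableName>"},
    "dataChangeDetectionPolicy": {
        "@odata.type": "#Microsoft.Azure.Search.SqlIntegratedChangeTrackingPolicy"
    },
}

response = requests.put(
    f"{SERVICE}/datasources/{datasource['name']}?api-version={API_VERSION}",
    headers=HEADERS,
    json=datasource,
)
response.raise_for_status()
print("Data source created or updated:", response.status_code)
```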
Database requirements:
Prerequisites:
SQL Server 2012 SP3 and later, if you’re using SQL Server on Azure VMs
Azure SQL Database or SQL Managed Instance
Tables only (no views)
On the database, enable change tracking for the table.
No composite primary key (a primary key containing more than one column) on the table.
No clustered indexes on the table. As a workaround, any clustered index would have to be dropped and re-created as a nonclustered index; however, performance in the source may be affected compared to having a clustered index.
When using SQL integrated change tracking policy, don’t specify a separate data deletion detection policy. The SQL integrated change tracking policy has built-in support for identifying deleted rows.
However, for the deleted rows to be detected automatically, the document key in your search index must be the same as the primary key in the SQL table.
If you have completed all the above steps and still see a discrepancy between the indexer count and the index count:
Approach:
Enabling change tracking before or after inserting data can affect how the system tracks changes, and the order in which you enable it matters. It’s important to understand how change tracking works in your specific context to resolve the issue.
Check whether you have enabled change tracking at the table level as well as at the database level.
Check whether you enabled change tracking before or after data insertion.
ALTER DATABASE [DatabaseName] SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE [TableName] ENABLE CHANGE_TRACKING;
Here are some general guidelines on how change tracking typically works:
Enable Change Tracking Before Inserting Data:
– If you enable change tracking before inserting data, the system will start tracking changes from the beginning.
– This is the recommended approach if you want to track changes to existing data and any new data that will be added.
Enable Change Tracking After Inserting Data:
– If you enable change tracking after inserting data, the system might not have a baseline for the existing data.
– You may encounter errors if you attempt to retrieve change information for data that was already in the system before change tracking was enabled.
Solution :
To ensure that the Indexer starts tracking deletions from the beginning, it is important to enable Change Tracking before inserting data.
This approach also helps to match the count of the Indexer and Index without having to reset the Indexer repeatedly.
Reference links:
Enable and Disable Change Tracking – SQL Server | Microsoft Learn
Azure SQL indexer – Azure AI Search | Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
[Some] SQL Server and Azure SQL DB Security Fundamentals | Data Exposed
Learn about SQL Server and Azure SQL Database security fundamentals you won’t want to miss.
Resources:
Microsoft Tech Community – Latest Blogs –Read More
Tech Community Live: Microsoft Intune – RSVP now
Join us March 20th for another Microsoft Intune edition of Tech Community Live! We will be joined by members of our product engineering and customer adoption teams to help you explore, expand, and improve the way you cloud manage devices – or learn the first steps to take to get to the cloud – we’re here to help you.
In this edition of Tech Community Live, we are focusing on cloud management for your entire device estate – specifically for those of you managing Windows or macOS devices with Intune. We’ll also cover some of the newly available solutions in Intune Suite including Enterprise App Management, Advanced Analytics and Cloud PKI.
As always, the focus of this series is on your questions! In addition to open Q&A with our product experts, we will kick off each session with a brief demo to get everyone warmed up and excited to engage.
How do I attend?
Choose a session name below and add any (or all!) of them to your calendar. Then, click RSVP to event and post your questions in the Comments anytime! We’ll note if we answer your question in the live stream and follow up in the chat with a reply as well.
Can’t find the option to RSVP? No worries, sign in on the Tech Community first.
Afraid to miss out due to scheduling or time zone conflicts? We got you! Every AMA will be recorded and available on demand the same day.
Time
AMA Topic
7:30 AM – 8:30 AM (Pacific Time)
Securely manage macOS with Intune
8:30 AM – 9:30 AM (Pacific Time)
Windows management with Intune
9:30 AM – 10:30 AM (Pacific Time)
Enterprise App Management, Advanced Analytics in Intune Suite
10:30 AM – 11:30 AM (Pacific Time)
Microsoft Cloud PKI in Intune Suite
More ways to engage
Join the Microsoft Management Customer Connection Program (MM CCP) community to engage more with our product team.
Check out our monthly series, Unpacking Endpoint Management, to view upcoming topics and catch up on everything we’ve covered so far.
Did you know this is a series? Check out our on-demand sessions from Tech Community Live: Intune – the series!
Stay up to date! Bookmark the Microsoft Intune Blog and follow us on LinkedIn or @MSIntune on X to continue the conversation.
Microsoft Tech Community – Latest Blogs –Read More
Unlock the full potential of Copilot for Microsoft 365
The Microsoft 365 Copilot Adoption Accelerator engagement is crafted to ensure the seamless adoption of Copilot for Microsoft 365.
This engagement comprises three key phases: Readiness, Build the Plan, and Drive Adoption. It is recommended to undertake the Adoption Accelerator after completing the Copilot for Microsoft 365 engagement, wherein high-value scenarios and the technical and organizational baseline are identified. The Adoption Accelerator Engagement will specifically target these high-value scenarios.
The adoption process should involve key stakeholders such as Adoption Managers, Business Decision Makers, End-User Support, and Champions. Responsibility for sustained success should be effectively transitioned during the adoption process.
Click here for more information
Microsoft Tech Community – Latest Blogs –Read More
Scaling up: Customer-driven enhancements in the FHIR service enable better healthcare solutions
This blog has been authored by Ketki Sheth, Principal Program Manager, Microsoft Health and Life Sciences Platform
We’re always listening to customer feedback and working hard to improve the FHIR service in Azure Health Data Services. In the past few months, we rolled out several new features and enhancements that enable you to build more scalable, secure, and efficient healthcare solutions.
Let’s explore some highlights.
Unlock new possibilities with increased storage capacity up to 100 TB
In January 2024 we increased storage capacity within the FHIR service to enable healthcare organizations to manage vast volumes of data for analytical insights and transactional workloads. Previously constrained by a 4 TB limit, customers can now build streamlined workflows with native support for up to 100 TB of storage.
More storage means more possibilities for analytics with large data sets. For example, you can explore health data to improve population health, conduct research, and discover new insights. More storage also allows Azure API for FHIR customers who have more than 4 TB of data to switch to the evolved FHIR service in Azure Health Data Services before September 26, 2026, when Azure API for FHIR will be retired.
If you need storage greater than 4 TB, let us know by creating a support request on the Azure portal with the issue type Service and Subscription limit (quotas). We’d be happy to enable your organization to take advantage of this expanded storage capacity.
Connect any OpenID Connect (OIDC) identity provider to the FHIR service with Azure Active Directory B2C
In January 2024 we also released the integration of the FHIR service with Azure Active Directory B2C. The integration gives organizations a secure and convenient way to grant access with fine-grained access control for different users or groups – without creating or comingling user accounts in the same Microsoft Entra ID tenant. Plus, along with the support for Azure Active Directory B2C (Azure AD B2C), we announced the general availability of the integration with OpenID Connect (OIDC) compliant identity providers (IDP) as part of the expanded authentication and authorization model for the FHIR service.
With Azure AD B2C and OIDC integration, organizations building SMART on FHIR applications can integrate non-Microsoft Entra identity providers with EHRs (Electronic Health Records) and other healthcare applications.
Learn more: Use Azure Active Directory B2C to grant access to the FHIR service
Ingest FHIR resource data at high throughput with incremental import
The incremental import capability was released in August last year. With incremental import, healthcare organizations can ingest FHIR resource data at high throughput in batches, without disrupting transactions through the API on the same server. You can also ingest multiple versions of a resource in the same batch without worrying about the order of ingestion.
Incremental import allows healthcare organizations to:
Import data concurrently while executing API CRUD operations on the FHIR server.
Ingest multiple versions of FHIR resources in a single batch while maintaining resource history.
Retain the lastUpdated field value in FHIR resources during the ingestion process, while also maintaining the chronological order of resources. In other words, you no longer need to pre-load historical data before importing the latest version of FHIR resources.
Take advantage of initial and incremental mode import. Initial mode import can be used to hydrate the FHIR service, and execution of an initial mode import operation does not incur any charge. For incremental import, a charge is incurred per successfully ingested resource, following the pricing model of the API request.
Visit the pricing page for more details: Pricing – Azure Health Data Services | Microsoft
Why incremental import matters
Healthcare organizations using the FHIR service often need to run synchronous and asynchronous data flows simultaneously. The asynchronous data flow includes receiving batches of large data sets that contain patient records from various sources, such as Electronic Medical Record (EMR) systems. These data sets must be imported into a FHIR server simultaneously with the synchronous data flow to execute API CRUD (Create, Read, Update, Delete) operations in the FHIR service.
Performing data import and API CRUD operations concurrently on the FHIR server is crucial to ensure uninterrupted healthcare service delivery and efficient data management. Incremental import allows organizations to run both synchronous and asynchronous data flows at the same time, eliminating this issue. Incremental import also enables efficient migration and synchronization of data between FHIR servers, and from the Azure API for FHIR service to the FHIR service in Azure Health Data Services.
Learn more: Import data into the FHIR service in Azure Health Data Services
Delete FHIR resources in bulk (preview)
In late 2023, the ability to delete FHIR resources in bulk became available for preview. We heard feedback from customers about the challenges they faced when deleting individual resources. Now, with the bulk delete operation, you can delete data from the FHIR service asynchronously. The FHIR service bulk delete operation allows you to delete resources at different levels – system, resource level, and per search criteria. Healthcare organizations that use the FHIR service need to comply with data retention policies and regulations. Incorporating the bulk delete operation in the workflow enables organizations to delete data at high throughput.
Learn more: Bulk-delete operation for the FHIR service in Azure Health Data Services
Selectable search parameters (preview)
As of January 2024, selectable search parameters are available for preview. This capability allows you to tailor and enhance searches on FHIR resources. You can choose which standard search parameters to enable or disable for the FHIR service according to your unique requirements. By enabling only the search parameters you need, you can store more FHIR resources and potentially improve performance of FHIR search queries.
Searching for resources is fundamental to the FHIR® service. During provisioning of the FHIR service, standard search parameters are enabled by default. The FHIR service performs efficient searches by extracting and indexing specific properties from FHIR resources during the ingestion of data. Search parameter indexes may take up the majority of the overall database size.
This new capability gives you the control to enable or disable search parameters according to your needs.
Selectable search parameters help healthcare organizations:
Store more data at reduced cost. Reduction in search parameter indexes provides space to store more resources in the FHIR service. Depending on your organization’s need for search parameter values, on average the efficiency gained in storage is assumed to be 2X-3X. In other words, you’ll be able to store more resources and save on any additional storage cost.
Positively impact performance. During API interactions or while using the import operation, selecting a subset of search parameters can have significant positive performance impact.
Learn more: Selectable search parameters for the FHIR service in Azure Health Data Services
In conclusion
We are constantly working to improve the FHIR service to meet your needs and expectations. With new features such as increased storage capacity up to 100 TB, integration with Azure Active Directory B2C, and incremental import, we are excited to see how you leverage these new capabilities to create innovative healthcare solutions that improve outcomes and experiences for patients and providers.
Do more with your data with the Microsoft Cloud for Healthcare
In the era of AI, Microsoft Cloud for Healthcare enables healthcare organizations to accelerate their data and AI journey by augmenting the Microsoft Cloud with industry-relevant data solutions, templates, and capabilities. With Microsoft Cloud for Healthcare, healthcare organizations can create connected patient experiences, empower their workforce, and unlock the value from clinical and operational data using data standards that are important to healthcare. And we’re doing all of this on a foundation of trust. Every organization needs to safeguard their business, their customers, and their data. Microsoft Cloud runs on trust, and we’re helping every organization build safety and responsibility into their AI journey from the very beginning.
We’re excited to help your organization gain value from your data and use AI innovation to deliver meaningful outcomes across the entire healthcare journey.
Learn more about Azure Health Data Services
Explore Microsoft Cloud for Healthcare
Stay up to date with Azure Health Data Services Release Notes
Microsoft Tech Community – Latest Blogs –Read More