Tag Archives: microsoft
Publishing a SaaS offer for a PowerApps canvas app in AppSource
Hello,
We have been trying for many months now to publish our PowerApps canvas apps as a SaaS offer, thereby allowing us to have transactable, licensed customers.
Through a lot of effort, we managed to get some support and were told, quite vaguely, how to achieve this. Without any meaningful guidance, though, we are struggling: we are a small ISV with expertise in our actual business and in Power Platform. We are not, however, experts in SQL (and that’s being generous) or in creating APIs, which seem to be a requirement for keeping track of user licenses and feeding the data to our canvas app.
If anyone here has any info on this, or on how they achieved it themselves, it would be massively appreciated, as we can’t seem to get there by ourselves and MS support is very difficult to find.
It does feel as though the app development part is so much simpler than getting it onto the marketplace, which is surely the wrong way round.
Thank you,
Craig
Assigned to me in planner – Project capability
Hi
Reading through the various comments and reports, it seems the ability for Planner to pull Project tasks through to your Planner “Assigned to me” list is due in March.
Can anyone confirm a date for when this will happen, as it looks like there is a bit of a delay?
Thanks,
Omar Warrak
RAISE Summit Paris 2024 PRO Ticket
Hello everyone!
I just won a PRO ticket for the Paris RAISE Summit 2024.
The ticket price on the official website is €799, and I want to sell it for half price.
Can you please help me with where I can sell it, or maybe someone from here wants to buy it?
Announcing Azure Health Data Services DICOM service with Data Lake Storage
We are thrilled to announce the general availability of the Azure Health Data Services DICOM service with Data Lake Storage, a solution that enables teams to store, manage, and access their medical imaging data in the cloud. Whether you’re involved in clinical operations, research endeavors, AI/ML model development, or any other facet of healthcare that involves medical imaging, the DICOM service can expand the possibilities of your imaging data and enable new workflows.
The DICOM service is available for teams to start using today with production imaging data. To get started, visit the Azure Health Data Services docs and follow the steps to Deploy the DICOM service with Data Lake Storage.
Who Can Benefit?
The DICOM service with Data Lake Storage is designed for any team that requires a robust and scalable cloud storage solution for their medical imaging data. Whether you’re a healthcare institution migrating clinical and research data to the cloud, a development team in need of a scalable storage platform for imaging data, or an organization seeking to operationalize imaging data in AI/ML model development or secondary use scenarios, our DICOM service with Data Lake Storage is here to empower your endeavors.
Benefits of Azure Data Lake Storage
By integrating with Azure Data Lake Storage (ADLS Gen2), our DICOM service offers a myriad of benefits to healthcare teams:
Scalable Storage: Enjoy performant, massively scalable storage capabilities that can effortlessly accommodate your growing imaging data assets.
Data Governance: Take full control of your imaging data assets. Manage storage permissions, access controls, data replication strategies, backups, and more, ensuring compliance with global privacy standards.
Direct Data Access: Seamlessly access your DICOM data through Azure Storage APIs, enabling efficient retrieval and manipulation of your valuable medical imaging assets. The DICOM service continues to provide DICOMweb APIs for storing, querying for, and retrieving imaging data.
Ecosystem Integration: Leverage the entire ecosystem of tools surrounding ADLS, including AzCopy, Azure Storage Explorer, and Azure Storage Data Movement library, to help streamline your workflows and enhance productivity.
Unlock New Possibilities: Unlock new analytics and AI/ML scenarios by integrating with services like Azure Synapse, Azure Databricks, Azure Machine Learning, and Microsoft Fabric, enabling you to extract deeper insights and drive innovation in healthcare.
Integration with Microsoft Fabric
As called out above, a key benefit of Azure Data Lake Storage is that it connects to Microsoft Fabric. Microsoft Fabric is an end-to-end, unified analytics platform that brings together all the data and analytics tools that organizations need to unlock the potential of their data and lay the foundation for AI scenarios. By using Microsoft Fabric, you can use the rich ecosystem of Azure services to perform advanced analytics and AI/ML with medical imaging data, such as building and deploying machine learning models, creating cohorts for clinical trials, and generating insights for patient care and outcomes.
Get Started Today
The DICOM service with Data Lake Storage is available for teams to start using today with production imaging data, and customers can expect to receive the same level of support and the same adherence to healthcare privacy standards that Azure Health Data Services is known for. Whether you’re looking to enhance clinical operations, drive research breakthroughs, or unlock new AI-driven insights, the power of Azure Health Data Services can help you achieve your goals.
To learn more about analytics with imaging data, see Get started using DICOM data in analytics workloads.
Pricing
With Azure Health Data Services, customers pay only for what they use. DICOM service customers incur storage costs for storage of the DICOM data and metadata used to operate the DICOM service as well as charges for API requests. The data lake storage model shifts most of the storage costs from Azure Health Data Services to Azure Data Lake Storage (where the .dcm files are stored).
For detailed pricing information, see Pricing – Azure Health Data Services and Azure Storage Data Lake Gen2 Pricing.
Simplifying Azure Kubernetes Service Authentication Part 3
Welcome to the third installment of this series on simplifying Azure Kubernetes Service authentication. Part two is here: Part 2. In this third part we’ll continue from where we left off and set up cert-manager, create a CA issuer, update our ingress routes, register our app, and create secrets and a cookie for authentication. You can also refer to the official documentation, TLS with an ingress controller, for some of the steps.
Install cert-manager for Let’s Encrypt
In the previous post we uploaded the cert-manager images to our ACR. Now let’s install cert-manager by running the following:
# Set variable for ACR location to use for pulling images
$AcrUrl = (Get-AzContainerRegistry -ResourceGroupName $ResourceGroup -Name $RegistryName).LoginServer
# Label the ingress-basic namespace to disable resource validation
kubectl label namespace ingress-basic cert-manager.io/disable-validation=true
# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
# Update your local Helm chart repository cache
helm repo update
# Install the cert-manager Helm chart
helm install cert-manager jetstack/cert-manager --namespace ingress-basic --version $CertManagerTag --set installCRDs=true --set nodeSelector."kubernetes.io/os"=linux --set image.repository="${AcrUrl}/${CertManagerImageController}" --set image.tag=$CertManagerTag --set webhook.image.repository="${AcrUrl}/${CertManagerImageWebhook}" --set webhook.image.tag=$CertManagerTag --set cainjector.image.repository="${AcrUrl}/${CertManagerImageCaInjector}" --set cainjector.image.tag=$CertManagerTag
You should get some output and make sure the READY column is set to True.
Create a CA Issuer
A certificate authority (CA) validates the identities of entities (such as websites, email addresses, companies, or individual persons) and binds them to cryptographic keys through the issuance of digital certificates. We are using the Let’s Encrypt CA. We can create a CA issuer by applying a ClusterIssuer to our ingress-basic namespace. Create the following cluster-issuer.yaml file:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
Now apply this yaml file by running the following kubectl command:
kubectl apply -f cluster-issuer.yaml --namespace ingress-basic
Update your ingress route
In the previous part of this series we created an FQDN which enabled us to route to our apps in the web browser via a URL. We need to update our ingress routes to handle this change. Update the hello-world-ingress.yaml as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
    secretName: tls-secret
  rules:
  - host: hello-world-ingress.MY_CUSTOM_DOMAIN
    http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
Then apply the update:
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
You should get some output confirming that both ingress resources were created or configured.
Register your app in Entra ID and create a client secret
An Azure Active Directory (AAD) app, now referred to as a Microsoft Entra ID app, is an application registered in Entra ID, which allows it to interact with Azure services and authenticate users. We can then use the Entra ID app to obtain a client secret for authentication purposes. Perform the following actions to register an app and create a client secret.
In the Azure portal, search for Microsoft Entra ID
Click App registrations in the left-side navigation
Click the New registration button
Add a name and enter your redirect URI (Web): https://FQDN/oauth2/callback
Register and take note of your Application (client) ID
Click Certificates & secrets, then New client secret, and take note of the secret value
Create a cookie secret and set Kubernetes secrets
Now create the client-id, client-secret, and cookie-secret Kubernetes secrets. Remember, this series is for educational purposes and thus may not meet all security requirements. If you need to store your secrets in a more secure location, you can also refer to how to do so with Key Vault. Run the following commands in PowerShell:
$cookie_secret="$(openssl rand -hex 16)"
# or with python
python -c 'import os,base64; print(base64.urlsafe_b64encode(os.urandom(32)).decode())'
kubectl create secret generic client-id --from-literal=oauth2_proxy_client_id=<APPID> -n ingress-basic
kubectl create secret generic client-secret --from-literal=oauth2_proxy_client_secret=<SECRETVALUE> -n ingress-basic
kubectl create secret generic cookie-secret --from-literal=oauth2_proxy_cookie_secret=<COOKIESECRET> -n ingress-basic
Create a Redis Password
Azure uses large cookies when authenticating over OAuth2, so it is recommended to set up Redis to handle them. For now we will create a Redis password and set the Kubernetes secret; in the next post we will install and set up Redis. Run the following commands in PowerShell:
$REDIS_PASSWORD="<YOUR_PASSWORD>"
kubectl create secret generic redis-password --from-literal=redis-password=$REDIS_PASSWORD -n ingress-basic
This ends the third post in our series. Look out for the fourth and final post.
March 2024: Exploring open source at Microsoft, and other highlights for developers
Microsoft has developed a strong open source program over the past decade. Many of our tools and approaches are available for you to learn from and contribute to. This blog post explores some of the open-source projects at Microsoft and resources that will help you start contributing to and managing your own open-source projects. To learn more about Open Source at Microsoft, visit opensource.microsoft.com.
.NET is open source
Did you know .NET is open source? .NET is open source and cross-platform, and it’s maintained by Microsoft and the .NET community. Check it out on GitHub.
Microsoft JDConf 2024
Get ready for JDConf 2024—a free virtual event for Java developers. Explore the latest in tooling, architecture, cloud integration, frameworks, and AI. It all happens online March 27-28 and will include sessions on OpenJDK, OpenTelemetry, and Java development with Visual Studio Code. Learn more and register now.
Getting started with the Fluent UI Blazor library
The Fluent UI Blazor library is an open-source set of Blazor components used for building applications that have a Fluent design. Watch this Open at Microsoft episode for an overview and find out how to get started with the Fluent UI Blazor library.
Generative AI for Beginners
Want to build your own GenAI application? The free Generative AI for Beginners course on GitHub is the perfect place to start. Work through 18 in-depth lessons and learn everything from setting up your environment to using open-source models available on Hugging Face.
Reactor series: GenAI for software developers
Step into the future of software development with the Reactor series. GenAI for Software Developers explores cutting-edge AI tools and techniques for developers, revolutionizing the way you build and deploy applications. Register today and elevate your coding skills.
How to get GraphQL endpoints with Data API Builder
The Open at Microsoft show takes a look at using Data API Builder to easily create Graph QL endpoints. See how you can use this no-code solution to quickly enable advanced—and efficient—data interactions.
Microsoft Graph Toolkit v4.0 is now generally available
Microsoft Graph Toolkit v4.0 is now available. Learn about its new features, bug fixes, and improvements to the developer experience.
Customize Dev Containers in VS Code with Dockerfiles and Docker Compose
Dev containers offer a convenient way to deliver consistent and reproducible environments. Follow along with this video demo to customize your dev containers using Dockerfiles and Docker Compose.
Other news and highlights for developers
AI Show: LLM Evaluations in Azure AI Studio
Don’t deploy your LLM application without testing it first! Watch the AI Show to see how to use Azure AI Studio to evaluate your app’s performance and ensure it’s ready to go live. Watch now.
Use OpenAI Assistants API to build your own cooking advisor bot on Teams
Find out how to build an AI assistant right into your app using the new OpenAI Assistants API. Learn about the open playground for experimenting and watch a step-by-step demo for creating a cooking assistant that will suggest recipes based on what’s in your fridge.
What’s new in Teams Toolkit for Visual Studio 17.9
What’s new in Teams Toolkit for Visual Studio? Get an overview of new tools and capabilities for .NET developers building apps for Microsoft Teams.
Embed a custom webpage in Teams
Find out how to share a custom web page, such as a dashboard or portal, inside a Teams app. It’s easier than you might think. This short video shows how to do this using Teams Toolkit for Visual Studio and Blazor.
Build your own assistant for Microsoft Teams
Creating your own assistant app is super easy. Learn how in under 3 minutes! Watch a demo using the OpenAI Assistants, Teams AI Library, and the new AI Assistant Bot template in VS Code.
Build your custom copilot with your data on Teams featuring an AI dragon
Build your own copilot for Microsoft Teams in minutes. Watch this demo, which builds an AI dragon that will take your team on a cyber role-playing adventure.
Microsoft Mesh: Now available for creating innovative multi-user 3D experiences
Microsoft Mesh is now generally available, providing an immersive 3D experience for the virtual workplace. Get an overview of Microsoft Mesh and find out how to start building your own custom experiences.
Global AI Bootcamp 2024
Global AI Bootcamp is a worldwide annual event that runs throughout the month of March for developers and AI enthusiasts. Learn about AI through workshops, sessions, and discussions. Find an in-person bootcamp event near you.
C# Dev Kit for Visual Studio Code
Learn how to use the C# Dev Kit for Visual Studio Code. Get details and download the C# Dev Kit from the Visual Studio Marketplace.
Visual Studio Code: C# and .NET development for beginners
Have questions about Visual Studio Code and C# Dev Kit? Watch the C# and .NET Development in VS Code for Beginners series and start writing C# applications in VS Code.
Python Data Science Day 2024: Unleashing the Power of Python in Data Analysis
Celebrate Pi Day (3.14) with a journey into data science with Python. Set for March 14, Python Data Science Day is an online event for developers, data scientists, students, and researchers who want to explore modern solutions for data pipelines and complex queries.
Use GitHub Copilot for your Python coding
Discover a better way to code in Python. Check out this free Microsoft Learn module on how GitHub Copilot provides suggestions while you code in Python.
Remote development with Visual Studio Code
Find out how to tap into more powerful hardware and develop on different platforms from your local machine. Check out this Microsoft Learn path to explore tools in VS Code for remote development setups and discover tips for personalizing your own remote dev workflow.
Using GitHub Copilot with JavaScript
Use GitHub Copilot while you work with JavaScript. This Microsoft Learn module will tell you everything you need to know to get started with this AI pair programmer.
Get to know GitHub Copilot in VS Code and be more productive
Get to know GitHub Copilot in VS Code and find out how to use it. Watch this video to see how incredibly easy it is to start working with GitHub Copilot: just start coding and watch the AI go to work.
Designing for Trust
Learn how to design trustworthy experiences in the world of AI. Watch a demo of an AI prompt injection attack and learn about setting up guardrails to protect the system.
Use Visual Studio for modern development
Want to learn more about using Visual Studio to develop and test apps? Start here. In this free learning path, you’ll dig into key features for debugging, editing, and publishing your apps.
GitHub Copilot fundamentals – Understand the AI pair programmer
Improve developer productivity and foster innovation with GitHub Copilot. Explore the fundamentals of GitHub Copilot in this free training path from Microsoft Learn.
Microsoft, GitHub, and DX release new research into the business ROI of investing in Developer Experience
Investing in the developer experience has many benefits and improves business outcomes. Dive into our groundbreaking research (with data from more than 2000 developers at companies around the world) to discover what your business can gain with better DevEx.
Empowering women through digital skills
As we mark International Women’s Day, we celebrate the work of our partners to empower women around the world with skills for the AI economy. We know that when women have access to education, digital skills, and opportunities, they can build a better future for themselves, their families, and their communities.
Women in Digital Business
With the rise of the digital economy, women have new opportunities to start and grow a business, but still face many challenges. In low-income countries especially, female small business owners lack the competencies to develop a digital transformation strategy and implement it. In this context, equipping women entrepreneurs with digital skills is essential to the growth of their business. To tackle this challenge, Microsoft has partnered with the International Training Center of the International Labor Organization (ITCILO) to offer the Women in Digital Business (WIDB) program.
WIDB offers training programs in digital skills to women entrepreneurs who are looking to digitalize their business. By training partners all over the world in using the ILO’s platform and methodologies, the program will enable over 30,000 women-led micro and small businesses in 10+ countries to gain role-based skills, employability skills, and digital skills through online and residential training centers.
In Colombia, our partner ImpactHub is integrating this training to support the personal and professional growth of female entrepreneurs who have been affected by armed conflict in their communities. Through a blend of business training, leadership skills development, and access to economic opportunities, ImpactHub’s implementation of this program is nurturing the potential of these women to persevere through political crisis, thereby strengthening the country’s business and social fabric.
Learn more about Women in Digital Business and how you can become a master trainer for the program.
Cybersecurity skilling
Cybersecurity roles are high-wage, high-growth jobs across every industry. Yet globally only 1 in 4 cybersecurity roles are filled by women. Microsoft is proud to partner with women-focused organizations to help change that. Some of our resources, partnerships, and opportunities to support women in cyber include:
In honor of Women’s History Month, we just launched the Microsoft Cybersecurity Certification Scholarship, awarded by Women in Cloud. This scholarship equips women in the U.S. with access to industry recognized certifications, mentorship networks, and monthly job preparedness sessions. Learn more about how to apply for this scholarship at aka.ms/WiC.
As part of the expansion of our cybersecurity skills initiative, we are partnering with Women in CyberSecurity (WiCyS) to bring their student chapters to a global audience. WiCyS student chapters receive funding, access to resources and conferences as well as networking opportunities for both students and faculty advisors. Learn more about how to create a student chapter on the WiCyS website.
Our partner LATAM Women in Cybersecurity (WOMCY) has a mission to minimize the knowledge gap and increase the talent pool in cybersecurity across Latin America. Through multiple grants from Microsoft, WOMCY has provided 5,200 women with coursework and vouchers to complete a SC-900 certification in Cybersecurity. Find out more about WOMCY.
The International Telecommunications Union (ITU), a UN agency, recently finished the third cycle of their Women in Cyber Mentorship Program. With support from Microsoft, this program provided over 300 women mentors and mentees in the field of cybersecurity with courses, live trainings, and multiple forms of mentorship activities to foster continued growth in their roles.
See all the resources Microsoft offers for Cybersecurity skilling at aka.ms/Cybersecurity_Skills.
International Women’s Day is an opportunity to reflect on our progress and recognize the impact of our partners and programs around the world. But there is more work to do. Together, we can ensure women everywhere have access to the skills and opportunities they need to thrive in a rapidly changing economy.
Access digital skills resources to help empower women in your community at: aka.ms/MicrosoftDigitalSkillsHub
How to manage SQL connections in .NET Core
SQL connection management is a topic I’ve always wanted to cover, but I believed it was unnecessary because I hadn’t run into many problems of this kind.
Recently, however, I came across a very challenging case where an extremely critical application kept going down, and guess what? The root cause was SQL connection management.
The goal of this article is to explain and demonstrate, through proofs of concept, what to do to avoid this kind of problem.
SQL Connection Pool in ADO.NET
A SqlConnection object represents a physical connection to a database; the Open method is used to open the connection and the Close method is used to close it.
Opening and closing connections is an expensive operation, because it involves several steps, such as:
Establishing a physical channel, such as a socket or a named pipe.
Performing the initial handshake with the server.
Parsing the connection string information.
Authenticating the connection against the server.
Running checks for enlistment in the current transaction.
Performing other checks and procedures required during the connection.
In short, it is a process with many steps that can and should be avoided. The ADO.NET library implements connection pooling, where connections are created on demand and reused throughout the application’s lifecycle.
The pool reduces the need to create new connections: when the application calls the Open method, it checks whether an open connection is already available before opening a new one. When the Close method is called, the connection is returned to the pool.
Common problems
The most common problem in SQL connection management is connection leaks, which happen when the application does not close its connections properly. The impact on application performance and scalability is significant: the connection pool is limited, and when a connection is not closed properly it remains unavailable, so once the pool reaches its maximum number of connections, the application has to wait until one is released.
Connection leak example
The following code is an example of a connection leak:
public int ExecuteNonQuery(string command)
{
    // The connection is opened here but never closed or disposed: this is the leak.
    SqlConnection connection = new SqlConnection("connectionString");
    connection.Open();
    DbCommand dbCommand = connection.CreateCommand();
    dbCommand.CommandText = command;
    dbCommand.Connection = connection;
    return dbCommand.ExecuteNonQuery();
}
Let’s go through the following steps to reproduce the issue and understand what is wrong with this implementation:
Implement the code above in a proof-of-concept project
Reproduce the problem with a load test
Collect and analyze a memory dump
The reference code is available at: https://github.com/claudiogodoy99/Sql-Demo
To reproduce the problem I will use k6 as the load testing tool, with the following script:
import http from "k6/http";

export default function () {
  const response = http.get("http://localhost:5096/exemplo");
}
The command I used to run the test was: k6 run -u 100 -d 120s .\loadTest.js. It simulates 100 users hitting the URL http://localhost:5096/exemplo for 120 seconds.
The test result was as follows:
execution: local
script: loadTest.js
output: –
scenarios: (100.00%) 1 scenario, 100 max VUs, 2m30s max duration (incl. graceful stop):
http_req_duration……….: avg=33.44s min=1.53s med=33.21s max=1m0s p(90)=51.56s p(95)=57.29s
http_req_failed…………: 100.00% ✓ 390 ✗ 0
running (2m30.0s), 000/100 VUs, 390 complete and 19 interrupted iterations
Overall it was a very poor result: the average response time was 33 seconds.
I used dotnet-dump to capture and analyze the memory dump, with the following commands:
dotnet-dump collect -p PID
dotnet-dump analyze .\GENERATED-FILE-NAME.dmp
With the dump open in the terminal, I run the clrthreads command, which lists all the managed call stacks and their corresponding threads:
…
System.Threading.WaitHandle.WaitMultiple
Microsoft.Data.ProviderBase.DbConnectionPool.TryGetConnection
Microsoft.Data.ProviderBase.DbConnectionPool.TryGetConnection
Microsoft.Data.ProviderBase.DbConnectionFactory.TryGetConnection
Microsoft.Data.ProviderBase.DbConnectionInternal.TryOpenConnectionInternal
Microsoft.Data.SqlClient.SqlConnection.TryOpen
Microsoft.Data.SqlClient.SqlConnection.Open
UnityOfWork.OpenConnection
UnityOfWork.BeginTransaction
ExemploRepository.AlgumaOperacao
pocSql.Controllers.ExemploController.Get
….
==> 48 threads with 7 roots
Notice that all the managed threads that were processing a request were waiting for a response from the method Microsoft.Data.ProviderBase.DbConnectionPool.TryGetConnection(DbConnection, UInt32, Boolean, Boolean, DbConnectionOptions, DbConnectionInternal ByRef).
This means that every thread was waiting for a database connection to be released before it could continue processing its request.
Solution
In this example, using the using keyword is enough to solve the problem:
public int ExecuteNonQuery(string command)
{
    // The using declaration guarantees the connection is disposed when the method exits.
    using SqlConnection connection = new SqlConnection("connectionString");
    connection.Open();
    DbCommand dbCommand = connection.CreateCommand();
    dbCommand.CommandText = command;
    dbCommand.Connection = connection;
    return dbCommand.ExecuteNonQuery();
}
The using keyword guarantees the correct use of objects that implement the IDisposable interface. In other words, when the program leaves the scope of the method above, the connection’s Dispose method is called, ensuring the connection is closed correctly even if an exception occurs.
Here is the test result after applying the fix:
script: .\pocSql\loadTest.js
output:
scenarios: (100.00%) 1 scenario, 100 max VUs, 2m30s max duration (incl. graceful stop):
http_req_connecting……..: avg=77.15µs min=0s med=0s max=9.22ms p(90)=0s p(95)=0s
http_req_duration……….: avg=1.38s min=286.15ms med=1.14s max=17.94s p(90)=1.99s p(95)=2.6s
http_req_failed…………: 100.00% ✓ 8689 ✗ 0
running (2m01.3s), 000/100 VUs, 8689 complete and 0 interrupted iterations
The difference is striking: the average response time dropped from 33 seconds to 1.38 seconds.
The Dispose pattern
Unfortunately, not every ADO.NET implementation is as simple as the one shown in this article. In many cases I have come across classes that keep the SqlConnection object as a property in order to reuse the connection across several methods, control transactions, and so on.
In those cases, using the using keyword is not feasible, and implementing the Dispose pattern may be necessary. Luckily for us, recent versions of the .NET Core dependency injection container, Microsoft.Extensions.DependencyInjection, already solve a good part of the problem.
Imagine we have the following class:
public class Connection
{
    private readonly SqlConnection _connection;

    public Connection(SqlConnection connection)
    {
        _connection = connection;
    }
}
If the class above is registered correctly, the dependency injection container will call the connection’s Dispose method when the application leaves the scope in which it was used.
To register the class correctly:
services.AddScoped<IDbConnection>((sp) => new SqlConnection(dbConnectionString));
services.AddScoped<Connection>();
Since the connection is injected as a dependency, the Connection class does not need to implement the IDisposable interface.
Now an example where the constructor is responsible for instantiating the _connection object:
public class ExemploRepository
{
    private readonly IDbConnection _connection;

    public ExemploRepository()
    {
        _connection = new SqlConnection("connectionString");
    }
}
The ExemploRepository class needs to implement the IDisposable interface and call the connection’s Dispose method; otherwise, the dependency injection container has no way of knowing that the _connection property implements IDisposable.
public class ExemploRepository : IDisposable
{
    private readonly IDbConnection _connection;

    public ExemploRepository()
    {
        _connection = new SqlConnection("connectionString");
    }

    public void Dispose()
    {
        _connection.Dispose();
    }
}
Conclusion
SqlConnection objects represent a physical connection to a database and must be managed correctly to avoid performance and scalability problems. Using the using keyword is the simplest way to guarantee that the connection is closed correctly, even when an exception occurs. In more complex cases, implementing the Dispose pattern may be necessary.
Although subtle, SQL connection management is a topic that deserves attention, because it can significantly affect the performance and scalability of an application.
RAG techniques: Function calling for more structured retrieval
Retrieval Augmented Generation (RAG) is a popular technique to get LLMs to provide answers that are grounded in a data source. When we use RAG, we use the user’s question to search a knowledge base (like Azure AI Search), then pass along both the question and the relevant content to the LLM (gpt-3.5-turbo or gpt-4), with a directive to answer only according to the sources. In pseudo-code:
user_query = "what's in the Northwind Plus plan?"
user_query_vector = create_embedding(user_query, "ada-002")
results = search(user_query, user_query_vector)
response = create_chat_completion(system_prompt, user_query, results)
If the search function can find the right results in the index (assuming the answer is somewhere in the index), then the LLM can typically do a pretty good job of synthesizing the answer from the sources.
Unstructured queries
This simple RAG approach works best for “unstructured queries”, like:
What’s in the Northwind Plus plan?
What are the expectations of a product manager?
What benefits are provided by the company?
When using Azure AI Search as the knowledge base, the search call will perform both a vector and keyword search, finding all the relevant document chunks that match the keywords and concepts in the query.
Structured queries
But you may find that users are instead asking more “structured” queries, like:
Summarize the document called “perksplus.pdf”
What are the topics in documents by Pamela Fox?
Key points in most recent uploaded documents
We can think of them as structured queries, because they’re trying to filter on specific metadata about a document. You could imagine a world where you used a syntax to specify that metadata filtering, like:
Summarize the document title:perksplus.pdf
Topics in documents author:PamelaFox
Key points time:2weeks
We don’t want to actually introduce a query syntax to a RAG chat application if we don’t need to, since only power users tend to use specialized query syntax, and we’d ideally have our RAG just do the right thing in that situation.
Using function calling in RAG
Fortunately, we can use the OpenAI function-calling feature to recognize that a user’s query would benefit from a more structured search, and perform that search instead.
If you’ve never used function calling before, it’s an alternative way of asking an OpenAI GPT model to respond to a chat completion request. In addition to sending our usual system prompt, chat history, and user message, we also send along a list of possible functions that could be called to answer the question. We can define those in JSON or as a Pydantic model dumped to JSON. Then, when the response comes back from the model, we can see what function it decided to call, and with what parameters. At that point, we can actually call that function, if it exists, or just use that information in our code in some other way.
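As a quick illustration of the Pydantic route, here is a minimal sketch assuming Pydantic v2; the class name and field description are examples of mine, not taken from the original post:

from pydantic import BaseModel, Field


class SearchSources(BaseModel):
    """Illustrative model describing the parameters of the search_sources function."""

    search_query: str = Field(
        description="Query string to retrieve documents from Azure AI Search, e.g. 'Health care plan'"
    )


# Dump the model to a JSON schema and wrap it in the tool definition shape
# expected by the chat completions API.
search_sources_tool = {
    "type": "function",
    "function": {
        "name": "search_sources",
        "description": "Retrieve sources from the Azure AI Search index",
        "parameters": SearchSources.model_json_schema(),
    },
}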
To use function calling in RAG, we first need to introduce an LLM pre-processing step to handle user queries, as I described in my previous blog post. That will give us an opportunity to intercept the query before we even perform the search step of RAG.
For that pre-processing step, we can start off with a function to handle the general case of unstructured queries:
tools: List[ChatCompletionToolParam] = [
    {
        "type": "function",
        "function": {
            "name": "search_sources",
            "description": "Retrieve sources from the Azure AI Search index",
            "parameters": {
                "type": "object",
                "properties": {
                    "search_query": {
                        "type": "string",
                        "description": "Query string to retrieve documents from azure search eg: 'Health care plan'",
                    }
                },
                "required": ["search_query"],
            },
        },
    }
]
Then we send off a request to the chat completion API, letting it know it can use that function.
chat_completion: ChatCompletion = self.openai_client.chat.completions.create(
    messages=messages,
    model=model,
    temperature=0.0,
    max_tokens=100,
    n=1,
    tools=tools,
    tool_choice="auto",
)
When the response comes back, we process it to see if the model decided to call the function, and extract the search_query parameter if so.
response_message = chat_completion.choices[0].message
if response_message.tool_calls:
    for tool in response_message.tool_calls:
        if tool.type != "function":
            continue
        function = tool.function
        if function.name == "search_sources":
            arg = json.loads(function.arguments)
            search_query = arg.get("search_query", self.NO_RESPONSE)
If the model didn’t include the function call in its response, that’s not a big deal as we just fall back to using the user’s original query as the search query. We proceed with the rest of the RAG flow as usual, sending the original question with whatever results came back in our final LLM call.
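As a rough sketch of that fallback logic, reusing the pseudo-code helpers from the top of the post (the variable names here are illustrative, not taken from the original code):

# Fall back to the user's original question if the model did not call the function,
# or if it only returned the sentinel "no response" value.
if not search_query or search_query == self.NO_RESPONSE:
    search_query = user_query

# Continue the usual RAG flow: search, then answer grounded in the results.
user_query_vector = create_embedding(search_query, "ada-002")
results = search(search_query, user_query_vector)
response = create_chat_completion(system_prompt, user_query, results)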
Adding more functions for structured queries
Now that we’ve introduced one function into the RAG flow, we can more easily add additional functions to recognize structured queries. For example, this function recognizes when a user wants to search by a particular filename:
{
    "type": "function",
    "function": {
        "name": "search_by_filename",
        "description": "Retrieve a specific filename from the Azure AI Search index",
        "parameters": {
            "type": "object",
            "properties": {
                "filename": {
                    "type": "string",
                    "description": "The filename, like 'PerksPlus.pdf'",
                }
            },
            "required": ["filename"],
        },
    },
},
We need to extend the function parsing code to extract the filename argument:
if function.name == "search_by_filename":
    arg = json.loads(function.arguments)
    filename = arg.get("filename", "")
    filename_filter = filename
Then we can decide how to use that filename filter. In the case of Azure AI Search, I build a filter that checks that a particular index field matches the filename argument, and pass that to my search call. If using a relational database, it’d become an additional WHERE clause.
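For example, a minimal sketch of building that filter for Azure AI Search might look like the following; the `sourcefile` field name is an assumption about the index schema, and `search_client` stands in for an azure.search.documents SearchClient, neither of which is prescribed by the post:

filter_expression = None
if filename_filter:
    # OData filter syntax; single quotes inside the value are escaped by doubling them.
    safe_name = filename_filter.replace("'", "''")
    filter_expression = f"sourcefile eq '{safe_name}'"

# Pass the filter along with the query to the search client.
results = search_client.search(
    search_text=search_query,
    filter=filter_expression,
    top=3,
)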
Simply by adding that function, I was able to get much better answers to questions in my RAG app like ‘Summarize the document called “perksplus.pdf”‘, since my search results were truly limited to chunks from that file. You can see my full code changes to add this function to our RAG starter app repo in this PR.
Considerations
This can be a very powerful technique, but as with all things LLM, there are gotchas:
Function definitions add to your prompt token count, increasing cost.
There may be times where the LLM doesn’t decide to return the function call, even when you thought it should have.
The more functions you add, the more likely the LLM will get confused about which one to pick, especially if functions are similar to each other. You can try to make it more clear to the LLM by prompt engineering the function name and description, or even providing few-shot examples.
Here are additional approaches you can try:
Content expansion: Store metadata inside the indexed field and compute the embedding based on both the metadata and content. For example, the content field could have “filename:perksplus.pdf text:The perks are…” (a minimal sketch follows this list).
Add metadata as separate fields in the search index, and append those to the content sent to the LLM. For example, you could put “Last modified: 2 weeks ago” in each chunk sent to the LLM, if you were trying to help its ability to answer questions about recency. This is similar to the content expansion approach, but the metadata isn’t included when calculating the embedding. You could also compute embeddings separately for each metadata field, and do a multi-vector search.
Add filters to the UI of your RAG chat application, as part of the chat box or a sidebar of settings.
Use fine-tuning on a model to help it realize when it should call particular functions or respond a certain way. You could even teach it to use a structured query syntax, and remove the functions entirely from your call. This is a last resort, however, since fine-tuning is costly and time-consuming.
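To make the content expansion idea from the first bullet concrete, here is a minimal sketch that reuses the `create_embedding` helper from the pseudo-code at the top of the post; the field layout is illustrative only:

def build_expanded_chunk(filename: str, chunk_text: str) -> dict:
    # Prepend metadata to the content so it influences both keyword and vector matching.
    expanded_text = f"filename:{filename} text:{chunk_text}"
    return {
        "content": expanded_text,
        # The embedding is computed over the metadata and content together.
        "embedding": create_embedding(expanded_text, "ada-002"),
    }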
Instant File Initialization for the transaction log | SQL Server 2022 Hidden Gems | Data Exposed
Next in the SQL Server 2022 hidden gems series you’ll learn about Instant file Initialization (IFI) behavior for Log file growth even with TDE enabled (does not require special privilege).
Resources:
What’s new in SQL Server 2022 – SQL Server | Microsoft Learn
View/share our latest episodes on Microsoft Learn and YouTube!
MGDC for SharePoint FAQ: How do I process Deltas?
This is a follow up on the blog about delta datasets. If you haven’t read it yet, take a look at MGDC for SharePoint FAQ: How can I use Delta State Datasets?
Our team got some follow-up questions on this, so I thought it would make sense to write a little more and make things clear.
First of all, from some conversations with CoPilot, the basic SQL code for merging a delta would be something like this:
-- Start a transaction
BEGIN TRANSACTION;

-- Assuming the Users table has a primary key constraint on user_id
-- and the UserChanges table has a foreign key constraint on user_id referencing Users

-- First, delete the users that have operation = 'Deleted' in UserChanges
DELETE FROM Users
WHERE user_id IN
    (SELECT user_id
     FROM UserChanges
     WHERE operation = 'Deleted');

-- Next, update the users that have operation = 'Updated' in UserChanges
UPDATE U
SET user_name = UC.user_name,
    user_age = UC.user_age
FROM Users U
JOIN UserChanges UC ON U.user_id = UC.user_id
WHERE UC.operation = 'Updated';

-- Finally, insert the users that have operation = 'Created' in UserChanges
INSERT INTO Users (user_id, user_name, user_age)
SELECT user_id, user_name, user_age
FROM UserChanges
WHERE operation = 'Created';

-- Commit the transaction
COMMIT TRANSACTION;
Note that the column names used (shown here as user_id, user_name and user_age) need to be updated for each dataset, but the structure will be the same.
I also asked CoPilot to translate this SQL code to PySpark and it suggested the code below (with a few minor manual touches):
# Import SparkSession and functions
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Create SparkSession
spark = SparkSession.builder.appName("Delta dataset").getOrCreate()

# Assuming the Users and UserChanges tables are already loaded as DataFrames
users = spark.table("Users")
user_changes = spark.table("UserChanges")

# First, delete the users that have operation = 'Deleted' in UserChanges
users = users.join(user_changes.filter(user_changes.operation == "Deleted"), "user_id", "left_anti")

# Next, update the users that have operation = 'Updated' in UserChanges
users = (users.join(user_changes.filter(user_changes.operation == "Updated"), "user_id", "left_outer")
              .select(F.coalesce(user_changes.user_name, users.user_name).alias("user_name"),
                      F.coalesce(user_changes.user_age, users.user_age).alias("user_age"),
                      users.user_id))

# Finally, insert the users that have operation = 'Created' in UserChanges
users = users.union(user_changes.filter(user_changes.operation == "Created")
                    .select("user_name", "user_age", "user_id"))
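If you want to keep the merged snapshot around for the next delta run, one option is to persist the resulting DataFrame; a minimal sketch, where the table name is a placeholder rather than part of the original example:

# Persist the merged result so the next delta can be applied on top of it.
# "Users_Updated" is a placeholder table name, not part of the original example.
users.write.mode("overwrite").saveAsTable("Users_Updated")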
After that, there’s the question of how to run this in Azure Data Factory or Azure Synapse.
I would suggest going with Azure Synapse. You could get some inspiration from the template that we published https://go.microsoft.com/fwlink/?linkid=2207816. This includes examples of how to get the data and run a notebook to produce a new dataset.
Another good resource is this guide on “How to transform data by running a Synapse Notebook”. The link is at https://learn.microsoft.com/en-us/azure/data-factory/transform-data-synapse-notebook.
The most notable part missing from the code above is how to read the data from ADLS Gen2. For that, there is a Stack Overflow discussion on how to bring the data in and out of ADLS Gen2 using Linked Services, and there is an article specifically on that at https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/tutorial-spark-pool-filesystem-spec.
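As a rough sketch of what that might look like in a Synapse notebook, assuming the workspace identity or linked service already has access to the storage account (the storage account, container, and folder names below are placeholders, not real values from the article):

# Read the full snapshot and the delta dataset delivered to ADLS Gen2 as JSON files.
# The storage account, container, and folder names are placeholders for your own values.
base_path = "abfss://mgdc@mystorageaccount.dfs.core.windows.net/sharepoint"
users = spark.read.json(f"{base_path}/Users/")
user_changes = spark.read.json(f"{base_path}/UserChanges/")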
That’s it! For more general information MGDC for SharePoint, visit the main blog at Links about SharePoint on MGDC.
Advancing Trust, Transparency, and Control with Copilot for Microsoft 365
Hello, Microsoft Tech Community! I’m excited to share some important updates about Copilot for Microsoft 365. As you may recall from TJ’s blog post on February 29, we’ve been working hard to enhance your experience with Copilot. Today, I’d like to highlight some key updates that will benefit our customers, as outlined in Paul Lorimer’s blog: Announcing the expansion of Microsoft’s data residency capabilities | Microsoft 365 Blog
Paul’s post delves into the expansion of our data residency capabilities. We understand that data control is paramount in today’s digital landscape. That’s why we’re ensuring that your interaction data with Copilot for Microsoft 365 (for eligible customers) will be stored in the location specified by your Microsoft 365 data residency settings. This is a significant step forward in our commitment to providing a secure and compliant environment for our enterprise customers, and in particular for regulated customers that have especially stringent requirements for how their data is stored.
But we’re not stopping there. Our vision is to democratize AI, making it accessible and beneficial for everyone. As we continue to innovate and enhance Copilot, our guiding principles remain the same: Trust, Transparency, and Control. These principles have always been at the heart of Microsoft 365, and they continue to shape our approach to Copilot. Stay tuned for more updates as we continue to evolve Copilot for Microsoft 365.
Please reply with your questions and share your experiences and needs as you explore your Copilot and our Data Residency options.
More resources to get support your Copilot and AI journey:
Five tips for prompting AI: How we’re communicating better at Microsoft with Microsoft Copilot
Microsoft 365 – How Microsoft 365 Delivers Trustworthy AI (2024-01).docx
Data, Privacy, and Security for Microsoft Copilot for Microsoft 365
Microsoft Purview data security and compliance protections for Microsoft Copilot
Microsoft Copilot Privacy and Protections
Apply principles of Zero Trust to Microsoft Copilot for Microsoft 365
Learn about retention for Microsoft Copilot for Microsoft 365
Improved Next.js support (Preview) for Azure Static Web Apps
Next.js is a popular framework for building modern web applications with React, making it a common workload to deploy to Azure Static Web Apps’ optimized hosting of frontend web applications. We are excited to announce that we have improved our support for Next.js on Azure Static Web Apps (preview), increasing the compatibility with recent Next.js features and providing support for the newest Next.js versions, enabling you to deploy and run your Next.js applications with full feature access on Azure.
What’s new?
As we continue to iterate on our Next.js support during our preview, we’ve made fundamental improvements to ensure feature compatibility with the most recent and future versions of Next.js. Such improvements include support for the new React Server Components model in Next.js as well as hosting Next.js backend functions on dedicated App Service instances to ensure feature compatibility.
Support for Next.js 14, React Server Components, Server Actions, Server Side Rendering
With the introduction of the App Directory, React Server Components and Server Actions, it’s now possible to build full-stack components, where individual components exist server-side and have access to sensitive resources such as databases or APIs, providing a more integrated full-stack developer experience.
For instance, a single component can now contain both queries and mutations with database access, facilitating the componentization of code.
// Server Component
export default function Page() {
  // handle queries, accessing databases or APIs securely here

  // Server Action
  async function create() {
    'use server'
    // handle mutations, accessing databases or APIs securely here
    // ...
  }

  return (
    // ...
  )
}
These features, including recent server-side developments for Next.js, are now better supported on Azure Static Web Apps. Support for the pages directory, which is still supported by Next.js 14, will also continue to work on Azure Static Web Apps.
Increased size limits of Next.js deployments
Previously, Next.js deployments were limited to 100 MB due to the hosting restrictions of managed functions. Now you can deploy Next.js applications up to 250 MB in size to Azure Static Web Apps, and statically exported Next.js sites support the regular Static Web Apps quotas.
Partial support for staticwebapp.config.json
With the improved support for Next.js sites, the `staticwebapp.config.json` file, which is used to configure the way your site is hosted by Azure Static Web Apps, is now partially supported. While app-level configuration should still be done within `next.config.js` to configure the Next.js server, the `staticwebapp.config.json` file can still be used to restrict routes to roles and to configure headers, routes, redirects, and other settings.
Support for Azure Static Web Apps authentication and authorization
Azure Static Web Apps provides built-in authentication and role-based authorization. You can now use this built-in authentication with Next.js sites to secure them. The client principal containing the information of the authenticated user can be accessed from the request headers and used to perform server-side operations within API routes, Server Actions or React Server Components. The following code snippet indicates how the user headers can be accessed from within a Next.js codebase.
import { headers } from 'next/headers'

export default async function Home() {
  const headersList = headers();
  const clientPrincipal = JSON.parse(atob(headersList.get('x-ms-client-principal') || ''))

  return (
    <main>
      {clientPrincipal.userId}
    </main>
  );
}
Next.js backend functions hosted on a dedicated App Service plan
To ensure full feature compatibility with current and future Next.js releases, Next.js workloads on Azure Static Web Apps are uniquely hosted leveraging App Service. Static content for Next.js workloads will continue to be hosted on Azure Static Web Apps’ globally distributed content store, and Next.js backend functions are hosted by Azure Static Web Apps with a dedicated App Service plan. This enables improved support for Next.js features, while retaining Static Web Apps’ existing pricing plans.
How can you activate these improvements?
These improvements have been rolled out to all regions of Azure Static Web Apps, and will take effect on your next deployment of a Next.js workload to your Static Web Apps resource.
Get started with Next.js on Azure Static Web Apps
We hope you find these improvements useful and helpful for your Next.js development. We are always looking for feedback and suggestions, and are actively reading and engaging in our community GitHub.
Get started deploying Next.js sites on Azure Static Web Apps for free!
Share feedback for Next.js in the Azure Static Web Apps GitHub repository
Follow and tag our Twitter account for Azure Static Web Apps
Accessibility in Microsoft 365 Core Apps
“Accessibility is not a bolt on. It’s something that must be built in to every product we make so that our products work for everyone. Only then will we empower every person and every organization on the planet to achieve more. This is the inclusive culture we aspire to create” – Satya Nadella
Our journey in accessible technology is grounded in a shared conviction at Microsoft. As product makers, we believe in the obligation to build technology that truly empowers people, fostering an inherently equitable experience.
In this pursuit, we’ve embraced a mindset we call “shift left,” incorporating accessibility at every stage and right from the inception of designing and building our products.
Reflecting on my tenure, I’ve had the privilege of contributing to some of the world’s most impactful technologies, particularly with Office and Windows. Traditionally, these products were well established with years of development and only later received our focused attention on accessibility.
However, Copilot presented a rare opportunity for us to incorporate accessibility right from the inception of the product and therefore “shift left” the entire design and development process. And what’s more, AI technologies like Copilot brought a unique opportunity to reshape how humans interact with computers in a way that makes the experience MORE equitable and transformative for all.
Now, integrated into our Microsoft 365 Core Apps, Copilot brings forth exciting capabilities. Our goal is to bridge the gap between your interaction with technology and how you express your ideas, making the user experience more inclusive and empowering for all.
At Copilot’s core lies a commitment to equity, underscoring our ongoing dedication to fostering a technology landscape that truly serves every individual, ensuring that no one is left behind.
Equity at Copilot’s Core
We are actively shifting left in our product development by making accessibility a core part of Copilot’s design and functionality. Copilot is designed to work well with assistive technologies, such as screen readers, magnifiers, contrast themes, and voice input, and to provide a seamless and intuitive user experience.
But in addition to that, Copilot is a tool designed to be accessible itself.
In this process, we have collaborated and co-innovated with a diverse set of customers, 600+ members of Microsoft’s employee disability community, partners in research and design, and the commitment of engineers and product makers to listen and be accountable.
To illustrate how Copilot can enhance accessibility, I want to share with you some highlights from engaging with participants who had early access to Copilot:
Drafting emails: Copilot can help create first drafts in a matter of minutes. This can be especially helpful to those who have more limited mobility and challenges typing. You can generate different versions of the email with different levels of formality and detail, and ask Copilot to check your calendar and suggest a meeting time, with just a few clicks.
Using voice commands: With Copilot, you can create entire PowerPoint presentations just using your voice. Just tell Copilot about what you want to create, and Copilot can generate relevant graphics and notes for slides.
These examples demonstrate how Copilot can save users time and effort, as well as help them express their ideas and communicate their expertise more effectively. They also show how Copilot can adapt to their preferences and needs and provide them with a supportive partner that can enhance their communication and productivity.
In addition to some of these areas of feedback, we also conducted a deep-dive study with members of the neurodiverse community. The neurodiverse community (which makes up 10-20% of the world) faces common challenges that we can all relate to, but their lives are disproportionately affected by them. Examples include planning, focus, procrastination, communication, reading ease and comprehension, being overly literal, and fatigue.
For the neurodiverse community, our study showed that Copilot can be a powerful ally, offering assistance in overcoming these challenges. It serves as a facilitator for thought organization, acts as a springboard for writing tasks, aids in surmounting task initiation barriers, and assists in processing extensive information found in documents, messages, or emails.
Members of the community reported Copilot helping their communication effectiveness by distilling action items from team meetings and documents, generating summaries, adjusting the tone and context of their content, and bridging communication gaps.
As one of the participants in the study said, “For me, Copilot itself is accessibility. Having Copilot is like putting on glasses after I’ve been squinting my entire career. It is equity and I think as a neurodivergent individual, I can’t imagine going back.”
Making Accessible Content with Ease
On our journey to create products that are truly inclusive, we’re also empowering document authors to shift left and build better authoring habits by catching accessibility errors early in the doc creation process. Ensuring that your content is comprehensible to all individuals, irrespective of their visual abilities or preferences, is a crucial component of accessibility. To assist you in this endeavor, we have created the Accessibility Assistant, a robust tool that can detect and resolve potential problems in your documents, emails, and presentations. You can access the Accessibility Assistant from the Review tab in Word, Outlook, and PowerPoint.
New features of the Accessibility Assistant include the following highlights:
In-canvas notifications for readability: This feature notifies you of common accessibility issues, such as text color not meeting the Web Content Accessibility Guidelines (WCAG) color contrast ratio or images lacking descriptions. You can use the inclusive color picker to choose an appropriate color from the suggested options and use the automatically generated image descriptions to provide alt text, making it easier to create accessible content.
Quick fix card for multiple issues: This feature allows you to fix several issues of the same type with fewer clicks. For example, you can change the color of all the text that has low contrast in your document.
Per-slide toggle for PowerPoint: This feature enables you to view and fix the accessibility issues for each slide individually, instead of seeing them by categories. This can help you focus on your own slides and collaborate with others more easily.
These capabilities are designed to help you create accessible content faster and more easily, and to ensure that everyone can access and enjoy your work. The Accessibility Assistant for Word Desktop has started rolling out to Insider Beta Channel users running Version 2012 (Build 17425.2000) or later. This feature will be available for Outlook Desktop in the Insider Beta Channel by April 2024, followed by release to PowerPoint Desktop this summer.
Our Commitment
At Microsoft, we believe that everyone has something valuable to offer, and that diversity of perspectives and experiences can enrich our products and services. That’s why we are committed to empowering everyone to achieve more, fostering an inherently equitable experience. Copilot is one of the ways that we are fulfilling this commitment, by providing a supportive partner that can help you with common challenges, enhance your communication, and bridge the gap between your interaction with technology and how you express your ideas.
But we also know that we are not done yet. We are still on a journey of understanding how AI and LLMs will continue to evolve and make the world a more equitable place. We are constantly learning from our customers, partners, and the disability community, and we are always looking for ways to improve our accessibility features and functionality. We welcome your feedback and suggestions on how we can make Copilot better for you and for everyone.
To learn more about Copilot and how to get started, please visit the Copilot website or the Copilot support page. To learn more about accessibility at Microsoft and how to access our accessibility features, please visit the Microsoft Accessibility website or the Disability Answer Desk. And to share your feedback or suggestions on Copilot, please use the feedback button (thumbs up or down).
Together, we can make the world a more equitable place for everyone.
Microsoft Tech Community – Latest Blogs –Read More
Windows 11 Plans to Expand CLAT Support
Thank you to everyone who responded to our recent IPv6 migration survey! We want you to know that we are committed to improving your IPv6 journey, and this data is helpful in shaping our future plans.
To that end, just a quick update: we are committing to expanding our CLAT support to include non-cellular network interfaces in a future version of Windows 11. This will include discovery using the relevant parts of the RFC 7050 (ipv4only.arpa DNS query), RFC 8781 (PREF64 option in RAs), and RFC 8925 (DHCP Option 108) standards. Once we have functionality available for you to test in Windows Insider builds, we will let you know.
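Windows will handle this discovery natively, but purely to illustrate the RFC 7050 mechanism referenced above, here is a minimal Python sketch (an illustration only, not Microsoft's implementation) that resolves ipv4only.arpa and derives a NAT64 /96 prefix candidate from the synthesized AAAA answers:
# Minimal sketch (illustration only): discover a NAT64 prefix per RFC 7050 by
# resolving the well-known name ipv4only.arpa over IPv6 and locating the
# embedded well-known IPv4 addresses (192.0.0.170 / 192.0.0.171).
import ipaddress
import socket

WELL_KNOWN_V4 = {ipaddress.IPv4Address("192.0.0.170"),
                 ipaddress.IPv4Address("192.0.0.171")}

def discover_nat64_prefixes():
    prefixes = set()
    try:
        results = socket.getaddrinfo("ipv4only.arpa", None, socket.AF_INET6)
    except socket.gaierror:
        return prefixes  # no synthesized AAAA answers -> no NAT64 candidate found
    for *_, sockaddr in results:
        packed = ipaddress.IPv6Address(sockaddr[0]).packed
        # RFC 6052 allows several prefix lengths; this sketch checks only the
        # common /96 layout, where the IPv4 address sits in the last 32 bits.
        if ipaddress.IPv4Address(packed[-4:]) in WELL_KNOWN_V4:
            prefixes.add(ipaddress.IPv6Network((packed[:12] + bytes(4), 96)))
    return prefixes

if __name__ == "__main__":
    for prefix in discover_nat64_prefixes():
        print("NAT64 prefix candidate:", prefix)
On networks that advertise PREF64 (RFC 8781), a CLAT can read the prefix directly from Router Advertisements instead of querying DNS.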
We are looking forward to continuing to provide support for your platform networking needs!
Microsoft Tech Community – Latest Blogs –Read More
Optimize your Azure costs
Author introduction
Hi, I am Saira Shaik, working as a Principal Customer Success Account Manager at Microsoft India.
This article provides guidance for customers who want to optimize their Azure costs. It covers tools and resources that help you save, understand and forecast your costs, cost-optimize workloads, and control spending.
Explore tools and resources to help you save
Find out about the tools, offers, and guidance designed to help you manage and optimize your Azure costs. Learn how to understand and forecast your bill, optimize workload costs, and control your spending.
8 ways to optimize costs
1. Shut down unused resources
Identify idle virtual machines (VMs), ExpressRoute circuits, and other resources with Azure Advisor. Get recommendations on which resources to shut down and see how much you would save.
Useful Links
Reduce service costs using Azure Advisor – Azure Advisor | Microsoft Learn
2. Right-size underused resources
Find underutilized resources with Azure Advisor—and get recommendations on how to reduce your spend by reconfiguring or consolidating them.
Useful Links
Reduce service costs using Azure Advisor – Azure Advisor | Microsoft Learn
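For tips 1 and 2, the same Advisor recommendations can also be retrieved programmatically. The following is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-advisor); the subscription ID is a placeholder, and the model attribute names reflect my reading of the SDK, so verify them against the package version you install.
# Minimal sketch: list Azure Advisor cost recommendations with the Azure SDK
# for Python (azure-identity + azure-mgmt-advisor). The subscription ID is a
# placeholder; verify model attribute names against the installed SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

client = AdvisorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Filter client-side to the Cost category and print each problem/solution pair.
for rec in client.recommendations.list():
    if rec.category == "Cost":
        print(f"{rec.impacted_value or '<resource>'}: "
              f"{rec.short_description.problem} -> {rec.short_description.solution}")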
3. Add an Azure savings plan for compute for dynamic workloads
Save up to 65 percent off pay-as-you-go pricing when you commit to spend a fixed hourly amount on compute services for one or three years.
Useful Links
Azure Savings Plan Savings – youtube.com/playlist?list=PLlrxD0HtieHjd-zn7u09YoGJY18ZrN1Hq
Introduction to Azure savings plan for compute (youtube.com)
Understanding your Azure savings plan recommendations (youtube.com)
How Azure savings plan is applied to a customer’s compute environment (youtube.com)
Azure Savings Plan for Compute | Microsoft Azure
4. Reserve instances for consistent workloads
Get a discount of up to 72 percent over pay-as-you-go pricing on Azure services when you prepay for a one- or three-year term with reservation pricing.
Useful Links
Reservations | Microsoft Azure
Advisor Clinic: Lower costs with Azure Virtual Machine reservations (youtube.com)
Model virtual machine costs with the Azure Cost Estimator Power BI Template (youtube.com)
5. Take advantage of the Azure Hybrid Benefit
AWS is up to five times more expensive than Azure for Windows Server and SQL Server. Save when you migrate your on-premises workloads to Azure.
Useful Links
Azure Hybrid Benefit—hybrid cloud | Microsoft Azure
Reduce costs and increase SQL license utilization using Azure Hybrid Benefit (youtube.com)
Managing and Optimizing Your Azure Hybrid Benefit Usage (With Tools!) – Microsoft Community Hub
6. Configure autoscaling
Save by dynamically allocating and de-allocating resources to match your performance needs.
Useful Links
Autoscaling guidance – Best practices for cloud applications | Microsoft Learn
7. Choose the right Azure compute service
Azure offers many ways to host your code. Operate more cost efficiently by selecting the right compute service for your application.
Useful Links
Choose an Azure compute service – Azure Architecture Center | Microsoft Learn
Armchair Architects: Exploring the relationship between Cost and Architecture (youtube.com)
8. Set up budgets and allocate costs to teams and projects
Create and manage budgets for the Azure services you use or subscribe to—and monitor your organization’s cloud spending—with Microsoft Cost Management.
Useful Links
Tutorial – Create and manage budgets – Microsoft Cost Management | Microsoft Learn
The Cloud Clinic: Use tagging and cost management tools to keep your org accountable (youtube.com)
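To illustrate tip 8, a budget can also be created programmatically against the Microsoft.Consumption/budgets API. The sketch below uses azure-identity plus a plain REST call; the scope, budget name, amount, contact email, and api-version are illustrative assumptions, so check the Microsoft Cost Management documentation for current values before relying on it.
# Minimal sketch (assumptions flagged): create a monthly budget on a
# subscription via the Microsoft.Consumption/budgets REST API, using
# azure-identity for the token. Scope, budget name, notification contact,
# and api-version below are illustrative placeholders.
import requests
from azure.identity import DefaultAzureCredential

scope = "/subscriptions/<subscription-id>"   # placeholder scope
budget_name = "team-monthly-budget"          # placeholder name
api_version = "2023-05-01"                   # assumption: verify the current api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com{scope}"
       f"/providers/Microsoft.Consumption/budgets/{budget_name}"
       f"?api-version={api_version}")

body = {
    "properties": {
        "category": "Cost",
        "amount": 1000,                      # monthly budget in the billing currency
        "timeGrain": "Monthly",
        "timePeriod": {"startDate": "2024-04-01T00:00:00Z",
                       "endDate": "2025-03-31T00:00:00Z"},
        "notifications": {
            "actual80Percent": {
                "enabled": True,
                "operator": "GreaterThan",
                "threshold": 80,
                "contactEmails": ["finops-team@contoso.com"],  # placeholder contact
            }
        },
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print("Budget created or updated:", resp.json().get("name"))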
Understand and forecast your costs
Monitor and analyze your Azure bill with Microsoft Cost Management. Set budgets and allocate spending to your teams and projects.
Estimate the costs for your next Azure projects using the Azure pricing calculator and the Total Cost of Ownership (TCO) calculator.
Successfully build your cloud business case with key financial and technical guidance from Azure.
Useful Links
FinOps toolkit – Kick start your FinOps efforts (microsoft.github.io)
Azure Savings Dashboard – Microsoft Community Hub
Azure Cost Management Dashboard – Microsoft Community Hub
Cost optimize your workloads
Follow your Azure Advisor best practice recommendations for cost savings.
Review your workload architecture for cost optimization using the Microsoft Azure Well-Architected Review assessment and the Microsoft Azure Well-Architected Framework design documentation. See also Customer Offerings: Well-Architected Cost Optimization Implementation – Microsoft Community Hub.
Save with Azure offers and licensing terms such as the Azure Hybrid Benefit, paying in advance for predictable workloads with reservations, Azure Spot Virtual Machines, Azure savings plan for compute, and Azure dev/test pricing.
Control your costs
Mitigate cloud spending risks by implementing cost management governance best practices at your company using the Microsoft Cloud Adoption Framework for Azure.
Implement cost controls and guardrails for your environment with Azure Policy.
Microsoft Tech Community – Latest Blogs –Read More
Azure SQL MI premium-series memory optimized hw is now available in all regions with up to 40 vCores
Recently, we announced a number of Azure SQL Managed Instance improvements in the Business Critical tier. In this article, we would like to highlight that premium-series memory optimized hardware is now available in all Azure regions, with up to 40 vCores!
What is new?
Having the latest and greatest hardware generation available for the Azure SQL Managed Instance Business Critical service tier can be crucial for critical customer workloads. Until recently, the premium-series memory optimized hardware generation was available only in a subset of Azure regions. Now you can have a SQL MI BC instance with premium-series memory optimized hardware in any Azure region, with up to 40 vCores.
This means that the new state for premium-series memory optimized hardware availability is:
Up to 40 vCores: available in every Azure region.
48, 56, 64, 80, 96 and 128 vCore options: for now, available in a subset of Azure regions.
Improve performance of your database workload with more memory per vCore
Increasing memory can improve the performance of applications and databases by reducing the need to read from disk and instead storing more data in memory, which is faster to access. You might want to consider upgrading to memory-optimized premium-series for several reasons:
Buffering and Caching: More memory can be utilized for caching frequently accessed data or buffering I/O operations, leading to faster response times and improved overall system performance.
Handling Larger Datasets: If the user is dealing with larger datasets or increasing workload demands, more memory can accommodate the additional data and processing requirements without experiencing slowdowns or performance bottlenecks.
Concurrency and Scalability: Higher memory capacity can support more concurrent users or processes, allowing the system to handle increased workload and scale effectively without sacrificing performance.
Complex Queries and Analytics: Memory-intensive operations such as complex queries, data analytics, and reporting often benefit from having more memory available to store intermediate results and perform calculations efficiently.
In-Memory Processing: Some databases and applications offer in-memory processing capabilities, where data is stored and manipulated entirely in memory for faster processing. Increasing memory allows for more data to be processed in-memory, resulting in faster query execution and data manipulation.
How to upgrade your instance to premium-series memory optimized hardware
You can scale your existing managed instance from the Azure portal, PowerShell, the Azure CLI, or ARM templates. You can also utilize ‘online scaling’ with minimal downtime. See Scale resources – Azure SQL Database & Azure SQL Managed Instance | Microsoft Learn.
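As one illustration of the programmatic route, the sketch below (Python, using azure-identity and a plain ARM REST call) requests a different hardware family and vCore count by patching the managed instance resource. The sku name, hardware family value, and api-version are placeholders rather than verified values; confirm them in the Azure SQL Managed Instance documentation, or use the portal/CLI guidance linked above.
# Minimal sketch (assumptions flagged): scale a managed instance to a different
# hardware family / vCore count by PATCHing the ARM resource. The sku name,
# family, and api-version below are illustrative placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription = "<subscription-id>"
resource_group = "<resource-group>"
instance_name = "<managed-instance-name>"
api_version = "2023-05-01-preview"   # assumption: check the current api-version

url = (f"https://management.azure.com/subscriptions/{subscription}"
       f"/resourceGroups/{resource_group}"
       f"/providers/Microsoft.Sql/managedInstances/{instance_name}"
       f"?api-version={api_version}")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Request the Business Critical tier with 40 vCores on the desired hardware
# family. Replace the placeholder sku values with those documented for
# premium-series memory optimized hardware.
body = {"sku": {"name": "<sku-name>", "tier": "BusinessCritical",
                "family": "<hardware-family>", "capacity": 40}}

resp = requests.patch(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print("Scale request accepted; the operation completes asynchronously.")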
Summary
More memory for a managed instance can lead to improved performance, scalability, and efficiency in handling larger workloads, complex operations, and data processing tasks. This improvement in Azure SQL Managed Instance Business Critical makes premium-series memory optimized hardware available in all regions, up to 40 vCores.
If you’re still new to Azure SQL Managed Instance, now is a great time to get started and take Azure SQL Managed Instance for a spin!
Next steps:
Learn more about the latest innovation in Azure SQL Managed Instance.
Try SQL MI free of charge for the first 12 months.
Microsoft Tech Community – Latest Blogs –Read More
Learn about AI and Microsoft Copilot for Security with Learn Live
Want to learn more about Generative AI and Microsoft Copilot?
Microsoft is launching a Learn Live Series called “Getting Started with Microsoft Copilot for Security.” This weekly online seminar series will run from March 19th through April 9th and will review skill development resources and discuss topics related to AI and Copilot for Security.
Hosts Edward Walton, Andrea Fisher, and Rod Trent will guide you through four topics, each with a corresponding Microsoft Learn module, designed to help anyone interested in getting users ready for Microsoft Copilot for Security.
Fundamentals of Generative AI
March 19th 12:00 pm – 1:30 pm PDT
In this session, you will explore the way in which large language models (LLMs) enable AI applications and services to generate original content based on natural language input. You will also learn how generative AI enables the creation of AI-powered copilots that can assist humans in creative tasks. In this episode, you will:
Learn about the kinds of solutions AI can make possible and considerations for responsible AI practices
Understand generative AI’s place in the development of artificial intelligence
Understand large language models and their role in intelligent applications
Describe how Azure OpenAI supports intelligent application creation
Describe examples of copilots and good prompts
Fundamentals of Responsible Generative AI
March 27th 12:00 pm – 1:30 pm PDT
Generative AI enables amazing creative solutions but must be implemented responsibly to minimize the risk of harmful content generation. In this episode, you will:
Describe an overall process for responsible generative AI solution development
Identify and prioritize potential harms relevant to a generative AI solution
Measure the presence of harms in a generative AI solution
Mitigate harms in a generative AI solution
Prepare to deploy and operate a generative AI solution responsibly
Get started with Microsoft Security Copilot
April 2nd 12:00 pm – 1:30 pm PDT
Get acquainted with Microsoft Copilot for Security. You will be introduced to some basic terminology, how Microsoft Copilot for Security processes prompts, the elements of an effective prompt, and how to enable the solution. In this episode, you will:
Describe what Microsoft Copilot for Security is.
Describe the terminology of Microsoft Copilot for Security.
Describe how Microsoft Copilot for Security processes prompt requests.
Describe the elements of an effective prompt.
Describe how to enable your Microsoft Copilot for Security solution.
Describe the core features of Microsoft Security Copilot
April 9th 12:00 pm – 1:30 pm PDT
Microsoft Copilot for Security has a rich set of features. Learn about available plugins that enable integration with various data sources, promptbooks, the ways you can export and share information from Copilot for Security, and much more. In this episode, you will:
Describe the features available in the standalone experience.
Describe the services to which Copilot for Security can integrate.
Describe the embedded experience.
Jump-start your Copilot for Security journey and join us for the Learn Live series starting on Tuesday, March 19th!
Microsoft Tech Community – Latest Blogs –Read More
Announcing the Public Preview of Change Actor
Change Analysis
Identifying who made a change to your Azure resources and how the change was made just became easier! With Change Analysis, you can now see who initiated the change and with which client that change was made, for changes across all your tenants and subscriptions.
Audit, troubleshoot, and govern at scale
Changes should be available in under five minutes and are queryable for fourteen days. In addition, this support includes the ability to craft charts and pin results to Azure dashboards based on specific change queries.
What’s new: Actor Functionality
This added functionality is now in public preview.
Who made the change: either an AppId (for a client or an Azure service) or the email ID of the user. For example, changedBy: elizabeth@contoso.com
With which client the change was made. For example, clientType: portal
What operation was called. See Azure resource provider operations | Microsoft Learn
Try it out
You can try it out by querying the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph.
Sample Queries
Here is the documentation on how to query resourcechanges and resourcecontainerchanges in Azure Resource Graph: Get resource changes – Azure Resource Graph | Microsoft Learn
Summarize who made resource changes in the last 7 days and which client was used, ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| project changeType, changedBy, changedByType, clientType
| summarize count() by changedBy, changeType, clientType
| order by count_ desc
Summarize who made resource changes and which operations were called, ordered by the number of changes
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation = tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| project changeType, changedBy, operation
| summarize count() by changedBy, operation
| order by count_ desc
List resource container (resource group, subscription, and management group) changes: who made the change, which client was used, and which operation was called, ordered by the time of the change
resourcecontainerchanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
targetResourceId = tostring(properties.targetResourceId),
operation=tostring(properties.changeAttributes.operation),
changeType = tostring(properties.changeType), changedBy = tostring(properties.changeAttributes.changedBy),
changedByType = properties.changeAttributes.changedByType,
clientType = tostring(properties.changeAttributes.clientType)
| project changeTime, changeType, changedBy, changedByType, clientType, operation, targetResourceId
| order by changeTime desc
FAQ
How do I use Change Analysis?
Change Analysis can be used by querying the resourcechanges or resourcecontainerchanges tables in Azure Resource Graph, such as with Azure Resource Graph Explorer in the Azure portal or through the Azure Resource Graph APIs. More information can be found here: Get resource changes – Azure Resource Graph | Microsoft Learn.
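If you prefer the API route, here is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resourcegraph) that runs a Change Analysis query similar to the samples above; the subscription ID is a placeholder.
# Minimal sketch: run a Change Analysis query through Azure Resource Graph
# with the Azure SDK for Python. Requires the azure-identity and
# azure-mgmt-resourcegraph packages; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

QUERY = """
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
         changedBy = tostring(properties.changeAttributes.changedBy),
         clientType = tostring(properties.changeAttributes.clientType)
| where changeTime > ago(7d)
| summarize count() by changedBy, clientType
| order by count_ desc
"""

client = ResourceGraphClient(DefaultAzureCredential())
response = client.resources(
    QueryRequest(subscriptions=["<subscription-id>"], query=QUERY)
)
for row in response.data:
    print(row["changedBy"], row["clientType"], row["count_"])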
What does unknown mean?
Unknown is displayed when the change happened on a client that is unrecognized.
Why are some of the changedBy values unspecified?
Some resources in the resourcechanges table are not yet fully covered by the change actor functionality. This can happen when a resource was affected by a system change, or when the resource provider (RP) has not yet sent us the Who/How information. Unspecified is displayed when the resource is missing changedByType values, which can occur for either creates or updates. You may also see an increase in Unspecified values for the following resource types,
virtualmachines
virtualmachinescalesets
publicipaddresses
disks
networkinterfaces
What resources are included?
Any resource whose changes are recorded in the “resourcechanges” or “resourcecontainerchanges” tables in Azure Resource Graph is included.
Questions and Feedback
If you have any questions or want to provide direct input, you can reach out to us at argchange@microsoft.com.
Share product feedback and ideas with us at Azure Governance · Community
Microsoft Tech Community – Latest Blogs –Read More
Load Test Emulation for Azure Database for MySQL – Flexible Server using mysqlslap
Guidance for using mysqlslap to simulate client load and measure performance
Introduction
Mysqlslap is a diagnostic program included with MySQL distributions that you can use to emulate client load for a MySQL server and report the timing of each stage. Mysqlslap works as if multiple clients are accessing the server simultaneously.
In this post, I’ll show you how to use mysqlslap to perform load test emulation for Azure Database for MySQL – Flexible Server, a fully managed and scalable MySQL service on Azure. I’ll install mysqlslap, configure the connection parameters, run different types of tests, and then analyze the results.
Prerequisites
Before you begin, ensure that the following prerequisites are in place:
An instance of Azure Database for MySQL – Flexible Server. To create one, follow the guidance in this tutorial: Quickstart: Create with Azure portal – Azure Database for MySQL – Flexible Server.
A MySQL client installation that includes mysqlslap – download it from MySQL :: MySQL Community Downloads.
A test database and table on your Azure Database for MySQL Flexible Server instance. Use the following queries to create a test database and table with one million dummy records:
mysql> CREATE DATABASE loadtestdb;
mysql> use loadtestdb;
mysql> CREATE TABLE loadtesttable (
ID INT PRIMARY KEY AUTO_INCREMENT,
Name VARCHAR(255),
Age INT,
Salary DECIMAL(10, 2),
Department VARCHAR(50),
City VARCHAR(100),
Country VARCHAR(100)
);
mysql> INSERT INTO loadtesttable (Name, Age, Salary, Department, City, Country)
SELECT
CONCAT(CHAR(FLOOR(RAND() * 26) + 65), 'Person', n),
FLOOR(RAND() * 100) + 18,
ROUND(RAND() * 10000000, 2),
CASE WHEN RAND() < 0.5 THEN 'IT' ELSE 'Sales' END,
CASE WHEN RAND() < 0.5 THEN 'New York' ELSE 'Los Angeles' END,
CASE WHEN RAND() < 0.5 THEN 'USA' ELSE 'Canada' END
FROM (
SELECT
a.N + b.N * 10 + c.N * 100 + d.N * 1000 + e.N * 10000 + f.N * 100000 + 1 AS n
FROM
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) a,
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) b,
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) c,
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) d,
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) e,
(SELECT 0 AS N UNION ALL SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6 UNION ALL SELECT 7 UNION ALL SELECT 8 UNION ALL SELECT 9) f
) AS Numbers;
Installing mysqlslap
Instructions for installing mysqlslap on a computer running Windows or Linux appear in the following sections.
Note: If you’re using the Azure Cloud Shell, mysqlslap is included automatically.
Windows
To install mysqlslap on a computer running Windows, download the MySQL Installer, run it, and then follow the wizard. Mysqlslap will be installed in the folder C:\Program Files\MySQL\MySQL Server 8.1\bin (assuming the installation is on C:). Alternatively, you can download the MySQL ZIP Archive and extract mysqlslap from mysql-8.0.36-winx64.zip\mysql-8.0.36-winx64\bin.
Linux
To install mysqlslap on a computer running Linux, install the MySQL client package (which includes mysqlslap) by running the following command:
sudo apt update
sudo apt install mysql-client
mysqlslap --version
Configuring the connection parameters
To connect to the Azure Database for MySQL – Flexible Server instance, run the following command:
mysqlslap --host=myserver.mysql.database.azure.com --user=myuser --password=mypassword --port=3306 --ssl-mode=REQUIRED
With this command, you can specify the following parameters:
--host: The host name or IP address of your server. You can find it on the Azure portal under the Overview section of your server.
--port: The port number of your server. The default is 3306.
--user: The username to log in to your server. You can use the admin user that you created when you provisioned your server, or any other user that has access to the test database.
--password: The password to log in to your server. You will be prompted to enter it when you run mysqlslap.
--ssl-mode: The SSL mode to use for the connection. You can use REQUIRED, VERIFY_CA, or VERIFY_IDENTITY. The default is REQUIRED. For more information about SSL modes, see MySQL 8.0 : Configuring MySQL to Use Encrypted Connections.
Running different types of tests
The mysqlslap test process includes three stages:
Create schema, table, and optionally any stored programs or data to use for the test. This stage uses a single client connection.
Run the load test. This stage can use many client connections.
Clean up (disconnect, drop table if specified). This stage uses a single client connection.
Use mysqlslap to run different types of tests, such as concurrency tests, stress tests, or benchmark tests. To specify the test parameters, consider the following options:
--concurrency: Specifies the number of simultaneous client connections. You can provide a single value or a comma-separated list of values. For example, --concurrency=10 means 10 threads, and --concurrency=10,20,30 means three tests with 10, 20, and 30 threads respectively.
--iterations: Defines the number of times the benchmark test should be repeated. The default is 1.
--number-of-queries: The number of queries to run per thread. The default is 0, which means unlimited.
--query: Specifies the SQL query to be executed during the test. You can provide a single query or multiple queries. For example, --query="SELECT * FROM testtable" means to run a simple SELECT query.
--create-schema: The name of the database to use for the test. The default is the mysqlslap database.
--create: The statement to create the test table. You can provide a single statement or multiple statements. For example, --create="CREATE TABLE testtable (id INT)" means to create a simple test table.
--delimiter: Specifies a different delimiter, which enables you to specify statements that span multiple lines or place multiple statements on a single line.
--auto-generate-sql: A flag to indicate whether to generate random queries for the test. The default is FALSE. If you set it to TRUE, you can use the following options to control the query generation.
--auto-generate-sql-add-autoincrement: A flag to indicate whether to add an AUTO_INCREMENT column to the test table. The default is FALSE.
--auto-generate-sql-execute-number: The number of queries to generate and execute per thread. The default is 10.
--auto-generate-sql-load-type: The type of queries to generate. You can use MIXED, UPDATE, WRITE, or READ. The default is MIXED.
--auto-generate-sql-unique-query-number: The number of unique queries to generate. The default is 10.
--auto-generate-sql-unique-write-number: The number of unique queries to generate for write load. The default is 10.
You can also use the --engine option to specify the storage engine to use for the test table. The default is InnoDB. For more information about mysqlslap options, see MySQL 8.0 Reference Manual :: 6.5.8 mysqlslap.
Before running a test with mysqlslap, be sure to use an empty user database or the default mysqlslap database when using the --create or --auto-generate-sql-* options. If the --create or --auto-generate-sql-* option is given, mysqlslap drops the schema at the end of the test run, which means that any existing data in that database will be lost.
Some examples showing how to run different types of tests using mysqlslap follow.
To run a concurrency test with 10, 20, and 30 threads, each executing 100 queries 10 times, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=10,20,30 --iterations=10 --number-of-queries=100 --query="SELECT ID, Name, Age, Salary, Department, City, Country FROM loadtesttable WHERE Name like 'A%' AND Age BETWEEN 30 AND 40;" --create-schema=loadtestdb --verbose
To run a stress test with 50 concurrent threads, repeating the test 25 times, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=50 --iterations=25 --query="SELECT ID, Name, Age, Salary, Department, City, Country FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY Department ORDER BY Salary DESC) AS R FROM loadtesttable) AS ranked WHERE R <= 5;" --create-schema=loadtestdb --verbose
To run a benchmark test with 10 threads, each executing 1000 randomly generated queries with a mixed load type, run the following command:
mysqlslap --host=server-name.mysql.database.azure.com --port=3306 --user=user-name --password --ssl-mode=REQUIRED --concurrency=10 --iterations=1 --number-of-queries=1000 --auto-generate-sql --auto-generate-sql-load-type=MIXED --verbose
Analyzing the results
After running a test, mysqlslap displays the results, which include the following:
Average number of seconds to run all queries: The average time it took to run all the queries per thread.
Minimum number of seconds to run all queries: The minimum time it took to run all the queries per thread.
Maximum number of seconds to run all queries: The maximum time it took to run all the queries per thread.
Number of clients running queries: The number of threads that simulated the client load.
Average number of queries per client: The average number of queries that each thread executed.
You can use the --silent option to suppress the verbose output and display only the results. You can also use the --csv option to format the results as comma-separated values, which can easily be imported into a spreadsheet or a database for further analysis.
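If you want to capture the --csv output across repeated runs, a small wrapper script can help. The following is a minimal sketch that shells out to mysqlslap and echoes the raw CSV rows; it assumes mysqlslap is on the PATH, and the host, user, password, and query values are placeholders you would replace with your own.
# Minimal sketch: run mysqlslap with --csv via a wrapper script and capture
# the raw CSV rows for later analysis. Assumes mysqlslap is on the PATH;
# host/user/password/query values are placeholders.
import csv
import io
import subprocess

cmd = [
    "mysqlslap",
    "--host=server-name.mysql.database.azure.com",
    "--port=3306",
    "--user=user-name",
    "--password=<password>",   # placeholder; prefer a prompt or environment variable
    "--ssl-mode=REQUIRED",
    "--create-schema=loadtestdb",
    "--concurrency=10,20,30",
    "--iterations=10",
    "--number-of-queries=100",
    "--query=SELECT ID, Name FROM loadtesttable WHERE Age BETWEEN 30 AND 40",
    "--csv",                   # with no file name, the CSV rows go to standard output
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Print the CSV rows as-is so they can be redirected into a spreadsheet or table.
for row in csv.reader(io.StringIO(result.stdout)):
    print(row)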
An example of the results from a concurrency test with 10, 20, and 30 threads, each executing 100 queries 10 times, follows:
Benchmark
Average number of seconds to run all queries: 2.024 seconds
Minimum number of seconds to run all queries: 2.003 seconds
Maximum number of seconds to run all queries: 2.041 seconds
Number of clients running queries: 10
Average number of queries per client: 10
Benchmark
Average number of seconds to run all queries: 2.070 seconds
Minimum number of seconds to run all queries: 2.022 seconds
Maximum number of seconds to run all queries: 2.228 seconds
Number of clients running queries: 20
Average number of queries per client: 5
Benchmark
Average number of seconds to run all queries: 1.885 seconds
Minimum number of seconds to run all queries: 1.849 seconds
Maximum number of seconds to run all queries: 2.021 seconds
Number of clients running queries: 30
Average number of queries per client: 3
You can use the results to compare the performance of an Azure Database for MySQL – Flexible Server instance under different load scenarios, and to identify any potential bottlenecks or issues. You can also use the results to tune the server configuration, such as the number of connections, the buffer pool size, the query cache size, or the index statistics.
Best practices
When you’re using mysqlslap to perform load test emulation for an Azure Database for MySQL – Flexible Server instance, consider the following best practices.
Before using mysqlslap against your production environment, test it thoroughly in your lowest (non-production) environment against a test database.
Define benchmarking scenarios that closely resemble your production environment.
Use realistic datasets and queries representative of your actual workload.
Adjust benchmarking parameters such as concurrency, iterations, and query complexity to match your workload characteristics.
Test different combinations of parameters to understand their impact on performance.
Analyze results carefully and consider multiple metrics for performance evaluation.
Monitor system resources (CPU, memory, disk I/O) during benchmark tests to identify any resource bottlenecks.
Repeat benchmark tests multiple times to validate results and ensure consistency.
Conclusion
In this post, I’ve described how to use mysqlslap to perform load test emulation for an Azure Database for MySQL – Flexible Server instance. I’ve described how to install mysqlslap, configure the connection parameters, run different types of tests, and analyze the results. Be sure to use mysqlslap to simulate client load and measure the performance of your MySQL flexible server, as well as to optimize your server configuration and query performance.
If you have any questions about the detail provided above, please leave a comment below or email us at AskAzureDBforMySQL@service.microsoft.com. Thank you!
References
For more information about using mysqlslap, see the MySQL documentation: https://dev.mysql.com/doc/refman/8.0/en/mysqlslap.html
Microsoft Tech Community – Latest Blogs –Read More