Category: Microsoft
Azure WAF integration in Copilot for Security: protect web applications using Gen AI
Today, we are launching the public preview of the Azure Web Application Firewall (WAF) integration in Microsoft Copilot for Security. The Azure WAF capabilities available in the standalone Copilot for Security experience are Get Top Rules Triggered, Get Top Blocks By IP, Get SQLi Blocks By WAF, and Get XSS Blocks By WAF.
Azure WAF network security analysts face many challenges. Much of their time goes into manually researching why the WAF blocked particular requests, a slow and repetitive task.
With the Azure WAF integration in Copilot for Security, security and IT teams can move faster and focus on high-value tasks. Copilot summarizes data and generates in-depth, contextual insights into the WAF threat landscape. This enables analysts to determine whether the WAF policy blocked a request it should not have, or whether the policy needs to be fine-tuned. The result is time and cost savings, since Copilot can reason over terabytes of data in minutes rather than hours or days.
Another productivity gain is simplifying the complex: analysts no longer have to write complex KQL queries. Instead, they can simply ask questions in natural language, and Copilot for Security understands the context and generates the response. This saves time and unlocks new skills for junior analysts, while Tier 1 analysts can take on more complex tasks, focusing on strategic rather than tactical work.
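To give a sense of what Copilot spares analysts from writing, here is a hypothetical sketch of the kind of hand-written KQL a "top rules" investigation would otherwise require. The table and column names (AzureDiagnostics, ruleId_s, and so on) are assumptions that may differ from your workspace's schema; the point is only the shape of the manual query.

```python
# Hypothetical example: assembling the KQL an analyst would otherwise write by
# hand to list the most frequently triggered WAF rules. Table/column names are
# assumptions, not a guaranteed schema.

def top_waf_rules_query(timespan: str = "1d", limit: int = 10) -> str:
    """Build a KQL query string for the top triggered WAF rules."""
    return "\n".join([
        "AzureDiagnostics",
        f"| where TimeGenerated > ago({timespan})",
        '| where Category == "ApplicationGatewayFirewallLog"',
        "| summarize Hits = count() by ruleId_s, ruleSetType_s",
        "| sort by Hits desc",
        f"| take {limit}",
    ])

print(top_waf_rules_query())
```

With the integration, the same question is a one-line natural language prompt instead of this query.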
Let’s take a closer look at what each of these new Azure WAF Skills in Copilot for Security does to help network security professionals investigate logs via natural language prompts.
Azure WAF Skills in Copilot for Security
The four WAF Skills available are:
Get Top Rules Triggered: Retrieve contextual details about WAF detections.
Get Top Blocks By IP: Retrieve the top malicious IPs in the environment along with related WAF rules and patterns triggering the attack.
Get SQLi Blocks By WAF: Explain why Azure WAF blocks SQL Injection (SQLi) attacks. Analyze Azure WAF diagnostic logs and connect related logs over a specific time period to generate a summary of the attack.
Get XSS Blocks By WAF: Explain why Azure WAF blocks Cross-site Scripting (XSS) attacks. Analyze Azure WAF diagnostic logs and connect related logs over a specific time period to generate a summary of the attack.
Using the Get Top Rules Triggered Skill
This Copilot Skill summarizes, in natural language, the overall threat landscape in the WAF environment. The Skill reasons over terabytes of WAF logs and generates a list of the top WAF rules triggered, the detection logic used for those detections, and the malicious client IPs triggering the rules. The list is ordered by the number of rule hits, with the most frequently triggered rules at the top.
The screenshot below describes the response generated when a prompt is issued for top WAF rules in a regional WAF over the last day.
The default timespan for any of the WAF Skills is 24 hours, but prompts can be tailored to a specific time range.
Using the top WAF rules triggered Skill, an analyst can get details on any of the WAF rule sets: Default Rule Set, Bot Rule Set, or custom rule set.
The screenshot below shows a prompt that looks for details of the bot rules triggered.
Furthermore, it is possible to use this Skill to obtain details of a specific vulnerability. In the following example, an analyst checks whether any Remote Code Execution (RCE) has been seen by the WAF and receives details about an RCE, including the Log4j CVE details. The analyst can use other Copilot for Security plugins, such as Microsoft Defender Threat Intelligence, to obtain further details about the CVE.
Using the Get Top Blocks By IP Skill
This Skill generates a list of the most frequently triggered offending IPs, along with related WAF contextual information.
By using the response from this Skill, analysts can get a holistic picture of WAF rules triggered by the offending IPs and overall exposure of the WAF policy to the IPs.
Furthermore, the malicious IPs discovered by this WAF Skill can be searched in other Copilot for Security plugins, such as Microsoft Defender Threat Intelligence, to find other attack vectors associated with the IPs.
Using the Get SQLi Blocks By WAF Skill
This Skill provides contextual insights into WAF detections of SQL Injection (SQLi) attacks. It helps analysts understand the details of the SQLi attack, such as the WAF resources under attack and attack patterns such as the query parameters triggering the attack.
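As a toy illustration of the kind of query-parameter pattern such detections flag, the sketch below matches a classic SQL injection tautology probe. This is deliberately simplistic and is not Azure WAF's detection logic; real rule sets such as the OWASP Core Rule Set are far more sophisticated.

```python
import re

# Toy SQLi pattern check for illustration only: looks for a quote followed by
# OR/AND and a digit, the shape of a classic "' OR '1'='1" tautology probe.
# This is NOT Azure WAF's rule logic.
SQLI_PATTERN = re.compile(r"('|%27)\s*(or|and)\s*('|%27)?\d", re.IGNORECASE)

def looks_like_sqli(param_value: str) -> bool:
    """Return True if the query parameter resembles a tautology-style probe."""
    return bool(SQLI_PATTERN.search(param_value))

print(looks_like_sqli("1' OR '1'='1"))   # classic tautology probe
print(looks_like_sqli("plain search text"))
```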
Using the Get XSS Blocks By WAF Skill
This Skill provides contextual insights into WAF detections of cross-site scripting (XSS) attacks. It helps analysts understand the details of the attack, such as the WAF resources under attack and attack patterns such as the query parameters triggering the attack.
How to use Azure WAF integration in Copilot for Security
Copilot for Security is available to organizations through a pay-as-you-go consumption model. After Security Compute Units (SCUs) are provisioned and Azure WAF logs are present in Azure Log Analytics, the WAF Skills will be ready for use.
Select “sources” in the prompt bar and ensure the Azure Web Application Firewall plugin is enabled for use. Ensure the WAF Log Analytics workspace name, Log Analytics resource group name and Log Analytics subscription ID are configured.
With the Azure WAF integration in Copilot for Security, security and IT teams can move faster, upskill, and transition into the age of AI. The integration announced today combines Microsoft’s expertise in security with Gen AI, packaged together to empower network security analysts to outpace adversaries with the speed and scale of AI.
Sowmya Mahadevaiah
Principal Product Manager, Azure Networking
Microsoft Tech Community – Latest Blogs – Read More
Building Intelligent Apps with Azure Cache for Redis, Entra ID, Azure Functions, E1 SKU, and more!
We’re excited to announce the latest updates to Azure Cache for Redis that will improve your data management and application performance as we kick off Microsoft Build 2024. Coming soon, the Enterprise E1 SKU (Preview) will offer a lower entry price, Redis modules, and enterprise-grade features. The Azure Functions triggers and bindings for Redis are now generally available, simplifying your workflow with seamless integration. Microsoft Entra ID in Azure Cache for Redis is now GA, providing enhanced security management. And there’s more – we are also sharing added resources for developing intelligent applications using Azure Cache for Redis Enterprise, enabling you to build smarter, more responsive apps. Read the blog below to find out more about these updates and how they can enhance your Azure Cache for Redis experience.
Building Intelligent Apps with Azure Cache for Redis
Developers can leverage the power and versatility of Azure Cache for Redis Enterprise to build and enhance intelligent apps. In this Azure .NET session, you will learn how to use various libraries and SDKs, such as Semantic Kernel, Redis OM for .NET, and .NET 8 caching abstractions, to implement scenarios such as AI chatbots, vector similarity search, semantic caching, and more. These scenarios are also supported across various languages, such as Java, Python, Node.js, and Go. You will also see how to integrate Azure Cache for Redis with other Azure services, such as Cognitive Services and Azure Cosmos DB, to create responsive intelligent applications. Check out the video, documentation, and demo to discover how Azure Cache for Redis Enterprise can help you take your apps to the next level of intelligence and performance.
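To make the vector similarity and semantic caching ideas concrete, here is a minimal, self-contained sketch: embeddings are compared by cosine similarity, and a cached answer is reused when a new query's vector is close enough. The vectors below are made up for illustration; a real app would obtain them from an embedding model and index them in Redis via RediSearch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Cached query -> its (made-up) embedding; a real cache would live in Redis.
cache = {"how do I reset my password": [0.9, 0.1, 0.0]}

def semantic_hit(query_vec, threshold=0.95):
    """Return the cached query whose embedding is most similar, if above threshold."""
    best = max(cache, key=lambda q: cosine(query_vec, cache[q]))
    return best if cosine(query_vec, cache[best]) >= threshold else None

print(semantic_hit([0.88, 0.12, 0.01]))  # close to the cached vector, so a cache hit
```

The same comparison is what a vector index does at scale, trading the linear scan here for an approximate nearest-neighbor search.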
Enterprise E1 SKU (Preview)
The Azure Cache for Redis Enterprise tier will have a new E1 SKU available in preview soon. The E1 SKU reduces the cost to get started with Azure Cache for Redis Enterprise. This tier will continue to support all Redis modules, such as RediSearch, RedisBloom, RedisTimeSeries, and vector search for generative AI applications.
Azure Function Triggers and Bindings in Azure Cache for Redis (GA)
We are also happy to announce that the Azure Functions triggers and bindings in Azure Cache for Redis are now generally available. This feature allows you to easily build serverless applications that connect with your Azure Cache for Redis data, without writing repetitive code. You can use different triggers, such as pub/sub channels, lists, streams, and keyspace notifications, to run your functions based on events in your cache. You can also use input and output bindings to read and write data from and to your cache within your function code. The Azure Functions triggers and bindings for Redis support various languages, such as C#, Java, Node.js, Python, and PowerShell. They work with both premium and durable functions, and support for consumption functions is being rolled out now on a regional basis. To learn more about this feature and how to get started, check out the tutorial in our documentation, or read about how to use Functions to refresh expired keys in Redis.
Read through cache using Azure Functions
Event based architectures with Azure Cache for Redis & Azure Functions triggers
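For a feel of the event shape behind keyspace-notification triggers: Redis publishes messages on channels named `__keyspace@<db>__:<key>`, and the trigger binding consumes these for you. The helper below just shows how such a channel name decomposes; the channel format is standard Redis, and everything else is an illustrative assumption.

```python
def parse_keyspace_channel(channel: str):
    """Split a Redis keyspace notification channel into (db, key).

    Channels look like "__keyspace@0__:orders:42"; the key may itself
    contain colons, so only the first "__:" separator is significant.
    """
    prefix, sep, key = channel.partition("__:")
    if not sep or not prefix.startswith("__keyspace@") or not key:
        raise ValueError(f"not a keyspace channel: {channel!r}")
    db = prefix[len("__keyspace@"):]
    return db, key

print(parse_keyspace_channel("__keyspace@0__:orders:42"))
```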
Microsoft Entra ID for Authentication and Authorization (GA)
Lastly, we are pleased to announce the general availability of Microsoft Entra ID for Authentication and Authorization. Microsoft Entra ID allows you to assign permissions to your Entra ID identities, to control data access policies for your cache.
Using Microsoft Entra ID for authentication provides you with a secure and flexible way to manage your data access policies and allows you to use Microsoft Entra ID identities, such as service principals and managed identities, to authenticate to your cache. This eliminates the need to store and rotate access keys and simplifies the credential management process. It also enables you to assign permissions to your Microsoft Entra ID identities, and control which commands and keys they can access in your cache. This helps you enforce the principle of least privilege and protect your data from unauthorized access. Learn more about Microsoft Entra ID here and how to configure it for your Azure Cache for Redis here.
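The Entra ID flow is essentially a token-as-password pattern: acquire an Entra ID access token and present it as the Redis password, with the identity's object ID as the username. The sketch below only assembles connection settings; the host name, object ID, and token are placeholders, and in a real app the token would come from azure-identity (for example, `DefaultAzureCredential().get_token("https://redis.azure.com/.default")`).

```python
def entra_redis_kwargs(host: str, object_id: str, token: str) -> dict:
    """Build redis-py connection kwargs for Entra ID authentication.

    The username is the Entra ID identity's object ID and the password is a
    short-lived access token; tokens expire, so a real client must refresh
    the token and re-authenticate periodically.
    """
    return {
        "host": host,
        "port": 6380,        # Azure Cache for Redis TLS port
        "ssl": True,
        "username": object_id,
        "password": token,
    }

# Placeholder values for illustration only.
kwargs = entra_redis_kwargs(
    "contoso.redis.cache.windows.net",
    "00000000-0000-0000-0000-000000000000",
    "<token>",
)
print(kwargs["host"])
```

Because the token replaces the static access key, there is nothing long-lived to store or rotate, which is the main operational win described above.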
Resources
Azure Cache for Redis
Building .NET Based Intelligent Apps with Azure Cache for Redis
Making .NET intelligent apps smarter and consistent with Redis
ChatGPT + Enterprise data with Azure OpenAI and Azure Cognitive Search (.NET) Demo
Vector similarity search in Azure Cache for Redis
Enterprise E1 SKU (Preview)
Azure Function Trigger and Bindings in Azure Cache for Redis (GA)
How to Refresh Expired Keys in Redis using Azure Functions
Get started with Azure Functions triggers and bindings in Azure Cache for Redis
Create a write-behind cache by using Azure Functions and Azure Cache for Redis
Microsoft Entra ID Authentication and Authorization (GA)
Microsoft Entra ID documentation
Azure Firewall integration in Copilot for Security: protect networks at machine speed with Gen AI
Azure Firewall is a cloud-native and intelligent network firewall security service that provides best-of-breed threat protection for your cloud workloads running in Azure. It’s a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. In this blog, we focus on the newly announced Azure Firewall integration in Copilot for Security.
The Azure Firewall integration in Copilot for Security helps analysts perform detailed investigations of the malicious traffic intercepted by the IDPS feature of their firewalls across their entire fleet using natural language questions in the Copilot for Security standalone experience.
These capabilities were announced at RSA. Take a look at this blog to learn more about the user journey and value that Copilot can deliver: Bringing generative AI to Azure network security with new Microsoft Copilot integrations.
There are four primary capabilities now in public preview which are outlined below.
Get top IDPS signature hits
This capability retrieves the top IDPS signature hits for an Azure Firewall. It helps the user get information about the traffic intercepted by the IDPS feature by simply asking natural language questions instead of the user having to construct KQL queries manually.
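As a hypothetical illustration, the manual alternative would look something like the KQL assembled below. The table and column names (AZFWIdpsSignature, SignatureId) are assumptions based on Azure Firewall's resource-specific log tables and may not match your workspace.

```python
# Hypothetical sketch of the hand-written KQL this capability replaces.
# Table/column names are assumptions, not a guaranteed schema.

def top_idps_signatures_query(firewall: str, timespan: str = "1d", limit: int = 10) -> str:
    """Build a KQL query string for the top IDPS signature hits on one firewall."""
    return "\n".join([
        "AZFWIdpsSignature",
        f"| where TimeGenerated > ago({timespan})",
        f'| where _ResourceId has "{firewall}"',
        "| summarize Hits = count() by SignatureId, Description",
        "| sort by Hits desc",
        f"| take {limit}",
    ])

print(top_idps_signatures_query("contoso-fw"))
```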
Get details on an IDPS signature
This capability enriches the threat profile of an IDPS signature beyond the information found in logs. It helps the user get additional details about an IDPS signature instead of requiring them to manually source this information. The Microsoft Defender Threat Intelligence plugin is another source that Copilot may use to provide threat intelligence for IDPS signatures.
Search across firewalls for an IDPS signature
This capability looks for a given IDPS signature across your tenant, subscription, or resource group. It helps users perform a fleet-wide search (over any scope) for a threat across all their firewalls instead of searching for the threat manually.
Secure your environment using IDPS
This capability generates recommendations to secure your environment using Azure Firewall’s IDPS feature. It helps users get information from documentation about using Azure Firewall’s IDPS feature to secure their environment instead of having to look up this information manually. Copilot for Security may also use the Ask Microsoft Documentation capability to provide this information.
Get started
Learn more in our documentation about these capabilities and how to access them in Microsoft Copilot for Security today!
Abhinav Sriram,
Product Manager
Deploy and Scale Spring Batch in the Cloud – with Adaptive Cost Control
You can now use Azure Spring Apps to effectively run Spring Batch applications with adaptive cost control. You only pay when batch jobs are running, and you can simply lift and shift your Spring Batch jobs with no code change.
Spring Batch is a framework for processing large amounts of data in Java applications. It provides reusable functions for logging, transaction management, job statistics, job restart, skipping errors, and resource management. It also supports high-performance tasks through optimization and partitioning. Introduced in March 2008, Spring Batch is popular among Java developers and is part of the Spring portfolio. It is widely used in modern enterprise systems to handle complex batch processing tasks efficiently.
Running Spring Batch jobs in the cloud presents several challenges:
Scalability: Ensuring batch jobs can scale efficiently to handle large volumes of data.
Cost Management: Controlling costs by only paying for resources when jobs are running.
Job Lifecycle Management: Managing the lifecycle of batch jobs, including scheduling, monitoring, and restarting jobs if they fail.
Infrastructure Management: Handling the underlying infrastructure, such as servers and storage, required to run batch jobs.
Security: Securing the batch jobs and the data they process.
Monitoring: Setting up effective monitoring and logging for job performance and errors.
Again, you can now use Azure Spring Apps to effectively run Spring Batch applications with adaptive cost control:
You only pay when batch jobs are running.
You can simply lift and shift your Spring Batch jobs with no code change.
We are announcing the public preview of Jobs in Azure Spring Apps to enable you to deploy and scale Spring Batch applications without worrying about job scalability, cost control, lifecycle, infrastructure, security, and monitoring. This makes it easier to handle large-scale data processing efficiently, leveraging the flexibility and scalability of the cloud.
Introduction to Jobs in Azure Spring Apps
Jobs in Azure Spring Apps are tasks with a finite lifespan — they start, perform processing, and exit upon completion. Each job execution typically handles a single unit of work and can run from minutes to hours, with multiple executions running simultaneously. Examples include batch processes that run on demand and scheduled tasks — a great fit for scenarios such as data processing, machine learning, building intelligence for AI applications, and any scenario where on-demand processing is required. This capability enables developers to efficiently manage and scale tasks within their applications, ensuring optimized performance and resource usage in a cloud environment.
Jobs in Azure Spring Apps enable you to run containerized, run-to-completion tasks within your environment. They will support three trigger types:
Manual: Triggered on demand by a user or application.
Schedule: Runs on a recurring schedule.
Event: Triggered by an event, like a message in a queue, and can be used for CI/CD pipeline build agents.
Currently, the public preview supports manual triggers. Our engineering team is actively working on adding support for scheduled and event-based triggers, which will be available soon. This ongoing development ensures that you can fully leverage the flexibility and power of Azure Spring Apps for all your batch processing needs.
Jobs share the same environment as your Spring applications, enabling shared resources like networking and storage. You can create and manage jobs, bind secrets with Azure Key Vault, secure communications, and monitor jobs, just like your Spring applications in Azure Spring Apps. You can combine Jobs and Apps to build powerful solutions.
Deploy Spring Batch Jobs in 3 Easy Steps
With these simple steps, you can quickly deploy and run your Spring Batch jobs on Azure Spring Apps.
Achieve Cost Efficiency and Simplicity with Adaptive Cost Control for Spring Batch Jobs
Let’s use an example to explain adaptive cost control. Suppose you have a Spring Batch job needing 8 vCPUs and 16 GB of memory. Normally, you’d use a larger virtual machine, like an Azure Virtual Machine D16v5, costing around $572 USD per month. Even if you run the job for only 2 hours a day, you still pay for the full month and handle maintenance for the OS, packages, JDK, and APM.
With Azure Spring Apps, you allocate 8 vCPUs and 16 GB for just the job’s runtime, say 60 hours a month. This costs around $45 USD per month, with all underlying infrastructure maintenance — OS, packages, JDK, and APM — handled for you. This reduces both infrastructure costs and the effort required by your developers and platform engineers. This approach is known as adaptive cost control.
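The arithmetic behind that comparison can be sketched as follows. The $572/month VM figure comes from the text; the ~730 hours per month and the derived per-hour rate are back-of-the-envelope assumptions, and actual Azure Spring Apps consumption pricing is metered differently, so the result only lands near the ~$45 quoted above.

```python
# Rough cost comparison: always-on VM vs. paying only for job runtime.
vm_monthly_usd = 572.0          # D16v5 figure quoted in the text
hours_per_month = 730           # assumed average month length in hours
vm_hourly = vm_monthly_usd / hours_per_month

job_hours = 60                  # 2 hours/day * 30 days
always_on_cost = vm_monthly_usd
pay_per_run_cost = vm_hourly * job_hours  # same hourly rate, billed only while running

print(f"always-on VM:  ${always_on_cost:.2f}/month")
print(f"pay-per-run:   ${pay_per_run_cost:.2f}/month")
```

Roughly an order-of-magnitude saving for a job that runs two hours a day, before even counting the avoided OS, JDK, and APM maintenance.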
Deploy Spring Batch Jobs and Share Your Feedback
Azure Spring Apps delivers simplicity and productivity, and you can leverage Spring experts to make your projects even more successful. You can easily deploy your Spring and polyglot applications – and now Spring Batch jobs – to the cloud and get them up and running in no time. It’s a golden path to production that simplifies the deployment process and optimizes your resource usage. We’ll continue to innovate tools and optimize services for streamlining Spring app migration to the cloud at scale and running those Spring apps efficiently and economically – Faster, Cheaper, and Better.
And the best part? We’re offering FREE monthly grants on all tiers – 50 vCPU hours and 100 GB hours per tier. This is the number of FREE hours you get BEFORE any usage is billed, giving you a chance to test out the service without any financial charges.
So why wait? Take advantage of our FREE monthly grants and deploy your first Spring Batch Job to Azure Spring Apps today!
Go to aka.ms/first-spring-batch-job!
.NET8 MAUI Image “gif” animation Debug vs. Release
Has anyone figured out how to get a MAUI Image “gif” animation to work in Release mode?
Using Visual Studio 2022’s Android Device Manager, with the Emulator set to Tablet M-DPI 10.1in – API 34 (Android 14.0 – API 34), the MAUI Image animation in Debug mode works every single time! Awesome! However, when I switch the build to Release mode and deploy to the Emulator, the application responds just fine, but I see an Image control presenting a FROZEN “gif” and I don’t know how to solve the problem.
I experience the same FROZEN “gif” problem if the Emulator is running Pixel 6 Pro Android 14 – API 34.
Using the Debug build, Pixel 6 Pro Android 14 – API 34 Emulator shows the Image control animating the “gif” perfectly!
However, switching Build to Release mode and Deploying to the Pixel 6 Pro Emulator, again I experience the application responding just fine, but the Image control presents a FROZEN “gif”.
Here’s my XAML definition for my Image element:
<Image
    x:Name="ClintHatGif"
    Source="clinteastwood.gif"
    IsAnimationPlaying="True"
    Aspect="AspectFit"
    VerticalOptions="Center"
    HeightRequest="180" />
When I select my Project and visit “Manage NuGet Packages” and select the “Updates” tab, no updates appear. So, I think I’ve got the latest.
Maybe you know of a NuGet Package or a Build Release setting that solves the problem? I’m unsure how to proceed.
Thanks for reading this post.
FILTER and COUNTA function returning 1 even when no data is found
Hi,
I am using the formula below to track a unique count of Red Hat Enterprise OSs in a specific migration wave. However, the formula returns a count of “1” even though there are no Red Hat Enterprise OSs in the wave in question. How can I modify this formula to get an accurate unique count of Red Hat devices?
=COUNTA(UNIQUE(FILTER(MasterServerToApp[Server], (MasterServerToApp[Wave] = B4) * ISNUMBER(SEARCH("Red Hat Enterprise", MasterServerToApp[OS Trim])))))
Below are two screenshots which prove there are “0” Red Hat Enterprise devices, however the dashboard still shows a value of “1”
Thanks,
Connor
Permanently deleted log analytics workspace in Azure and how to recover it ?
Have permanently deleted log analytics workspace in Azure environment and need to recover the deleted workspace.
Reason# Recreated a new log analytics workspace but when tried to check under conditional access –> Insights and reporting
receiving an error message as insufficient permission and highlighting the deleted log analytics workspace.
/subscriptions/xxxxxxxxxxxxxxxxx/resourceGroups/rg-test-prod-uks-001/providers/Microsoft.OperationalInsights/workspaces/
Error code showing : 403 | Content : NewLogAnalyticsBlade
Any idea on how to recover the deleted workspace or how to fix this permission issue?
Impact: Not able to get audit logs from Conditional access policies to Log analytics workspace.
Thanks.
Scheduling a meeting with many required attendees
I’m trying to book a 2-hour meeting across 3 time zones (EST, PST, Mountain), and although the scheduling assistant is helpful, it’s still really tedious.
Does anyone know of a plugin or helper app that can do that work for me? Find two hours and then I can create the meeting?
Thanks
Join Our Post-Build AMA on Copilot for Microsoft 365 – May 23rd at 9 AM PDT
We are hosting a post-Build AMA event on Copilot for Microsoft 365 on Thursday, May 23rd, at 9 AM PDT / 12 PM EDT. This session will focus on the announcements made at Microsoft Build 2024 about Copilot for Microsoft 365. Be sure to RSVP and join us in the Copilot for Microsoft 365 Tech Community.
NEW outlook will NOT play sound when notifications appear.
I have looked absolutely everywhere and tried so many workarounds and settings changes, both in the NEW Outlook and the WEB Outlook, reverting back to the OLD Outlook and back again. I ENSURED my Windows notification settings, battery settings, and focus assist settings are all CORRECT. I have enabled notifications for Outlook on Windows and for all apps, and I have gone through ALL the old Outlook settings, the web and new (desktop) Outlook notification settings, and even my Windows control center’s sound settings.
I receive sounds for EVERY other app I have notifications set for, except for Outlook. I’m very unhappy with the new Outlook but am determined to continue using it, as I am now comfortable with it. I am using an HP Dragonfly 3 laptop with Windows 11. How difficult can it be to get the sound working properly? I work in tech and, as I’ve said, I’ve gone through ALL the Windows help forum discussions and many others outside of Windows/Outlook trying to find a solution.
Has anyone with this issue actually found a solution?
Need Help With Rule
Hi! I am trying to create a new rule that will successfully sort emails like the attached example into a separate folder. The problem is that there is nothing in either the subject line or the body of the email that is unique. In the attached example, what you see for the body of the message is the entirety of the message–there is nothing beyond the reference number line that I could key on. Every email produced by our ticketing system includes “Ref:MSG” followed by a number, so my rule cannot key on that without picking up a lot of other messages that I don’t want to catch with this rule.
In the attached example, the text “Short Description: Move” would be enough to make the rule work, but I cannot figure out how to get at that. It’s not part of the subject line, it’s not part of the body, and I don’t find it in the message headers. However, if I right-click on that text and select the View Source option, it appears to be HTML. I get this:
</head><body><div>Short Description: Move
Is there any way that I can use that in a rule?
Thanks for any help that you can offer!
–Tom
Duplicate Domains Different Tenants
We have an issue: we have a tenant registered as abwidget.com, and our internal AD is abw.com. Once we started using Azure AD sync (hybrid mode), we found out there is a company that registered abw.com as their domain. Now when our users try to log in, it directs them to the other company even though they are using abwidget.com to identify.
Introducing Model Customization for Azure AI
We are thrilled to announce the launch of our Model Customization for Azure AI, an engineering service designed to accelerate our co-innovation with customers to deliver tailored AI solutions. Our commitment to empowering our customers extends beyond the provision of tools and platforms; we are offering an opportunity for selected customers to collaborate closely with our engineering and research teams to develop custom models tailored to their unique domain-specific needs.
Custom models can offer significant benefits for enterprises and complement other techniques such as fine-tuning, retrieval-augmented generation (RAG), and prompt engineering by encapsulating specialized domain knowledge and understanding nuanced context in the domain. By refining the model’s parameters for a specific domain, custom models can improve accuracy and enhance the model’s ability to comprehend the subtleties of language and context, as well as specific domain knowledge. This refinement aids in better generalization within that domain, enabling the model to perform well with new data while minimizing overfitting. Additionally, custom models can increase robustness, equipping the model to handle diverse scenarios and protect against potential vulnerabilities. They can also incorporate safety and ethical considerations, ensuring responsible and fair AI behavior. Moreover, custom models will be able to enhance language proficiency by refining the model’s ability to process and generate text in a specific language. This can lead to more efficient use of tokens, resulting in smoother and more coherent language output.
Engineering Excellence Meets Domain Expertise
At the heart of this co-innovation approach lies the synergy between Microsoft’s engineering excellence and the domain expertise of our customers. We understand that the challenges faced in specialized fields require customized AI solutions to maximize value realization.
For businesses, the ability to leverage AI that precisely understands and operates within their specific context can be highly beneficial, as it not only understands the intricacies of a specific domain but can also enhance the capabilities within that sphere. This level of customization can potentially improve accuracy in tasks such as customer service, predictive analytics, and decision-making processes, directly contributing to improved operational efficiency and customer satisfaction. Additionally, custom-trained models are designed to handle tasks that require understanding complex, specialized knowledge, offering the possibility of enhanced performance over standard models in these scenarios.
Our Model Customization service offers customers the opportunity to work hand-in-glove with our world-class AI engineers. By collaborating closely, we can develop models that are uniquely tailored to specific business needs, leveraging advanced techniques and extensive expertise to ensure that AI solutions are both accurate and contextually relevant. That is why we are offering this paid-for service with our expert engineering and science resources to help our customers.
For more information, please reach out to your Microsoft representatives or account managers.
As we embark on this journey together, we are not just providing a service; we are creating innovations that can define the future of domain-specific AI applications.
Learn more about Azure AI
Build with Azure AI Studio: ai.azure.com
Get the latest Azure AI news and resources
Apply now for access to Azure OpenAI Service
Learn more about What’s new in Azure OpenAI Service?
If you are a current Azure OpenAI customer and would like to add additional use cases, fill out the Azure OpenAI Additional Use Case form.
Responsible AI: Transparency Note for Azure OpenAI Service
Microsoft Tech Community – Latest Blogs –Read More
Announcing SharePoint Embedded General Availability
Today we’re pleased to announce the general availability of SharePoint Embedded, a new way to build file and document centric apps. SharePoint Embedded allows you to integrate advanced Microsoft 365 features into your apps including full featured collaborative functions from Office, Purview’s security and compliance tools, and Copilot capabilities. It also helps you build both enterprise line of business apps and independent software vendor (ISV) apps. SharePoint Embedded is a metered service with pay-as-you-go pricing. In addition, we are also excited to announce a private preview of SharePoint Embedded custom copilot experiences.
View all documentation on Microsoft Learn and register here to stay up to date with the latest newsletters and upcoming webinars.
Enterprises today often have files and documents spread across multiple systems, all with different capabilities, lowering user satisfaction and increasing administrative complexity. SharePoint Embedded delivers Microsoft 365 superpowers as part of any app and consolidates all files and documents within a universal document layer. Apps that manage files and documents with SharePoint Embedded have a common set of collaboration, compliance, security, and AI capabilities, all designed to delight users and admins.
SharePoint Embedded is a headless, API-only version of SharePoint, specifically built for apps. SharePoint Embedded introduces the ability for an app developer to create and manage a dedicated partition for their app within their Microsoft 365 tenant. This partition is logically separated from existing storage areas like SharePoint Online and OneDrive, but integrated with core Microsoft 365 services, including Office co-authoring, search, compliance, Copilot, business continuity, and more. And, since it’s a pay-as-you-go service, apps built on it have their own limits around things like API transaction rates, rather than being part of shared Microsoft 365 limits. SharePoint Embedded apps build and manage their own user experience layer and are managed by admins through familiar Microsoft 365 admin centers. ISVs can now create their own partitions within a customer’s M365 tenant, surfacing the same capabilities as part of their app. With an ISV app, tenants remain in control of their documents, and tenant-specific compliance settings such as retention periods automatically apply.
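As a rough sketch of what the API-only model looks like in practice, the snippet below assembles the request for creating a storage container via the Microsoft Graph fileStorage containers endpoint. Token acquisition is omitted, and the display name, description, and containerTypeId are placeholders you would replace with values from your own app registration.

```python
import json

# Hypothetical sketch: build the POST request that creates a SharePoint
# Embedded storage container through Microsoft Graph. Authentication is not
# shown; the containerTypeId below is a placeholder, issued when you register
# your app's container type.

GRAPH_URL = "https://graph.microsoft.com/v1.0/storage/fileStorage/containers"

def build_container_request(display_name, description, container_type_id):
    """Return the (url, body) pair for a container-creation POST."""
    body = {
        "displayName": display_name,
        "description": description,
        "containerTypeId": container_type_id,
    }
    return GRAPH_URL, json.dumps(body)

url, body = build_container_request(
    "Invoices",
    "Per-customer document partition",
    "00000000-0000-0000-0000-000000000000")  # placeholder container type
print(url)
print(body)
```

Each container the app creates is its own logically separated partition, which is why the container type, rather than a site URL, is the anchor of the request.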
Building a file and document centric application presents unique challenges, from compliance to collaboration to AI. SharePoint Embedded handles all of this and simplifies and accelerates your file and document management roadmap, for any app. Developers leverage the robust and secure document management features of Microsoft 365 without the need to build or maintain their own infrastructure. IT professionals benefit from centralized administration and governance, ensuring compliance and security across all applications that use it. Users get the collaboration experience and productivity tools they love.
Teams at Microsoft already use SharePoint Embedded to provide apps like Microsoft Loop and Microsoft Designer with rich file and document management capabilities for use around the world. When you choose SharePoint Embedded, you’re using the exact same platform that Microsoft uses to build our own apps.
Many customers and partners like KPMG, Peppermint Technologies, BDO, AvePoint and more are already working with SharePoint Embedded to solve common business process and content management problems.
Proventeq, a long time Microsoft partner, is using SharePoint Embedded to build apps that help customers rationalize their document management footprint into a universal document layer powered by Microsoft 365.
“SharePoint Embedded is a great approach to managing documents originating in systems outside of Microsoft 365,” said Rakesh Chenchery, Chief Technology Officer at Proventeq, whose product Proventeq Document Management for Salesforce is generally available today. “SharePoint Embedded was simple to integrate into our existing app and gives us a high-performance solution with the easy to manage security and rich collaboration tools our customers are looking for.“
Announcing custom copilot experiences for SharePoint Embedded
In addition to out of the box integration with Microsoft 365 Copilot, today we are pleased to announce that custom copilot experiences based on your SharePoint Embedded managed data and built on the Copilot platform are now in private preview. With custom copilot experiences, you can create robust interactions with your SharePoint Embedded managed data, and easily surface these within your app. If you would like to nominate your company for the SharePoint Embedded custom copilot private preview, please complete this form.
Resources
Discover a new way of building and operating apps with SharePoint Embedded at SharePoint Embedded Overview | Microsoft Learn.
Learn more about SharePoint Embedded development on the Microsoft 365 Community Call SharePoint Embedded playlist.
Watch the SharePoint Embedded announcement at Microsoft BUILD.
Join the next SharePoint Embedded webinar here.
Register here to stay up to date with the latest from the SharePoint Embedded team.
General Availability of license-free standby replica for Azure SQL database
We are excited to announce the general availability of the license-free standby replica for Azure SQL Database, letting you save on licensing costs by designating your secondary disaster recovery database as a standby replica. License costs typically account for about 40% of the total, so with a license-free standby replica the secondary is about 40% less expensive.
To protect the database powering your application from region failures and achieve higher business continuity, it is crucial to enable disaster recovery for the database. In some industries, having disaster recovery in place and conducting frequent drills is mandatory and part of compliance requirements. One of the biggest hindrances to enabling disaster recovery has been cost, since the secondary database is used mainly in the event of a disaster.
When a secondary database replica is used only for disaster recovery, and doesn’t have any workloads running on it, or applications connecting to it, you can save on licensing costs by designating the database as a standby replica. Microsoft provides you with the number of vCores licensed to the primary database at no extra charge under the failover rights benefit in the product licensing terms for standby replica. You’re still billed for the compute and storage that the secondary database uses.
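To make the billing split concrete, here is a small cost sketch. All the rates are made-up illustrative numbers, not Azure prices; the point is only that the standby replica drops the license component while compute and storage are still billed.

```python
# Illustrative cost model (hypothetical per-month rates, not Azure pricing):
# under the failover rights benefit, the standby replica is not billed for the
# SQL license, only for compute and storage.

def monthly_cost(vcores, compute_per_vcore, license_per_vcore,
                 storage_gb, storage_per_gb, standby=False):
    """Rough monthly cost; the license fee is waived on a standby replica."""
    compute = vcores * compute_per_vcore
    license_fee = 0 if standby else vcores * license_per_vcore
    storage = storage_gb * storage_per_gb
    return compute + license_fee + storage

primary = monthly_cost(8, 100, 67, 250, 0.12)                  # licensed
standby = monthly_cost(8, 100, 67, 250, 0.12, standby=True)    # license waived
print(f"primary: {primary:.2f}, standby: {standby:.2f}")
print(f"standby saves {(1 - standby / primary):.0%}")
```

With these example rates the standby comes out roughly 40% cheaper, in line with the license share of the total cost described above.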
The standby database replica must only be used for disaster recovery. The following lists the only activities that are permitted on the standby database:
Perform maintenance operations, such as checkDB
Connect monitoring applications
Run disaster recovery drills
You can designate one secondary database (single database deployment model) as a license-free standby replica in the General Purpose and Business Critical service tiers with the provisioned compute tier. You can configure the license-free standby replica using the Azure portal, PowerShell, or the Azure CLI.
Additional capabilities added for the general availability release are:
Perform an in-place update of a geo replica to a standby replica using the portal and REST API.
Assign a standby replica while creating a failover group using the portal and REST API.
Estimate the cost of a standby replica by using the Azure pricing calculator and selecting Standby replica in the Disaster Recovery dropdown.
For comprehensive details on the license-free standby replica, including limitations and frequently asked questions, please refer to the documentation.
The #1 factor in ADX/KQL database performance
The #1 factor in ADX/KQL database performance
In Power BI or any other tool
In this article I’ll show many variations of a query executed on a large table that contains public events arriving at GitHub.
The query summarizes data for 10 or 20 days and I compare the CPU consumption of the query in different syntax variations.
I mention only CPU time and not execution time because execution can vary by the cluster size and load on the cluster.
My purpose is to demonstrate how the query performs well when the date filter is used by the engine to limit the number of scanned extents (aka shards).
In some cases, the query scans all extents, and it takes a lot of CPU.
In other cases, only a small subset of the extents are scanned and performance is good.
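The pruning behavior can be sketched with a toy model (my own illustration, not the ADX implementation): every extent keeps min/max metadata for the datetime column, so a filter expressed directly on that column lets the engine skip extents without reading them, while a predicate on a computed expression gives the engine no usable bounds and forces a full scan.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Toy model of extent pruning: each extent records the min/max CreatedAt of
# the rows it holds, and a range filter on CreatedAt can eliminate extents
# whose [min, max] interval does not overlap the filter window.

@dataclass
class Extent:
    min_created: datetime
    max_created: datetime

# 120 one-day extents covering roughly four months of ingestion.
start = datetime(2024, 1, 1)
extents = [Extent(start + timedelta(days=i), start + timedelta(days=i + 1))
           for i in range(120)]

def extents_to_scan(extents, lo, hi):
    """Keep only extents whose range overlaps the filter window."""
    return [e for e in extents if e.max_created >= lo and e.min_created <= hi]

lo, hi = datetime(2024, 4, 1), datetime(2024, 4, 1) + timedelta(days=20)
print(len(extents_to_scan(extents, lo, hi)))  # small subset of the extents
# A predicate on datetime_add('hour', 2, CreatedAt) cannot be mapped back to
# bounds on CreatedAt in this model, so every extent would be scanned:
print(len(extents))
```

This is exactly the difference between the cheap and expensive query variants below: the filter either reaches the raw CreatedAt column or it does not.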
In a follow-up article I’ll explain how Power BI and ADX dashboards can be used to filter and join tables in an optimal way.
Queries on a single table
1. The query summarizes 10 days of data.
An element is extracted from a JSON structure and a distinct count operation is performed on the extracted value. These two operations contribute significantly to the overall cost.
Above each query you can see the CPU seconds, the volume of scanned data and the number of scanned extents.
// 6.53 1.98GB 128
EventsFromLiveStream
| where CreatedAt between(datetime(2024-4-1)..10d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
2. The same for 20 days. The cost is almost exactly double, which is expected.
This is the benchmark against which we can compare all other variations.
// 12.5 3.63GB 132
EventsFromLiveStream
| where CreatedAt between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
3. A function is applied to the datetime column, so the effect of filtering is lost. All data is scanned and the cost is 4 times higher.
// 49.87 8.67 all
EventsFromLiveStream
| extend shiftdata=datetime_add('hour',2,CreatedAt)
| where shiftdata between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
4. Another variation of shifting the datetime 2 hours forward and then filtering. It is equally bad as #3.
// 49.3 8.67 all
EventsFromLiveStream
| extend shiftdata=CreatedAt + 2h
| where shiftdata between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
5. Another function (bin) is applied to the datetime column, but this time the filter is applied correctly. Cost is a bit higher because the bin function itself needs to be calculated.
// 13.42 3.79GB 132
EventsFromLiveStream
| extend Day=bin(CreatedAt,1d)
| where Day between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
6. Same as #5; startofday and startofmonth are also handled correctly.
// 13.51 3.79GB 132
EventsFromLiveStream
| extend Day=startofday(CreatedAt)
| where Day between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
7. The worst-case scenario – 45 times slower than the base.
The datetime value is shifted using a very expensive function that needs to be applied to all rows, and the filter cannot be used to limit the scan.
In this case filtering on 10 days or 20 days costs the same because almost all the CPU is spent on the datetime_utc_to_local function.
// 9:51.67 8.7GB all
EventsFromLiveStream
| extend LocalTime=datetime_utc_to_local(CreatedAt,'America/Buenos_Aires')
| where LocalTime between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
8. Shifting the filter range instead of shifting the data.
Cost is back to base.
Notice that leaving in the statement that calculates the local time doesn’t cost anything: because the result is not used, it is never calculated.
// 19.5 3.53GB 154
EventsFromLiveStream
| extend LocalTime=datetime_utc_to_local(CreatedAt,'America/Buenos_Aires')
| where CreatedAt between(datetime_local_to_utc(datetime(2024-4-1),'America/Buenos_Aires')..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
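The trick in query 8, converting the local window boundary to UTC once instead of converting every row, can be sketched in Python with the standard zoneinfo module, the rough equivalent of datetime_local_to_utc:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

# Shift the filter window instead of shifting the data: translate the local
# start of the reporting window to UTC once, then filter the raw UTC
# CreatedAt column against those bounds.

def local_window_to_utc(local_start, days, tz_name):
    """Translate a [local_start, local_start + days) window into UTC bounds."""
    tz = ZoneInfo(tz_name)
    utc_start = local_start.replace(tzinfo=tz).astimezone(ZoneInfo("UTC"))
    return utc_start, utc_start + timedelta(days=days)

lo, hi = local_window_to_utc(datetime(2024, 4, 1), 20, "America/Buenos_Aires")
print(lo.isoformat())  # 2024-04-01T03:00:00+00:00 (Buenos Aires is UTC-3)
```

One conversion per query keeps the predicate on the stored column, so extent pruning still applies, while per-row conversion (as in query 7) does not.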
9. Add another where clause on the base datetime column.
Still more expensive, but not by a big margin.
Notice that although the filter on the original column appears after the calculation of the shifted datetime value, it is executed before it, so only a small subset of the data is actually shifted.
// 19.5 3.63GB 154
EventsFromLiveStream
| extend LocalTime=datetime_utc_to_local(CreatedAt,'America/Buenos_Aires')
| where CreatedAt between (datetime(2024-3-30) ..21d )
| where LocalTime between(datetime(2024-4-1)..20d)
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
Applying the filter on the left side of a join
10. A dates table is joined with the Events table.
The dates table is on the left side of the join.
The filter on the dates table is applied to the right side of the join when the filter uses in or ==.
// 1:16 8.14GB 134
let Calendar = range Day from datetime(2024-1-1) to datetime(2024-12-31) step 1d;
Calendar | where Day in(
datetime(2024-04-01T00:00:00Z),
datetime(2024-04-02T00:00:00Z),
datetime(2024-04-03T00:00:00Z),
datetime(2024-04-04T00:00:00Z),
datetime(2024-04-05T00:00:00Z),
datetime(2024-04-06T00:00:00Z),
datetime(2024-04-07T00:00:00Z),
datetime(2024-04-08T00:00:00Z),
datetime(2024-04-09T00:00:00Z),
datetime(2024-04-10T00:00:00Z),
datetime(2024-04-11T00:00:00Z),
datetime(2024-04-12T00:00:00Z),
datetime(2024-04-13T00:00:00Z),
datetime(2024-04-14T00:00:00Z),
datetime(2024-04-15T00:00:00Z),
datetime(2024-04-16T00:00:00Z),
datetime(2024-04-17T00:00:00Z),
datetime(2024-04-18T00:00:00Z),
datetime(2024-04-19T00:00:00Z),
datetime(2024-04-20T00:00:00Z))
| join kind=inner hint.strategy=broadcast
(EventsFromLiveStream | extend Day=startofday(CreatedAt)) on Day
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
11. When the filter on the left side uses between, <, or >, it is not applied to the right side of the join.
The results are correct, but performance is bad.
// 47:40 206.5GB all
let Calendar = range Day from datetime(2024-1-1) to datetime(2024-12-31) step 1d;
Calendar | where Day between(datetime(2024-4-1)..20d)
| join kind=inner hint.strategy=broadcast
(EventsFromLiveStream | extend Day=startofday(CreatedAt)) on Day
| extend Login=tostring(Actor.login)
| summarize count(),dcount(Login) by Type
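Since a between filter on the left side is not pushed across the join, one workaround is to materialize the window as an explicit in() list, which is exactly what query 10 does. A throwaway helper (my own snippet, not part of ADX) can generate that list instead of typing it by hand:

```python
from datetime import datetime, timedelta

# Generate the explicit in() list of day literals used in query 10; an
# equality/in filter is pushed to the right side of the join while a
# between/range filter is not.

def kql_day_list(start, days):
    return ",\n".join(
        f"datetime({(start + timedelta(days=i)).strftime('%Y-%m-%dT%H:%M:%SZ')})"
        for i in range(days))

print(kql_day_list(datetime(2024, 4, 1), 20))
```

Pasting the generated literals into the Calendar filter turns the slow query 11 shape back into the fast query 10 shape.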
12. An improvement on the join: extract the Login value inside the parentheses of the right-side table. The join still has a cost, but the overall cost is much less than in #10.
The reason is that the dynamic column Actor does not need to be part of the join, only the Login value.
// 24 8.33GB 150
let Calendar = range Day from datetime(2024-1-1) to datetime(2024-12-31) step 1d;
Calendar | where Day in(
datetime(2024-04-01T00:00:00Z),
datetime(2024-04-02T00:00:00Z),
datetime(2024-04-03T00:00:00Z),
datetime(2024-04-04T00:00:00Z),
datetime(2024-04-05T00:00:00Z),
datetime(2024-04-06T00:00:00Z),
datetime(2024-04-07T00:00:00Z),
datetime(2024-04-08T00:00:00Z),
datetime(2024-04-09T00:00:00Z),
datetime(2024-04-10T00:00:00Z),
datetime(2024-04-11T00:00:00Z),
datetime(2024-04-12T00:00:00Z),
datetime(2024-04-13T00:00:00Z),
datetime(2024-04-14T00:00:00Z),
datetime(2024-04-15T00:00:00Z),
datetime(2024-04-16T00:00:00Z),
datetime(2024-04-17T00:00:00Z),
datetime(2024-04-18T00:00:00Z),
datetime(2024-04-19T00:00:00Z),
datetime(2024-04-20T00:00:00Z))
| join kind=inner hint.strategy=broadcast
(EventsFromLiveStream | extend Day=startofday(CreatedAt),Login=tostring(Actor.login)) on Day
| summarize count(),dcount(Login) by Type
Partner Case Study Series | proMX: Dynamics 365 add-ons improve project management at Interflex
Helping businesses improve efficiency and achieve digital transformation
proMX is a Microsoft partner headquartered in Nuremberg, Germany. Since its founding in 2000, proMX has helped small and large businesses transform themselves into digital organizations, and it has supported them in their efforts to become more efficient. One of the ways proMX does this is through applications designed for Dynamics 365. Numerous add-ons, such as Time Tracking for Dynamics 365 Project Service Automation, are available on a free-trial basis on Microsoft AppSource.
proMX reports that, across several companies, the integration of its project management add-ons has led to improvements regarding administrative working hours, project documentation costs, and, most significantly, per-employee capacity utilization. On average, revenue per employee has increased by 15 percent, while costs have remained stable.
Continue reading here
**Explore all case studies or submit your own**
What’s New in Azure App Service at Build 2024
Welcome to Build 2024!
The team will be covering the latest AI enhancements for migrating web applications, how AI helps developers monitor and troubleshoot applications, examples of integrating generative AI into both classic ASP.NET and .NET Core apps, and platform enhancements for scaling, load testing, observability, WebJobs, and sidecar extensibility.
Drop by the breakout session “Using AI with App Service to deploy differentiated web apps and APIs” on Thursday May 23rd (12:30PM to 1:15PM Pacific time – BRK125 – In-Person and Online) to see live demonstrations of all of these topics!
Azure App Service team members will also be in attendance at the Expert Meetup area on the fifth floor – drop by and chat if you are attending Build in-person!
There are additional demos and presentations from partner teams that will cover (in part) App Service specific scenarios, so if you have time consider the additional sessions as well!
Using AI with App Service to deploy differentiated web apps and APIs
BRK125
Thursday, May 23rd
12:30 PM – 1:15 PM Pacific Daylight Time
Breakout Session – In-Person and Online
App innovation in the AI era: cost, benefits, and challenges
BRK120
Tuesday, May 21st
4:45 PM – 5:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
Conversational app and code assessment in Azure Migrate
DEM713
Wednesday, May 22nd
10:30 AM – 10:45 AM Pacific Daylight Time
Demo Session – In-Person Only
Leverage Azure Testing Services to build high quality applications
BRK183
Thursday, May 23rd
1:45 PM – 2:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
Vision to value – SAS accelerates modernization at scale with Azure
BRK170
Thursday, May 23rd
1:45 PM – 2:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
GitHub Copilot Skills for Azure Migrate
In a recent IDC study of 900 IT decision makers worldwide, 74% of the respondents cited faster innovation, faster time to market, and/or improved business agility as one of the top benefits driving the business case for migrating and modernizing apps with a managed cloud service. Microsoft has been continuously investing in first party tools to make it easier and faster to migrate using the tools you already use and love. We are excited to announce that Azure Migrate application and code assessment, which was released at Microsoft Ignite 2023, now adds GitHub Copilot Chat enhancement to the Visual Studio migration extension!
Once you have the updated migration extension installed in Visual Studio, as well as enabling the Visual Studio GitHub Copilot Chat extension, GitHub Copilot Chat will guide you through the individual items found in the application migration report. You can ask questions like “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” and get answers and recommendations from Azure Migrate. (Note: GitHub Copilot licenses sold separately).
You can get started by clicking on the “Open Chat” button in the compatibility report as shown below.
This will open an interactive chat session where you can chat with Copilot to iterate through the various assessment suggestions. In this example the migration report recommends moving secrets like database connection strings out of web.config or code, and into a secure location such as Azure Key Vault.
You can interactively step through recommended remediations for each issue:
In this example, after selecting “No, I don’t have an Azure Key Vault…”, Copilot will show the commands necessary to set up Key Vault in Azure:
You can continue to walk through all of the migration suggestions and issues found in the assessment report in this manner, leveraging Copilot to provide specific steps, CLI commands, and code remediations to prepare your application for migration into Azure!
Sidecar Scenarios in Azure App Service on Linux
Sidecar patterns are a way to add extra features to an application, such as logging, monitoring and caching, without changing the application’s core code. Sidecar support for container based applications on Azure App Service on Linux is now in public preview! Public preview for using sidecars with source-code based applications is expected to be available this summer.
Common scenarios include attaching monitoring solutions to your application, including popular third-party application performance monitoring (APM) offerings. This example shows a container-based application configured with an OpenTelemetry (OTel) collector sidecar which exports metrics to OTel compatible targets. There are also additional examples showing how to integrate with commonly used ISV solutions such as Datadog with your web applications.
Other common scenarios include attaching a sidecar for in-memory caching using Azure Cache for Redis, and attaching a vector cache sidecar to reduce traffic to back-end LLM resources when adding generative AI to your application.
At Microsoft Build 2024, breakout session BRK125 includes demonstrations of sidecar scenarios for both container-based and source-code based applications!
WebJobs for Azure App Service on Linux
WebJobs are background tasks that run on the same server as the web app and can perform various functions, such as sending emails, executing bash scripts, and running scheduled jobs. WebJobs are now integrated with Azure App Service on Linux, which means they share the same compute resources as the web app to help save costs and ensure consistent performance. WebJobs support for both Azure App Service on Linux as well as Windows Containers on Azure App Service is broadly available in public preview.
WebJobs enable developers to easily run arbitrary code and scripts, in the language of their choice, on a variety of schedules including continuously, manually on-demand, or on a periodic schedule defined via a crontab expression. For example, Linux developers can continuously run shell scripts that perform background “infra-glue” tasks like scanning through a back-end database and sending email reports.
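As an illustrative sketch (the task and directory are made up), a scheduled WebJob is simply a script deployed to the WebJobs folder alongside a settings.job file that holds the NCRONTAB schedule, for example { "schedule": "0 0 8 * * *" } for a daily 08:00 run:

```python
#!/usr/bin/env python3
# Illustrative scheduled WebJob: count the files waiting in a (hypothetical)
# report directory and print a one-line summary. The schedule itself lives in
# the adjacent settings.job file, e.g. { "schedule": "0 0 8 * * *" }.
import os

def summarize(directory="."):
    """Return a short status line for the files in the given directory."""
    files = [f for f in os.listdir(directory) if not f.startswith(".")]
    return f"{len(files)} file(s) pending in {directory}"

if __name__ == "__main__":
    # App Service captures stdout into the WebJob run logs.
    print(summarize())
```

Anything written to stdout shows up in the WebJob run history, which makes this pattern convenient for the background “infra-glue” tasks described below.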
The full list of supported scripting options, as well as information on how to run jobs in a specific language, is available in the updated WebJobs documentation.
Automatic Scaling on Azure App Service
We’re happy to announce that the automatic scaling feature in Azure App Service is now generally available! Automatic scaling provides significant performance improvements for any web app without writing new code or making code changes. With this feature, Azure App Service automatically adjusts the number of application instances and worker instances based on dynamically assessing the incoming HTTP request rate and observed load on the underlying app service plan.
We improved the Automatic Scaling feature based on your feedback during the preview phase with expanded SKU availability and a new scaling metric:
Automatic scaling expanded support to encompass the P0v3 and P*mv3 SKUs.
A new metric called “AutomaticScalingInstanceCount” was added which shows the number of worker instances your application is consuming.
Let Azure App Service adjust the worker count of your App Service plan to match your web application load, without worrying about auto-scale profiles or manual control. It is like an “automatic cruise control” for your web apps! Also check out our community standup to see this feature in action!
Four Nines’ Resiliency is Kind of a Big Deal!
As of May 1st Azure App Service officially supports 99.99% resiliency when your app service plan is running in an Availability Zone based configuration! Availability Zones are isolated locations within an Azure region that provide high availability and fault tolerance. Please refer to the Service Level Agreement (SLA) documentation dated May 01, 2024 to learn more about the higher SLA.
Azure App Service Environment version 3: New and Notable
For customers using the Isolatedv2 SKU on App Service Environment v3 (ASEv3) with Windows, the new memory-optimized pricing tiers, denoted with an ‘m’ such as in Imv2, are now available and can be configured using the Azure CLI as well as ARM/Bicep! The memory optimized tiers provide a higher memory-to-core ratio than their regular counterparts. For instance, in one of the larger Isolated v2 tiers, both I5v2 and I5mv2 provide the same number of cores at 32 vCPU, but the memory-optimized tier has double the RAM at 256GB. Support for Linux and Windows Containers is expected to be available later this year. Portal support for Windows source-code based apps running on ASEv3 will also be available shortly after Build! Please refer to the product documentation to learn more about the new tiers and availability.
Friendly Reminder: While on the subject of Azure App Service Environment, allow me to rerun our public service announcement about the upcoming retirement of Azure App Service Environment v1 and v2 on August 31, 2024. We recommend starting the migration process as soon as possible (time is quickly running out!). Many customers have already completed this migration with little to no downtime. Please visit the product documentation for detailed steps, tools, and useful resources to help you. Our next community standup scheduled for June 5th will also cover this in detail.
TLS 1.3 and More!
We are pleased to announce that TLS 1.3 has been rolled out worldwide and is now generally available across App Service on Public Cloud and Azure for US Government! Customers can configure an application to require TLS 1.3 via the minimum TLS setting available in the Azure portal, as well as via ARM.
With the availability of TLS 1.3, App Service has also updated the TLS cipher suite order to account for recommended TLS 1.3 cipher suites. You will see the following two TLS cipher suites listed on the minimum TLS cipher suite feature:
TLS_AES_256_GCM_SHA384
TLS_AES_128_GCM_SHA256
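On the client side, the same minimum-version requirement can be sketched with Python's standard ssl module; this configures a local context only and does not talk to App Service:

```python
import ssl

# Require TLS 1.3 on a client context, mirroring the App Service "minimum TLS
# version" setting. The two suites listed in the article are fixed TLS 1.3
# cipher suites defined by the protocol itself.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_3

tls13_suites = {"TLS_AES_256_GCM_SHA384", "TLS_AES_128_GCM_SHA256"}
enabled = {c["name"] for c in context.get_ciphers()}
print(tls13_suites & enabled)
```

A client configured this way will refuse to negotiate anything below TLS 1.3, which is useful for verifying an app that has been pinned to the new minimum.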
As part of the TLS updates, App Service on both Windows and Linux supports End to End (E2E) TLS Encryption (in public preview). Incoming HTTPS requests are usually terminated at the App Service front-ends, with the requests proxied to individual workers over HTTP. With the updated E2E TLS Encryption feature, both Windows and Linux applications can choose to encrypt the requests between the App Service front-ends and the workers running applications. E2E TLS Encryption is available for Standard App Service Plans and above, and can be enabled in the Azure portal as well as via ARM and Azure CLI.
If you have an Azure Key Vault that uses Azure role-based access control (RBAC), you can now import that Key Vault certificate to your web app. Because newly created Key Vaults are configured to use RBAC by default, instead of the legacy access policies, this new support in Azure App Service will make it easier for you to integrate your Key Vault certificates with App Service. Support for importing certificates into App Service from Key Vault using RBAC permissions is available via ARM and the Azure CLI, with Azure portal support planned for the future. Developers can read more about this new support in the documentation.
For more information regarding TLS 1.3 on App Service, the new minimum cipher suites, and updates to E2E TLS Encryption refer to the all-inclusive article on the Microsoft Community Hub!
Better Together with Recommended Azure Service
You can now find recommendations in the Azure Portal for services commonly deployed with Azure App Service! The initial list is curated and primarily focuses on connecting newly created Azure resources to your existing App Service applications. An example of recommended services is shown below.
In addition to the curated list, the new Recommended Services capability in Copilot for Azure offers quick recommendations tailored to your specific application. For instance, it can suggest a popular database suitable for your application type or ensure that you are “on the right track” with commonly deployed services, drawing insights from similar applications.
To use the new Copilot integrated capability, navigate to the Azure Portal and open Copilot for Azure. Examples of the types of questions that you can ask include: “What are commonly deployed services for my app?” or “What is the recommended database for my app?” Read more about these capabilities and try out the new Recommended Services Copilot capability today!
Azure Load Testing Integration
How many times has new code been released to production only to encounter unexpected performance-related problems? With the recent release of Azure Load Testing integration with Azure App Service, there has never been a better time to run load tests on your web applications. Discover performance problems before they reach production, and uncover race conditions and other load-related bugs ahead of time!
You can start setting up load tests directly from the Overview page of your web applications.
As part of this, you configure one or more URLs to include in the test run.
You also configure the size of the load test, along with other parameters governing startup behavior and load test duration. After the load test is completed, you will see summarized results for the specific load test where you can also drill down to more detailed metrics.
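The summarized results center on aggregate latency figures. As a rough illustration of the kind of aggregation such a results view performs (not the Azure Load Testing output format itself), average and percentile latencies can be computed from raw samples like this:

```python
from statistics import mean

def summarize(latencies_ms):
    """Summarize a load-test run the way a results view might:
    average plus nearest-rank p50/p95 latency (all in milliseconds)."""
    s = sorted(latencies_ms)

    def pct(p):
        # nearest-rank percentile: item at index ceil(p/100 * n) - 1
        return s[max(0, -(-len(s) * p // 100) - 1)]

    return {"avg": round(mean(s), 1), "p50": pct(50), "p95": pct(95)}

# Ten hypothetical response-time samples from a short run
samples = [120, 95, 110, 300, 105, 98, 130, 101, 99, 450]
print(summarize(samples))  # → {'avg': 160.8, 'p50': 105, 'p95': 450}
```

Tail percentiles like p95 are usually more telling than the average: a single slow endpoint (the 450 ms sample above) barely moves the mean but dominates the tail.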
For more advanced, high-scale production workloads, the Azure Load Testing integration also makes it easy for developers to experiment with different scaling strategies and compare the results to achieve the desired workload performance.
Language and Deployment Updates
App Service regularly updates major and minor language versions across both the Windows and Linux variants of the platform. As part of that continuing cadence, App Service on Linux released PHP 8.3 just last week! And last month, WordPress on Linux App Service GA’d the Free Tier option, which includes a twelve-month no-cost backend database running on Azure Database for MySQL!
For the technically curious, there is also a great write-up here on how to use WordPress on App Service as a headless CMS back-end in conjunction with Azure Static Web Apps.
gRPC has been generally available for App Service on Linux since last November. We’re happy to announce that gRPC support is now in public preview for App Service on Windows! The team demonstrated gRPC on both Windows and Linux at the recently concluded .NET Day 2024.
Azure App Service on Linux has also added a new deployment status tracking API that surfaces detailed deployment log information when deploying source-code-based applications. The API reports step-by-step progress, including specific failure information, a link to more detailed deployment failure logs, and post-deployment app startup information. The platform is continuing to expand this capability, with additional Azure Portal integration planned. For more details on the new deployment status tracking API and guidance on how to use it, see this article!
Next Steps
Developers can learn more about Azure App Service at Getting Started with Azure App Service. Stay up to date on new features and innovations on Azure App Service via Azure Updates as well as the Azure App Service (@AzAppService) X feed. There is always a steady stream of great deep-dive technical articles about App Service as well as the breadth of developer focused Azure services over on the Apps on Azure blog.
Take a look at innovation with .NET, and .NET on Azure App Service, with the recently completed .NET Day 2024 event, where the new code assessment migration tools were demonstrated along with gRPC functionality running on both Windows and Linux App Service.
And lastly take a look at Azure App Service Community Standups hosted on the Microsoft Azure Developers YouTube channel. The Azure App Service Community Standup series regularly features walkthroughs of new and upcoming features from folks that work directly on the product!
Microsoft Tech Community – Latest Blogs –Read More
Azure SQL DB availability portal metric
Azure SQL Database is the modern cloud-based relational database service powering a wide variety of applications, including mission-critical, resource-intensive, and the latest generative AI applications. Azure SQL Database provides an industry-leading availability SLA of 99.99%. We know customers want to monitor the availability of critical Azure services like Azure SQL Database in a granular, consistent way, in near real time, and with high-quality data.
We are excited to announce the public preview of the Availability portal metric, enabling you to monitor SLA-compliant availability. This Azure Monitor metric is emitted at a 1-minute frequency and retains up to 93 days of history. Typically, the latency to display availability is less than three minutes. You can visualize the metric in Azure Monitor and set up alerts, too.
Availability is determined based on the database being operational for connections. A minute is counted as downtime (unavailable) if all continuous attempts by users to establish a connection to the database within that minute fail due to a service issue. With intermittent unavailability, the period of continuous unavailability must cross a minute boundary to be counted as downtime.
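The per-minute rule above can be sketched as a toy calculation. This is illustrative only, not the service’s actual implementation: each minute is reduced to a single flag, with a minute marked unavailable only when every connection attempt in it failed due to a service issue.

```python
def monthly_availability(minute_ok):
    """Percentage of minutes that were available. Each entry is True if
    at least one connection attempt in that minute succeeded (or no
    service issue occurred), and False if every attempt in the minute
    failed due to a service issue -- mirroring the per-minute downtime
    rule described above."""
    if not minute_ok:
        return 100.0
    down = sum(1 for ok in minute_ok if not ok)
    return 100.0 * (len(minute_ok) - down) / len(minute_ok)

# A 30-day month has 43,200 minutes; simulate 4 minutes of downtime.
minutes = [True] * 43_200
for m in range(100, 104):
    minutes[m] = False
print(round(monthly_availability(minutes), 4))  # → 99.9907
```

Note how coarse the minute granularity makes the 99.99% target: in a 30-day month, just five whole minutes of downtime would already breach the SLA.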
Availability metric data applies to databases in both the DTU and vCore purchasing models and in all service tiers (Basic, Standard, Premium, General Purpose, Business Critical, and Hyperscale). Both singleton and elastic pool deployments are supported. You can monitor the metric by adding the Availability metric in the portal as shown below:
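Since this is an Azure Monitor metric, it can also be queried from the CLI. A sketch, with a placeholder resource ID and assuming the metric name exposed during the preview is `availability`:

```shell
# Query the per-minute availability metric for a database; the resource
# ID below is a placeholder and the metric name is our assumption from
# the preview -- confirm it in the metric documentation.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Sql/servers/my-server/databases/my-db" \
  --metric availability \
  --interval PT1M
```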
For comprehensive details on the Availability metric, such as the logic used to compute availability, refer to the documentation. To learn more about Azure SQL Database Service Level Agreements (SLAs), refer to the SLA page.
Leveraging Azure AI Services to Build, Deploy, and Monitor AI Applications with .NET
Azure AI services offer robust tools and platforms that enable developers to bring their AI solutions from concept to production seamlessly. Using .NET 8 alongside these services, developers can experiment, build, and scale their AI applications effectively. This post explores how you can harness the power of Azure AI and .NET to transform your ideas into production-ready AI solutions.
From Prototyping to Production with Azure AI
Start your AI journey by experimenting with local prototypes using Azure AI’s extensive suite of tools. Azure Machine Learning and Azure Cognitive Services provide the necessary components to plug in different AI models and build comprehensive solutions. When you’re ready to scale, Azure OpenAI Service and .NET Aspire enable you to run and monitor your applications efficiently, ensuring high performance and reliability.
Why Build AI Apps with Azure AI Services?
Integrating AI into your applications with Azure AI offers numerous benefits:
Enhanced User Engagement: Deliver more relevant and satisfying user interactions.
Increased Productivity: Automate tasks to save time and reduce errors.
New Business Opportunities: Create innovative, value-added services.
Competitive Advantage: Stay ahead of market trends with cutting-edge AI capabilities.
Getting Started with Azure AI and .NET
Explore the new Azure AI and .NET documentation to learn core AI development concepts. These resources include quickstart guides to help you get hands-on experience with code and start building your AI applications.
Utilizing Semantic Kernel
Semantic Kernel, an open-source SDK, simplifies building AI solutions by enabling easy integration with various models like OpenAI, Azure OpenAI, and Hugging Face. It supports connections to popular vector stores such as Weaviate, Pinecone, and Azure AI Search. By providing common abstractions for dependency injection in .NET, Semantic Kernel allows you to experiment and iterate on your apps with minimal code impact.
Testing and Monitoring with .NET Aspire
.NET Aspire offers robust support for debugging and diagnostics, leveraging the .NET OpenTelemetry SDK. It simplifies the configuration of logging, tracing, and metrics, making it easy to monitor your applications. Azure Monitor and Prometheus can be used to keep an eye on your production deployments, ensuring your applications run smoothly.
Real-World Example: H&R Block’s AI Tax Assistant
H&R Block has developed an innovative AI Tax Assistant using .NET and Azure OpenAI, transforming how clients handle tax-related queries. This assistant provides personalized advice and simplifies the tax process, showcasing the capabilities of Azure AI in building scalable, AI-driven solutions. This project serves as an inspiring example for developers looking to integrate AI into their applications.
Join H&R Block at Microsoft Build as they discuss their journey and experience building AI with .NET and Azure in the session, Infusing your .NET Apps with AI: Practical Tools and Techniques.
Learn More
To dive deeper into AI development with Azure AI and .NET:
Explore the latest .NET and Azure AI documentation
Get started with our quickstart guides for Azure AI and Semantic Kernel
Read the Semantic Kernel announcement post
Share your feedback and connect with our team