Tag Archives: microsoft
Discover your next integration inspiration at this year’s Build!
Get ready for an exciting digital experience at Microsoft Build 2024! Running May 21-23 in Seattle and online, this year’s event is all about delving deep into the cutting-edge world of AI and cloud technology. And if you’re eager to dive into the transformative world of Azure Integration Services, get ready for something special.
From seamless application and data integration to API management and powerful workflow automation, Azure Integration Services is revolutionizing the way businesses operate. Kantar, a global leader in marketing data, used Azure Integration Services to create the KantarHub, a centralized platform that simplifies data sharing and enhances security, integrating approximately 150 internal applications. Össur, a prosthetic innovation leader, migrated its diverse legacy apps to the cloud with Azure Integration Services, ensuring uninterrupted operations and improving data security and API access. These examples highlight how Azure Integration Services is transforming customer operations through seamless integration and increased efficiency.
In this blog, we’ll unpack the major announcements for Azure Integration Services from this year’s Build event. Register today to attend!
Azure API Management
With the rise in Gen AI app usage, there’s an urgent need for enterprise-wide, federated access to manage and secure endpoints. This year, we’re excited to announce GenAI Gateway capabilities in Azure API Management to tackle these challenges for Azure OpenAI Services endpoints (general availability).
As a first step, we’ve simplified the onboarding process so you can now import all Azure OpenAI endpoints into the Azure API Management platform with a single click. These endpoints will be protected by Azure API Management’s built-in managed identity authentication. For scaled workloads, we provide load balancing, rate limiting, and out-of-box observability support.
Here’s a rundown of all the policies and features we’ve added:
Import Azure OpenAI as an API: The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click way to import your existing Azure OpenAI endpoints as APIs, simplifying the onboarding process.
Azure OpenAI Token Limit Policy: Manage and enforce token-based limits per API consumer to ensure fair usage.
Azure OpenAI Emit Token Metric Policy: Get detailed monitoring and analysis by logging the token usage metrics and sending those to Azure Application Insights.
Load Balancer and Circuit Breaker: Distribute the load across multiple Azure OpenAI endpoints with support for various load distribution strategies, ensuring optimal performance and reliability.
Azure OpenAI Semantic Caching Policy (public preview): Optimize token usage by caching completions for semantically similar prompts, improving response performance.
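The token-limit idea behind these policies can be sketched in a few lines. The following is a conceptual illustration only – not the actual Azure API Management implementation – using a per-consumer token budget that renews every minute; the class name, window length, and limits are all illustrative.

```python
# Conceptual sketch of a per-consumer token budget that renews every minute,
# mirroring the idea behind the Azure OpenAI Token Limit Policy.
# Illustrative only: not the Azure API Management implementation.
import time

class TokenLimiter:
    def __init__(self, tokens_per_minute: int):
        self.tokens_per_minute = tokens_per_minute
        self.usage = {}  # consumer key -> (window_start, tokens_used)

    def allow(self, consumer, tokens, now=None):
        now = time.monotonic() if now is None else now
        window_start, used = self.usage.get(consumer, (now, 0))
        if now - window_start >= 60:  # a new one-minute window begins
            window_start, used = now, 0
        if used + tokens > self.tokens_per_minute:
            self.usage[consumer] = (window_start, used)
            return False  # would surface as HTTP 429 at the gateway
        self.usage[consumer] = (window_start, used + tokens)
        return True

limiter = TokenLimiter(tokens_per_minute=1000)
print(limiter.allow("team-a", 800))  # True: within budget
print(limiter.allow("team-a", 300))  # False: would exceed 1000 tokens/minute
print(limiter.allow("team-b", 300))  # True: each consumer has its own budget
```

The real policy also counts prompt and completion tokens reported by the service; here a caller simply passes a token count.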
Click here to learn more about the GenAI Gateway capabilities in Azure API Management. We launched the “Gen AI Gateway Accelerator,” a reference implementation that demonstrates how to provision and interact with Generative AI resources through API Management. This new scenario in the APIM landing zone accelerator helps accelerate our customers on their path to Gen AI production workloads. Learn more about the “Gen AI Gateway Accelerator” here.
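The semantic caching policy mentioned above rests on a simple pattern: return a cached completion when a new prompt is close enough, in embedding space, to one seen before. The sketch below is illustrative only – embed() is a toy stand-in, whereas the real policy uses an Azure OpenAI embeddings deployment and an external vector cache.

```python
# Conceptual sketch of semantic caching: serve a cached completion when a new
# prompt is semantically similar to a previously seen one. embed() is a toy
# stand-in for an Azure OpenAI embeddings deployment.
import math

def embed(text):
    # Toy bag-of-characters embedding, for illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

class SemanticCache:
    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # (embedding, completion)

    def lookup(self, prompt):
        qe = embed(prompt)
        for emb, completion in self.entries:
            if cosine(qe, emb) >= self.threshold:
                return completion  # cache hit: no model call, no tokens spent
        return None

    def store(self, prompt, completion):
        self.entries.append((embed(prompt), completion))

cache = SemanticCache()
cache.store("What is the capital of France?", "Paris.")
print(cache.lookup("what is the capital of france"))  # cache hit
print(cache.lookup("Explain quantum entanglement"))   # None: cache miss
```

The similarity threshold controls the trade-off: a higher threshold means fewer false cache hits but also fewer token savings.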
In addition, we have two features now in General Availability (GA):
OData API Type: First-class support for OData makes it easier for customers to publish OData APIs in API Management, including the ability to secure them with standard API protections. You can now use Azure API Management to publish APIs from platforms like SAP, Oracle, Dataverse, and others that expose OData APIs.
gRPC API Type in Self-Hosted Gateway: Seamlessly manage your gRPC services as APIs within Azure API Management.
Azure API Center
Another exciting announcement—Azure API Center is now in General Availability! Complementing Azure API Management, Azure API Center is a centralized solution that offers a unified inventory for seamless discovery, consumption, and governance of APIs, regardless of their type, lifecycle stage, or deployment location. With Azure API Center, your organization can effectively manage your API landscape and promote efficiency, consistency, and innovation across the board.
Key features of Azure API Center include:
API Inventory Management: Create an up-to-date API catalog that includes essential metadata like API names, descriptions, lifecycle stages, and owners. Custom metadata can be added to capture organization-specific API information.
API Cataloging for Azure API Management: Quickly import APIs into API Center via a single CLI command, creating a cohesive center across different API Management services.
API Design Governance: Enable API best practices at scale and enforce design rules across your organization. This empowers API developers to ensure quality and uniformity across all produced APIs.
API Reusability: Foster reusability by empowering consumers to swiftly discover and utilize the appropriate APIs.
API Development Enhancement: Seamlessly integrate with our API Center Visual Studio Code extension, enhancing and simplifying the API development process.
Azure Logic Apps
Azure Logic Apps simplifies and automates how you connect and integrate various applications, services, and data sources in the cloud, enabling users to create and run automated workflows with little to no code. Recent updates to the platform include new features that enhance seamless management of integration flows, simplify legacy integration, and enable efficient B2B integration.
Seamless Management of Integration Flows
Efficiently monitoring, troubleshooting, and updating automated workflows can be challenging, especially when dealing with multiple integrations. To address these pain points, we’ve introduced:
Support for Zero Downtime deployment scenarios in the portal (public preview for Logic Apps Standard): Zero downtime deployment is a technique for updating an application without affecting its availability or performance. Logic Apps Standard now supports zero downtime deployment through deployment slots – isolated environments that can host different versions of the application and be swapped with the production slot without any interruption. Click here for more details.
Logic Apps Monitoring dashboard for workflow monitoring, troubleshooting and resubmissions (public preview): We have released UI dashboards for Logic Apps Standard to help with diagnosis and troubleshooting of Logic Apps workflow runs and failures. The dashboard also offers the ability to take actions such as bulk resubmission of failed runs.
Advanced Development and Customization
Developers need the flexibility to customize workflows and integrate the latest technologies seamlessly, while also benefiting from efficient debugging and development environments.
.NET 8 Custom Code Support (public preview for Logic Apps Standard): We’ve extended our built-in action capabilities to include support for calling .NET 8 custom code. Within a Logic Apps workspace, you can now effortlessly develop and debug your custom code right alongside your workflows, streamlining your development process with the most up-to-date .NET technology.
Improved Onboarding Experience on VS Code for Logic Apps Standard (general availability): The Logic Apps designer now extends to VS Code, empowering users to transition from developing workflows in the cloud to a local environment. The intuitive no-code designer of Logic Apps, combined with the powerful pro-code capabilities of VS Code, enables developers to build, run, and test their Logic Apps workflows locally with features such as breakpoint debugging.
Logic Apps Standard Deployment Scripting Tools in VS Code (public preview for Logic Apps Standard): For Standard logic app workflows that run in single-tenant Azure Logic Apps, you can use Visual Studio Code with the Azure Logic Apps Standard extension to locally develop, test, and store your logic app project using any source control system. You can also use the extension to streamline the creation of deployment pipelines, automating the deployment of your Logic Apps Standard infrastructure and code. Click here for more technical details.
B2B Integration
Managing complex B2B transactions and integrations requires robust, scalable solutions and efficient management tools. And, we have new features to help with these transactions:
EDI (X12/EDIFACT) processing with built-in actions (general availability): Run B2B workloads at scale with connectors that can process single or batched EDI messages and larger payloads, providing greater control over performance.
Integration Account Enhancements (public preview): Integration Account Premium offers UI-based trading partner management capabilities and a centralized store for artifacts, including maps and schemas. With this release, we have enabled Availability Zone support for Integration Account.
Mainframe and Midrange Integration
Extending the functionality of legacy systems to the cloud without extensive re-investment can be difficult. That’s why we have connectors for IBM mainframes and midranges.
Azure Logic Apps connectors for IBM Mainframe and Midranges: Preserve the value of your workloads running on mainframes and midranges by extending them to the Azure cloud with Azure Logic Apps, without investing more resources in the mainframe or midrange environments. Click here for more technical details.
Azure Service Bus
Azure Service Bus is a fully managed enterprise message broker that ensures secure and efficient delivery of data messages between different parts of your system, even when they’re disconnected or processing tasks at different speeds. At Build, we’re thrilled to announce a new feature: batch delete. Currently in preview, this feature empowers customers to delete messages on the service side from an entity or the dead letter queue in batches of up to 4,000 messages.
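The batching behavior of the new delete feature can be pictured as a simple chunking loop on the client side. In this sketch, delete_batch() is a hypothetical stand-in for the preview service operation, not an actual azure-servicebus SDK method.

```python
# Conceptual sketch of batch delete semantics: remove messages service-side
# in chunks of up to 4,000 per call. delete_batch() is a hypothetical stand-in
# for the preview operation, not a real azure-servicebus SDK method.
def delete_messages(message_ids, delete_batch, max_batch_size=4000):
    deleted = 0
    for start in range(0, len(message_ids), max_batch_size):
        batch = message_ids[start:start + max_batch_size]
        delete_batch(batch)  # one service call removes the whole batch
        deleted += len(batch)
    return deleted

calls = []
total = delete_messages(list(range(10_000)), calls.append)
print(total, [len(b) for b in calls])  # 10000 [4000, 4000, 2000]
```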
Azure Event Grid
Like an event dispatcher for your cloud, Azure Event Grid triggers actions across your applications and services in near real-time whenever something significant happens. New features are generally available that are tailored to customers who are looking for a pub-sub message broker that can enable Internet of Things (IoT) solutions using MQTT protocol and can help build event-driven applications.
These capabilities enhance Event Grid’s MQTT broker capability, make it easier to transition to Event Grid namespaces for push and pull delivery of messages, and integrate new sources. Customers can now:
Use the Last Will and Testament feature in compliance with the MQTT v5 and MQTT v3.1.1 specifications, so applications can get notifications when clients are disconnected, enabling management of downstream tasks to prevent performance degradation.
Create data pipelines that utilize both Event Grid Basic resources and Event Grid Namespace Topics (supported in Event Grid Standard). This means customers can utilize Event Grid namespace capabilities such as MQTT broker without needing to reconstruct existing workflows.
Support new event sources, such as Microsoft Entra ID and Microsoft Outlook, leveraging Event Grid’s support for the Microsoft Graph API. This means customers can use Event Grid for new use cases – such as when a new employee is hired or a new email is received – to process that information and send it to other applications for further action.
For more technical details on these announcements, click here.
See you at Build for these exciting sessions!
Don’t miss the chance to explore these exciting updates at Microsoft Build 2024. Register now and if you’re attending in-person, be sure to stop by the Azure API Management booth in The Hub! You can meet with the engineering and product teams behind API Management and API Center, and further explore Azure Integration Services capabilities to discover exciting new solutions.
Join us for these breakout sessions both in-person or online:
Unleash the Potential of APIs with Azure API Management: Through practical demos we’ll show how to use Azure API Management to expose Azure OpenAI services, manage OpenAI tokens allocation, distribute load across multiple model deployments and gain valuable insights into token usage throughout your intelligent applications portfolio. Explore how Azure API Center revolutionizes API governance and discoverability, driving innovation and efficiency in your organization’s operations.
GenAI Gateway Capabilities in Azure API Management: We will demonstrate how API Management can be configured for authentication and authorization of OpenAI endpoints, enforcing rate limits based on OpenAI tokens used, load balancing across multiple OpenAI endpoints, and more.
Microsoft Tech Community – Latest Blogs –Read More
Azure Functions: SDK type bindings for Azure Blob Storage with Azure Functions in Python (Preview)
Azure Functions triggers and bindings enable you to easily integrate event and data sources with function applications. With SDK type bindings, you can use types from service SDKs and frameworks, providing more capability beyond what is currently offered. SDK type bindings for Azure Storage Blob when using Python in Azure Functions is now in Preview.
SDK type bindings for Azure Storage Blob enable the following key scenarios:
Downloading and uploading blobs of large sizes, reducing current memory limitations and gRPC limits.
Improved performance when using blobs with Azure Functions.
To get started using SDK type bindings for Azure Storage Blob, the following prerequisites are required:
Azure Functions runtime version 4.34.1, or a later version.
Python version 3.9, or a later supported version.
Python v2 programming model
Note that currently, only synchronous SDK types are supported.
Then, enable the feature in your Azure Function app:
Add the azurefunctions-extensions-bindings-blob extension package to the requirements.txt file in the project.
Add this code to the function_app.py file in the project, which imports the SDK type bindings:
import azurefunctions.extensions.bindings.blob as blob
This example shows how to get the BlobClient from both a Blob storage trigger (blob_trigger) and from the input binding on an HTTP trigger (blob_input).
import logging
import azure.functions as func
import azurefunctions.extensions.bindings.blob as blob

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.blob_trigger(
    arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_trigger(client: blob.BlobClient):
    logging.info(
        f"Python blob trigger function processed blob\n"
        f"Properties: {client.get_blob_properties()}\n"
        f"Blob content head: {client.download_blob().read(size=1)}"
    )

@app.route(route="file")
@app.blob_input(
    arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_input(req: func.HttpRequest, client: blob.BlobClient):
    logging.info(
        f"Python blob input function processed blob\n"
        f"Properties: {client.get_blob_properties()}\n"
        f"Blob content head: {client.download_blob().read(size=1)}"
    )
    return "ok"
You can view other SDK type bindings samples for Blob storage in the Python extensions repository:
ContainerClient type
StorageStreamDownloader type
Macro Excel : create an excel for each city and send it to the email adresse
Hello, can you help me please? I would like to write an Excel VBA macro that creates an independent Excel file for each city and sends it to the corresponding email address in the workbook.
For example, in the following testing file, I would like to create an Excel file for each of the cities and send it to the following email addresses; each file will only include the data for its corresponding city:
GE SOTR – email address removed for privacy reasons
AD MAD NF – email address removed for privacy reasons
IALTER – email address removed for privacy reasons
LATTE – email address removed for privacy reasons
MOP DOT – email address removed for privacy reasons
(so it will basically create 5 Excel files and send them to these email addresses)
Thank you in advance for your help 🙂
Teams: Enable organization-wide
Hello everyone,
We have the problem that in the admin center, under Settings / Org settings, we want to enable Microsoft Teams for all users, but we receive
the following message and don't know what we can do:
"We can't save your license changes. Close this setting, refresh the page, and try again."
We have already tried several browsers and disabled the virus scanner and the firewall – all to no avail.
Does anyone else have an idea?
Images are not displayed in incoming emails
Hello,
For the past 10 days or so, Outlook 365 has not displayed images anymore. See below.
Note that these images are displayed fine in the New Outlook, in https://outlook.live.com/ and in Mail under iOS.
Of course, I have already unchecked both options in File/Options/Trust Center/Automatic Download. See below.
What else should I do to get those images displayed?
Thank you!
Stefano
Policy Tip Text not working for some policies
Hi,
I have an issue with the Policy Tip Text not showing for a specific rule in Outlook. It does however show in OWA.
My rule is quite simple. Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation.
The policy tip should appear and allow the user to override by providing a business justification.
The policy tip does not appear in the Outlook client, but does appear in OWA.
If I change the condition to Recipient Domain is hotmail.com, gmail.com and Content is shared from Microsoft 365 with people outside my organisation then the policy tip does appear.
Does the policy tip not work when using file extensions in the Outlook Client?
Secondly, when the rule is configured as Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation, sending through OWA (with the user overriding the policy restriction), the email is still blocked from being sent to an external recipient. The email is sent, but the DLP rule still blocks the email and the user receives the email block notification.
I have other policies where the override works successfully.
Any ideas on how to fix these issues?
Thanks,
Ben
Feature request: ability to submit feedback on EDR blocks
Good day,
I’d like to suggest to add the ability to report behavioral blocks to Microsoft for a review.
The current reporting feature is focused on files and hashes and requires a file or hash to be able to submit something – which does not make any sense for behavioral detections.
To make it a bit clearer I attached a screenshot of a behavioral false positive – there is currently no lightweight approach to report that I believe – but we’d really love to be able to provide quicker feedback on behavioral blocks.
App running as background process
I use this to open my company portal via powershell :
start-process companyportal:
Please, is there any way to run Company Portal in the background (visible in Task Manager) without displaying it on the desktop?
thanks.
Identifier(s) in API calls to load mail folders and mails from folders
Hi all.
I am trying to load user folders with a call to list user folders, and later the emails for a given folder with a call to list emails in a given folder.
In both calls common part is:
GET /users/{id | userPrincipalName}…
On Azure portal userPrincipalName parameter is editable:
Is it a must to use the Object ID (below) for accessing the user, and so forth?
For my use case it would be of great benefit to use the User principal name, but what happens if someone changes it?
Thanks in advance,
Dragan
Syncing Project to Planner
I’d like to use a PowerAutomate flow to sync my MS Project Online to MS Planner (it would be amazing if I could sync so that the tasks are copied to the same buckets that I defined in Project Online and in Planner too). I haven’t been able to find a flow that will help me achieve this Project Online to Planner synchronization. Does anyone have suggestions?
Top Stories: May 21, 2024
Take a look!
English Top Stories: May 21, 2024 | Microsoft
Français À la une : 21 mai 2024 | Microsoft
Español Novedades más relevantes: 21 de mayo de 2024 | Microsoft
Português Blog de parceiro das Américas | Microsoft
steps to transfer 365 accounts with email from service provider to self manage
We have our 365 accounts managed by a service provider. We want to stop using their services, so we need to transfer administration of our accounts to self-administer them. Where can I find instructions so I can make sure we don’t have any loss of data? The Microsoft account subscriptions are through the service provider, so we need to purchase our own subscriptions for email and Office (desktop). I know how to self-manage 365 accounts, but I want to make sure we don’t lose our email data with the transfer from their subscriptions to our own.
Announcing key updates to Responsible AI features and content filters in Azure OpenAI Service
We’re excited to announce the release of new Responsible AI features and content filter improvements in Azure OpenAI Service (AOAI) and AI Studio, spanning new unified content filters, customizable content filters for DALL-E and GPT-4 Turbo with Vision deployments, safety system message templates in the AOAI Studio, asynchronous filters now available to all AOAI customers, and updates to protected material and image generation features.
Unified content filters
We are excited to announce that a new unified content filter experience is coming soon to Azure AI. This update will streamline the process of setting up content filters across different deployments and various products such as Azure AI Studio, AOAI, and Azure AI Content Safety for a more uniform user experience. Content filters enable users to effectively block harmful content, whether it’s text, images, or multimodal forms. With this unified approach, users have the flexibility to establish a content filtering policy tailored to their particular needs and scenarios.
Configurable content filters for DALL-E and GPT-4 Turbo with Vision GA
The integrated content filtering system in AOAI provides Azure AI Content Safety content filters by default, which detect and prevent the output of harmful content. Furthermore, we also provide a range of content safety customization options for the AOAI GPT model series. Today, we are releasing configurable content filters for DALL-E 2 and 3 and GPT-4 Turbo with Vision GA deployments, enabling content filter customization based on specific use case needs. Customers can configure input and output filters, adjust severity levels for the content harm categories, and add additional applicable RAI models and capabilities such as Prompt Shields and custom blocklists. Customers who have been approved for modified content filters can turn the content filters off or use annotate mode to return annotations via the API response, without blocking content. Learn more.
Asynchronous Filters
In addition to the default streaming experience in AOAI – where completions are vetted before they are returned to the user, or blocked in case of a policy violation – we’re excited to announce that all customers now have access to the Asynchronous Filter feature. Content filters run asynchronously, and completion content is returned immediately with a smooth, fast, token-by-token streaming experience. No content is buffered, which allows for a faster streaming experience with zero added latency from content safety. Customers must be aware that while the feature improves latency, it trades off the safety and real-time vetting of smaller sections of model output. Because content filters run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user. Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage. Read more about the Asynchronous Filter and how to enable it.
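The trade-off between the two modes can be illustrated with a small simulation (not the service implementation – content_filter() is a stand-in that flags a made-up marker word): the default mode vets each chunk before releasing it, while the asynchronous mode releases chunks immediately and reports violations after the fact.

```python
# Simulation of the two streaming modes, for illustration only.
# Default mode: each chunk is vetted before it is released to the caller.
# Asynchronous mode: chunks stream immediately; the filter verdict arrives later.

def content_filter(chunk):
    # Stand-in for the real content filter: flag a made-up marker word.
    return "FORBIDDEN" not in chunk

def default_stream(chunks):
    for chunk in chunks:
        if not content_filter(chunk):
            yield "[blocked]"      # violation caught before the user sees it
            return
        yield chunk                # released only after vetting

def async_filter_stream(chunks):
    violations = []
    for i, chunk in enumerate(chunks):
        yield chunk                # released immediately, token by token
        if not content_filter(chunk):
            violations.append(i)   # signal arrives after the chunk was shown
    if violations:
        yield f"[retroactive violation at chunk(s) {violations}]"

chunks = ["Hello ", "FORBIDDEN ", "world"]
print(list(default_stream(chunks)))
print(list(async_filter_stream(chunks)))
```

Note how the asynchronous variant shows the flagged chunk to the caller before the violation signal arrives – exactly the trade-off described above.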
Safety System Messages
System messages for generative AI models are an effective strategy for additional AI content safety. The AOAI Studio and AI Studio are now supporting safety system message templates directly in the playground that can be quickly tested and deployed, covering a range of different safety related topics such as preventing harmful content, jailbreak attempts, as well as grounding instructions. Learn more.
Protected Materials
Protections for Azure OpenAI GPT-based models
In November 2023, Microsoft announced the release of Protected Material Detection for Text in AOAI and Azure AI Content Safety. Soon, this model will be upgraded to version 2.0, which identifies content that closely resembles pre-existing content. The update also prevents attempts to subvert the filter by asking for known modifications of the original text, e.g., the original text with repeated characters or extra whitespace. Soon, the Protected Material Detection for Code model will also move to version 2.0, updating its attribution feature to flag code from public GitHub repositories as of 2023, rather than 2021.
Updated Features in Azure OpenAI Service DALL-E
AOAI now prevents DALL-E from generating works that closely resemble certain types of known creative content, such as studio characters and contemporary artwork. It does this by re-interpreting the text prompt to DALL-E, removing keywords or phrases associated with creative content categories. Below are examples showing image outputs before and after the modification is applied. Please note that the DALL-E model is non-deterministic and so is likely not going to generate the same image with the same prompt each time.
New Responsible AI features in Azure AI Content Safety & Azure AI Studio
Custom Categories
This week at Build 2024 we also previewed other important features for responsible AI, one of which will be coming soon to Azure OpenAI Service: Custom Categories. Learn more about Custom Categories.
Get started today
Visit Azure OpenAI Service Studio: oai.azure.com
Visit Azure AI Studio: ai.azure.com
Visit Azure AI Content Safety Studio: aka.ms/contentsafetystudio
Gen AI simplified: The azure_ai extension now generally available on Azure Database for PostgreSQL
We are thrilled to announce the general availability of the azure_ai extension on Azure Database for PostgreSQL. The azure_ai extension allows developers to seamlessly integrate Azure AI services from within their database using SQL queries. In conjunction with vector data, this simplifies building Gen AI applications on Azure Database for PostgreSQL.
Features and Capabilities
With the azure_ai extension, you can now access Azure OpenAI, Azure AI Language, Azure AI Translator, and Azure Machine Learning services with simple function calls from within SQL.
The azure_ai extension enables:
Generation of embeddings with embedding models supporting dimensions ranging from 384 to 3072. Embeddings can be generated for a single scalar input or as a batch for a set of inputs. Combined with the native vector data type from the vector extension, embeddings can be generated as data is inserted or updated.
Calling into Azure AI Language services to perform summarization, sentiment analysis, key phrase extraction, or PII detection on your data.
Real-time text translation within your database with Azure AI translator simplifies building multi-lingual applications.
Real-time predictions enable many scenarios such as fraud detection, product recommendations, predictive maintenance, or predictive healthcare. You can invoke custom-trained or pre-trained models from the Azure Machine Learning catalog that are hosted on online endpoints. Online inferencing endpoints are a highly scalable way to operationalize models for real-time, low-latency requests, with features such as auto-scale and rich monitoring and debugging support.
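From the application side, all of these capabilities are invoked as plain SQL function calls. A minimal Python sketch of how such statements could be composed is below; the `azure_openai.create_embeddings` and `azure_cognitive.analyze_sentiment` function names follow the azure_ai extension documentation, while the deployment name `my-embedding-model`, the sample texts, and the helper functions themselves are hypothetical illustrations.

```python
# Sketch: composing azure_ai extension calls as SQL strings from Python.
# Function names follow the azure_ai documentation; everything else
# (deployment name, sample text) is an illustrative assumption.

def embedding_sql(deployment: str, text: str) -> str:
    """Build a SQL statement that asks azure_ai to embed `text`."""
    escaped = text.replace("'", "''")  # basic SQL string escaping
    return (
        "SELECT azure_openai.create_embeddings("
        f"'{deployment}', '{escaped}');"
    )

def sentiment_sql(text: str, language: str = "en") -> str:
    """Build a SQL statement that runs sentiment analysis via azure_ai."""
    escaped = text.replace("'", "''")
    return f"SELECT azure_cognitive.analyze_sentiment('{escaped}', '{language}');"

sql1 = embedding_sql("my-embedding-model", "PostgreSQL meets Gen AI")
sql2 = sentiment_sql("The azure_ai extension is easy to use")
print(sql1)
print(sql2)
```

In practice you would execute these statements through your usual PostgreSQL driver against a server where the azure_ai extension is enabled and configured with your service endpoints.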
Getting Started
To learn more about the azure_ai extension and how it simplifies building GenAI applications on Azure Database for PostgreSQL, visit our documentation below:
Azure AI Extension.
Azure AI Language Services integration
Azure AI Text Translation
Azure AI real-time machine learning scoring.
Vectors on Azure Database for PostgreSQL
Generative AI Overview
To learn even more about our Flexible Server managed service, see the Azure Database for PostgreSQL Flexible Server.
You can always find the latest features added to Flexible server in this release notes page.
Build 2024: Unveiling performance and AI innovations in Azure Database for MySQL
Today, we’re thrilled to announce a suite of new features for Azure Database for MySQL that focus on performance enhancements, enterprise capabilities, and cutting-edge AI functionality designed to revolutionize your database management experience and efficiency. Read on to see how these innovations can elevate your workflows!
Microsoft Copilot in Azure: Unlock the benefits of Azure Database for MySQL with your AI companion (Public Preview)
We’re excited to announce that Microsoft Copilot in Azure now extends its capabilities to Azure Database for MySQL. Microsoft Copilot in Azure is an AI-powered tool that leverages Large Language Models (LLMs) and the Azure control plane to answer your general questions and provide high-quality recommendations for real-time problems. With this new integration, you can converse with Microsoft Copilot in Azure to discover new features, determine when to enable features to suit your scenarios, learn from summarized tutorials on enabling features or building applications, and obtain tips and best-practice recommendations to avoid issues.
Learn more: Documentation | Announcement blog with demo video coming soon!
Build RAG applications with Azure OpenAI and MySQL with Azure AI Search
We’re excited to announce that you can now create Retrieval-Augmented Generation (RAG) applications using Azure OpenAI and Azure Database for MySQL with Azure AI Search.
You can combine the smart, human-like responses of Azure OpenAI with MySQL’s powerful database management and Azure AI Search’s advanced search capabilities, making it easier to build apps that deliver relevant info quickly and efficiently. If you’re running applications (content management systems (CMS), e-commerce applications, or gaming sites) with data hosted in Azure Database for MySQL, enhance your user experience by building generative AI search and chat applications using LLMs available in Azure OpenAI and vector storage and indexing provided by Azure AI Search. Unleash the power of your data hosted on MySQL with the simple and seamless AI integrations on Azure!
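The RAG pattern described above can be sketched conceptually: retrieve the documents most relevant to a query, then ground the LLM prompt with them. The toy example below uses naive keyword overlap as a stand-in for the ranking that Azure AI Search performs over vectorized MySQL content; all names and sample data are illustrative, not part of any actual API.

```python
import re

def tokens(s: str) -> set[str]:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query; return the top k.
    (A stand-in for vector search ranking in Azure AI Search.)"""
    q = tokens(query)
    return sorted(docs, key=lambda d: -len(q & tokens(d)))[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Compose a grounded prompt for the chat model."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Return policy: items may be returned within 30 days.",
    "Shipping is free on orders over $50.",
    "Gift cards never expire.",
]
context = retrieve("what is the return policy", docs)
prompt = build_prompt("What is the return policy?", context)
print(prompt)
```

In the real architecture, `retrieve` is replaced by an Azure AI Search query over embeddings of your MySQL-hosted content, and the composed prompt is sent to an Azure OpenAI chat model.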
Learn more: Demo video and sample architecture coming soon! | RAG in Azure AI Search documentation
Advancements in Azure Database for MySQL – Business Critical service tier (General Availability)
Achieve a 2x increase in throughput using Accelerated Logs (General Availability): We’re excited to announce the General Availability of Accelerated Logs, a feature that significantly boosts performance for write-heavy workloads, offering up to a 2x improvement in throughput, out of the box, with no additional cost or application changes required. By reducing latency and enhancing data access speeds, the Accelerated Logs feature ensures that your mission-critical applications run more efficiently and smoothly on the Business Critical service tier. Try out this new feature to experience the difference in your workload performance!
Expand storage up to 32TB (General Availability) for your workloads using the Business Critical service tier. With storage auto-grow up to 32TB and auto-scale IOPS up to 80K, you can now run your large, growing mission-critical workloads worry-free on Azure!
Learn more: Documentation | Announcement blog with demo video coming soon!
Enhance data redundancy, availability, and auditing capabilities with on-demand backup and export (Public Preview)
With Public Preview of the on-demand backup and export feature, you can now easily export a physical backup of your MySQL flexible server to an Azure storage account (Azure blob storage) with just a few clicks on the Azure portal or with a single CLI command whenever you want. After exporting backups to blob storage, you can use them for multiple purposes, including:
Data recovery, redundancy, and availability. In addition to the automated backups managed by the service, you can export backups on-demand and use them for data recovery. In case of data corruption, accidental deletion, or hardware failure, simply restore the server to its previous state using this copy of your data.
Auditing. You can use exported physical backup files to restore on-premises MySQL servers to address the auditing, compliance, and archival requirements of an organization.
Compliance. Regulated industries must be able to export any data hosted by a cloud provider.
Avoid vendor lock-in. Take advantage of this solution to export data from MySQL flexible server and avoid vendor lock-in.
Learn more: Documentation | Announcement blog with demo video coming soon!
Simplify security management with Microsoft Defender for Cloud support (General Availability)
Last month, we announced the general availability of Microsoft Defender for Cloud support for Azure Database for MySQL – Flexible Server. The Defender for Cloud Advanced Threat Protection (ATP) feature simplifies security management of your MySQL flexible server by enabling effortless threat prevention, detection, and mitigation through increased visibility into and control over harmful events.
With the Defender for Cloud ATP feature, there’s no need to be a security expert to safeguard your MySQL flexible server against today’s growing threat landscape. ATP uses integrated security monitoring to detect anomalous database access and query patterns, as well as suspicious database activities, and provides targeted security recommendations and alerts.
Learn more: Demo video | Announcement blog
Conclusion
With the release of these capabilities, Azure Database for MySQL continues to be an industry leader for hosting your mission-critical applications on the cloud, offering top-tier performance for your workloads, enterprise capabilities and scale, enhanced monitoring, and robust backup and restore capabilities. The service seamlessly integrates with cutting-edge AI technologies through Azure OpenAI, Microsoft Copilot in Azure, and Azure AI Search to deliver advanced functionalities and insights. Security is paramount, and with Microsoft Defender, your applications are protected by Microsoft’s expertise in cybersecurity, ensuring peace of mind against increasingly sophisticated threats. Azure Database for MySQL combines performance, innovation, and security to support your most demanding applications, while remaining on the open-source community MySQL version to protect against lock-in.
To learn more about what’s new with Flexible Server, see What’s new in Azure Database for MySQL – Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X.
If you have any suggestions for or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. Thank you!
Live at Build: Microsoft Learn releases new AI skill-building resources
Microsoft Learn is excited to be at Microsoft Build again this year with a fantastic new onsite presence and to share announcements about new resources to support AI skill-building.
When it comes to AI, having the right resources to develop critical new skills can be a game changer, whether you’re managing your organization’s training needs or advancing your own career. The 2024 Work Trend Index Annual report from Microsoft and LinkedIn suggests a massive opportunity for those willing to skill up in AI—66% of leaders say they wouldn’t hire someone without AI skills.
If you’re a developer learning to build AI-powered solutions, a team lead looking to skill up a team, or a leader looking to understand the benefits that Microsoft Copilot can bring to your organization, Microsoft Learn has something for you. That’s why we’re thrilled to announce the new AI skill-building resources we’re releasing today at Microsoft Build:
NEW AI Applied Skills releasing in May and June.
NEW Plans for AI skill-building.
NEW Copilot learning hub.
Additionally, I’m pleased to introduce two new AI skill-building offerings designed for non-technical roles:
NEW Copilot for Microsoft 365 training sessions for business users.
COMING SOON AI instructor-led training for business leaders.
Read on for more details about these exciting announcements.
Growing the Microsoft Applied Skills for AI portfolio
We developed Microsoft Applied Skills, new verifiable credentials that validate specific real-world skills, to help you address your skills gaps and empower you with the in-demand expertise you need. The positive feedback we’re receiving about the great value these credentials offer to individuals and organizations motivates us to keep expanding the portfolio.
During May and June we’re releasing new Applied Skills credentials to support developers who build AI and cloud solutions, including:
Develop AI agents using Microsoft Azure OpenAI and Semantic Kernel
Implement a data science and machine learning solution with Microsoft Fabric
Implement a Real-Time Intelligence solution with Microsoft Fabric
We’re also releasing new credentials for key cloud scenarios relevant to IT professionals:
Administer Active Directory Domain Services
Deploy and manage Microsoft Azure Arc–enabled servers
Explore Microsoft Applied Skills
The current portfolio of Microsoft Credentials includes over 20 Microsoft Applied Skills and close to 50 industry-recognized Microsoft Certifications, providing you with verifiable skill sets aligned with AI and cloud job roles and projects. Learn more about Microsoft Credentials.
Stay focused on your AI skill-building with new Plans
To stay current with today’s job skills, it’s important to have the right training content. Organizational team leaders and trainers must have the ability to customize and share this content, encourage their learners to stay on track, and monitor learning progress.
Today we’re introducing new AI skill-building Plans on Microsoft Learn, designed to meet all these objectives and more. Plans help learners, teams, and organizations accelerate the achievement of their learning goals using curated sets of structured content combined with milestones and automated nudges to keep learners focused and motivated. Get all the details about Plans in our recent blog post Introducing Plans on Microsoft Learn.
Find our new AI Plans on the AI learning hub on Microsoft Learn:
Master the basics of Azure: AI Fundamentals
Microsoft Copilot for Microsoft 365 for executives
Using AI in your everyday work: GitHub Copilot
Learn to create apps and modernize with Azure OpenAI
Check out the new Copilot learning hub
We’re also excited to announce the new Copilot learning hub on Microsoft Learn, the place where technology professionals can find resources—tailored to their job role and career goals—to help them develop the skills to put Microsoft Copilot to work every day.
As a complement to the already existing AI learning hub, this new hub offers tutorials, videos, and documentation covering the basics of Copilot, along with its features, capabilities, prompting techniques, best practices, and troubleshooting tips. The learning hub also showcases real-world examples and use cases of Copilot in different domains and scenarios, including content specific to developers, data and IT professionals, security analysts, and more.
Microsoft Learn is here to support your AI learning goals, whatever they may be. Choose the AI learning hub when looking to gain skills in all Microsoft’s AI apps and services, regardless of your business or technical role. Choose the Copilot learning hub when looking to deepen your technical expertise in Microsoft Copilot.
Visit the Copilot learning hub
New live Microsoft Copilot for Microsoft 365 training sessions for business users
The widespread adoption of AI across organizations requires a new approach to skill-building that focuses on upskilling all staff, from leadership and IT to business users, enabling them to fully leverage their AI investments.
I’m pleased to announce a new series of live Microsoft Copilot for Microsoft 365 training sessions for business users designed to help key roles in your organization learn how to use Microsoft Copilot for Microsoft 365 to unlock productivity. Each session is delivered in less than one hour and is available in multiple languages and time zones.
The training content is tailored to the following roles:
Executives—Learn how Copilot can synthesize communication history in Teams and create speeches and presentations with Word and PowerPoint.
Sales—Learn how Copilot helps with market research, reports, and recommendations. Use it for sales deals, contracts, and more.
IT—Learn how to use Copilot to summarize a product spec document, create a project plan and business presentation, and draft an email with highlights for a network security product.
Marketing—Learn how to use Copilot to analyze market trends, forecast sales, generate campaign ideas, and consolidate reports.
Finance—Learn how to use Copilot to analyze a spreadsheet with projected revenue, create a marketing campaign report, and summarize your company’s financial statement results.
HR—Learn how to use Copilot to create a job description, analyze multiple resumes, create interview questions and a candidate report, and compose an offer letter to a candidate.
Ops—Learn how to use Copilot to brainstorm a project plan, locate and summarize email threads, troubleshoot equipment issues, and create customer discovery questions.
Explore Microsoft Copilot for Microsoft 365 training sessions
Instructor-led training coming soon: Microsoft AI for business leaders
Microsoft Learn is also releasing our latest instructor-led training (ILT) called Microsoft AI for business leaders, which is designed to help business leaders find the knowledge and resources to adopt AI in their organizations. The training explores planning, strategizing, and scaling AI projects in a responsible way, focusing on use cases, tools, and insights from industry-specific AI success stories such as healthcare, finance, sustainability, retail, and manufacturing.
This new AI-focused training will be available in July 2024 through select Training Services Partners (TSP) with the expertise to deliver unique value to business leaders. Authorized TSPs offer a breadth of training solutions including blended learning, in-person, and online to meet your learning objectives.
Stay tuned for more information about this new AI instructor-led training.
Find AI-ready Training Services Partners
Explore AI skill-building with Microsoft Learn
Microsoft Learn is leading the way in bringing the latest AI skilling and credentials to our community of learners. We’ll continue to help you gain the skills you need to achieve more with technology, through interactive training and resources on Microsoft products and services. We look forward to sharing more news and updates in the coming weeks.
Continue your learning journey beyond Build at Microsoft Learn.
Announcing Custom Categories in Azure AI Content Safety
We are excited to announce that Custom Categories is coming soon to Azure AI Content Safety. This new feature enables you to create your own customized classifier based on your specific needs for content filtering and AI safety, whether you want to detect sensitive content, moderate user-generated content, or comply with local regulations. Use Custom Categories to train and deploy your own custom content filter with ease and flexibility.
Feature Overview
The Azure AI Content Safety custom categories feature is powered by Azure AI Language, a service that provides advanced natural language processing capabilities for text analysis and generation. The custom categories feature is designed to provide a streamlined process for creating, training, and using custom content classification models.
Here’s an overview of the underlying workflow:
Deploy your custom category when you need it
We are offering two deployment options for our customers:
Custom Categories (Standard):
The Standard option provides a thorough and robust filtering mechanism. It requires a minimum of 50 natural language examples to train the category; this depth of training material ensures that the custom filter is well-equipped to identify and moderate the specified types of content accurately.
Deployment Timeframe: The Standard option is designed to deploy within 24 hours, balancing speed with the need for a comprehensive understanding of the content to be filtered.
Custom Categories (Rapid):
The Rapid option caters to urgent content safety needs, allowing organizations to respond swiftly to emerging threats and incidents. It requires only a definition and a few natural language examples to deploy a text incident, or a few example images to deploy an image incident. This reduced requirement enables quicker creation and deployment of custom filters.
Deployment Timeframe: This option emphasizes speed, enabling deployment of new custom filters in around an hour for text and a few minutes for images. It is particularly useful for addressing immediate and unforeseen content safety challenges.
Both options serve to empower organizations with the capability to protect their AI applications and users more effectively against a wide array of harmful content and security risks, offering a balance between responsiveness and thoroughness based on the specific needs and circumstances.
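The trade-off between the two deployment options can be modeled in a few lines. The sketch below is purely illustrative (the class, function, and threshold logic are not the actual Content Safety API); it just encodes the requirements described above, with Standard needing at least 50 examples and Rapid needing only a definition plus a few examples.

```python
# Illustrative model of the two Custom Categories deployment options.
# Not the actual API: names and logic are assumptions based on the
# requirements described in the text (Standard: >= 50 examples, ~24h;
# Rapid: definition + a few examples, ~1 hour).
from dataclasses import dataclass, field

@dataclass
class CustomCategory:
    name: str
    definition: str
    examples: list[str] = field(default_factory=list)

def choose_deployment(category: CustomCategory) -> str:
    """Pick Standard when enough examples exist for thorough training,
    otherwise fall back to the Rapid option for urgent incidents."""
    if len(category.examples) >= 50:
        return "standard"  # deploys within ~24 hours
    if category.examples or category.definition:
        return "rapid"     # deploys in around an hour for text
    raise ValueError("a definition or examples are required")

cat = CustomCategory(
    name="off-topic-promotions",
    definition="Messages promoting unrelated products or services",
    examples=["Buy cheap watches now!", "Click here for a free prize"],
)
print(choose_deployment(cat))
```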
How to use this feature?
Step 1: Definition and Setup
By creating a custom category, you are telling the AI exactly which types of content you wish to detect and mitigate. You need to create a clear category name and a detailed definition that encapsulates the content’s characteristics. The setup phase is crucial, as it lays the groundwork for the AI to understand your specific filtering needs.
Then collect a small, balanced dataset with both positive and (optionally) negative examples so the AI can learn the nuances of the category. This data should be representative of the variety of content that the model will encounter in a real-world scenario.
Step 2: Model Training
Once you have your dataset ready, the Azure AI Content Safety service uses it to train a new model. During training, the AI analyzes the data, learning to distinguish between content that matches the custom category and content that does not. Built on the LLM-powered, low-touch customization technology from Azure AI Language, the experience is tailored for Content Safety customers, with an emphasis on consistency and content moderation scenarios.
Step 3: Model Evaluation
After training, you need to evaluate the model to ensure it meets your accuracy requirements. This is done by testing the model with new content that it hasn’t seen before. The evaluation phase helps you identify any potential adjustments needed before deploying the model into a production environment.
Step 4: Iteration
In the upcoming release of custom categories studio experience, we will introduce a feature that allows users to modify their definition and training samples using suggestions generated by GPT.
Join our customers using Custom Categories
South Australia Department for Education
“The Custom Categories feature from Azure AI Content Safety is set to be a game-changer for the Department for Education in South Australia, and our pioneering AI chatbot, EdChat. This new feature allows us to tailor content moderation to our specific standards, ensuring a safer and more appropriate experience for users. It’s a significant step towards prioritizing the safety and well-being of our students in the digital educational space.”
– Dan Hughes, Chief Information Officer, South Australia Department for Education
Learn more about how South Australia Department for Education is using Azure AI Content Safety
Stay tuned!
Thank you for your support as we continue to enhance our platform. We are excited for you to begin using custom categories. Stay tuned for more updates and announcements on our progress.
In the meantime, we encourage you to visit our Content Safety documentation or studio to explore the existing capabilities available to you. Custom categories is also coming soon to Azure AI Studio and Azure OpenAI Service.
Introducing in-database embedding generation for Azure Database for PostgreSQL
via the azure_local_ai extension to Azure Database for PostgreSQL
We are excited to announce the public preview release of azure_local_ai, a new extension for Azure Database for PostgreSQL that enables you to create text embeddings from a model deployed within the same VM as your PostgreSQL database.
Vector embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. Azure Database for PostgreSQL is proud to be among the first in the industry to offer in-database embedding generation, with a text embedding model deployed within the PostgreSQL boundary. Embeddings can be generated right within the database, offering:
single-digit millisecond latency
predictable costs
confidence that data will remain compliant for confidential workloads
In this release, the extension will deploy a single model, multilingual-e5-small, to your Azure Database for PostgreSQL Flexible Server instance. The first time an embedding is created, the model is loaded into memory. See the preview terms for the azure_local_ai extension.
azure_local_ai extension – Preview
Generate embeddings from within the database with a single line of SQL code invoking a UDF.
Harness the power of a text embedding model alongside your operational data without leaving your PostgreSQL database boundary.
During this public preview, the azure_local_ai extension will be available in these Azure regions:
East US
West US
West Europe
UK South
France Central
Japan East
Australia East
How does the azure_local_ai extension work?
In-database embedding architecture
ONNX Runtime Configuration
– The azure_local_ai extension supports reviewing the configuration parameters of the ONNX Runtime thread-pool within the ONNX Runtime Service. Changes are not currently allowed. See ONNX Runtime performance tuning.
Valid values for the key are:
– intra_op_parallelism: Sets the total number of threads used to parallelize a single operator in the ONNX Runtime thread-pool. By default, the number of intra-op threads is maximized as far as possible, as this significantly improves overall throughput (half of the available CPUs by default).
– inter_op_parallelism: Sets the total number of threads used to compute multiple operators in parallel in the ONNX Runtime thread-pool. By default, it is set to the minimum possible value, 1. Increasing it often hurts performance due to frequent context switches between threads.
– spin_control: Toggles the ONNX Runtime thread-pool’s spinning for requests. When disabled, it uses less CPU but incurs more latency. By default, it is set to true (enabled).
SELECT azure_local_ai.get_setting(key TEXT);
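To make the defaults above concrete, here is a small Python sketch that computes the documented default values on the current machine. It is an assumption-level illustration of the behavior described in the text, not code from the extension itself.

```python
# Sketch of the default ONNX Runtime thread-pool settings described above.
# Assumption: mirrors the documented defaults (intra-op = half the CPUs,
# inter-op = 1, spinning enabled); not the extension's actual source.
import os

def default_onnx_settings() -> dict:
    cpus = os.cpu_count() or 1
    return {
        "intra_op_parallelism": max(1, cpus // 2),  # half the available CPUs
        "inter_op_parallelism": 1,                  # minimum; more often hurts
        "spin_control": True,                       # spinning enabled by default
    }

print(default_onnx_settings())
```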
Generate embeddings
The azure_local_ai extension for Azure Database for PostgreSQL makes it easy to generate an embedding from a simple inline UDF call in your SQL statement passing the model name and the data input to generate the embedding.
-- Single embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Vector embeddings power GenAI applications');
-- Simple array embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', array['Recommendation System with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI.', 'Generative AI with Azure Database for PostgreSQL – Flexible Server.']);
Here’s a quick example that demonstrates:
Adding a vector column to a table with a default that generates an embedding and stores it when data is inserted.
Creating an HNSW index.
Completing a semantic search by generating an embedding for a search string and comparing with stored vectors with a cosine similarity search.
-- Create docs table
CREATE TABLE docs(doc_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, doc TEXT NOT NULL, last_update TIMESTAMPTZ DEFAULT NOW());
-- Add a vector column and generate vector embeddings from locally deployed model
ALTER TABLE docs
ADD COLUMN doc_vector vector(384) -- multilingual-e5 embeddings are 384 dimensions
GENERATED ALWAYS AS -- Generated on inserts
(azure_local_ai.create_embeddings('multilingual-e5-small:v1', doc)::vector) STORED; -- TEXT string sent to local model
-- Create an HNSW index
CREATE INDEX ON docs USING hnsw (doc_vector vector_ip_ops);
-- Insert data into the docs table
INSERT INTO docs(doc) VALUES ('Create in-database embeddings with azure_local_ai extension.'),
('Enable RAG patterns with in-database embeddings and vectors on Azure Database for PostgreSQL – Flexible server.'), ('Generate vector embeddings in PostgreSQL with azure_local_ai extension.'), ('Generate text embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns with azure_local_ai extension and locally deployed LLM.'), ('Use vector indexes and Azure OpenAI embeddings in PostgreSQL for retrieval augmented generation.');
-- Semantic search using vector similarity match
SELECT doc_id, doc, doc_vector
FROM docs d
ORDER BY
d.doc_vector <#> azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Generate text embeddings in PostgreSQL.')::vector
LIMIT 1;
-- Add a single record to the docs table; the vector embedding will be generated automatically using azure_local_ai and the locally deployed model
INSERT INTO docs(doc) VALUES ('Semantic Search with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI');
-- View all doc entries and their doc_vector column. A vector embedding will have been generated for the single record added above.
SELECT doc, doc_vector, last_update FROM docs;
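Conceptually, the `ORDER BY ... <#> ...` query above works because `<#>` is pgvector's negative inner product operator: sorting ascending returns the vector with the largest inner product first. The toy Python sketch below shows the same ranking with 3-dimensional stand-in vectors (real multilingual-e5-small embeddings have 384 dimensions; the document names and values here are made up).

```python
# Toy illustration of the <#> (negative inner product) ranking used above.
# Vectors and document names are illustrative stand-ins.

def neg_inner_product(a: list[float], b: list[float]) -> float:
    return -sum(x * y for x, y in zip(a, b))

docs = {
    "embeddings doc": [0.9, 0.1, 0.0],
    "indexing doc":   [0.2, 0.8, 0.1],
    "backup doc":     [0.0, 0.1, 0.9],
}
query = [1.0, 0.1, 0.0]  # pretend embedding of the search string

# Smallest negative inner product == largest inner product == best match
best = min(docs, key=lambda name: neg_inner_product(docs[name], query))
print(best)
```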
Getting Started
To get started, review the azure_local_ai extension documentation, enable the extension and begin creating embeddings from your text data without leaving the Azure Database for PostgreSQL boundary.
azure_local_ai extension overview
Generate vector embeddings with azure_local_ai extension
vector extension
Learn more about vector similarity search using pgvector
What’s new in Azure AI Language | BUILD 2024
Introduction
At Azure AI Language, we believe that language is at the core of human and artificial intelligence. As part of Azure AI, which offers a comprehensive suite of AI services and tools for AI developers, Azure AI Language empowers developers to build intelligent natural language solutions that leverage a set of state-of-the-art language models, including Z-Code++, fine-tuned GPT, and more. While the LLMs in Azure OpenAI and the model catalog are good for general purposes, Azure AI Language provides a set of prebuilt and customizable natural language capabilities that are fine-tuned and optimized for a wide range of scenarios, such as Personally Identifiable Information (PII) detection, document and conversation summarization, text analytics for the healthcare domain, and conversational intent identification, with leading quality and cost efficiency. These capabilities are available through a unified API that simplifies the integration and orchestration of natural language capabilities with no need for complex prompt engineering.
Today, we’re thrilled to announce more new features and capabilities designed to make your workflow more seamless and efficient than ever before at this year’s Microsoft Build with the following key highlights: 1) a unified experience for Azure AI Language in Azure AI Studio and improved integration with prompt flow, 2) improvements in existing prebuilt features such as Summarization, PII and NER, and 3) enhancements in custom features, especially in Conversational Language Understanding (CLU) to provide intent identification and entity extraction with higher quality in more regions.
Azure AI Language now available in Azure AI Studio and prompt flow
As part of Azure AI services, Azure AI Language now supports the new Azure AI service resource type for prebuilt capabilities like summarization, Personally Identifiable Information (PII) detection, and many others. It lets you access all Azure AI services, including Language, Speech and Vision, etc., with one single resource, which makes it easier to integrate the AI capabilities from across Azure AI. In the next few months, we will also support the customization capabilities in Azure AI Language in Azure AI Studio.
We are excited to introduce Azure AI Language in Azure AI Studio with two new playgrounds for you to try out: Summarization and Personally Identifiable Information detection. Both help infuse generative AI into your solutions. In Azure AI Studio, you have more options to try out and explore how to use them effectively for your needs.
Prompt flow in Azure AI Studio is a development tool designed to streamline the entire development cycle of AI applications. We are happy to announce that Language’s prompt flow tooling is now available in Azure AI prompt flow gallery. With that, you can explore and use various natural language processing features from Azure AI Language in prompt flow. You can quickly start to make use of Azure AI Language, reduce your time to value, and deploy solutions with reliable evaluation.
What’s new in prebuilt features in Azure AI Language service
Azure AI Language’s prebuilt capabilities enable customers to get up and running quickly without the need for model training. These prebuilt services are designed to accelerate time-to-value through pretrained models optimized for specific Language AI tasks, including Personally Identifiable Information (PII) detection, Named Entity Recognition (NER), Summarization, Text Analytics for Health, Language Detection, Key Phrase Extraction, and Sentiment Analysis and opinion mining.
As we learned that many customers want to use Language AI to derive insights from native documents like Word docs and PDFs, minimizing time spent and eliminating the need for data preprocessing, we have recently released a public preview of native document support for the PII detection and Summarization services. More file formats and capabilities will be added to the feature on the path to GA.
Here is more information regarding what’s new in Azure AI Language’s prebuilt features:
2.1. General availability of Conversational PII
Azure AI Language’s PII service can help detect and protect an individual’s identity and privacy in both generative and non-generative AI applications, which is critical for highly regulated industries such as financial services, healthcare, and government. The PII service also supports Protected Health Information (PHI) and Payment Card Industry (PCI) data, and it is available in 79 languages, covering around 30 general entity categories and more than 90 region-specific entity categories. By enabling users to identify, categorize, and redact sensitive information directly from complex text files and native documents in .pdf, .docx, and .txt formats, the PII service helps our customers adhere to the highest standards of data privacy, security, and compliance with a single API call.
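The detect–categorize–redact flow can be pictured with a minimal local sketch. This is not the Azure AI Language PII service (which uses trained models across dozens of entity categories); the regex patterns and category names below are hypothetical simplifications for illustration only:

```python
import re

# Hypothetical stand-in patterns for two entity categories; the real
# service recognizes far more categories via trained models.
PII_PATTERNS = {
    "Email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PhoneNumber": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Return the redacted text plus the categories that were found."""
    found = []
    for category, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(category)
            # Replace each match with its category label, as the
            # service does when redaction is requested.
            text = pattern.sub(f"[{category}]", text)
    return text, found

redacted, categories = redact_pii("Reach me at jane@contoso.com or 555-123-4567.")
```

The key point the sketch mirrors is that detection and redaction happen in one pass over the input, which is why a single API call suffices in the managed service.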
Today, we are excited to announce the general availability of conversational PII redaction in English-language contexts, further supporting customers who need to recognize and redact sensitive information in conversations, now including speech transcriptions from meetings and calls, across 6 conversation-specific entity categories. Customers can redact transcripts, chats, and other text written in a conversational style (i.e. text with “um”s and “ah”s, multiple speakers, sensitive information in incomplete sentences, and words spelled out letter by letter for clarity) with greater confidence in AI quality, backed by an Azure SLA, production environment support, and enterprise-grade security.
Conversational PII will be available starting in late June. Please see here for the full list of supported languages for the PII service and here for the supported recognized PII entities for conversations.
2.2. Enhanced address recognition for UK contexts with NER model updates
We are excited to share an updated NER model with improved AI quality and accuracy for both NER and PII detection. This model update largely benefits location entities (e.g. addresses), finance entities (e.g. bank account numbers), and single-letter spell-outs, where a speaker in a transcript may be spelling out a relevant entity (e.g. “M. I. CRO. S. O. F. and T”); the new model shows improved F1 scores and fewer false positive recognitions. The updated model will be available starting in late June.
2.3. General availability of Recap summary for conversations in Summarization
Azure AI Language’s Summarization service enables users to extract key points from textual content and produce a comprehensive summary of documents or conversations. The service is powered by an ensemble of two sophisticated natural language models: one specifically trained for text extraction, and a fine-tuned GPT model further optimized for text summarization without the need for prompt engineering. In addition, the Summarization service comes with a built-in hallucination detection capability.
We appreciate customers’ enthusiasm for Azure AI Language’s Summarization service since we announced its general availability last year. Document abstractive summarization and Conversation summarization capabilities are currently available in 6 regions and 11 languages, whereas Custom Summarization is available in East US in English. Please see the Summarization region support article for the full list of supported regions, and the Summarization language support article for supported languages.
Today, we are excited to announce the general availability of Recap summary for conversations in the Azure AI Language service. Recap compresses a long conversation into one short paragraph that captures the key information, and it has been highly praised by preview customers, particularly high-volume call centers. Check out our product documentation to learn more about the key features of conversation summarization.
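To make the idea of a recap concrete, here is a deliberately tiny, local illustration of conversation summarization. The actual service is abstractive (a fine-tuned GPT model writes a new paragraph); this sketch instead uses a simple extractive heuristic, keeping the turn whose words occur most frequently across the conversation, purely to show the shape of the task:

```python
from collections import Counter

def recap(turns: list[str], max_sentences: int = 1) -> str:
    """Keep the highest-scoring turn(s), in original order.
    A turn's score is the summed corpus frequency of its words."""
    norm = lambda w: w.lower().strip(".,!?")
    freq = Counter(norm(w) for t in turns for w in t.split())
    scored = sorted(
        range(len(turns)),
        key=lambda i: -sum(freq[norm(w)] for w in turns[i].split()),
    )
    keep = sorted(scored[:max_sentences])
    return " ".join(turns[i] for i in keep)

turns = [
    "Agent: Thanks for calling, how can I help?",
    "Customer: My internet contract renewal price went up.",
    "Agent: I can apply a loyalty discount to your contract renewal today.",
]
summary = recap(turns)
```

A real recap would paraphrase rather than quote, but the goal is the same: collapse many turns into the few pieces of information a reviewer actually needs.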
What’s new in custom features in Azure AI Language service
Azure AI Language’s custom capabilities empower customers to customize multilingual machine learning models for their specific use case from a few labeled examples. These custom services include, but are not limited to, Custom Text Classification, Custom Named Entity Recognition (NER), and Conversational Language Understanding (CLU). Powered by state-of-the-art transformer models, Azure AI Language’s custom multilingual models can be trained in one language and used in many others. In addition to the custom features in the Azure AI Language service, this advanced low-touch customization capability now also powers Azure AI Content Safety’s Custom Category feature for custom content moderation.
As part of custom services in Azure AI Language, Conversational Language Understanding (CLU) enables reliable conversational AI experience with intent identification and entity extraction. Today, we are excited to announce three new features in CLU as follows:
Enhanced support for CLU applications to automate training data augmentation for diacritics
Today, we are introducing a suite of improvements to increase the AI quality of your CLU apps. Many customers already enjoy our training configuration that allows them to train in one language and use the app in 100+ languages. Since many customers around the world use English keyboards to type in Germanic and Slavic languages, utterances without diacritic characters can be harder to classify into the correct intent. Because of this, we’re excited to announce a new feature that automates training data augmentation for diacritics. When this setting is enabled in your CLU project, CLU will automatically augment your training dataset to reduce the model’s sensitivity to diacritic characters.
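The augmentation idea can be sketched in a few lines: for each training utterance that contains diacritics, also add a diacritic-free copy, so the model sees both spellings. This is an illustrative approximation of what such a feature might do, not the service's actual implementation:

```python
import unicodedata

def strip_diacritics(text: str) -> str:
    """Decompose characters (NFD) and drop the combining marks,
    e.g. turning 'ü' into 'u'."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

def augment_for_diacritics(utterances: list[str]) -> list[str]:
    """Append a diacritic-free copy of every utterance that has diacritics."""
    augmented = list(utterances)
    for u in utterances:
        stripped = strip_diacritics(u)
        if stripped != u:
            augmented.append(stripped)
    return augmented
```

With this augmentation, an intent trained on "schläft" is also exercised on "schlaft", which is what a user typing German on an English keyboard would produce.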
Derive more insights from additional granular entities in CLU applications
Many of our customers enjoy the ease of leveraging prebuilt entity recognition, like location, in their custom models. However, it can be helpful to know even more about an entity phrase. We are excited to introduce more granular entities in CLU. So, for an utterance containing “New York”, you can now recognize not just a location, but also additional details such as city or state. Check out the CLU supported prebuilt entity components for a full list of supported prebuilt entities.
Improved CLU training configuration to address CLU model scoring inconsistencies
We have released a new CLU training configuration that is designed to address scoring inconsistencies, especially related to managing confidence scores and ‘None’ intent classification for off-topic utterances. We are excited to see how this new training configuration (available in 2024-06-01-preview via REST API) improves your model’s performance.
Availability of CLU authoring service in Azure US Government cloud
As our government and defense customers expand their use of conversational AI, the need for Azure AI in government-compliant clouds has grown, so we are announcing that CLU authoring service is now available in the Azure US Government cloud. This means that you can build, manage, and deploy your custom CLU models for government use cases with the same ease and functionality as in the public cloud.
We are looking forward to seeing how these new CLU capabilities will provide you with more flexibility and control, as you develop conversational AI solutions in your enterprise.
Summary
We look forward to seeing our customers use these capabilities to enhance productivity, summarize insights, protect data privacy, and build intelligent chat experiences grounded in natural language content. As always, the Azure AI Language team remains committed to delivering innovative solutions that enable our customers to achieve their goals. We welcome your feedback as we continuously improve and evolve our services with state-of-the-art AI models to offer the best managed and compliant natural language processing capabilities in the Azure AI Language service.
Learn more about Azure AI Language in the following resources:
Azure AI Language homepage: https://aka.ms/azure-language
Azure AI Language product documentation: https://aka.ms/language-docs
Azure AI Language product demo videos: https://aka.ms/language-videos
Explore Azure AI Language in Azure AI Studio: https://aka.ms/AzureAiLanguage
Prompt flow in Azure AI Studio: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow
Native document support for PII and Summarization: https://aka.ms/language-native-docs-support
Conversational PII detection: https://aka.ms/conversational-pii
Summarization overview: https://aka.ms/summarization-docs
Conversational Language Understanding overview: https://aka.ms/language-clu
Microsoft Tech Community – Latest Blogs – Read More
Developing AI-enhanced apps of the future with Microsoft’s adaptive cloud approach
As our annual Build conference is about to kick off this week, I’m thrilled to share several product announcements to empower developers to take advantage of Azure’s adaptive cloud approach: Edge Storage Accelerator public preview, Azure Monitor pipeline public preview, Secrets Sync Controller private preview, Jumpstart Agora for Manufacturing general availability, Jumpstart Drops public preview, Visual Studio Code Extension public preview.
There has never been a more exciting time to be an application developer. With cloud-native practices and hyperscale cloud services increasingly available at the edge, developers can access data, build for environments, and extend to use cases previously unavailable to them. At the same time, AI advances are driving efficiency into the application development process and enabling the creation of innovative industry solutions.
However, to take advantage of this progress, developers and adjacent teams need to manage the challenges stemming from legacy systems, heterogeneous environments, fragmented data and lack of standardization. The need for a unified platform and system to achieve this potential and overcome these obstacles becomes increasingly evident. We believe Azure is the platform that can help, and we have been investing in Azure Arc to solve these problems. We see an opportunity to do more by bringing together agility and intelligence so that our customers can proactively adapt to change. This is what we refer to as our adaptive cloud approach.
This approach has enabled customers like US-based DICK’S Sporting Goods to re-imagine its customer experience and implement a “one store” strategy where they can write, deploy, manage and monitor software across all 800+ locations nationwide. Similarly, Coles, an Australian supermarket retailer, has embraced AI-driven solutions for inventory management, personalized shopping experiences, loss prevention and more.
“Win-win solutions are those where we are helping our team members and our customers at the same time. Our technological investments into operational efficiency have translated into real, tangible benefits for our shoppers.”
– Silvio Giorgio, GM of Data & Intelligence at Coles Group
The AI-infused developer opportunity
One of the key principles of our adaptive cloud approach is Kubernetes everywhere: providing the same scalability and agility developers expect from their cloud solutions when they build for the edge. Azure Arc, our solution for consistent multi-cloud and on-premises management, works with any CNCF-certified Kubernetes cluster, including our first-party Azure Kubernetes Service, to enable application developers to build and run software seamlessly across cloud and edge. As a result, developers can focus on the application itself instead of worrying about where and how it will run across their company’s physical footprint.
The starting point for developers building distributed applications is the same toolset they use today, powered by recent releases and improvements. GitHub Actions gives developers the ability to automate, customize, and execute their software development workflows in their GitHub repositories. GitHub Copilot further speeds the development of edge solutions with coding suggestions, help solving problems, and more.
These tools, combined with Flux and Azure Container Registry, complete the GitOps workflow for consistent and efficient application rollouts across cloud to edge environments.
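As a rough sketch of how such a GitOps rollout is wired up, the Flux resources below tell the cluster to watch a Git repository and continuously reconcile the manifests found there. The repository URL, names, and path are placeholders, not real resources:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: edge-app
  namespace: flux-system
spec:
  interval: 5m            # how often Flux polls the repo
  url: https://github.com/contoso/edge-app   # placeholder repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: edge-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: edge-app
  path: ./deploy          # manifests to apply from the repo
  prune: true             # remove resources deleted from Git
```

Applied to every Arc-enabled cluster in a fleet, the same pair of resources yields consistent rollouts: merging to `main` becomes the deployment action, and each cluster pulls and converges on the new state on its own schedule.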
Distributing software updates via GitOps
DevOps and beyond
There is, however, a lot more to building and scaling applications across boundaries than Arc-enabled Kubernetes and GitOps workflows can deliver alone. DevOps teams need to create pipelines for deployment, testing, and monitoring applications. They want to manage network connectivity, automate application security, deploy and manage infrastructure as code (IaC) components and maintain the overall container orchestration layer.
To support these requirements, we are building a robust set of foundational services that will be available natively and fully supported via Azure Arc. Once you onboard to Azure Arc, these services become available on your clusters for applications to take dependencies on and use. Among these foundational services, we have recently announced the release of Edge Storage Accelerator and Secrets Sync Controller (details below), with more announcements coming soon.
Foundational Services
Solution orchestration for the edge
The environments that edge applications operate in are heterogeneous and diverse, which creates challenges, such as the lack of a single programming interface (API), for developers and engineers trying to stitch together a larger solution (a factory solution, a software-defined vehicle, etc.). To help solve this, Microsoft is investing in the Eclipse Foundation Symphony project. Symphony is a platform-independent “orchestrator of orchestrators” engine that lets solution providers declare a single deployment manifest for various endpoint deployments. Symphony ingests the deployment manifest, drives the underlying orchestration platforms, such as Kubernetes, Linux shell, and Windows, and reports back whether the deployment was successful. We welcome ecosystem contributions to this project.
Getting the most out of the Adaptive Cloud Ecosystem
While many of our customers decide to develop edge applications themselves, many if not all also purchase solutions from third parties. The specific types of applications differ by industry but there are two key partner types that play a major role in customer edge solutions.
Independent Software Vendors (ISVs)
ISVs play a critical role in providing third-party edge solutions for customers. To ensure that an ISV’s solution can run on Arc-enabled Kubernetes, we have created the Azure Arc ISV partner program, a technical validation of the partner’s solution on the platform. Isovalent, HashiCorp, and Intel are examples of partners that have completed the program.
ISVs can also publish their containerized applications on the Azure Marketplace as Kubernetes apps for deployment on Arc-enabled Kubernetes clusters. Kubernetes apps provide flexible billing options that enable ISVs to charge customers through the Azure Marketplace.
System Integration (SI) partners
For custom solution development, or simply for help deploying an application developed in-house, customers typically engage an SI. We work with an active ecosystem of SIs versed in modern application development, deployment, and management practices. Partners like Avanade and MaibornWolff are good examples of SIs making an impact for customers with Kubernetes-based application development and deployment at the edge.
“For us, the easy deployment and monitoring of ML models from Azure ML in Kubernetes clusters at the edge is THE game-changing feature of Azure Arc – alongside the ability to use Azure IoT Operations. Both capabilities are essential when we build hybrid cloud smart factory platforms based on Azure technologies.”
– Marc Jäckle, Technical Head of IoT at MaibornWolff
“Azure Arc has enabled us to bring Cloud native services to the Edge of our client’s Industrial solutions without increasing the complexity and effort to manage this fleet of devices that are used to control the shop floor in digital operations scenarios. Having a Standards based execution environment like Kubernetes available to run custom workloads at the Edge or in the Cloud is a big benefit for our customers. Azure and especially Azure Arc fully support these deployments.”
-Juergen Mayrbaeurl, Senior Director at Avanade
Announcements
Ways to help build resilient, observable and secure applications at the edge
Edge Storage Accelerator public preview – At the edge, Kubernetes storage capabilities vary in durability, persistence, and performance, posing a challenge for customers seeking reliable solutions. To address this, we recently introduced Edge Storage Accelerator (ESA), a storage system designed for Arc-connected Kubernetes clusters. ESA offers fault-tolerant, highly available cloud-native persistent storage, empowering customers to confidently host stateful applications, custom apps, and other Arc extensions. Through standard Kubernetes APIs, users can attach containerized applications to file data stored in Azure Blob Storage, leveraging its virtually limitless cloud capacity for edge applications. ESA’s flexible deployment options, simplified connection via a Container Storage Interface (CSI) driver, and platform neutrality transform edge storage, alleviating customer pain points and enabling seamless operations at the edge.
Azure Monitor pipeline public preview – As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. We are extending the Azure Monitor pipeline to the edge so customers can collect telemetry at scale from their edge environments and route it to Azure Monitor for observability. With the Azure Monitor pipeline at the edge, customers can collect telemetry from resources in segmented networks that do not have a line of sight to the cloud. Additionally, the pipeline prevents data loss by caching telemetry locally during intermittent connectivity and backfilling it to the cloud, improving reliability and resiliency.
Secret Sync Controller private preview – Customers want the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the new Secret Synchronization Controller for Kubernetes automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways—mounted as data volumes or exposed as environment variables to a container in a pod.
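Once the controller has materialized a Key Vault secret as a native Kubernetes Secret, applications consume it exactly as the paragraph above describes. The sketch below assumes a synchronized secret hypothetically named `db-creds`; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: edge-worker
spec:
  containers:
    - name: app
      image: contoso/edge-worker:1.0   # placeholder image
      env:
        # Exposed as an environment variable...
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-creds
              key: password
      volumeMounts:
        # ...and/or mounted as a data volume.
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-creds
```

Because the secret lives in the cluster secret store after synchronization, this pod keeps working even when the cluster is temporarily disconnected from Azure, while rotation still happens centrally in Key Vault.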
Exciting ways to engage and get started with Jumpstart and VSCode
Jumpstart Agora for Manufacturing general availability – Customers want interactive test environments that cover real industry scenarios to learn what Azure Arc and other Azure technologies can help them accomplish for their business. Jumpstart Agora for Manufacturing is a set of comprehensive cloud-to-edge scenarios brought to life through the story of Contoso Motors and its solutions for digital innovation and employee safety. Users will learn how to deploy and interact with the technology behind Contoso Motors’ quality optimization, AI hazard detection, defect detection, and IT/OT observability and control solutions. https://aka.ms/JumpstartAgoraMotorsBlog
Jumpstart Drops public preview – Azure Arc Jumpstart contributors want a unified, accessible and shareable repository for scripts, sample apps, libraries, dashboards, automations or comprehensive tutorials useful in the testing and deployment of Azure Arc-enabled solutions. Jumpstart Drops is a new page on the Jumpstart website that enables users to search for and use pre-built code and artifacts of all types. Users can filter their search by scenarios (Edge/Cloud), tools/languages, tags, code owner and more. Jumpstart Drops also includes a defined template for making contributions and giving back to the community. Embracing an open-source ethos, all contributions are licensed under MIT License. So, dive in, explore the collection of amazing Drops already available, and join us and the community as we share knowledge. https://aka.ms/JumpstartDropsBlog
Visual Studio Code extension public preview – Developers want a single pane of glass and workbench for the entire developer workflow for Arc-enabled applications. We released an Arc Visual Studio Code extension in public preview for Arc and AKS, which includes sample code to access these services, a local environment to test and debug them, and an environment in the cloud to test at larger scale. The extension provides a one-stop shop for developers and helps accelerate development both for workloads that will run on the edge and for those that will be published on the Azure Marketplace.
Together these resources offer the perfect starting point to learn about industry-specific adaptive cloud approach solutions, find code snippets or contribute to the Jumpstart Drops repository and get started with edge application development. To learn more about these and other exciting offerings that support our adaptive cloud approach please join us in-person or virtually at Microsoft Build.
Here is a list of our sessions. You can also find us on the 5th floor of the convention center at the adaptive cloud approach and community demo stations (within the Expert Meet-Up area).
Breakout session BRK126 | Adaptive cloud approach: Build and scale apps from cloud to edge
Breakout session BRKFP292 | AI Everywhere – Accelerate your development from edge to cloud
Breakout session BRK127 | Azure Monitor: Observability from Code to Cloud
Demo session DEM172 | Next-gen monitoring on Azure
Lab | Taking Azure Kubernetes out of the cloud and into your world (Tuesday/Wednesday/Thursday)
On-demand session OD545 | What’s new in Azure Monitor?
On-demand session OD540 | Improve Application Resilience Using Azure Chaos Studio
To read more about Azure’s adaptive cloud approach here are some of our latest blogs:
Advancing hybrid cloud to adaptive cloud with Azure | Microsoft Azure Blog
Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Hannover Messe 2024: Scaling Industrial Transformation with Azure’s Adaptive Cloud Approach – Microsoft Community Hub