Category: Microsoft
Category Archives: Microsoft
Syncing Project to Planner
I’d like to use a PowerAutomate flow to sync my MS Project Online to MS Planner (it would be amazing if I could sync so that the tasks are copied to the same buckets that I defined in Project Online and in Planner too). I haven’t been able to find a flow that will help me achieve this Project Online to Planner synchronization. Does anyone have suggestions?
I’d like to use a PowerAutomate flow to sync my MS Project Online to MS Planner (it would be amazing if I could sync so that the tasks are copied to the same buckets that I defined in Project Online and in Planner too). I haven’t been able to find a flow that will help me achieve this Project Online to Planner synchronization. Does anyone have suggestions? Read More
Top Stories: May 21, 2024
Take a look!
English Top Stories: May 21, 2024 | Microsoft
Français À la une : 21 mai 2024 | Microsoft
Español Novedades más relevantes: 21 de mayo de 2024 | Microsoft
Português Blog de parceiro das Américas | Microsoft
Take a look!
English Top Stories: May 21, 2024 | Microsoft
Français À la une : 21 mai 2024 | Microsoft
Español Novedades más relevantes: 21 de mayo de 2024 | Microsoft
Português Blog de parceiro das Américas | Microsoft Read More
steps to transfer 365 accounts with email from service provider to self manage
We have our 365 accounts managed by a service provider. we want to stop using their services so we need to transfer administration of our accounts to self administor. where can I find instructions so I can make sure we don’t have any loss of data. The microsoft account subscriptions are through the service provider so we need to purchase our own subscriptions for email and office (desktop). I know how to self manage 365 accounts, but I want to make sure with the transfer we don’t lose our email data with the transfer from their subscriptions to our own.
We have our 365 accounts managed by a service provider. we want to stop using their services so we need to transfer administration of our accounts to self administor. where can I find instructions so I can make sure we don’t have any loss of data. The microsoft account subscriptions are through the service provider so we need to purchase our own subscriptions for email and office (desktop). I know how to self manage 365 accounts, but I want to make sure with the transfer we don’t lose our email data with the transfer from their subscriptions to our own. Read More
Announcing key updates to Responsible AI features and content filters in Azure OpenAI Service
We’re excited to announce the release of new Responsible AI features and content filter improvements in Azure OpenAI Service (AOAI) and AI Studio, spanning from new unified content filters, to customizable content filters for DALL-E and GPT-4 Turbo Vision deployments, safety system message templates in the AOAI Studio, asynchronous filters now available for all AOAI customers, and updates to protected material and image generation features.
Unified content filters
We are excited to announce that a new unified content filter experience is coming soon to Azure AI. This update will streamline the process of setting up content filters across different deployments and various products such as Azure AI Studio, AOAI, and Azure AI Content Safety for a more uniform user experience. Content filters enable users to effectively block harmful content, whether it’s text, images, or multimodal forms. With this unified approach, users have the flexibility to establish a content filtering policy tailored to their particular needs and scenarios.
Configurable content filters for DALL-E and GPT-4 Turbo with Vision GA
The integrated content filtering system in AOAI provides Azure AI Content Safety content filters by default, and they detect and the output of harmful content. Furthermore, we also provide a range of different content safety customization options for the AOAI GPT model series. Today, we are releasing configurable content filters for DALL-E 2 and 3, and GPT-4 Turbo Vision GA deployments, enabling content filter customization based on specific use case needs. Customers can configure input and output filters, adjust severity levels for the content harms categories and add additional applicable RAI models and capabilities such as Prompt Shields and custom blocklists. Customers who have been approved for modified content filters can turn the content filters off or use annotate mode to return annotations via API response, without blocking content. Learn more.
Asynchronous Filters
In addition to the default streaming experience in AOAI – where completions are vetted before they are returned to the user, or blocked in case of a policy violation – we’re excited to announce that all customers now have access to the Asynchronous Filter feature. Content filters are run asynchronously, and completion content is returned immediately with a smooth and fast token-by-token streaming experience. No content is buffered, which allows for a faster streaming experience at zero latency associated with content safety. Customers must be aware that while the feature improves latency, it’s a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user. Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage. Read more about Asynchronous Filter and how to enable it.
Safety System Messages
System messages for generative AI models are an effective strategy for additional AI content safety. The AOAI Studio and AI Studio are now supporting safety system message templates directly in the playground that can be quickly tested and deployed, covering a range of different safety related topics such as preventing harmful content, jailbreak attempts, as well as grounding instructions. Learn more.
Protected Materials
Protections for Azure OpenAI GPT-based models
In November 2023, Microsoft announced the release of Protected Material Detection for Text in AOAI and Azure AI Content Safety. Soon, this model will upgrade to version 2.0 and identifies content that highly resembles pre-existing content. This update also prevents attempts to subvert the filter by asking for known modifications of the original text, e.g. the original text with repeated characters or more whitespace. Soon, the Protected Material Detection for Code model version 2.0 will update its attribution feature to flag 2023 public GitHub repository code from flagging 2021 repository code.
Updated Features in Azure OpenAI Service DALL-E
AOAI now prevents DALL-E from generating works that closely resemble certain types of known creative content, such as studio characters and contemporary artwork. It does this by re-interpreting the text prompt to DALL-E, removing keywords or phrases associated with creative content categories. Below are examples showing image outputs before and after the modification is applied. Please note that the DALL-E model is non-deterministic and so is likely not going to generate the same image with the same prompt each time.
New Responsible AI features in Azure AI Content Safety & Azure AI Studio
Custom Categories
This week at Build 2024 we also previewed other important features for responsible AI, one of which will be coming soon to Azure OpenAI Service: Custom Categories. Learn more about Custom Categories.
Get started today
Visit Azure OpenAI Service Studio: oai.azure.com
Visit Azure AI Studio: ai.azure.com
Visit Azure AI Content Safety Studio: aka.ms/contentsafetystudio
Microsoft Tech Community – Latest Blogs –Read More
Gen AI simplified: The azure_ai extension now generally available on Azure Database for PostgreSQL
We are thrilled to announce the general availability of the azure_ai extension on Azure Database for PostgreSQL. The azure_ai extension allows developers to seamlessly integrate Azure AI services from within their database using SQL queries. In conjunction with vector data this simplifies building Gen AI applications on Azure Database for PostgreSQL.
Features and Capabilities
With the azure_ai extension, you can now access Azure OpenAI, Azure AI Language services, Azure Translation and Azure Machine learning services with simple function calls from within SQL.
The azure_ai extension enables
Generation of embeddings with embedding models of creating embeddings with dimensions ranging from 384 to 3072. Embeddings can be generated as a scalar single embedding or as a batch for a set of them. Along with native vector data type using vector extension, embeddings can be generated as data is inserted or updated.
Calling into Azure AI Language services to perform summarization, sentiment analysis , Key phrase extraction or PII detection on your data.
Real-time text translation within your database with Azure AI translator simplifies building multi-lingual applications.
Real-time predictions enable many scenarios such as fraud detection, product recommendations, predictive maintenance or predictive healthcare. You can invoke custom trained models or pre-trained models from Azure Machine learning catalog that are hosted on online endpoints. Online inferencing endpoints are a highly scalable way to operationalize models for real-time low latency requests with features such as auto-scale and rich monitoring and debugging support.
Getting Started
To learn more about the azure_ai extension, and how it simplifies building GenAI applications on, visit our documentation below:
Azure AI Extension.
Azure AI Language Services integration
Azure AI Text Translation
Azure AI real-time machine learning scoring.
Vectors on Azure Database for PostgreSQL
Generative AI Overview
To learn even more about our Flexible Server managed service, see the Azure Database for PostgreSQL Flexible Server.
You can always find the latest features added to Flexible server in this release notes page.
Microsoft Tech Community – Latest Blogs –Read More
Build 2024: Unveiling performance and AI innovations in Azure Database for MySQL
Today, we’re thrilled to announce a suite of new features for Azure Database for MySQL that focus on performance enhancements, enterprise capabilities, and cutting-edge AI functionality designed to revolutionize your database management experience and efficiency. Read on to see how these innovations can elevate your workflows!
Microsoft Copilot in Azure: Unlock the benefits of Azure Database for MySQL with your AI companion (Public Preview)
We’re excited to announce Microsoft Copilot in Azure extends capabilities to Azure Database for MySQL. Microsoft Copilot in Azure is an AI-powered tool that leverages Large Language Models (LLMs) and the Azure control plane to help you get answers to your general questions and receive high quality recommendations to real-time problems. With this new integration with Azure Database for MySQL, you can converse with Microsoft Copilot in Azure to discover new features, determine when to enable new features to supplement your own scenarios, learn from summarized tutorials to enable features or build applications, and obtain tips and best practice recommendations to avoid issues.
Learn more: Documentation | Announcement blog with demo video coming soon!
Build RAG applications with Azure OpenAI and MySQL with Azure AI Search
We’re excited to announce that you can now create Retrieval-Augmented Generation (RAG) applications using Azure OpenAI and Azure Database for MySQL with Azure AI Search.
You can combine the smart, human-like responses of Azure OpenAI with MySQL’s powerful database management and Azure AI Search’s advanced search capabilities, making it easier to build apps that deliver relevant info quickly and efficiently. If you’re running applications (content management systems (CMS), e-commerce applications, or gaming sites) with data hosted in Azure Database for MySQL, enhance your user experience by building generative AI search and chat applications using LLMs available in Azure OpenAI and vector storage and indexing provided by Azure AI Search. Unleash the power of your data hosted on MySQL with the simple and seamless AI integrations on Azure!
Learn more: Demo video and sample architecture coming soon! | RAG in Azure AI Search documentation
Advancements in Azure Database for MySQL – Business Critical service tier (General Availability)
Achieve a 2x increase in throughput using Accelerated Logs (General Availability): We’re excited to announce the General Availability of Accelerated Logs, a feature that significantly boosts performance for write heavy workloads, offering up to a 2x improvement in throughput, out of the box, with no additional cost or application changes required. By reducing latency and enhancing data access speeds, the Accelerated Logs feature ensures that your mission-critical applications run more efficiently and smoothly on the Business Critical service tier. Try out this new feature to experience the difference in your workload performance!
Expand storage up to 32TB (General Availability) for your workloads using the Business Critical service tier. With storage auto-grow up to 32TB and auto-scale IOPs up to 80K, you can now run your large, growing mission-critical workloads worry-free on Azure!
Learn more: Documentation | Announcement blog with demo video coming soon!
Enhance data redundancy, availability, and auditing capabilities with on-demand backup and export (Public Preview)
With Public Preview of the on-demand backup and export feature, you can now easily export a physical backup of your MySQL flexible server to an Azure storage account (Azure blob storage) with just a few clicks on the Azure portal or with a single CLI command whenever you want. After exporting backups to blob storage, you can use them for multiple purposes, including:
Data recovery, redundancy, and availability. In addition to the automated backups managed by the service, you can export backups on-demand and use them for data recovery. In case of data corruption, accidental deletion, or hardware failure, simply restore the server to its previous state using this copy of your data.
Auditing. You can use exported physical backup files to restore on-premises MySQL servers to address the auditing, compliance, and archival requirements of an organization.
Compliance. Regulated industries must be able to export any data hosted by a cloud provider.
Avoid vendor lock in. Thake advantage of this solution to export data from MySQL flexible server or avoid vendor lock-in.
Learn more: Documentation | Announcement blog with demo video coming soon!
Simplify security management with Microsoft Defender for Cloud support (General Availability)
Last month, we announced the general availability of Microsoft Defender for Cloud support for Azure Database for MySQL – Flexible Server. The Defender for Cloud Advanced Threat Protection (ATP) feature simplifies security management of your MySQL flexible server by enabling effortless threat prevention, detection, and mitigation through increased visibility into and control over harmful events.
With the Defender for Cloud ATP feature, there’s no need to be a security expert to safeguard your MySQL flexible server against today’s growing threat landscape. ATP uses integrated security monitoring to detect anomalous database access and query patterns, as well as suspicious database activities, and provides targeted security recommendations and alerts.
Learn more: Demo video | Announcement blog
Conclusion
With the release of these capabilities, Azure Database for MySQL continues to be an industry leader for hosting your mission-critical applications on the cloud, offering top-tier performance for your workloads, enterprise capabilities and scale, enhanced monitoring, and robust backup and restore capabilities. The service seamlessly integrates with cutting-edge AI technologies through OpenAI, Azure Copilot, and Azure AI Search to deliver advanced functionalities and insights. Security is paramount, and with Microsoft Defender, your applications are protected by Microsoft’s expertise in cybersecurity, ensuring peace of mind against increasingly sophisticated threats. Azure Database for MySQL combines performance, innovation, and security to support your most demanding applications, while remaining on the open-source community MySQL version to protect against lock-ins.
To learn more about what’s new with Flexible Server, see What’s new in Azure Database for MySQL – Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X.
If you have any suggestions for or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. Thank you!
Microsoft Tech Community – Latest Blogs –Read More
Live at Build: Microsoft Learn releases new AI skill-building resources
Microsoft Learn is excited to be at Microsoft Build again this year with a fantastic new onsite presence and to share announcements about new resources to support AI skill-building.
When it comes to AI, having the right resources to develop critical new skills can be a game changer, whether you’re managing your organization’s training needs or advancing your own career. The 2024 Work Trend Index Annual report from Microsoft and LinkedIn suggests a massive opportunity for those willing to skill up in AI—66% of leaders say they wouldn’t hire someone without AI skills.
If you’re a developer learning to build AI-powered solutions, a team lead looking to skill up a team, or a leader looking to understand the benefits that Microsoft Copilot can bring to your organization, Microsoft Learn has something for you. That’s why we’re thrilled to announce the new AI skill-building resources we’re releasing today at Microsoft Build:
NEW AI Applied Skills releasing in May and June.
NEW Plans for AI skill-building.
NEW Copilot learning hub.
Additionally, I’m pleased to introduce two new AI skill-building offerings designed for non-technical roles:
NEW Copilot for Microsoft 365 training sessions for business users.
COMING SOON AI instructor-led training for business leaders.
Read on for more details about these exciting announcements.
Growing the Microsoft Applied Skills for AI portfolio
We developed Microsoft Applied Skills, new verifiable credentials that validate specific real-world skills, to help you address your skills gaps and empower you with the in-demand expertise you need. The positive feedback we’re receiving about the great value these credentials offer to individuals and organizations motivates us to keep expanding the portfolio.
During May and June we’re releasing new Applied Skills credentials to support developers who build AI and cloud solutions, including:
Develop AI agents using Microsoft Azure OpenAI and Semantic Kernel
Implement a data science and machine learning solution with Microsoft Fabric
Implement a Real-Time Intelligence solution with Microsoft Fabric
We’re also releasing new credentials for key cloud scenarios relevant to IT professionals:
Administer Active Directory Domain Services
Deploy and manage Microsoft Azure Arc–enabled servers
Explore Microsoft Applied Skills
The current portfolio of Microsoft Credentials includes over 20 Microsoft Applied Skills and close to 50 industry-recognized Microsoft Certifications, providing you with verifiable skill sets aligned with AI and cloud job roles and projects. Learn more about Microsoft Credentials.
Stay focused on your AI skill-building with new Plans
To stay current with today’s job skills, it’s important to have the right training content. Organizational team leaders and trainers must have the ability to customize and share this content, encourage their learners to stay on track, and monitor learning progress.
Today we’re introducing new AI skill-building Plans on Microsoft Learn, designed to meet all these objectives and more. Plans help learners, teams, and organizations accelerate the achievement of their learning goals using curated sets of structured content combined with milestones and automated nudges to keep learners focused and motivated. Get all the details about Plans in our recent blog post Introducing Plans on Microsoft Learn.
Find our new AI Plans on the AI learning hub on Microsoft Learn:
Master the basics of Azure: AI Fundamentals
Microsoft Copilot for Microsoft 365 for executives
Using AI in your everyday work: GitHub Copilot
Learn to create apps and modernize with Azure OpenAI
Check out the new Copilot learning hub
We’re also excited to announce the new Copilot learning hub on Microsoft Learn, the place where technology professionals can find resources—tailored to their job role and career goals—to help them develop the skills to put Microsoft Copilot to work every day.
As a complement to the already existing AI learning hub, this new hub offers tutorials, videos, and documentation covering the basics of Copilot, along with its features, capabilities, prompting techniques, best practices, and troubleshooting tips. The learning hub also showcases real-world examples and use cases of Copilot in different domains and scenarios, including content specific to developers, data and IT professionals, security analysts, and more.
Microsoft Learn is here to support your AI learning goals, whatever they may be. Choose the AI learning hub when looking to gain skills in all Microsoft’s AI apps and services, regardless of your business or technical role. Choose the Copilot learning hub when looking to deepen your technical expertise in Microsoft Copilot.
Visit the Copilot learning hub
New live Microsoft Copilot for Microsoft 365 training sessions for business users
The widespread adoption of AI across organizations requires a new approach to skill-building that focuses on upskilling all staff, from leadership and IT to business users, enabling them to fully leverage their AI investments.
I’m pleased to announce a new series of live Microsoft Copilot for Microsoft 365 training sessions for business users designed to help key roles in your organization learn how to use Microsoft Copilot for Microsoft 365 to unlock productivity. Each session is delivered in less than one hour and is available in multiple languages and time zones.
The training content is tailored to the following roles:
Executives—Learn how Copilot can synthesize communication history in Teams and create speeches and presentations with Word and PowerPoint.
Sales—Learn how Copilot helps with market research, reports, and recommendations. Use it for sales deals, contracts, and more.
IT—Learn how to use Copilot to summarize a product spec document, create a project plan and business presentation, and draft an email with highlights for a network security product.
Marketing—Learn how to use Copilot to analyze market trends, forecast sales, generate campaign ideas, and consolidate reports.
Finance—Learn how to use Copilot to analyze a spreadsheet with projected revenue, create a marketing campaign report, and summarize your company’s financial statement results.
HR—Learn how to use Copilot to create a job description, analyze multiple resumes, create interview questions and a candidate report, and compose an offer letter to a candidate.
Ops—Learn how to use Copilot to brainstorm a project plan, locate and summarize email threads, troubleshoot equipment issues, and create customer discovery questions.
Explore Microsoft Copilot for Microsoft 365 training sessions
Instructor-led training coming soon: Microsoft AI for business leaders
Microsoft Learn is also releasing our latest instructor-led training (ILT) called Microsoft AI for business leaders, which is designed to help business leaders find the knowledge and resources to adopt AI in their organizations. The training explores planning, strategizing, and scaling AI projects in a responsible way, focusing on use cases, tools, and insights from industry-specific AI success stories such as healthcare, finance, sustainability, retail, and manufacturing.
This new AI-focused training will be available in July 2024 through select Training Services Partners (TSP) with the expertise to deliver unique value to business leaders. Authorized TSPs offer a breadth of training solutions including blended learning, in-person, and online to meet your learning objectives.
Stay tuned for more information about this new AI instructor-led training.
Find AI-ready Training Services Partners
Explore AI skill-building with Microsoft Learn
Microsoft Learn is leading the way in bringing the latest AI skilling and credentials to our community of learners. We’ll continue to help you gain the skills you need to achieve more with technology, through interactive training and resources on Microsoft products and services. We look forward to sharing more news and updates in the coming weeks.
Continue your learning journey beyond Build at Microsoft Learn.
Microsoft Tech Community – Latest Blogs –Read More
Announcing Custom Categories in Azure AI Content Safety
We are excited to announce that Custom Categories is coming soon to Azure AI Content Safety. This new feature enables you to create your own customized classifier based on your specific needs for content filtering and AI safety whether you want to detect sensitive content, moderate user-generated content, or comply with local regulations. Use Custom Categories to train and deploy your own custom content filter with ease and flexibility.
Feature Overview
The Azure AI Content Safety custom categories feature is powered by Azure AI Language, a service that provides advanced natural language processing capabilities for text analysis and generation. The custom categories feature is designed to provide a streamlined process for creating, training, and using custom content classification models.
Here’s an overview of the underlying workflow:
Deploy your custom category when you need it
We are offering two deployment options for our customers:
Custom Categories (Standard):
The Standard option for deploying custom categories is aimed at providing a thorough and robust filtering mechanism. It requires a minimum of 50 lines of natural language examples to train the category. This depth of training material ensures that the custom filter is well-equipped to identify and moderate the specified types of content accurately.
Deployment Timeframe: The Standard option is designed with a deployment window of within 24 hours, balancing speed with the need for a comprehensive understanding of the content to be filtered.
Custom Categories (Rapid):
The Rapid option caters to urgent content safety needs, allowing organizations to respond swiftly to emerging threats and incidents. It requires a definition and few natural language examples for deploying the text incident, or few example images for deploying the image incident. This reduced requirement facilitates quicker creation and deployment of custom filters.
Deployment Timeframe: This option emphasizes speed, enabling the deployment of new custom filters around just an hour for text, and few minutes for image. It is particularly useful for addressing immediate and unforeseen content safety challenges.
Both options serve to empower organizations with the capability to protect their AI applications and users more effectively against a wide array of harmful content and security risks, offering a balance between responsiveness and thoroughness based on the specific needs and circumstances.
How to use this feature?
Step 1: Definition and Setup
By creating a custom category, you are telling the AI exactly which types of content you wish to detect and mitigate. You need to create a clear category name and a detailed definition that encapsulates the content’s characteristics. The setup phase is crucial, as it lays the groundwork for the AI to understand your specific filtering needs.
Then, collect a balanced and small dataset with both positive and (optional) negative examples allows the AI to learn the nuances of the category. This data should be representative of the variety of content that the model will encounter in a real-world scenario.
Step 2: Model Training
Once you have your dataset ready, the Azure AI Content Safety service uses it to train a new model. During training, the AI analyzes the data, learning to distinguish between content that matches the custom category and content that does not. Built on top of the underlying technology of LLM-powered low-touch customization from Azure AI Language, we are tailoring the experience for Content Safety customer towards consistency and more focus on content moderation scenario.
Step 3: Model Inferencing
After training, you need to evaluate the model to ensure it meets your accuracy requirements. This is done by testing the model with new content that it hasn’t seen before. The evaluation phase helps you identify any potential adjustments needed before deploying the model into a production environment.
Step 4: Iteration
In the upcoming release of custom categories studio experience, we will introduce a feature that allows users to modify their definition and training samples using suggestions generated by GPT.
Join our customers using Custom Categories
South Australia Department for Education
“The Custom Categories feature from Azure AI Content Safety is set to be a game-changer for the Department for Education in South Australia, and our pioneering AI chatbot, EdChat. This new feature allows us to tailor content moderation to our specific standards, ensuring a safer and more appropriate experience for users. It’s a significant step towards prioritizing the safety and well-being of our students in the digital educational space.”
– Dan Hughes, Chief Information Officer, South Australia Department for Education
Learn more about how South Australia Department for Education is using Azure AI Content Safety
Stay tuned!
Thank you for your support as we continue to enhance our platform. We are excited for you to begin using custom categories. Stay tuned for more updates and announcements on our progress.
In the meantime, we encourage you to visit our Content Safety documentation or studio to explore the existing capabilities available to you. Custom categories is also coming soon to Azure AI Studio and Azure OpenAI Service.
Microsoft Tech Community – Latest Blogs –Read More
Introducing in-database embedding generation for Azure Database for PostgreSQL
Introducing in-database embedding generation for Azure Database for PostgreSQL:
via the azure_local_ai extension to Azure Database for PostgreSQL
We are excited to announce the public preview release of azure_local_ai, a new extension for Azure Database for PostgreSQL that enables you to create text embeddings from a model deployed within the same VM as your PostgreSQL database.
Vector embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. Azure Database for PostgreSQL is proud to be the industry’s that has in-database embedding generation with a text embedding model deployed within the PostgreSQL boundary. can be generated right within the database – offering,
single-digit millisecond latency
predictable costs
confidence that data will remain compliant for confidential workloads
In this release, the extension will deploy a single model, multilingual-e5-small, to your Azure Database for PostgreSQL Flexible Server instance. The first time an embedding is created, the model is loaded into memory. Preview terms for the azure_local_ai extension.
azure_local_ai extension – Preview
Generate embeddings from within the database with a single line of SQL code invoking a UDF.
Harness the power of a text embedding model alongside your operational data without leaving your PostgreSQL database boundary.
During this public preview, the azure_local_ai extension will be available in these Azure regions,
East USA
West USA
West Europe
UK South
France Central
Japan East
Australia East
How does the azure_local_ai extension work?
In-database embedding architecture
ONNX Runtime Configuration
– azure_local_ai supports reviewing the configuration parameters of ONNX Runtime thread-pool within the ONNX Runtime Service. Changes are not currently allowed. See ONNX Runtime performance tuning.
Valid values for the key are:
– intra_op_parallelism: Sets total number of threads used for parallelizing single operator by ONNX Runtime thread-pool. By default, we maximize the number of intra ops threads as much as possible as it improves the overall throughput much (half of the available CPUs by default).
– inter_op_parallelism: Sets total number of threads used for computing multiple operators in parallel by ONNX Runtime thread-pool. By default, we set it to minimum possible thread, which is 1. Increasing it often hurts performance due to frequent context switches between threads.
– spin_control: Switches ONNX Runtime thread-pool’s spinning for requests. When disabled, it uses less cpu and hence causes more latency. By default, it is set to true (enabled).
SELECT azure_local_ai.get_setting(key TEXT);
Generate embeddings
The azure_local_ai extension for Azure Database for PostgreSQL makes it easy to generate an embedding from a simple inline UDF call in your SQL statement passing the model name and the data input to generate the embedding.
— Single embedding
SELECT azure_local_ai.create_embeddings(‘multilingual-e5-small:v1’, ‘Vector embeddings power GenAI applications’);
— Simple array embedding
SELECT azure_local_ai.create_embeddings(‘multilingual-e5-small:v1’, array[‘Recommendation System with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI.’, ‘Generative AI with Azure Database for PostgreSQL – Flexible Server.’]);
Here’s a quick example that demonstrates:
Adding a vector column to a table with a default that generates an embedding and stores it when data is inserted.
Creating an HNSW index.
Completing a semantic search by generating an embedding for a search string and comparing with stored vectors with a cosine similarity search.
–Create docs table
CREATE TABLE docs(doc_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, doc TEXT NOT NULL, last_update TIMESTAMPTZ DEFAULT NOW());
— Add a vector column and generate vector embeddings from locally deployed model
ALTER TABLE docs
ADD COLUMN doc_vector vector(384) — multilingual-e5 embeddings are 384 dimensions
GENERATED ALWAYS AS — Generated on inserts
(azure_local_ai.create_embeddings(‘multilingual-e5-small:v1’, doc)::vector) STORED; — TEXT string sent to local model
— Create a HNSW index
CREATE INDEX ON docs USING hnsw (doc_vector vector_ip_ops);
–Insert data into the docs table
INSERT INTO docs(doc) VALUES (‘Create in-database embeddings with azure_local_ai extension.’),
(‘Enable RAG patterns with in-database embeddings and vectors on Azure Database for PostgreSQL – Flexible server.’), (‘Generate vector embeddings in PostgreSQL with azure_local_ai extension.’),(‘Generate text embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns with azure_local_ai extension and locally deployed LLM.’), (‘Use vector indexes and Azure OpenAI embeddings in PostgreSQL for retrieval augmented generation.’);
— Semantic search using vector similarity match
SELECT doc_id, doc, doc_vector
FROM docs d
ORDER BY
d.doc_vector <#> azure_local_ai.create_embeddings(‘multilingual-e5-small:v1’, ‘Generate text embeddings in PostgreSQL.’)::vector
LIMIT 1;
— Add a single record to the docs table and the vector embedding using azure_local_ai and locally deployed model will be automatically generated
INSERT INTO docs(doc) VALUES (‘Semantic Search with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI’);
–View all doc entries and their doc_vector column. A vector embedding will have been generated for single record added above.
SELECT doc, doc_vector, last_update FROM docs;
Getting Started
To get started, review the azure_local_ai extension documentation, enable the extension and begin creating embeddings from your text data without leaving the Azure Database for PostgreSQL boundary.
azure_local_ai extension overview
Generate vector embeddings with azure_local_ai extension
vector extension
Learn more about vector similarity search using pgvector
Microsoft Tech Community – Latest Blogs –Read More
What’s new in Azure AI Language | BUILD 2024
Introduction
At Azure AI Language, we believe that language is at the core of human and artificial intelligence. As part of Azure AI that offers a comprehensive suite of AI services and tools for AI developers, Azure AI Language is a service that empowers developers to build intelligent natural language solutions that leverage a set of state-of-the-art language models, including Z-Code++, fine-tuned GPT and more. While LLMs in Azure OpenAI and model catalog are good for general purposes, Azure AI Language provides a set of prebuilt and customizable natural language capabilities that are fine-tuned and optimized for a wide range of scenarios, such as Personal Identifier Information (PII) detection, document and conversation summarization, text analytics for healthcare domain, conversational intent identification, etc., with leading quality and cost efficiency. These capabilities are available through a unified API that simplifies the integration and orchestration of natural language capabilities with no need of complex prompt engineering.
Today, we’re thrilled to announce more new features and capabilities designed to make your workflow more seamless and efficient than ever before at this year’s Microsoft Build with the following key highlights: 1) a unified experience for Azure AI Language in Azure AI Studio and improved integration with prompt flow, 2) improvements in existing prebuilt features such as Summarization, PII and NER, and 3) enhancements in custom features, especially in Conversational Language Understanding (CLU) to provide intent identification and entity extraction with higher quality in more regions.
Azure AI Language now available in Azure AI Studio and prompt flow
As part of Azure AI services, Azure AI Language now supports the new Azure AI service resource type for prebuilt capabilities like summarization, Personally Identifiable Information (PII) detection, and many others. It lets you access all Azure AI services, including Language, Speech and Vision, etc., with one single resource, which makes it easier to integrate the AI capabilities from across Azure AI. In the next few months, we will also support the customization capabilities in Azure AI Language in Azure AI Studio.
We are excited to introduce Azure AI Language in Azure AI Studio with two new playgrounds for you to try out: Summarization and Personally Identifiable Information detection. Both help infuse generative AI into your solutions. In Azure AI Studio, you have more options to try out and explore how to use them effectively for your needs.
Prompt flow in Azure AI Studio is a development tool designed to streamline the entire development cycle of AI applications. We are happy to announce that Language’s prompt flow tooling is now available in Azure AI prompt flow gallery. With that, you can explore and use various natural language processing features from Azure AI Language in prompt flow. You can quickly start to make use of Azure AI Language, reduce your time to value, and deploy solutions with reliable evaluation.
What’s new in prebuilt features in Azure AI Language service
Azure AI Language’s prebuilt capabilities enable customers to set up and running quickly without the need for model training. These prebuilt services are designed to accelerate time-to-value through pretrained models optimized for specific Language AI tasks, including Personally Identifiable Information (PII), Named Entity Recognition (NER), Summarization, Text Analytics for Health, Language Detection, Key Phrase Extraction and Sentiment Analysis and opinion mining, etc.
As we learned a lot of customers want to use Language AI to derive insights from native documents like Word docs and PDFs, to minimize the time and eliminates the need for data preprocessing, we have recently released a public preview of native documents support for PII detection and Summarization service. More file formats and capabilities will be added into the feature towards its GA.
Here is more information regarding what’s new in Azure AI Language’s prebuilt features:
2.1. Announcing GA general availability of Conversational PII
Azure AI Language’s PII service can help to detect and protect an individual’s identity and privacy in both generative and non-generative AI applications which are critical for highly regulated industries such as financial services, healthcare or government. This PII service also supports Protected Health Information (PHI) and Payment Card Industry (PCI) data, and it’s available in 79 languages for around 30 general entity categories and more than 90 region-specific entity categories. By enabling users to identify, categorize, and redact sensitive information directly from complex text files, and native documents in .pdf, .docx and .txt file format, the PII service enables our customers to adhere to the highest standards of data privacy, security, and compliance with only 1 API call.
Today, we are excited to announce the general availability of conversational PII redaction in English-language contexts to further support customers looking to recognize and redact sensitive information in conversations, particularly now in speech transcriptions from meetings and calls for 6 recognized entity categories for conversations. Customers can now redact transcript, chat, and other text written in a conversational style (i.e. text with “um”s, “ah”s, multiple speakers, sensitive info in non-complete sentences, and the spelling out of words for more clarity) with better confidence in AI quality, Azure SLA support and production environment support, and enterprise-grade security in mind.
Conversational PII will be available starting in late June. Please see here for the full list of supported languages for the PII service and here for supported recognized for PII entities for conversation.
2.2. Enhanced address recognition for UK contexts with NER model updates
We are excited to share an updated NER model with improved AI quality and accuracy for both NER and PII detection. This model update will largely benefit location entities (e.g. addresses), finance entities (e.g. bank account numbers), and single letter spell outs where a speaker in a transcript may be spelling out a relevant entity (e.g. “M. I. CRO. S. O. F. and T”) where our new model shows improved F1 scores and decreased false positive recognitions. The updated model will be available starting in late June.
2.3. General availability of Recap summary for conversations in Summarization
Azure AI Language’s Summarization service enables users to extract key points from the textual content and provide a comprehensive summary of documents or conversations. This service is powered by an ensemble of two sophisticated natural language models in which one is specifically trained for text extraction while the other fine-tuned GPT model is further optimized for text summarization without the need of any prompt engineering. In addition, Azure AI Language’s Summarization service comes with built-in hallucination detection capability.
We appreciate customers’ enthusiasm for Azure AI Language’s Summarization service since we announced its general availability last year. Document abstractive summarization and Conversation summarization capabilities are currently available in 6 regions and 11 languages whereas Custom Summarization is available in East US in English language. Please see Summarization region support article for the full list of supported regions, and Summarization language support article for supported languages.
Today, we are excited to announce the general availability of Recap summary for conversations in Azure AI Language service. This recap summary compresses a long conversation into one short paragraph and captures key information, which has been highly praised by preview customers, especially for many high-volume call center customers. Check out our product document to learn more about the key features in conversation summarization.
What’s new in custom features in Azure AI Language service
Azure AI Language’s custom capabilities empower customers to customize their multilingual machine learning models based on a few labeled examples according to their specific use case. These custom service include but are not limited to Custom Text Classification, Custom Named Entity Recognition (NER), and Conversational Language Understanding (CLU). Powered by the state-of-the-art transformer models, Azure AI Language’s custom multilingual models can be trained in one language and used for multiple other languages. In addition to custom features in Azure AI Language service, the advanced low-touch customization capability in Azure AI Language now also powers Azure AI Content Safety’s Custom Category feature for custom content moderation.
As part of custom services in Azure AI Language, Conversational Language Understanding (CLU) enables reliable conversational AI experience with intent identification and entity extraction. Today, we are excited to announce three new features in CLU as follows:
Enhanced support for CLU applications to automate training data augmentation for diacritics
Today, we are introducing a suite of improvements to increase the AI quality of your CLU apps. Many customers already enjoy our training configuration that allows customers to train in one language and use the app in 100+ languages. Since many customers around the world use English keyboards to type in Germanic and Slavic languages, it can be more difficult to classify the utterance into the correct intent without diacritic characters. Because of this, we’re excited to announce a new feature that allows you to automate the training data augmentation for diacritics. When this setting is enabled in your CLU project, CLU will automatically augment your training dataset to reduce the model’s sensitivity to diacritic characters.
Derive more insights from additional granular entities in CLU applications
Many of our customers enjoy the ease of leveraging prebuilt entity recognition, like location, in their custom models. However, it can be helpful to know even more information about an entity phrase. We are excited to introduce more granular entities in CLU. So, for an utterance such as “New York”, you can now recognize more than just location, but also additional details such as city or state. Check out CLU supported prebuilt entity components for a full list of support prebuilt entities.
Improved CLU training configuration to address CLU model scoring inconsistencies
We have released a new CLU training configuration that is designed to address scoring inconsistencies, especially related to managing confidence scores and ‘None’ intent classification for off-topic utterances. We are excited to see how this new training configuration (available in 2024-06-01-preview via REST API) improves your model’s performance.
Availability of CLU authoring service in Azure US Government cloud
As our government and defense customers expand their use of conversational AI, the need for Azure AI in government-compliant clouds has grown, so we are announcing that CLU authoring service is now available in the Azure US Government cloud. This means that you can build, manage, and deploy your custom CLU models for government use cases with the same ease and functionality as in the public cloud.
We are looking forward to seeing how these new CLU capabilities will provide you with more flexibility and control, as you develop conversational AI solutions in your enterprise.
Summary
We look forward to seeing our customers use these capabilities to enhance productivity, summarize insights, protect data privacy and build intelligent chat experiences based on content in natural language. As always, Azure AI Language team remains committed to delivering innovative solutions that enable our customers to achieve their goals. We welcome your feedback as we strive to continuously improve and evolve our services with state-of-the-art AI models to offer the best managed and compliant natural language processing capabilities to our customers in Azure AI Language service.
Learn more about Azure AI Language in the following resources:
Azure AI Language homepage: https://aka.ms/azure-language
Azure AI Language product documentation: https://aka.ms/language-docs
Azure AI Language product demo videos: https://aka.ms/language-videos
Explore Azure AI Language in Azure AI Studio: https://aka.ms/AzureAiLanguage
Prompt flow in Azure AI Studio: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow
Native document support for PII and Summarization: https://aka.ms/language-native-docs-support
Conversational PII detection: https://aka.ms/conversational-pii
Summarization overview: https://aka.ms/summarization-docs
Conversational Language Understanding overview: https://aka.ms/language-clu
Microsoft Tech Community – Latest Blogs –Read More
Developing AI-enhanced apps of the future with Microsoft’s adaptive cloud approach
As our annual Build conference is about to kick off this week, I’m thrilled to share several product announcements to empower developers to take advantage of Azure’s adaptive cloud approach: Edge Storage Accelerator public preview, Azure Monitor pipeline public preview, Secrets Sync Controller private preview, Jumpstart Agora for Manufacturing general availability, Jumpstart Drops public preview, Visual Studio Code Extension public preview.
There has never been a more exciting time to be an application developer. With cloud native practices and hyperscale cloud services increasingly available at the edge, developers can access data, build for environments and extend to use cases previously unavailable to them. At the same time AI advances are driving efficiency into the application development process and enabling the creation of innovative industry solutions.
However, to take advantage of this progress, developers and adjacent teams need to manage the challenges stemming from legacy systems, heterogeneous environments, fragmented data and lack of standardization. The need for a unified platform and system to achieve this potential and overcome these obstacles becomes increasingly evident. We believe Azure is the platform that can help, and we have been investing in Azure Arc to solve these problems. We see an opportunity to do more by bringing together agility and intelligence so that our customers can proactively adapt to change. This is what we refer to as our adaptive cloud approach.
This approach has enabled customers like US-based DICK’S Sporting Goods to re-imagine its customer experience and implement a “one store” strategy where they can write, deploy, manage and monitor software across all 800+ locations nationwide. Similarly, Coles, an Australian supermarket retailer, has embraced AI-driven solutions for inventory management, personalized shopping experiences, loss prevention and more.
“Win-win solutions are those where we are helping our team members and our customers at the same time. Our technological investments into operational efficiency have translated into real, tangible benefits for our shoppers.”
– Silvio Giorgio, GM of Data & Intelligence at Coles Group
The AI-infused developer opportunity
One of the key principles of our Adaptive cloud approach is Kubernetes everywhere, providing the same scalability and agility developers expect with their cloud solutions, when they build for the edge. Azure Arc, our solution for consistent multi-cloud and on-premises management, works with any CNCF-certified Kubernetes clusters including our first-party Azure Kubernetes Service to enable applications developers to build and run software seamlessly across the cloud and edge. As a result, developers can focus on the application itself instead of worrying about where and how it is going to run across their company’s physical footprint.
The starting point for developers to begin building distributed applications is the same toolset they currently use now, powered by recent releases and improvements. GitHub Actions gives developers the ability to automate, customize, and execute their software development workflows in their GitHub repository. GitHub Copilot will further speed their development of edge solutions with coding suggestions, help solving problems and more.
These tools, combined with Flux and Azure Container Registry, complete the GitOps workflow for consistent and efficient application rollouts across cloud to edge environments.
Distributing software updates via GitOps
DevOps and beyond
There is, however, a lot more to building and scaling applications across boundaries than Arc-enabled Kubernetes and GitOps workflows can deliver alone. DevOps teams need to create pipelines for deployment, testing, and monitoring applications. They want to manage network connectivity, automate application security, deploy and manage infrastructure as code (IaC) components and maintain the overall container orchestration layer.
To support these requirements, we are building a robust set of foundational services that will be available natively and fully supported by via Azure Arc. Once you integrate Azure Arc, these services will be available on the clusters for applications to take dependencies on and use. In terms of these foundational services, we have recently announced the release of Edge Storage Accelerator, and Secrets Sync Controller (details below), with other announcements coming soon.
Foundational Services
Solution orchestration for the edge
The environments that edge applications operate in are heterogeneous and diverse, causing challenges like not having a single programming interface (API), for developers and engineers that are trying to stitch together a larger solution (a factory solution, a software defined vehicle, etc.). To help solve this Microsoft is investing in the Eclipse Foundation Symphony project. Symphony is a platform-independent “orchestrator for orchestrator” engine, allowing solution providers to declare a single deployment manifest for various endpoint deployments. Symphony then ingests the deployment manifest, orchestrates the various orchestration platforms, such as Kubernetes, Linux Shell, Windows and returns feedback whether the deployment was successful. We welcome ecosystem contributions to this project.
Getting the most out of the Adaptive Cloud Ecosystem
While many of our customers decide to develop edge applications themselves, many if not all also purchase solutions from third parties. The specific types of applications differ by industry but there are two key partner types that play a major role in customer edge solutions.
Independent Software Vendors (ISVs)
ISVs play a critical role in providing 3rd-party edge solutions for customers. To ensure that an ISV’s solution can run on Arc-enabled Kubernetes we have created the Azure Arc ISV partner program, a technical validation of the partner’s solution on the platform. Isovalent, Hashicorp and Intel are examples of partners that have completed the program.
ISVs can also publish their containerized applications on the Azure Marketplace as a Kubernetes app for deployment on Arc enabled Kubernetes clusters. Kubernetes apps provide flexible billing options to enable ISVs to charge customers through the Azure Marketplace.
System Integration (SI) partners
For custom solution development or simply help with deployment of an application developed in-house, customers typically employ an SI. We work with an active ecosystem of SIs that are versed in modern application development, deployment and management practices. Partners like Avanade and Maibornwolff are good examples of SIs making an impact for customers with Kubernetes-based application development and deployment at the edge.
“For us, the easy deployment and monitoring of ML models from Azure ML in Kubernetes clusters at the edge is THE game-changing feature of Azure Arc – alongside the ability to use Azure IoT Operations. Both capabilities are essential when we build hybrid cloud smart factory platforms based on Azure technologies.”
– Marc Jäckle, Technical Head of IoT at MaibornWolff
“Azure Arc has enabled us to bring Cloud native services to the Edge of our client’s Industrial solutions without increasing the complexity and effort to manage this fleet of devices that are used to control the shop floor in digital operations scenarios. Having a Standards based execution environment like Kubernetes available to run custom workloads at the Edge or in the Cloud is a big benefit for our customers. Azure and especially Azure Arc fully support these deployments.”
-Juergen Mayrbaeurl, Senior Director at Avanade
Announcements
Ways to help build resilient, observable and secure applications at the edge
Edge Storage Accelerator public preview – At the edge, Kubernetes storage capabilities vary in durability, persistence, and performance, posing a challenge for customers seeking reliable solutions. To address these challenges, we recently introduced Edge Storage Accelerator (ESA), a storage system designed for Arc-connected Kubernetes clusters. ESA offers fault-tolerant, highly available cloud-native persistent storage, empowering customers to confidently host stateful applications, custom apps, and other Arc extensions with ease and reliability. Through standard Kubernetes APIs, users can effortlessly attach containerized applications managing file data stored on Azure Blob storage, leveraging its limitless cloud storage capacity for edge applications. ESA’s flexible deployment options, simplified connection via a Container Storage Interface (CSI) driver, and platform neutrality transforms edge storage solutions, alleviating customer pain points and enabling seamless operations at the edge.
Azure Monitor pipeline public preview – As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. We are extending our Azure Monitor pipeline at the edge to enable customers to collect telemetry at scale from their edge environment and route to Azure Monitor for observability. With Azure Monitor pipeline at edge, customers can collect telemetry from the resources in segmented networks that do not have a line of sight to cloud. Additionally, the pipeline prevents data loss by caching the telemetry locally during intermittent connectivity periods and backfilling to the cloud, improving reliability and resiliency.
Secret Sync Controller private preview – Customers want the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the new Secret Synchronization Controller for Kubernetes automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways—mounted as data volumes or exposed as environment variables to a container in a pod.
Exciting ways to engage and get started with Jumpstart and VSCode
Jumpstart Agora for Manufacturing general availability – Customers want interactive test environments that cover real industry scenarios to learn more about what Azure Arc and other Azure technologies can help them accomplish for their business. Jumpstart Agora for Manufacturing is a set of comprehensive cloud-to-edge scenarios brought to life through the story of Contoso Motors and its solutions for digital innovation and employee safety. Users will learn how to deploy and interact with the technology behind Contoso Motor’s quality optimization, AI hazard detection, defect detection and IT/OT observability and control solutions. https://aka.ms/JumpstartAgoraMotorsBlog
Jumpstart Drops public preview – Azure Arc Jumpstart contributors want a unified, accessible and shareable repository for scripts, sample apps, libraries, dashboards, automations or comprehensive tutorials useful in the testing and deployment of Azure Arc-enabled solutions. Jumpstart Drops is a new page on the Jumpstart website that enables users to search for and use pre-built code and artifacts of all types. Users can filter their search by scenarios (Edge/Cloud), tools/languages, tags, code owner and more. Jumpstart Drops also includes a defined template for making contributions and giving back to the community. Embracing an open-source ethos, all contributions are licensed under MIT License. So, dive in, explore the collection of amazing Drops already available, and join us and the community as we share knowledge. https://aka.ms/JumpstartDropsBlog
Visual Studio Code extension public preview – Developers want a single pane of glass and workbench to complete the entire developer workflow for Arc-enabled applications. We released an Arc Visual Studio Code extension in public preview for Arc and AKS which has sample code to access these services, a local environment to test and debug the services and an environment in the cloud to test at a larger scale. The extension provides a one-stop shop for developers and helps accelerate development for both workloads that will run on the edge and that are going to be published on the Azure Marketplace.
Together these resources offer the perfect starting point to learn about industry-specific adaptive cloud approach solutions, find code snippets or contribute to the Jumpstart Drops repository and get started with edge application development. To learn more about these and other exciting offerings that support our adaptive cloud approach please join us in-person or virtually at Microsoft Build.
Here is a list of our sessions. You can also find us on the 5th floor of the convention center at the adaptive cloud approach and community demo stations (within the Expert Meet-Up area).
Breakout session BRK126 | Adaptive cloud approach: Build and scale apps from cloud to edge
Breakout session BRKFP292 | AI Everywhere – Accelerate your development from edge to cloud
Breakout session BRK127 | Azure Monitor: Observability from Code to Cloud
Demo session DEM172 | Next-gen monitoring on Azure
Lab | Taking Azure Kubernetes out of the cloud and into your world (Tuesday/Wednesday/Thursday)
On-demand session OD545 | What’s new in Azure Monitor?
On-demand session OD540 | Improve Application Resilience Using Azure Chaos Studio
To read more about Azure’s adaptive cloud approach here are some of our latest blogs:
Advancing hybrid cloud to adaptive cloud with Azure | Microsoft Azure Blog
Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Hannover Messe 2024: Scaling Industrial Transformation with Azure’s Adaptive Cloud Approach – Microsoft Community Hub
Microsoft Tech Community – Latest Blogs –Read More
Build 2024: Azure AI Video Indexer integration with language models for textual video summary
We are thrilled to introduce textual video summarization for recorded video and audio files, powered by large and small language models (LLM and SLM).
AI application developers can leverage APIs to create textual summaries for audio and video files, anywhere.
Data analysts, instead of watching entire videos, can benefit from concise summaries of video and audio content and adjust it to their needs.
Azure AI Video Indexer, a cloud and edge video solution, enables textual video summarization with the following build announcements:
Preview at the cloud: Textual video summarization in Azure AI Video Indexer powered by Azure Open AI
The feature of textual video summarization in Azure AI Video Indexer, cloud edition is powered by Azure Open AI. This innovative addition allows customers who have created an AOAI resource in Azure, to seamlessly integrate it with Video Indexer. By leveraging deployments such as GPT4, users can now enjoy concise textual summaries of their videos, presented as an insightful extract alongside the player page. The video summary not only enhances the viewing experience but also empowers video analysts to tailor the summary’s nuances and to align with specific business requirements.
The summary that encapsulates the essence of the video content, utilizing not only the transcript but also additional elements derived from the visual and audio aspects of the video like a siren and crowd reactions in the background, or any visual text that appear on the screen like signs, text, visual objects and more.
Preview at the edge (on premise): Extend Azure AI Video Indexer enabled by Arc with integration with SLM through Phi3
The preview version of Azure AI Video Indexer enabled by Arc now includes integration with SLM through Phi3. The innovation containerizes both the Azure AI and Phi3 models, providing video analysts the ability to perform video summarization. It represents a significant stride in our generative AI capabilities utilizing the cutting-edge Phi3 model at the edge. The Phi3 model opens new avenues for AI applications, especially in settings where computing resources are limited, by offering a more streamlined and efficient approach to video analysis.
The Phi3 model, developed in line with Microsoft’s Responsible AI principles and trained on high-quality data, is a testament to our dedication to safety and excellence in AI. It’s a lightweight, state-of-the-art model designed for long-context support, making it ideal for generating responsive and relevant text in chat formats.
Use cases for video summarization across industries
In education, summarized videos can serve as study aids, allowing students to review lecture content quickly. The capability can distill lengthy training videos into key takeaways, saving employees’ time and improving knowledge retention, e.g., in corporate trainings.
In media, it helps in quickly understanding the content of large video libraries, like movies or series, without watching the entire footage. This can be particularly useful for editors and content creators who need to create promos or trailers.
In manufacturing, summarized videos can serve as training material or evidence of compliance with regulatory standards and can quickly highlight parts of footage where potential quality issues are detected on the production line.
Retailers can use video summaries to understand customer traffic patterns and preferences without watching hours of footage.
In modern safety, textual summaries can pinpoint instances of theft or suspicious behavior, streamlining the review process for security teams, enhance the review process of training exercises, identifying key moments for analysis and improvement.
Watch the demo recording to learn more:
Video summarization flavors and customization
Video analysts utilizing the summarization feature will appreciate the added flexibility of feature customization. Tailor your summaries to meet specific needs with selectable options such as “Shorter” for concise overviews, “Longer” for detailed accounts, “Formal” for professional contexts, and “Casual” for a more relaxed tone. This personalized approach ensures that your summaries align perfectly with your intended audience and purpose.
How to make it available in my Azure AI Video Indexer account?
Use Textual Video Summarization in Your Public Cloud Environment:
If you already have an existing Azure Video Indexer account, follow these steps to use the video summarization:
Create an Azure Open AI resource in your subscription.
Connect your Azure Open AI resource to your Video Indexer resource in the Azure Portal.
Go to Azure Video Indexer portal, select a video and choose “generate summary”.
For detailed instructions on how to set up this integration, refer to this guidance . Please note that this feature is not available in Video Indexer trial accounts or on legacy accounts which uses Azure Media services. Leverage this opportunity also to remove your dependency on Azure Media services by following these instructions.
Use Textual Video Summarization in Your Edge Environment, enabled by Arc:
If your edge appliances are integrated with the Azure Platform via Azure Arc, you’re in for a treat! Here’s how to activate the feature:
Register for Video Indexer (VI) enabled by Arc using this form. Rest assured, we are dedicated to activating the Azure AI Video Indexer Arc-enabled extension in your Video Indexer account within 30 days of your request. of your request.
Once activated, create an Azure AI Video Indexer service extension by adhering to these guidelines.
Navigate to the Azure Video Indexer portal, select a video, and click on “Generate Summary” to see the magic happen.
Our Video-to-text API (aka Prompt Content API) now also support Llama, Phi2 and GPTv4
The prompt content API, that converts video to text based on video Indexer’s extracted insights, now supports additional models: Llama, Phi2 and GPTv4. It provides more flexibility when converting video content to text. To learn more about this API, refer to this API documentation.
Read More
About the feature
Video summarization: Public feature documentation
Transparency note
Prompt content: Video-to-text API
About Azure AI Video Indexer
Use Azure AI Video Indexer website to access product website
Get started with Azure AI Video Indexer, Enabled by Arc by following this Arc Jumpstart scenario
Visit Azure AI Video Indexer Developer Portal to learn about our APIs
Search the Azure Video Indexer GitHub repository
Review our product documentation.
Get to know the recent features using Azure AI Video Indexer release notes
Use Stack overflow community for technical questions.
To report an issue with Azure AI Video Indexer, go to Azure portal Help + support. Create a new support request. Your request will be tracked within SLA.
For any other question, contact our support distribution list at visupport@microsoft.com
Microsoft Tech Community – Latest Blogs –Read More
Microsoft List branching is this possible
Hi,
Is there an option or away of using Branching when creating a MS Form from the new SharePoint List route??
I know it’s an option when creating a directly from the MS Forms app
Hi,Is there an option or away of using Branching when creating a MS Form from the new SharePoint List route??I know it’s an option when creating a directly from the MS Forms app Read More
Pop-up window announcement in Microsoft Teams
This morning, when I logged into Microsoft Teams, I noticed a pop-up window announcing the Microsoft Teams Public Preview & Targeted Release. It seems this notification may have also been displayed to other users. We are planning to implement a retention policy for Teams chats and would like to distribute similar information to all our Microsoft Teams users. While I am familiar with creating Teams and posting announcements within a channel, I am looking for a way to share this message without setting up a new team or channel. If anyone has experience with this feature or something similar, your insights would be greatly appreciated.
This morning, when I logged into Microsoft Teams, I noticed a pop-up window announcing the Microsoft Teams Public Preview & Targeted Release. It seems this notification may have also been displayed to other users. We are planning to implement a retention policy for Teams chats and would like to distribute similar information to all our Microsoft Teams users. While I am familiar with creating Teams and posting announcements within a channel, I am looking for a way to share this message without setting up a new team or channel. If anyone has experience with this feature or something similar, your insights would be greatly appreciated. Read More
How to fetch / filter users from AD faster using Get-ADUser command.
Recently I saw few scripts which are fetching users from AD like below mentioned.
Get-ADUser -LDAPFilter “(whenCreated>=$date)”
or
Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False -and PasswordLastSet -gt 0}
or
Get-ADUser -Filter ‘Enabled -eq $True’
But using like above is taking quite a lot of time or sometimes giving Timeout error.
is there any way can make this faster? Will using –LDAPFilter instead of -Filter make it faster?
Error Message: The operation returned because the timeout limit was exceeded.
Recently I saw few scripts which are fetching users from AD like below mentioned. Get-ADUser -LDAPFilter “(whenCreated>=$date)”orGet-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False -and PasswordLastSet -gt 0}orGet-ADUser -Filter ‘Enabled -eq $True’ But using like above is taking quite a lot of time or sometimes giving Timeout error.is there any way can make this faster? Will using -LDAPFilter instead of -Filter make it faster? Error Message: The operation returned because the timeout limit was exceeded. Read More
Semantic search in Azure AI Studio
Hi everyone
I’ve setup an Azure AI Search index that points to a SharePoint library and connected this to Azure AI Studio so a Chat-GPT model can be used to query the documents in the library. This all works fine if I use the Keyword search type in the chat playground, but if I change this to Semantic I get the following error:
Semantic Ranker is enabled in my Azure AI Search service instance and all works fine when tested in the search settings in the Azure portal. The error isn’t giving me any further information so I’m not quire sure where I can go from here?
Any assistance would be gratefully received.
Thanks in advance.
Hi everyone I’ve setup an Azure AI Search index that points to a SharePoint library and connected this to Azure AI Studio so a Chat-GPT model can be used to query the documents in the library. This all works fine if I use the Keyword search type in the chat playground, but if I change this to Semantic I get the following error: Semantic Ranker is enabled in my Azure AI Search service instance and all works fine when tested in the search settings in the Azure portal. The error isn’t giving me any further information so I’m not quire sure where I can go from here? Any assistance would be gratefully received. Thanks in advance. Read More
Windows 11 Notifications
hello all,
hoping someone might be able to help. We are looking for a way to stop our users from being able to change the Notification settings on our windows 10 & 11 devices (eg, not be able to turn off, change what notifications are allowed and which ones are not etc)
we are hoping there may be a way via the registry, Group Policy or a config profile in Intune though we have had a look but cant find anything.Windows 11, notifications
many thanks
hello all,hoping someone might be able to help. We are looking for a way to stop our users from being able to change the Notification settings on our windows 10 & 11 devices (eg, not be able to turn off, change what notifications are allowed and which ones are not etc)we are hoping there may be a way via the registry, Group Policy or a config profile in Intune though we have had a look but cant find anything.Windows 11, notificationsmany thanks Read More
Using OR in a formula
I need a formula to give 3 different answers based on the value of one cell in a worksheet that could change.
J19 is the variable cell in my worksheet. The value of (J9-J20) may be a positive or negative number and then I need a value in J21 based on positive or negative. If positive, I need the sum. If negative, I need the cell to be 0.
If J20 is zero I need the value to be the sum of another cell J23
These are the 3 formulas that will give the correct answers but I need it to be an OR to each formula to get the correct answer in cell J25
J25
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is LARGE
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is smaller than the charges
=IF(j20=0,J23)+E18 works if the deductible is zero and there is a copay
Is this even possible to solve for?
Thank you,
Donna
I need a formula to give 3 different answers based on the value of one cell in a worksheet that could change.J19 is the variable cell in my worksheet. The value of (J9-J20) may be a positive or negative number and then I need a value in J21 based on positive or negative. If positive, I need the sum. If negative, I need the cell to be 0.If J20 is zero I need the value to be the sum of another cell J23 These are the 3 formulas that will give the correct answers but I need it to be an OR to each formula to get the correct answer in cell J25J25=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is LARGE=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is smaller than the charges=IF(j20=0,J23)+E18 works if the deductible is zero and there is a copay Is this even possible to solve for?Thank you,Donna Read More
Announcing general availability of real-time diarization
We are excited to announce Generally Available of real-time diarization which is an enhanced add-on feature of Azure Speech service. With this feature, you can get live (in real time) speech to text transcription by speakers (Guest1, Guest2, Guest3, etc.), so that you know which speaker was speaking a particular part transcribed speech conversation transcription.
What’s Real-time Diarization
The diarization is a feature that differentiates speakers in an audio. Real-time diarization is capable of distinguishing speakers’ voices through single channel audio in streaming mode. Diarization combined with speech to text functionality can provide transcription outputs that contain a speaker entry for each transcribed segment. The transcription output is tagged as GUEST1, GUEST2, GUEST3, etc. based on the number of speakers in the audio conversation. Below graph demonstrates the difference between the transcription results with and without diarization.
Use Cases and Scenarios
Real-time diarization can be used in a wide range of scenarios. Below lists some typical use cases. It can also be used to help with accessibility scenarios.
Live Conversation/Meeting Transcription
When speakers are all in the same room with a single microphone setup, do live transcription about which speaker (e.g. Guest-1, Guest-2, or Guest-3) talks about what transcription. Combined with GPT based on the diarized transcription, you can also do meeting/conversation summary, recap, or ask questions about the conversation/meeting, etc.
Microsoft Teams, for instance, is leveraging the diarization featrue to show live meeting transcription in Teams. Based on the meeting transcription, Microsoft Teams’ Copilot provides a meeting summary, recap, and many other cool features for people to interact Teams’ Copilot about the meetings.
Real-time Agent Asist
Use Speech Analytics (which is another new feature that Azure Speech Service provides at Build) with real-time diarization, you can do the live transcription analytics to help on the Agent Asist scenarios to optimally address the customers questions and concerns.
Live Caption and Subtitle (Translated Caption)
Show live captions or subtitles (translated captions) of meetings, videos, or audios.
What’s Improved Since Public Preview
After the public preview, we put in a lot of effort to improve the diarization quality. This is the major feedback we heard from Preview users regarding the quality of real-time diarization. We released a new diarization model and improved diarization quality by ~3% on WDER. In addition, we removed the limitation of 7 seconds of continuous audio data from a single speaker. In the Preview version, when a speaker first talks, the diarization would start to perform with better quality after the 7 seconds of continuous audio of the speaker. Now in GA version, we don’t have this limitation anymore.
Early Adopters from Diverse Area
So far, we have over a thousand customers from diverse industries trying out real-time diarization on a variety of scenarios. Below are some examples.
Medical
Live transcription between doctor and patient, and transcription analytics
Banking
Live meeting transcription
Telecommunication
Conversation transcription, summarization, transcription analytics
Legal
App to assist trial and appellate attorney who are preparing for oral arguments (e.g. capture the attorneys’ and judges’ positions during mock oral arguments, etc.)
Try it Out
To try out the real-time diarization, you can go to Speech Studio (Speech Studio – Real-time speech to text (microsoft.com)) and do the following steps (shown in the below screenshot) to experience the feature,
Click on “Show advanced options”.
Use the “Speaker diarization” toggle to turn on or off the real-time diarization.
Real-time diarization is available to all the regions that Azure Speech Service supports. It is released through Speech SDK (version 1.31.0 or higher). The feature is available in the following SDKs.
C#,
C++
Java
JavaScript
Python
Please feel free to follow the Quickstart: Real-time diarization to start experiencing the feature.
Microsoft Tech Community – Latest Blogs –Read More
No access to reservationpage
Best,
1 of me co workers has access to 4 reservation page in bookings with admin rights. When she accesses a reservation page she keeps getting the message that she has no permissions to the reservation page and that she has 3 out of 4 reservation page no access to it.
i have already done the following:
– cleared browser history.
– cleared cache.
– assigned the same permissions to each reservation page.
– tried on a mobile device.
But every step above has not helped anything.
Can anyone help me to get the persistent problem solved.
Regards,
Robby
Best,1 of me co workers has access to 4 reservation page in bookings with admin rights. When she accesses a reservation page she keeps getting the message that she has no permissions to the reservation page and that she has 3 out of 4 reservation page no access to it. i have already done the following:- cleared browser history.- cleared cache.- assigned the same permissions to each reservation page.- tried on a mobile device. But every step above has not helped anything.Can anyone help me to get the persistent problem solved. Regards, Robby Read More