Category: News
Introducing in-database embedding generation for Azure Database for PostgreSQL
via the azure_local_ai extension for Azure Database for PostgreSQL
We are excited to announce the public preview release of azure_local_ai, a new extension for Azure Database for PostgreSQL that enables you to create text embeddings from a model deployed within the same VM as your PostgreSQL database.
Vector embeddings enable AI models to better understand relationships and similarities between data, which is key for intelligent apps. Azure Database for PostgreSQL is proud to be the first in the industry to offer in-database embedding generation, with a text embedding model deployed within the PostgreSQL boundary. Embeddings can be generated right within the database, offering:
single-digit millisecond latency
predictable costs
confidence that data will remain compliant for confidential workloads
In this release, the extension deploys a single model, multilingual-e5-small, to your Azure Database for PostgreSQL Flexible Server instance. The first time an embedding is created, the model is loaded into memory. Preview terms apply to the azure_local_ai extension.
azure_local_ai extension – Preview
Generate embeddings from within the database with a single line of SQL code invoking a UDF.
Harness the power of a text embedding model alongside your operational data without leaving your PostgreSQL database boundary.
During this public preview, the azure_local_ai extension is available in these Azure regions:
East US
West US
West Europe
UK South
France Central
Japan East
Australia East
How does the azure_local_ai extension work?
In-database embedding architecture
ONNX Runtime Configuration
azure_local_ai supports reviewing the configuration parameters of the ONNX Runtime thread-pool within the ONNX Runtime Service. Changes are not currently allowed. See ONNX Runtime performance tuning.
Valid values for the key are:
– intra_op_parallelism: Sets the total number of threads the ONNX Runtime thread-pool uses to parallelize a single operator. By default, the number of intra-op threads is maximized because this significantly improves overall throughput (half of the available CPUs by default).
– inter_op_parallelism: Sets the total number of threads the ONNX Runtime thread-pool uses to compute multiple operators in parallel. By default, it is set to the minimum possible value, 1. Increasing it often hurts performance due to frequent context switches between threads.
– spin_control: Toggles the ONNX Runtime thread-pool’s spinning for requests. When disabled, it uses less CPU and hence incurs more latency. By default, it is set to true (enabled).
SELECT azure_local_ai.get_setting(key TEXT);
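For example, the current thread-pool settings could be inspected with calls along these lines (a sketch, assuming the extension is installed; the key names are taken from the list above):

```sql
-- Inspect the ONNX Runtime thread-pool configuration (read-only in this preview)
SELECT azure_local_ai.get_setting('intra_op_parallelism');
SELECT azure_local_ai.get_setting('inter_op_parallelism');
SELECT azure_local_ai.get_setting('spin_control');
```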
Generate embeddings
The azure_local_ai extension for Azure Database for PostgreSQL makes it easy to generate an embedding with a simple inline UDF call in your SQL statement, passing the model name and the input data.
-- Single embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Vector embeddings power GenAI applications');
-- Simple array embedding
SELECT azure_local_ai.create_embeddings('multilingual-e5-small:v1', array['Recommendation System with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI.', 'Generative AI with Azure Database for PostgreSQL – Flexible Server.']);
Here’s a quick example that demonstrates:
Adding a vector column to a table with a default that generates an embedding and stores it when data is inserted.
Creating an HNSW index.
Completing a semantic search by generating an embedding for a search string and comparing it with the stored vectors using a vector similarity search.
-- Create docs table
CREATE TABLE docs(doc_id INT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, doc TEXT NOT NULL, last_update TIMESTAMPTZ DEFAULT NOW());
-- Add a vector column and generate vector embeddings from the locally deployed model
ALTER TABLE docs
ADD COLUMN doc_vector vector(384) -- multilingual-e5 embeddings are 384 dimensions
GENERATED ALWAYS AS -- Generated on inserts
(azure_local_ai.create_embeddings('multilingual-e5-small:v1', doc)::vector) STORED; -- TEXT string sent to local model
-- Create an HNSW index
CREATE INDEX ON docs USING hnsw (doc_vector vector_ip_ops);
-- Insert data into the docs table
INSERT INTO docs(doc) VALUES
('Create in-database embeddings with azure_local_ai extension.'),
('Enable RAG patterns with in-database embeddings and vectors on Azure Database for PostgreSQL – Flexible Server.'),
('Generate vector embeddings in PostgreSQL with azure_local_ai extension.'),
('Generate text embeddings in PostgreSQL for retrieval augmented generation (RAG) patterns with azure_local_ai extension and locally deployed LLM.'),
('Use vector indexes and Azure OpenAI embeddings in PostgreSQL for retrieval augmented generation.');
-- Semantic search using vector similarity match
SELECT doc_id, doc, doc_vector
FROM docs d
ORDER BY
d.doc_vector <#> azure_local_ai.create_embeddings('multilingual-e5-small:v1', 'Generate text embeddings in PostgreSQL.')::vector
LIMIT 1;
-- Add a single record to the docs table; its vector embedding will be generated automatically by azure_local_ai and the locally deployed model
INSERT INTO docs(doc) VALUES ('Semantic Search with Azure Database for PostgreSQL – Flexible Server and Azure OpenAI');
-- View all doc entries and their doc_vector column. A vector embedding will have been generated for the single record added above.
SELECT doc, doc_vector, last_update FROM docs;
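As context for the ORDER BY in the search query: pgvector’s <#> operator returns the negative inner product, so sorting ascending surfaces the closest match first. A minimal Python sketch of that ranking logic (illustrative only; the toy vectors below are made up and far smaller than the model’s 384 dimensions):

```python
# Rank documents by negative inner product, mirroring pgvector's <#> operator.
def neg_inner_product(a, b):
    return -sum(x * y for x, y in zip(a, b))

docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
}
query = [1.0, 0.0, 0.0]

# Ascending sort on negative inner product == descending similarity.
ranked = sorted(docs, key=lambda d: neg_inner_product(docs[d], query))
# doc_a, which points in the same direction as the query, ranks first.
```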
Getting Started
To get started, review the azure_local_ai extension documentation, enable the extension and begin creating embeddings from your text data without leaving the Azure Database for PostgreSQL boundary.
azure_local_ai extension overview
Generate vector embeddings with azure_local_ai extension
vector extension
Learn more about vector similarity search using pgvector
Microsoft Tech Community – Latest Blogs –Read More
What’s new in Azure AI Language | BUILD 2024
Introduction
At Azure AI Language, we believe that language is at the core of human and artificial intelligence. As part of Azure AI, which offers a comprehensive suite of AI services and tools for AI developers, Azure AI Language empowers developers to build intelligent natural language solutions that leverage a set of state-of-the-art language models, including Z-Code++, fine-tuned GPT, and more. While the LLMs in Azure OpenAI and the model catalog are good general-purpose models, Azure AI Language provides a set of prebuilt and customizable natural language capabilities that are fine-tuned and optimized for a wide range of scenarios, such as Personally Identifiable Information (PII) detection, document and conversation summarization, text analytics for the healthcare domain, and conversational intent identification, with leading quality and cost efficiency. These capabilities are available through a unified API that simplifies the integration and orchestration of natural language capabilities without the need for complex prompt engineering.
Today, we’re thrilled to announce more new features and capabilities designed to make your workflow more seamless and efficient than ever before at this year’s Microsoft Build with the following key highlights: 1) a unified experience for Azure AI Language in Azure AI Studio and improved integration with prompt flow, 2) improvements in existing prebuilt features such as Summarization, PII and NER, and 3) enhancements in custom features, especially in Conversational Language Understanding (CLU) to provide intent identification and entity extraction with higher quality in more regions.
Azure AI Language now available in Azure AI Studio and prompt flow
As part of Azure AI services, Azure AI Language now supports the new Azure AI service resource type for prebuilt capabilities like summarization, Personally Identifiable Information (PII) detection, and many others. It lets you access all Azure AI services, including Language, Speech and Vision, etc., with one single resource, which makes it easier to integrate the AI capabilities from across Azure AI. In the next few months, we will also support the customization capabilities in Azure AI Language in Azure AI Studio.
We are excited to introduce Azure AI Language in Azure AI Studio with two new playgrounds for you to try out: Summarization and Personally Identifiable Information detection. Both help infuse generative AI into your solutions. In Azure AI Studio, you have more options to try out and explore how to use them effectively for your needs.
Prompt flow in Azure AI Studio is a development tool designed to streamline the entire development cycle of AI applications. We are happy to announce that Language’s prompt flow tooling is now available in Azure AI prompt flow gallery. With that, you can explore and use various natural language processing features from Azure AI Language in prompt flow. You can quickly start to make use of Azure AI Language, reduce your time to value, and deploy solutions with reliable evaluation.
What’s new in prebuilt features in Azure AI Language service
Azure AI Language’s prebuilt capabilities enable customers to get up and running quickly without the need for model training. These prebuilt services are designed to accelerate time-to-value through pretrained models optimized for specific Language AI tasks, including Personally Identifiable Information (PII) detection, Named Entity Recognition (NER), Summarization, Text Analytics for Health, Language Detection, Key Phrase Extraction, and Sentiment Analysis and opinion mining.
As we learned that many customers want to use Language AI to derive insights from native documents like Word docs and PDFs, minimizing turnaround time and eliminating the need for data preprocessing, we recently released a public preview of native document support for the PII detection and Summarization services. More file formats and capabilities will be added to the feature as it moves toward general availability.
Here is more information regarding what’s new in Azure AI Language’s prebuilt features:
2.1. Announcing general availability of Conversational PII
Azure AI Language’s PII service helps detect and protect an individual’s identity and privacy in both generative and non-generative AI applications, which is critical for highly regulated industries such as financial services, healthcare, and government. The PII service also supports Protected Health Information (PHI) and Payment Card Industry (PCI) data, and it’s available in 79 languages for around 30 general entity categories and more than 90 region-specific entity categories. By enabling users to identify, categorize, and redact sensitive information directly from complex text files and native documents in .pdf, .docx, and .txt format, the PII service enables our customers to adhere to the highest standards of data privacy, security, and compliance with a single API call.
Today, we are excited to announce the general availability of conversational PII redaction in English-language contexts, further supporting customers who need to recognize and redact sensitive information in conversations, particularly in speech transcriptions from meetings and calls, across 6 recognized entity categories for conversations. Customers can now redact transcripts, chats, and other text written in a conversational style (i.e., text with “um”s and “ah”s, multiple speakers, sensitive information in incomplete sentences, and words spelled out for clarity) with better confidence in AI quality, backed by an Azure SLA, production environment support, and enterprise-grade security.
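To illustrate how offset-based PII results are typically applied downstream, here is a hypothetical helper (this is not the Azure SDK, and the entity spans below are invented for the example) that redacts recognized spans in a transcript:

```python
# Hypothetical redaction helper: replace each recognized entity span with its
# category label. Entities are (offset, length, category) tuples, as PII
# services commonly return; the sample values here are made up.
def redact(text, entities):
    # Apply right-to-left so earlier offsets stay valid after each replacement.
    for offset, length, category in sorted(entities, key=lambda e: e[0], reverse=True):
        text = text[:offset] + f"[{category}]" + text[offset + length:]
    return text

transcript = "Um, my name is Jane Doe and my number is 555-0100."
entities = [(15, 8, "Person"), (41, 8, "PhoneNumber")]
redacted = redact(transcript, entities)
# → "Um, my name is [Person] and my number is [PhoneNumber]."
```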
Conversational PII will be available starting in late June. Please see here for the full list of supported languages for the PII service and here for the supported entity categories for conversational PII.
2.2. Enhanced address recognition for UK contexts with NER model updates
We are excited to share an updated NER model with improved AI quality and accuracy for both NER and PII detection. This model update will largely benefit location entities (e.g. addresses), finance entities (e.g. bank account numbers), and single letter spell outs where a speaker in a transcript may be spelling out a relevant entity (e.g. “M. I. CRO. S. O. F. and T”) where our new model shows improved F1 scores and decreased false positive recognitions. The updated model will be available starting in late June.
2.3. General availability of Recap summary for conversations in Summarization
Azure AI Language’s Summarization service enables users to extract key points from the textual content and provide a comprehensive summary of documents or conversations. This service is powered by an ensemble of two sophisticated natural language models in which one is specifically trained for text extraction while the other fine-tuned GPT model is further optimized for text summarization without the need of any prompt engineering. In addition, Azure AI Language’s Summarization service comes with built-in hallucination detection capability.
We appreciate customers’ enthusiasm for Azure AI Language’s Summarization service since we announced its general availability last year. Document abstractive summarization and conversation summarization capabilities are currently available in 6 regions and 11 languages, whereas Custom Summarization is available in East US in English. Please see the Summarization region support article for the full list of supported regions, and the Summarization language support article for supported languages.
Today, we are excited to announce the general availability of Recap summary for conversations in Azure AI Language service. This recap summary compresses a long conversation into one short paragraph and captures key information, which has been highly praised by preview customers, especially for many high-volume call center customers. Check out our product document to learn more about the key features in conversation summarization.
What’s new in custom features in Azure AI Language service
Azure AI Language’s custom capabilities empower customers to customize multilingual machine learning models from a few labeled examples according to their specific use case. These custom services include, but are not limited to, Custom Text Classification, Custom Named Entity Recognition (NER), and Conversational Language Understanding (CLU). Powered by state-of-the-art transformer models, Azure AI Language’s custom multilingual models can be trained in one language and used for multiple other languages. In addition to the custom features in the Azure AI Language service, its advanced low-touch customization capability now also powers Azure AI Content Safety’s Custom Category feature for custom content moderation.
As part of custom services in Azure AI Language, Conversational Language Understanding (CLU) enables reliable conversational AI experience with intent identification and entity extraction. Today, we are excited to announce three new features in CLU as follows:
Enhanced support for CLU applications to automate training data augmentation for diacritics
Today, we are introducing a suite of improvements to increase the AI quality of your CLU apps. Many customers already enjoy our training configuration that allows customers to train in one language and use the app in 100+ languages. Since many customers around the world use English keyboards to type in Germanic and Slavic languages, it can be more difficult to classify the utterance into the correct intent without diacritic characters. Because of this, we’re excited to announce a new feature that allows you to automate the training data augmentation for diacritics. When this setting is enabled in your CLU project, CLU will automatically augment your training dataset to reduce the model’s sensitivity to diacritic characters.
Derive more insights from additional granular entities in CLU applications
Many of our customers enjoy the ease of leveraging prebuilt entity recognition, like location, in their custom models. However, it can be helpful to know even more information about an entity phrase. We are excited to introduce more granular entities in CLU. So, for an utterance such as “New York”, you can now recognize more than just location, but also additional details such as city or state. Check out CLU supported prebuilt entity components for a full list of support prebuilt entities.
Improved CLU training configuration to address CLU model scoring inconsistencies
We have released a new CLU training configuration that is designed to address scoring inconsistencies, especially related to managing confidence scores and ‘None’ intent classification for off-topic utterances. We are excited to see how this new training configuration (available in 2024-06-01-preview via REST API) improves your model’s performance.
Availability of CLU authoring service in Azure US Government cloud
As our government and defense customers expand their use of conversational AI, the need for Azure AI in government-compliant clouds has grown, so we are announcing that CLU authoring service is now available in the Azure US Government cloud. This means that you can build, manage, and deploy your custom CLU models for government use cases with the same ease and functionality as in the public cloud.
We are looking forward to seeing how these new CLU capabilities will provide you with more flexibility and control, as you develop conversational AI solutions in your enterprise.
Summary
We look forward to seeing our customers use these capabilities to enhance productivity, summarize insights, protect data privacy and build intelligent chat experiences based on content in natural language. As always, Azure AI Language team remains committed to delivering innovative solutions that enable our customers to achieve their goals. We welcome your feedback as we strive to continuously improve and evolve our services with state-of-the-art AI models to offer the best managed and compliant natural language processing capabilities to our customers in Azure AI Language service.
Learn more about Azure AI Language in the following resources:
Azure AI Language homepage: https://aka.ms/azure-language
Azure AI Language product documentation: https://aka.ms/language-docs
Azure AI Language product demo videos: https://aka.ms/language-videos
Explore Azure AI Language in Azure AI Studio: https://aka.ms/AzureAiLanguage
Prompt flow in Azure AI Studio: https://learn.microsoft.com/en-us/azure/ai-studio/how-to/prompt-flow
Native document support for PII and Summarization: https://aka.ms/language-native-docs-support
Conversational PII detection: https://aka.ms/conversational-pii
Summarization overview: https://aka.ms/summarization-docs
Conversational Language Understanding overview: https://aka.ms/language-clu
Developing AI-enhanced apps of the future with Microsoft’s adaptive cloud approach
As our annual Build conference is about to kick off this week, I’m thrilled to share several product announcements to empower developers to take advantage of Azure’s adaptive cloud approach: Edge Storage Accelerator public preview, Azure Monitor pipeline public preview, Secrets Sync Controller private preview, Jumpstart Agora for Manufacturing general availability, Jumpstart Drops public preview, Visual Studio Code Extension public preview.
There has never been a more exciting time to be an application developer. With cloud native practices and hyperscale cloud services increasingly available at the edge, developers can access data, build for environments and extend to use cases previously unavailable to them. At the same time AI advances are driving efficiency into the application development process and enabling the creation of innovative industry solutions.
However, to take advantage of this progress, developers and adjacent teams need to manage the challenges stemming from legacy systems, heterogeneous environments, fragmented data and lack of standardization. The need for a unified platform and system to achieve this potential and overcome these obstacles becomes increasingly evident. We believe Azure is the platform that can help, and we have been investing in Azure Arc to solve these problems. We see an opportunity to do more by bringing together agility and intelligence so that our customers can proactively adapt to change. This is what we refer to as our adaptive cloud approach.
This approach has enabled customers like US-based DICK’S Sporting Goods to re-imagine its customer experience and implement a “one store” strategy where they can write, deploy, manage and monitor software across all 800+ locations nationwide. Similarly, Coles, an Australian supermarket retailer, has embraced AI-driven solutions for inventory management, personalized shopping experiences, loss prevention and more.
“Win-win solutions are those where we are helping our team members and our customers at the same time. Our technological investments into operational efficiency have translated into real, tangible benefits for our shoppers.”
– Silvio Giorgio, GM of Data & Intelligence at Coles Group
The AI-infused developer opportunity
One of the key principles of our adaptive cloud approach is Kubernetes everywhere, providing the same scalability and agility developers expect from their cloud solutions when they build for the edge. Azure Arc, our solution for consistent multi-cloud and on-premises management, works with any CNCF-certified Kubernetes cluster, including our first-party Azure Kubernetes Service, to enable application developers to build and run software seamlessly across the cloud and edge. As a result, developers can focus on the application itself instead of worrying about where and how it is going to run across their company’s physical footprint.
The starting point for developers building distributed applications is the same toolset they use today, powered by recent releases and improvements. GitHub Actions gives developers the ability to automate, customize, and execute their software development workflows in their GitHub repository. GitHub Copilot will further speed their development of edge solutions with coding suggestions, help solving problems, and more.
These tools, combined with Flux and Azure Container Registry, complete the GitOps workflow for consistent and efficient application rollouts across cloud to edge environments.
Distributing software updates via GitOps
DevOps and beyond
There is, however, a lot more to building and scaling applications across boundaries than Arc-enabled Kubernetes and GitOps workflows can deliver alone. DevOps teams need to create pipelines for deployment, testing, and monitoring applications. They want to manage network connectivity, automate application security, deploy and manage infrastructure as code (IaC) components and maintain the overall container orchestration layer.
To support these requirements, we are building a robust set of foundational services that will be available natively and fully supported via Azure Arc. Once you integrate Azure Arc, these services will be available on your clusters for applications to take dependencies on and use. Among these foundational services, we have recently announced the release of Edge Storage Accelerator and Secrets Sync Controller (details below), with more announcements coming soon.
Foundational Services
Solution orchestration for the edge
The environments that edge applications operate in are heterogeneous and diverse, causing challenges such as the lack of a single programming interface (API) for developers and engineers trying to stitch together a larger solution (a factory solution, a software-defined vehicle, etc.). To help solve this, Microsoft is investing in the Eclipse Foundation Symphony project. Symphony is a platform-independent “orchestrator of orchestrators” engine that allows solution providers to declare a single deployment manifest for various endpoint deployments. Symphony ingests the deployment manifest, drives the various orchestration platforms, such as Kubernetes, Linux shell, and Windows, and returns feedback on whether the deployment was successful. We welcome ecosystem contributions to this project.
Getting the most out of the Adaptive Cloud Ecosystem
While many of our customers decide to develop edge applications themselves, many if not all also purchase solutions from third parties. The specific types of applications differ by industry but there are two key partner types that play a major role in customer edge solutions.
Independent Software Vendors (ISVs)
ISVs play a critical role in providing 3rd-party edge solutions for customers. To ensure that an ISV’s solution can run on Arc-enabled Kubernetes we have created the Azure Arc ISV partner program, a technical validation of the partner’s solution on the platform. Isovalent, Hashicorp and Intel are examples of partners that have completed the program.
ISVs can also publish their containerized applications on the Azure Marketplace as Kubernetes apps for deployment on Arc-enabled Kubernetes clusters. Kubernetes apps provide flexible billing options that enable ISVs to charge customers through the Azure Marketplace.
System Integration (SI) partners
For custom solution development or simply help with deployment of an application developed in-house, customers typically employ an SI. We work with an active ecosystem of SIs that are versed in modern application development, deployment and management practices. Partners like Avanade and Maibornwolff are good examples of SIs making an impact for customers with Kubernetes-based application development and deployment at the edge.
“For us, the easy deployment and monitoring of ML models from Azure ML in Kubernetes clusters at the edge is THE game-changing feature of Azure Arc – alongside the ability to use Azure IoT Operations. Both capabilities are essential when we build hybrid cloud smart factory platforms based on Azure technologies.”
– Marc Jäckle, Technical Head of IoT at MaibornWolff
“Azure Arc has enabled us to bring Cloud native services to the Edge of our client’s Industrial solutions without increasing the complexity and effort to manage this fleet of devices that are used to control the shop floor in digital operations scenarios. Having a Standards based execution environment like Kubernetes available to run custom workloads at the Edge or in the Cloud is a big benefit for our customers. Azure and especially Azure Arc fully support these deployments.”
-Juergen Mayrbaeurl, Senior Director at Avanade
Announcements
Ways to help build resilient, observable and secure applications at the edge
Edge Storage Accelerator public preview – At the edge, Kubernetes storage capabilities vary in durability, persistence, and performance, posing a challenge for customers seeking reliable solutions. To address these challenges, we recently introduced Edge Storage Accelerator (ESA), a storage system designed for Arc-connected Kubernetes clusters. ESA offers fault-tolerant, highly available cloud-native persistent storage, empowering customers to confidently host stateful applications, custom apps, and other Arc extensions with ease and reliability. Through standard Kubernetes APIs, users can effortlessly attach containerized applications managing file data stored on Azure Blob storage, leveraging its limitless cloud storage capacity for edge applications. ESA’s flexible deployment options, simplified connection via a Container Storage Interface (CSI) driver, and platform neutrality transforms edge storage solutions, alleviating customer pain points and enabling seamless operations at the edge.
Azure Monitor pipeline public preview – As enterprises scale their infrastructure and applications, the volume of observability data naturally increases, and it is challenging to collect telemetry from certain restricted environments. We are extending our Azure Monitor pipeline at the edge to enable customers to collect telemetry at scale from their edge environment and route to Azure Monitor for observability. With Azure Monitor pipeline at edge, customers can collect telemetry from the resources in segmented networks that do not have a line of sight to cloud. Additionally, the pipeline prevents data loss by caching the telemetry locally during intermittent connectivity periods and backfilling to the cloud, improving reliability and resiliency.
Secret Sync Controller private preview – Customers want the confidence and scalability that comes with unified secrets management in the cloud, while maintaining disconnection-resilience for operational activities at the edge. To help them with this, the new Secret Synchronization Controller for Kubernetes automatically synchronizes secrets from an Azure Key Vault to a Kubernetes cluster for offline access. This means customers can use Azure Key Vault to store, maintain, and rotate secrets, even when running a Kubernetes cluster in a semi-disconnected state. Synchronized secrets are stored in the cluster secret store, making them available as Kubernetes secrets to be used in all the usual ways—mounted as data volumes or exposed as environment variables to a container in a pod.
Exciting ways to engage and get started with Jumpstart and VSCode
Jumpstart Agora for Manufacturing general availability – Customers want interactive test environments that cover real industry scenarios to learn more about what Azure Arc and other Azure technologies can help them accomplish for their business. Jumpstart Agora for Manufacturing is a set of comprehensive cloud-to-edge scenarios brought to life through the story of Contoso Motors and its solutions for digital innovation and employee safety. Users will learn how to deploy and interact with the technology behind Contoso Motors’ quality optimization, AI hazard detection, defect detection, and IT/OT observability and control solutions. https://aka.ms/JumpstartAgoraMotorsBlog
Jumpstart Drops public preview – Azure Arc Jumpstart contributors want a unified, accessible and shareable repository for scripts, sample apps, libraries, dashboards, automations or comprehensive tutorials useful in the testing and deployment of Azure Arc-enabled solutions. Jumpstart Drops is a new page on the Jumpstart website that enables users to search for and use pre-built code and artifacts of all types. Users can filter their search by scenarios (Edge/Cloud), tools/languages, tags, code owner and more. Jumpstart Drops also includes a defined template for making contributions and giving back to the community. Embracing an open-source ethos, all contributions are licensed under MIT License. So, dive in, explore the collection of amazing Drops already available, and join us and the community as we share knowledge. https://aka.ms/JumpstartDropsBlog
Visual Studio Code extension public preview – Developers want a single pane of glass and workbench to complete the entire developer workflow for Arc-enabled applications. We released an Arc Visual Studio Code extension in public preview for Arc and AKS which has sample code to access these services, a local environment to test and debug the services and an environment in the cloud to test at a larger scale. The extension provides a one-stop shop for developers and helps accelerate development for both workloads that will run on the edge and that are going to be published on the Azure Marketplace.
Together these resources offer the perfect starting point to learn about industry-specific adaptive cloud approach solutions, find code snippets or contribute to the Jumpstart Drops repository and get started with edge application development. To learn more about these and other exciting offerings that support our adaptive cloud approach please join us in-person or virtually at Microsoft Build.
Here is a list of our sessions. You can also find us on the 5th floor of the convention center at the adaptive cloud approach and community demo stations (within the Expert Meet-Up area).
Breakout session BRK126 | Adaptive cloud approach: Build and scale apps from cloud to edge
Breakout session BRKFP292 | AI Everywhere – Accelerate your development from edge to cloud
Breakout session BRK127 | Azure Monitor: Observability from Code to Cloud
Demo session DEM172 | Next-gen monitoring on Azure
Lab | Taking Azure Kubernetes out of the cloud and into your world (Tuesday/Wednesday/Thursday)
On-demand session OD545 | What’s new in Azure Monitor?
On-demand session OD540 | Improve Application Resilience Using Azure Chaos Studio
To read more about Azure’s adaptive cloud approach here are some of our latest blogs:
Advancing hybrid cloud to adaptive cloud with Azure | Microsoft Azure Blog
Harmonizing AI-enhanced physical and cloud operations | Microsoft Azure Blog
Hannover Messe 2024: Scaling Industrial Transformation with Azure’s Adaptive Cloud Approach – Microsoft Community Hub
Microsoft Tech Community – Latest Blogs –Read More
Build 2024: Azure AI Video Indexer integration with language models for textual video summary
We are thrilled to introduce textual video summarization for recorded video and audio files, powered by large and small language models (LLM and SLM).
AI application developers can leverage APIs to create textual summaries for audio and video files, anywhere.
Data analysts, instead of watching entire videos, can benefit from concise summaries of video and audio content and adjust them to their needs.
Azure AI Video Indexer, a cloud and edge video solution, enables textual video summarization with the following Build announcements:
Preview in the cloud: Textual video summarization in Azure AI Video Indexer, powered by Azure OpenAI
Textual video summarization in the cloud edition of Azure AI Video Indexer is powered by Azure OpenAI. This innovative addition allows customers who have created an Azure OpenAI resource in Azure to seamlessly integrate it with Video Indexer. By leveraging deployments such as GPT-4, users can now enjoy concise textual summaries of their videos, presented as an insightful extract alongside the player page. The video summary not only enhances the viewing experience but also empowers video analysts to tailor the summary’s nuances to align with specific business requirements.
The summary encapsulates the essence of the video content, drawing not only on the transcript but also on elements derived from the visual and audio aspects of the video, such as a siren or crowd reactions in the background, or visual text that appears on screen (signs, captions, objects, and more).
Preview at the edge (on-premises): Extend Azure AI Video Indexer enabled by Arc with SLM integration through Phi-3
The preview version of Azure AI Video Indexer enabled by Arc now includes integration with an SLM through Phi-3. This innovation containerizes both the Azure AI and Phi-3 models, giving video analysts the ability to perform video summarization at the edge. It represents a significant stride in our generative AI capabilities, bringing the cutting-edge Phi-3 model to the edge. The Phi-3 model opens new avenues for AI applications, especially in settings where computing resources are limited, by offering a more streamlined and efficient approach to video analysis.
The Phi-3 model, developed in line with Microsoft’s Responsible AI principles and trained on high-quality data, is a testament to our dedication to safety and excellence in AI. It’s a lightweight, state-of-the-art model designed for long-context support, making it ideal for generating responsive and relevant text in chat formats.
Use cases for video summarization across industries
In education, summarized videos can serve as study aids, allowing students to review lecture content quickly. The capability can distill lengthy training videos into key takeaways, saving employees’ time and improving knowledge retention, e.g., in corporate trainings.
In media, it helps in quickly understanding the content of large video libraries, like movies or series, without watching the entire footage. This can be particularly useful for editors and content creators who need to create promos or trailers.
In manufacturing, summarized videos can serve as training material or evidence of compliance with regulatory standards and can quickly highlight parts of footage where potential quality issues are detected on the production line.
Retailers can use video summaries to understand customer traffic patterns and preferences without watching hours of footage.
In safety and security, textual summaries can pinpoint instances of theft or suspicious behavior, streamlining the review process for security teams, and can enhance the review of training exercises by identifying key moments for analysis and improvement.
Watch the demo recording to learn more:
Video summarization flavors and customization
Video analysts utilizing the summarization feature will appreciate the added flexibility of feature customization. Tailor your summaries to meet specific needs with selectable options such as “Shorter” for concise overviews, “Longer” for detailed accounts, “Formal” for professional contexts, and “Casual” for a more relaxed tone. This personalized approach ensures that your summaries align perfectly with your intended audience and purpose.
How to make it available in my Azure AI Video Indexer account?
Use Textual Video Summarization in Your Public Cloud Environment:
If you already have an existing Azure Video Indexer account, follow these steps to use the video summarization:
Create an Azure OpenAI resource in your subscription.
Connect your Azure OpenAI resource to your Video Indexer resource in the Azure portal.
Go to the Azure AI Video Indexer portal, select a video, and choose “Generate summary”.
For detailed instructions on how to set up this integration, refer to this guidance. Please note that this feature is not available in Video Indexer trial accounts or in legacy accounts that use Azure Media Services. Take this opportunity to also remove your dependency on Azure Media Services by following these instructions.
Use Textual Video Summarization in Your Edge Environment, enabled by Arc:
If your edge appliances are integrated with the Azure Platform via Azure Arc, you’re in for a treat! Here’s how to activate the feature:
Register for Video Indexer (VI) enabled by Arc using this form. Rest assured, we are dedicated to activating the Azure AI Video Indexer Arc-enabled extension in your Video Indexer account within 30 days of your request.
Once activated, create an Azure AI Video Indexer service extension by adhering to these guidelines.
Navigate to the Azure Video Indexer portal, select a video, and click on “Generate Summary” to see the magic happen.
Our video-to-text API (aka Prompt Content API) now also supports Llama, Phi-2, and GPT-4
The Prompt Content API, which converts video to text based on Video Indexer’s extracted insights, now supports additional models: Llama, Phi-2, and GPT-4. This provides more flexibility when converting video content to text. To learn more about this API, refer to the API documentation.
Read More
About the feature
Video summarization: Public feature documentation
Transparency note
Prompt content: Video-to-text API
About Azure AI Video Indexer
Visit the Azure AI Video Indexer product website
Get started with Azure AI Video Indexer, Enabled by Arc by following this Arc Jumpstart scenario
Visit Azure AI Video Indexer Developer Portal to learn about our APIs
Search the Azure Video Indexer GitHub repository
Review our product documentation.
Get to know the recent features using Azure AI Video Indexer release notes
Use the Stack Overflow community for technical questions.
To report an issue with Azure AI Video Indexer, go to Azure portal Help + support. Create a new support request. Your request will be tracked within SLA.
For any other question, contact our support distribution list at visupport@microsoft.com
Internal error while creating code interface description file: codeInfo.mat. Aborting code generation. (asbQuadcopter-parrot mambo-R2021b)
Sir,
I received the following error while trying to build the ‘asbQuadcopter’ file: “Internal error while creating code interface description file: codeInfo.mat. Aborting code generation.”
parrot minidrone, error MATLAB Answers — New Questions
How to use function ping in Matlab 2024?
Hello world,
I have a problem with a database connection in MATLAB 2024. I think it is because some toolbox packages are missing.
Can someone please tell me what additional toolbox packages I need besides the Database Toolbox to be able to run the <ping.m> function?
Thanks,
Caroline
ping, database connection MATLAB Answers — New Questions
Warning: Error updating FunctionLine. The following error was reported evaluating the function in FunctionLine update:Array indices must be positive integers or logical value
A=10^(-4);
l= 250*10^(-6);
g=0.1517;
e=0.132205;
d=0.142;
K= 2.34*10^(-12);
Yr = @ (w) w*A*e*{d(1-K)+gK}/[l{(1-K)^2 +(gK)^2} ];
fplot(Yr,[10,100],"r")
xlabel(‘Frequency’)
ylabel(‘Conductance’)
grid on
matlab MATLAB Answers — New Questions
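For context on the error in the title: with `K` a scalar, MATLAB parses `d(1-K)` as indexing the scalar `d` at position `1-K` (not a positive integer), which raises “Array indices must be positive integers or logical values”; `{}` and `[]` are cell-array and concatenation operators rather than grouping parentheses, and `gK` is an undefined name. A minimal sketch of the presumably intended formula in Python, assuming `gK` means `g*K` and the braces/brackets were meant as parentheses (in MATLAB the same fix is to write the products explicitly and use only parentheses for grouping):

```python
# Constants copied from the question
A = 1e-4
l = 250e-6
g = 0.1517
e = 0.132205
d = 0.142
K = 2.34e-12

def Yr(w):
    # Plain parentheses for grouping; d*(1-K) and g*K written as explicit products
    return w * A * e * (d * (1 - K) + g * K) / (l * ((1 - K) ** 2 + (g * K) ** 2))

# w appears only as a linear factor, so conductance grows linearly with frequency
print([Yr(w) for w in (10, 55, 100)])
```

With these constants the expression is finite and well behaved over the plotted range [10, 100], which suggests the original intent was indeed ordinary multiplication rather than indexing.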
Microsoft List branching is this possible
Hi,
Is there an option or a way of using branching when creating an MS Form from the new SharePoint List route?
I know it’s an option when creating one directly in the MS Forms app.
Pop-up window announcement in Microsoft Teams
This morning, when I logged into Microsoft Teams, I noticed a pop-up window announcing the Microsoft Teams Public Preview & Targeted Release. It seems this notification may have also been displayed to other users. We are planning to implement a retention policy for Teams chats and would like to distribute similar information to all our Microsoft Teams users. While I am familiar with creating Teams and posting announcements within a channel, I am looking for a way to share this message without setting up a new team or channel. If anyone has experience with this feature or something similar, your insights would be greatly appreciated.
How to fetch / filter users from AD faster using Get-ADUser command.
Recently I saw a few scripts fetching users from AD like the ones below.
Get-ADUser -LDAPFilter “(whenCreated>=$date)”
or
Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False -and PasswordLastSet -gt 0}
or
Get-ADUser -Filter ‘Enabled -eq $True’
But queries like the ones above take quite a lot of time, or sometimes return a timeout error.
Is there any way to make this faster? Will using -LDAPFilter instead of -Filter make it faster?
Error message: The operation returned because the timeout limit was exceeded.
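On the performance question above: both -Filter and -LDAPFilter are translated to LDAP and evaluated on the domain controller, but filters on attributes like whenCreated must use AD’s Generalized-Time format, and a malformed value is an easy way to end up with a slow or failing query. A minimal sketch, in Python with illustrative helper names, of building a correctly formatted server-side filter string (which could then be passed as `Get-ADUser -LDAPFilter $filter`):

```python
from datetime import datetime, timezone

def ad_generalized_time(dt: datetime) -> str:
    """Format an aware datetime as AD Generalized-Time, e.g. 20240101000000.0Z."""
    return dt.astimezone(timezone.utc).strftime("%Y%m%d%H%M%S.0Z")

def when_created_filter(since: datetime) -> str:
    """Build a server-side LDAP filter for user objects created on/after `since`."""
    return (
        "(&(objectCategory=person)(objectClass=user)"
        f"(whenCreated>={ad_generalized_time(since)}))"
    )

since = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(when_created_filter(since))
```

The extra `objectCategory`/`objectClass` clauses narrow the search to user objects before the date comparison is applied; helper names here are hypothetical, not part of any Microsoft API.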
Semantic search in Azure AI Studio
Hi everyone
I’ve setup an Azure AI Search index that points to a SharePoint library and connected this to Azure AI Studio so a Chat-GPT model can be used to query the documents in the library. This all works fine if I use the Keyword search type in the chat playground, but if I change this to Semantic I get the following error:
Semantic Ranker is enabled in my Azure AI Search service instance, and all works fine when tested in the search settings in the Azure portal. The error isn’t giving me any further information, so I’m not quite sure where I can go from here.
Any assistance would be gratefully received.
Thanks in advance.
Windows 11 Notifications
hello all,
hoping someone might be able to help. We are looking for a way to stop our users from being able to change the notification settings on our Windows 10 & 11 devices (e.g., not be able to turn them off, or change which notifications are allowed and which are not).
We are hoping there may be a way via the registry, Group Policy, or a configuration profile in Intune, though we have had a look and can’t find anything. Windows 11, notifications
many thanks
Using OR in a formula
I need a formula that gives 3 different answers based on the value of one cell in a worksheet that could change.
J19 is the variable cell in my worksheet. The value of (J9-J20) may be a positive or negative number, and I need a value in J21 based on whether it is positive or negative. If positive, I need the sum. If negative, I need the cell to be 0.
If J20 is zero, I need the value to be the sum of another cell, J23.
These are the 3 formulas that give the correct answers, but I need an OR across them to get the correct answer in cell J25:
J25
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is LARGE
=IF(J20>J9,J9,J9-J20)+J22+J23+E18 works if the deductible is smaller than the charges
=IF(j20=0,J23)+E18 works if the deductible is zero and there is a copay
Is this even possible to solve for?
Thank you,
Donna
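The three formulas above can usually be collapsed into a single nested IF rather than an OR, e.g. =IF(J20=0, J23+E18, IF(J20>J9, J9, J9-J20)+J22+J23+E18). A hedged sketch of that same branching in Python, using hypothetical parameter names that mirror the cells in the question:

```python
def amount_due(J9, J20, J22, J23, E18):
    """Mirror of the question's three formulas; names match the worksheet cells."""
    if J20 == 0:
        # Zero deductible: take the copay cell plus E18
        return J23 + E18
    # Mirrors IF(J20>J9, J9, J9-J20): cap at J9 when the deductible exceeds charges
    base = J9 if J20 > J9 else J9 - J20
    return base + J22 + J23 + E18

print(amount_due(100, 0, 5, 20, 3))    # zero deductible
print(amount_due(100, 150, 5, 20, 3))  # deductible larger than charges
print(amount_due(100, 40, 5, 20, 3))   # deductible smaller than charges
```

Whether these branch boundaries match the intended business rule (e.g. what should happen when J20 equals J9) is something only the sheet’s author can confirm.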
Announcing general availability of real-time diarization
We are excited to announce the general availability of real-time diarization, an enhanced add-on feature of the Azure Speech service. With this feature, you can get live (real-time) speech-to-text transcription by speaker (Guest1, Guest2, Guest3, etc.), so that you know which speaker spoke each part of the transcribed conversation.
What’s Real-time Diarization
Diarization is a feature that differentiates speakers in an audio stream. Real-time diarization can distinguish speakers’ voices in single-channel audio in streaming mode. Combined with speech-to-text functionality, diarization can produce transcription output that contains a speaker entry for each transcribed segment. The transcription output is tagged as GUEST1, GUEST2, GUEST3, etc., based on the number of speakers in the conversation. The graph below demonstrates the difference between transcription results with and without diarization.
Use Cases and Scenarios
Real-time diarization can be used in a wide range of scenarios; some typical use cases are listed below. It can also be used to help with accessibility scenarios.
Live Conversation/Meeting Transcription
When speakers are all in the same room with a single-microphone setup, you can produce a live transcription showing which speaker (e.g., Guest-1, Guest-2, or Guest-3) said what. Combined with GPT over the diarized transcription, you can also generate a meeting or conversation summary or recap, or ask questions about the conversation or meeting.
Microsoft Teams, for instance, is leveraging the diarization feature to show live meeting transcription in Teams. Based on the meeting transcription, Microsoft Teams’ Copilot provides a meeting summary, recap, and many other features that let people interact with Copilot about their meetings.
Real-time Agent Assist
Using Speech Analytics (another new feature that the Azure Speech service is introducing at Build) together with real-time diarization, you can run live transcription analytics to support agent-assist scenarios and optimally address customers’ questions and concerns.
Live Caption and Subtitle (Translated Caption)
Show live captions or subtitles (translated captions) of meetings, videos, or audios.
What’s Improved Since Public Preview
Since the public preview, we have put a lot of effort into improving diarization quality, which was the major feedback we heard from preview users. We released a new diarization model that improves diarization quality by ~3% on WDER (word diarization error rate). In addition, we removed the requirement for 7 seconds of continuous audio from a single speaker: in the preview version, when a speaker first talked, diarization only reached full quality after 7 seconds of continuous audio from that speaker. The GA version no longer has this limitation.
Early Adopters from Diverse Areas
So far, we have over a thousand customers from diverse industries trying out real-time diarization on a variety of scenarios. Below are some examples.
Medical
Live transcription between doctor and patient, and transcription analytics
Banking
Live meeting transcription
Telecommunication
Conversation transcription, summarization, transcription analytics
Legal
App to assist trial and appellate attorneys who are preparing for oral arguments (e.g., capturing the attorneys’ and judges’ positions during mock oral arguments)
Try it Out
To try out real-time diarization, go to Speech Studio (Speech Studio – Real-time speech to text (microsoft.com)) and follow these steps (shown in the screenshot below) to experience the feature:
Click on “Show advanced options”.
Use the “Speaker diarization” toggle to turn on or off the real-time diarization.
Real-time diarization is available in all the regions that the Azure Speech service supports. It is released through the Speech SDK (version 1.31.0 or higher). The feature is available in the following SDKs:
C#
C++
Java
JavaScript
Python
Please feel free to follow the Quickstart: Real-time diarization to start experiencing the feature.
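The flow above can be sketched in code. This is a minimal sketch rather than the official quickstart: it assumes the azure-cognitiveservices-speech Python package (version 1.31.0 or later, per the note above), a valid Speech resource key and region, and a default microphone; the function and variable names are illustrative.

```python
import time

def format_line(speaker_id: str, text: str) -> str:
    """Render one diarized transcript line, e.g. 'Guest-1: Hello there.'"""
    return f"{speaker_id or 'Unknown'}: {text}"

def transcribe_with_diarization(key: str, region: str, seconds: int = 30) -> None:
    """Live diarized speech-to-text from the default microphone (sketch)."""
    # Assumed dependency: azure-cognitiveservices-speech >= 1.31.0
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
    audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
    transcriber = speechsdk.transcription.ConversationTranscriber(
        speech_config=speech_config, audio_config=audio_config
    )
    # Each finalized segment arrives tagged with a speaker id (Guest-1, Guest-2, ...)
    transcriber.transcribed.connect(
        lambda evt: print(format_line(evt.result.speaker_id, evt.result.text))
    )
    transcriber.start_transcribing_async().get()
    time.sleep(seconds)  # keep transcribing for a while
    transcriber.stop_transcribing_async().get()

print(format_line("Guest-1", "Hello there."))
```

Called with real credentials, transcribe_with_diarization should print one labeled line per transcribed segment, matching the GUEST-tagged output described above.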
C2000 use the CCS project to simulate a model like HIL
Dear Support Team,
I am wondering whether there is a quick guide on how to use TI launchpad DELFINO F28379D with a Simulink model from a precompiled Code Composer Studio project in real time.
What are the requirements for this task? The goal is to have the control implemented via CCS on the LaunchPad running in Debug mode, using UART to send data and receive feedback in real time.
Thanks in advance
launchpad, hil, simulink, delfino, c2000 MATLAB Answers — New Questions
How can check if GUI fully created?
Hello all,
So I have an app which creates additional tabs with content (tables, plots, etc.) somewhere in the middle of a process. I have covered this code with a progress dialog, but after the dialog disappears it takes the app literally an additional ~5 minutes to actually make these elements fully available.
So my question: is it possible to somehow check whether the app has finished its routine or not?
matlab gui, app designer MATLAB Answers — New Questions
No access to reservationpage
Best,
One of my co-workers has access to 4 reservation pages in Bookings with admin rights. When she accesses a reservation page, she keeps getting the message that she has no permissions to the reservation page, and that she has no access to 3 of the 4 reservation pages.
I have already done the following:
– cleared browser history.
– cleared cache.
– assigned the same permissions to each reservation page.
– tried on a mobile device.
But none of the steps above has helped.
Can anyone help me solve this persistent problem?
Regards,
Robby
Teams does not manage properly External Monitor on iPad
I’ve an iPad Air 5 that supports display to external HDMI monitor through usb-c port.
When I configure the external monitor as an extended display (not mirrored), Teams seems unable to manage that configuration properly. More specifically, I’ve observed the following issues:
1. When Teams is already open, switching to the external monitor makes it impossible to join meetings (tapping/clicking on Join meeting has no effect)
2. Closing the application and re-opening it (with the external monitor connected) sometimes lets me join the meeting, but the app becomes unusable because the meeting window is shown on the external monitor as a very small window (and apparently no other apps can co-exist with it), while the “main” Teams application stays on the iPad display. When the main Teams application is moved to the external monitor, the “meeting” window disappears
This is annoying; each time I have to join a meeting, I have to detach the cable to the external monitor if I want to run the meeting properly…
How to export all sheets as separate files: sheetName.pdf from workbook?
Hello, we’re using Microsoft Excel for mac version 16.84. We can create workbooks with sheets. But, we cannot see how to export all workbook sheets as separate pdfs documents, with their names as file names.
It is possible to export the whole workbook as a PDF and then drag the individual pages out as their own .pdf documents; but they are saved as 1(dragged).pdf, 2(dragged).pdf, etc. We lose the name.
Has anybody else had this issue? Is there any way to export them with their names, as in previous versions of the software?
Thanks all.
Daily Agenda Mail
Hello,
We are using the new Outlook App and are wondering where the option “Receive daily agenda e-mail” is.
Has this feature been removed or where can I find it?