Month: August 2024
Text to Speech Avatar in Azure AI is now generally available
Today, we are excited to announce that Text to Speech (TTS) Avatar, a capability of Azure AI Speech service, is now generally available for developers, enterprises and content creators.
This service brings natural-sounding voices and photorealistic avatars to life, enhancing customer engagement and overall experience. With TTS Avatar, developers can create personalized and engaging experiences for their customers and employees, while also improving efficiency and providing innovative solutions.
The TTS Avatar service provides developers with a variety of pre-built avatars, featuring a diverse portfolio of natural-sounding voices and an option to create custom synthetic voices using Azure Custom Neural Voice. Additionally, the photorealistic avatars can be customized to match a company’s branding. Developers can use TTS Avatar to generate speech and avatars in real-time or through a batch mode, depending on the needs of their applications.
Prioritizing responsible AI is fundamental to our Text to Speech Avatar capability. We developed it to adhere to our responsible AI principles and offer Custom Avatar as a limited access service, with only a select number of use cases approved through a controlled application and review process. Scroll to the end of this blog to learn more about our approach to responsible AI for TTS Avatar.
Selected use cases and customers
Let’s take a closer look at some of the key use cases for TTS Avatar:
Customer service
Chatbots are a popular way for businesses to provide 24/7 customer service. Azure TTS Avatar could help enhance customer experience by providing a more personalized and engaging interaction. An avatar can answer customer questions, provide troubleshooting assistance, and even help customers complete transactions. This improves customer satisfaction and reduces the workload on customer service agents.
With the general availability of TTS Avatar, we are closely collaborating with customers and partners around the world to develop engaging customer service solutions for a variety of industries.
KPMG, a multinational professional services network, is leveraging TTS Avatar to create personalized and engaging customer service solutions for their customers.
“By utilizing Microsoft Azure’s TTS Avatar service with Custom Neural Voice, businesses can create personalized and engaging experiences for their customers and employees, while also improving efficiency and providing innovative solutions, as well as reducing costs in certain customer service areas,” says Sina Steidl-Küster, Managing Partner of KPMG Germany/Region Southwest.
Fujifilm is incorporating TTS Avatar with NURA, the world’s first AI-powered health screening center.
“Embracing the Azure TTS Avatar at NURA as our 24-hour AI assistant marks a pivotal step in healthcare innovation. At NURA, we envision a future where AI-powered assistants redefine customer interactions, brand management, and healthcare delivery. Working with Microsoft, we’re honored to pioneer the next generation of digital experiences, revolutionizing how businesses connect with customers and elevate brand experiences, paving the way for a new era of personalized care and engagement. Let’s bring more smiles together,” says Dr. Kasim, Executive Director and COO, Nura AI Health Screening, Fujifilm.
MAPFRE, an insurance company in Spain, is using Azure TTS Avatar to generate videos that improve communication and efficiency, drive innovation, and optimize processes.
“In MAPFRE, we have assessed Microsoft’s Avatar service, and it has demonstrated great value to us because of its ability to enhance the user experience and promote collaboration. Additionally, its use can drive innovation and optimize processes, adding significant value to our organization,” says Ubaldo Gonzalez, Chief Data Officer MAPFRE Spain.
Dentsu Digital, a comprehensive digital marketing company, is using Azure TTS Avatar to generate lifelike voices and avatars that enhance the overall customer experience and promote collaboration.
“New challenges invariably demand bold approaches. We are deeply honored to collaborate with Microsoft, leveraging their cutting-edge technology and expertise as we aim to implement this vision into society and usher in a new era,” says Tomohiko Sugiura, Executive Vice President, Dentsu Digital Inc.
Bank SinoPac is enabling its chatbot to talk to and interact with customers using TTS Avatar in its kiosks.
“Azure’s TTS Avatar technology has sparked great expectations for lifelike agents. With the imminent arrival of AGI second level and continuous evolution, I am confident that there will be more diverse and innovative applications for financial services and efficiency improvement,” says Coolson Shen, Chief Information Officer of Bank SinoPac.
Herbalife is working with Microsoft to build real-time chatbots for their products.
“Herbalife has always been committed to finding innovative solutions to elevate well-being. Partnering with Microsoft propels us into the future and connects our global community like never before. With AI avatars that leverage Text-to-Speech and custom neural voice pro technology, we have more agility to answer inquiries, offer wellness tips and provide advice to empower our consumers to live their best lives,” says Monica Kedzierski, VP Global Data, Analytics & AI, Herbalife.
Lokeshwar R Vangala, Senior Director of Engineering, Data & AI at Coca Cola, aptly stated, “Plain vanilla chatbots are a relic of the past. Enter the new era with virtual avatars and influencers! Microsoft’s virtual avatar with custom neural voice (CNV) revolutionizes customer support and marketing, offering lifelike interactions that engage users like never before. These avatars enhance user experience, provide personalized assistance, and boost brand loyalty. In the competitive GenAI arena, Microsoft’s scalable technology is the key to staying ahead and delivering unmatched value.”
E-commerce
Avatars are also being used in e-commerce to offer a more personalized and engaging shopping experience. Videos represent a powerful means for businesses to engage with their customers. Streaming commerce, a fresh approach to shopping, involves live streaming videos of products and services. This allows customers to engage with the host and make real-time purchases.
As an example, Microsoft Store on JD.com is leveraging avatars to enhance the streaming commerce experience. During live streaming events, a lifelike avatar could interact with customers in real-time, providing product information and answering customer questions. The avatar could also assist with the purchasing process, making it easy for customers to complete their transactions without leaving the streaming platform. With TTS Avatar, Microsoft Store on JD.com was able to drive sales and increase customer engagement, while also promoting collaboration and trust between the customer and the brand.
Content consumption
TTS Avatar significantly enhances content consumption by converting text into natural, human-like speech, making content accessible and convenient. The avatar’s visual element increases engagement through human-like emotions, while its customization capabilities offer personalized user experiences, fostering greater satisfaction and loyalty. Additionally, by supporting multiple languages, TTS Avatar breaks language barriers, making content more inclusive and accessible to a broader audience.
Mediapro, a leading group in the European audiovisual sector, unique in content integration, production and audiovisual distribution, is working with Microsoft to innovate their digital communications. “We have created AIMar, an avatar based on MSFT technology purposefully designed for the Communications department. AIMar mimics a real Communications professional and enables generating communication messages and campaigns at any time, in any language,” says Mayte Hidalgo, Head of AI Center of Excellence of Grup Mediapro.
TTS Avatar with GPT-4o
It’s easy to get started with TTS avatars, whether for video creation using batch synthesis or for live chats using real-time synthesis with Azure OpenAI Service GPT-4o integrated.
Developers can take advantage of Azure TTS Avatar’s API and SDKs to integrate the service into their applications. The API and SDKs provide a simple and easy-to-use interface for generating speech and avatars, making it easy for developers to incorporate Azure TTS Avatar into their workflows. Check out the documentation on live-chat synthesis avatar and batch synthesis avatar.
We also provide sample code to aid in integrating the text-to-speech avatar with the GPT-4o model. Learn more about how to create lifelike chatbots with real-time avatars and Azure OpenAI Service, or dive into code samples here (JS code sample, and python code sample). For guidance on creating a live chat app using Azure OpenAI Service On Your Data, please refer to this sample code (search “On Your Data”).
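For illustration, here is a minimal sketch of the pattern those samples follow: ask a GPT-4o deployment for a reply and then synthesize that reply with the Speech SDK. The endpoint, key, and deployment names are placeholders, and the avatar-specific real-time video synthesis requires the setup shown in the linked samples rather than this plain speech call.

# Minimal sketch: generate a reply with GPT-4o, then speak it with the Speech SDK.
# Resource names, keys, and deployment names are placeholders; avatar video synthesis
# needs the avatar-specific setup from the linked samples.
import os
import azure.cognitiveservices.speech as speechsdk
from openai import AzureOpenAI

aoai = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)
reply = aoai.chat.completions.create(
    model="gpt-4o",  # your GPT-4o deployment name
    messages=[{"role": "user", "content": "What are your store hours?"}],
).choices[0].message.content

speech_config = speechsdk.SpeechConfig(
    subscription=os.environ["SPEECH_KEY"], region=os.environ["SPEECH_REGION"]
)
speech_config.speech_synthesis_voice_name = "en-US-AvaMultilingualNeural"
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async(reply).get()  # plays the GPT-4o answer as audio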
Here is a demo of TTS live chat avatar integrated with GPT-4o.
For regional availability of the TTS Avatar capability, learn more here.
Responsible AI considerations
Microsoft
Microsoft believes that when you create technologies that can change the world, you must also ensure that the technology is used responsibly. Our goal is to develop and deploy AI that will have a beneficial impact and earn trust from society. Our work is guided by a core set of principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. We take a cross-company approach through cutting-edge research, best-of-breed engineering systems, and excellence in policy and governance.
Microsoft is committed to helping our customers use our AI products responsibly, sharing our learnings, and building trust-based partnerships through tools like Transparency Notes and Impact Assessments. Many of these resources can be found at https://aka.ms/RAI.
Text to Speech service
As part of this commitment, we have integrated safety and security features and guidelines into Azure TTS Avatar. This includes measures to promote transparency in user interactions, mechanisms to identify and mitigate potential bias or harmful synthetic content, among other features.
In this transparency note, we describe the technology and capabilities for TTS Avatar, its approved use cases, considerations when choosing use cases, its limitations, fairness considerations and best practice for improving system performance.
We require all developers and content creators to adhere to our code of conduct when using avatar features including prebuilt and custom avatars.
To ensure the responsible use of the technology, we have limited access to the custom avatar features. Custom avatars are available by registration only, and only for certain use cases. To access the feature, follow the limited access instructions to register your use case. In addition to limited access registration, you must obtain explicit permission from the avatar talent prior to creating an avatar model that resembles the actor’s appearance. We require every customer to upload a recorded video file with a pre-defined statement from the avatar talent acknowledging that the customer will use the talent’s image and voice to create a TTS avatar.
Content Safety and Watermark
Azure AI Content Safety is integrated into the batch synthesis process of text to speech avatars for video creation scenarios. This added layer of text moderation allows for the detection of offensive, risky, or undesirable text input, thereby preventing the avatar from producing harmful output. The text moderation feature spans multiple categories, including sexual, violent, hate, self-harm content, and more. It’s available for batch synthesis of text-to-speech avatars both in Speech Studio and via the batch synthesis API.
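The same style of text moderation can also be exercised with the standalone Azure AI Content Safety SDK, for example to pre-screen an avatar script before submitting a batch synthesis job. A short sketch, assuming a Content Safety resource whose endpoint and key are placeholders:

# Illustrative pre-screening of avatar script text with the Content Safety SDK
# (batch synthesis applies similar moderation automatically; endpoint and key are placeholders).
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)
result = client.analyze_text(AnalyzeTextOptions(text="Script for the avatar to speak..."))
for item in result.categories_analysis:
    print(item.category, item.severity)  # severity per category: Hate, SelfHarm, Sexual, Violence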
To provide clearer insights into the source and history of video content created by text to speech avatars, we’ve adopted the Coalition for Content Provenance and Authenticity (C2PA) Standard. This standard offers transparent information about AI-generation of video content. For more details on the integration of C2PA with text to speech avatars, refer to Content Credentials in Azure Text to Speech Avatar.
Additionally, invisible watermarks are added to avatar outputs. These watermarks allow approved users to identify whether a video is synthesized using Azure AI Speech’s avatar feature. Eligible customers can use Azure AI Speech avatar watermark detection capabilities. To request watermark detection on a given video, please contact avatarvoice[at]microsoft.com.
Microsoft Azure
TTS Avatar is built on Microsoft Azure, a secure and compliant cloud infrastructure. Learn more about how your data will be processed and protected here.
Get started
Azure TTS Avatar is a powerful tool for developers looking to enhance customer engagement and improve overall experience. With a variety of use cases and customer references, it’s clear that Azure TTS Avatar is paving the way for a new era of customer engagement and innovation. As a developer, you can use Azure TTS Avatar to create personalized and engaging experiences for your customers and employees with a rich choice of prebuilt avatars and voices. You can also leverage Custom Avatar and Custom Neural Voice to create custom synthetic voices and avatar images that reflect your brand. With responsible AI features that promote transparency and fairness, Azure TTS Avatar helps you create inclusive and ethical applications that serve a diverse range of users.
Learn more:
Create a video using prebuilt avatars
Try our live chat demo with prebuilt avatars
Apply for access to Custom Avatar and Custom Neural Voice
Microsoft Tech Community – Latest Blogs –Read More
Integrated vectorization with Azure OpenAI for Azure AI Search now generally available
We’re excited to announce the general availability of integrated vectorization with Azure OpenAI embeddings in Azure AI Search. This marks an important milestone in our ongoing mission to streamline and expedite data preparation and index creation for Retrieval-Augmented Generation (RAG) and traditional applications.
Why is vectorization important?
Vectorization is the process of transforming data into embeddings (vector representations) in order to perform vector search. Vector search aids in identifying similarities and differences in data, enabling businesses to deliver more accurate and relevant search results. Getting your data prepared for vectorization and indexed also involves various steps, including cracking, enrichment and chunking. The way you perform each of these steps offers opportunities to make your retrieval system more efficient and effective. Take a look at the blog post Outperforming vector search with hybrid retrieval and ranking capabilities that showcases the configurations that would work better depending on the scenario.
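To make the idea concrete, here is a toy sketch of the similarity computation that vector search performs once embeddings exist; the vectors below are made up, whereas real embeddings come from a model such as an Azure OpenAI embedding deployment.

# Toy illustration of the similarity computation behind vector search.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = np.array([0.1, 0.8, 0.3])                      # embedding of the user query
doc_vecs = {
    "doc-about-returns": np.array([0.2, 0.7, 0.4]),
    "doc-about-pricing": np.array([0.9, 0.1, 0.0]),
}
ranked = sorted(doc_vecs.items(), key=lambda kv: cosine_similarity(query_vec, kv[1]), reverse=True)
print(ranked[0][0])  # the most similar document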
What is integrated vectorization?
Integrated vectorization, a feature of Azure AI Search, streamlines indexing pipelines and RAG workflows from source file to index query. It incorporates data chunking and text/image vector conversions into one flow, enabling vector search across your proprietary data with minimal friction.
Integrated vectorization simplifies the steps required to prepare and process your data for vector retrieval. As part of the indexing pipeline, it handles the splitting of original documents into chunks, automatically creates embeddings with its Azure OpenAI integration, and maps the newly vectorized chunks to an Azure AI Search index. It also enables the automated vectorization of user queries sent to the AI Search index.
This index can be used as your retrieval system wherever you are building your RAG application, including Azure AI Studio and Azure OpenAI Studio.
What functionality is now generally available?
The following functionalities within integrated vectorization are generally available as part of REST API version 2024-07-01:
Azure OpenAI embedding skill and vectorizer: These features allow for automatic vectorization of text data during data ingestion and query time.
Index Projections: This feature enables mapping one source document to multiple chunks, enhancing the relevance of search results.
Split skill functionality for chunking with overlap: This functionality divides your data into smaller, manageable chunks with configurable overlap for independent processing (a minimal chunking sketch follows this list).
Custom Vectorizer functionality: This allows for connection to other embedding endpoints apart from Azure OpenAI.
Shared Private Link for Azure OpenAI accounts: This feature, which is part of the latest AI Search management API version 2023-11-01, provides secure and private connectivity from a virtual network to linked Azure services.
Customer Managed Keys for indexes with vectorizers: This feature allows for additional security and control over your data through the use of your own keys. When you configure CMK in your AI Search index, your vectorizer operations at query time are also encrypted with your own keys.
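As mentioned above for the Split skill, here is a minimal, illustrative sketch of chunking with overlap in plain Python; the chunk size and overlap values are arbitrary and not the service defaults.

# Illustrative fixed-size chunking with overlap, the idea behind the Split skill.
def chunk_with_overlap(text, chunk_size=500, overlap=100):
    chunks = []
    step = chunk_size - overlap
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + chunk_size])
    return chunks

pages = chunk_with_overlap("long document text ... " * 200)
print(len(pages), len(pages[0]))  # number of chunks and size of the first chunk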
How can you get started with integrated vectorization from the Azure portal?
The Import and vectorize data wizard in the Azure portal simplifies the creation of integrated vectorization components, including document chunking, automatic Azure OpenAI embedding creation, index definition and data mapping. This wizard now supports Azure Data Lake Storage Gen2, in addition to Azure Blob Storage and OneLake (in preview), facilitating data ingestion from diverse data sources. Coming soon, the wizard will also support mapping additional source document metadata to chunks, and the Azure portal will provide debug sessions functionality for skillsets configured with index projections.
Azure AI Search also allows you to personalize your indexing pipeline through code and take advantage of integrated vectorization using any of its directly supported data sources. For example, here’s a blog post on how to achieve this for Azure SQL Server data with integrated vectorization: Vector Search with Azure SQL Database.
What’s still in public preview?
We also have support for image (multimodal) embeddings and Azure AI Studio model catalog embeddings which remain in public preview. For more information about this functionality visit Azure AI Search now supports AI Vision multimodal and AI Studio embedding models – Microsoft Community Hub.
Customers and benefits
Streamlined RAG pipelines allow your organization to scale and accelerate app development. Integrated vectorization’s managed embedding processing enables organizations to offer turnkey RAG systems for new projects, so teams can quickly build a GenAI application specific to their datasets and needs, without having to build a custom deployment each time.
Customer: SGS & Co
For over 70 years, SGS & CO has been at the forefront of design, graphic services, and graphic manufacturing. Our specialized teams at Marks and SGS collaborate with clients worldwide to ensure a consistent and seamless brand experience.
“A key priority has been to equip our global teams with efficient tools that streamline their workflows, starting with our sourcing and research processes. We recognized the need for a system that allows for searchable assets without depending solely on order administration input, which can be inconsistent or deviate from actual data. This discrepancy posed a challenge for our AI modules.”
“SGS AI Visual Search is a GenAI application built on Azure for our global production teams to more effectively find sourcing and research information pertinent to their project. The most significant advantage offered by SGS AI Visual Search is utilizing RAG, with Azure AI Search as the retrieval system, to accurately locate and retrieve relevant assets for project planning and production.”
“Thanks to RAG’s Azure AI Search’s vector search capabilities, we can surpass the limitations of exact and fuzzy matching with contextual retrieval. This allows our employees to access information swiftly and effectively, enhancing service delivery to both our internal teams and global clients.”
“Additionally, the integrated vectorization feature in AI Search has greatly streamlined our data processing workflows. It automates batching and chunking, making it faster and easier to index data without requiring separate compute instances. Azure’s seamless handling of vectorization during live searches saves development time and reduces deployment costs. This capability enables us to efficiently create and manage indexes for multiple clients without extensive pipeline management. Moreover, integrating this feature with other RAG applications, such as chatbots and data retrieval systems, further enhances our ability to deliver comprehensive solutions across various platforms.”
Laura Portelli, Product Manager, SGS & Co
Customer: DenizBank
Intertech is the software house of DenizBank, Turkey’s 5th largest private bank. They built one centralized RAG system using Azure AI Search and integrated vectorization, to support multiple GenAI applications and minimize data processing and management.
“At Intertech, we were in search of a solution to disseminate and more efficiently utilize information from our current documentation, solutions offered in our ticket system, and company procedures. This solution needed to act as a central vectorization and search solution for our various, different GenAI applications being built. Thanks to Azure AI Search’s integrated vectorization, we had access to the latest models offered by OpenAI, including embedding-3-large, and our job became much easier, allowing us to develop various applications very quickly and effortlessly.”
Salih Eligüzel, Head of DevOps and MLOps, Intertech
FAQ
What’s integrated vectorization pricing?
Your AI Search service pricing includes an allowance of built-in indexer usage. The Split skill (data chunking), native data parsing, and index projections, which are necessary for integrated vectorization, are offered at no extra cost. Azure OpenAI embedding calls are billed to your Azure OpenAI service according to its pricing model.
What customizations are available with integrated vectorization?
The Azure portal supports the most common scenarios via the “Import and vectorize data” wizard. However, if your business needs extend beyond these common scenarios and require further customization, Azure AI Search lets you customize your indexing pipeline through code and use the integrated vectorization functionality with any of its directly supported data sources.
Customization options include enabling features available through other skills in the AI Enrichment suite. For instance, you can make use of custom code through Custom WebApi skill to implement other chunking strategies, utilize AI Document Intelligence for chunking, parsing, and preserving table structure, and call upon any of the available built-in skills for data transformation, among others. Skillset configuration serves to enhance functionality to better suit your business needs.
For a more comprehensive understanding, we encourage you to explore our AI Search vector GitHub repository, which houses sample code, and our Azure AI Search Power Skills repository, containing examples of custom skills. For example, this custom skill code calls an external embedding endpoint (other than Azure OpenAI) and can be invoked from the custom indexing pipeline and by the vectorizer at query time.
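For orientation, the rough shape of such a custom skill is a small web endpoint that accepts the skill's records payload and returns one vector per record. The sketch below uses Flask purely for illustration, and compute_embedding() is a placeholder for a call to whatever embedding model you choose.

# Rough shape of a custom WebApi skill endpoint (Flask and compute_embedding are illustrative).
# The skill contract is a JSON body of {"values": [{"recordId": ..., "data": {...}}, ...]}
# and a response that echoes each recordId with the produced data.
from flask import Flask, request, jsonify

app = Flask(__name__)

def compute_embedding(text):
    return [0.0] * 1536  # placeholder: call your own embedding endpoint here

@app.route("/embed", methods=["POST"])
def embed():
    records = request.get_json()["values"]
    results = []
    for record in records:
        vector = compute_embedding(record["data"]["text"])
        results.append({"recordId": record["recordId"], "data": {"vector": vector}})
    return jsonify({"values": results})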
Some scenarios that are a good fit for integrated vectorization
Integrated vectorization is particularly beneficial when preparing data with AI enrichment before chunking and vectorizing it. Azure AI Search conveniently provides AI enrichment capabilities, such as OCR and other data transformations, before the data is placed in the index.
Integrated vectorization is ideal for RAG solutions that require quick deployment without constant developer intervention. Once the needed patterns are identified, they can be made available for teams to reuse in their own RAG deployments, for example per-project or per-use-case scenarios with specific documents.
In essence, if you aim to expedite your time to market for RAG scenarios with low/no-code for retriever creation, integrated vectorization offers a promising option.
More news
Azure AI Search is also launching binary quantization, along with other vector relevance features, to General Availability today! Dive into the details of these new additions in our Binary Quantization GA announcement blog post.
What’s next?
Stay tuned for more updates on the latest features of Azure AI Search and their role in simplifying integration for RAG applications!
Getting started with Azure AI Search
Learn more about Azure AI Search and about all the latest features.
Start creating a search service in the Azure Portal, Azure CLI, the Management REST API, ARM template, or a Bicep file.
Learn about Retrieval Augmented Generation in Azure AI Search.
Explore our preview client libraries in Python, .NET, Java, and JavaScript, offering diverse integration methods to cater to varying user needs.
Explore how to create end-to-end RAG applications with Azure AI Studio.
Microsoft Tech Community – Latest Blogs –Read More
Announcing the General Availability of the VS Code extension for Azure Machine Learning
Machine learning and artificial intelligence are transforming the world as we know it. With the power of data, you will have countless opportunities to create something new, unique, and exciting. Whether you are a seasoned data scientist or a curious beginner, you need a platform that can help you build, train, deploy, and manage your machine learning models with ease and efficiency. Azure Machine Learning has always been the backbone for machine learning tasks, and we want to further help you in your machine learning journey by improving the way you write code.
The VS Code extension for Azure Machine Learning has been in preview for a while, and we are excited to announce the general availability of the VS Code extension for Azure Machine Learning. You can use your favorite VS Code setup, either desktop or web, to build, train, deploy, debug, and manage machine learning models with Azure Machine Learning from within VS Code. This means that the extension is stable, reliable, ready for production use, and comes with additional features, such as VNET support. The update will roll out throughout the day.
“We have been using the VS Code extension for Azure Machine Learning since its preview release, and it has significantly streamlined our workflow. The ability to manage everything from building to deploying models directly within our preferred VS Code environment has been a game-changer. The seamless integration and robust features like interactive debugging and VNET support have enhanced our productivity and collaboration. We are thrilled about its general availability and look forward to leveraging its full potential in our AI projects.” –Ornaldo Ribas Fernandes: Co-founder and CEO, Fashable
Azure Machine Learning
Azure Machine Learning (Azure ML) is a cloud-based service that enables you to build, train, deploy, and manage machine learning models.
With Azure Machine Learning service, you can:
Build and train machine learning models faster, and easily deploy to the cloud or the edge.
Use the latest open-source technologies such as TensorFlow, PyTorch, or Jupyter.
Experiment locally and then quickly scale up or out with large GPU-enabled clusters in the cloud.
Interactively debug experiments, pipelines, and deployments using the built-in VS Code debugger.
Speed up data science with automated machine learning and hyper-parameter tuning.
Track your experiments, manage models, and easily deploy with integrated CI/CD tooling.
With this extension installed, you can accomplish much of this workflow directly from Visual Studio Code. The VS Code extension provides a user interface to create and manage Azure ML resources, such as experiments, compute targets, environments, and deployments. It also supports the Azure ML 2.0 CLI, which is the new command-line tool that simplifies the specification and execution of machine learning tasks.
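As a rough illustration of how jobs are specified, the snippet below submits a command job with the Azure ML Python SDK v2; the CLI v2 supported by the extension uses an equivalent YAML definition. The subscription, workspace, environment, and compute names are placeholders.

# Hedged sketch: submitting a command job with the Azure ML Python SDK v2.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

job = command(
    code="./src",                               # folder containing train.py
    command="python train.py --epochs 10",
    environment="<curated-or-custom-environment>@latest",  # placeholder environment name
    compute="cpu-cluster",                      # placeholder compute target
    experiment_name="vscode-extension-demo",
)
ml_client.jobs.create_or_update(job)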
Get started with Azure Machine Learning extension
One-Click Connect to VS Code from Azure ML Studio
To get started with VS Code, navigate to the compute section of your Azure Machine Learning Studio. Find the desired compute instance and click on the VS Code (Web) or VS Code (Desktop) links under the “Applications” section.
Don’t have an Azure ML workspace or compute instance? Check out the guide here: Tutorial: Create workspace resources – Azure Machine Learning | Microsoft Learn
VS Code Desktop
After clicking on the link for VS Code desktop, the browser will ask you for your permission to launch the VS Code Desktop application. VS Code desktop will ask you to sign in using your Microsoft/Azure account.
Follow the sign-in prompts, then you should be all set up to develop your own machine learning models using your favorite VS Code set up!
VS Code Web
After clicking on the link, VS Code (Web) will open to a new tab on your browser. It may ask you to sign in using your Microsoft/Azure account, so VS Code will have permission to access your Azure subscription and workspace. Note the connection process may take a few minutes.
After signing in, you should now be connected to your Azure Machine Learning workspace inside of VS Code. Time to build your own machine learning model using the full power of VS Code!
Feedback
Give the Azure Machine Learning extension a try and let us know what you think. If you have any questions or feedback, please let us know your thoughts in this survey! You can also file an issue on our public GitHub repo with any questions or concerns you may have.
Need a guide to help you get started or documentation? Check out the tutorials here: Azure Machine Learning documentation | Microsoft Learn
Microsoft Tech Community – Latest Blogs –Read More
Document Field Extraction with Generative AI
Adoption of Generative AI technologies is accelerating, driven by the transformative potential they offer across various industry sectors. Azure AI enables organizations to create interactive and responsive AI solutions customized to their requirements, playing a significant part in helping businesses harness Generative AI effectively. With the new custom field extraction preview, you can leverage generative AI to efficiently extract fields from documents, ensuring standardized output and a repeatable process to support document automation workflows.
Field Extraction using Large Language Models
To extract fields from documents using Large Language Models (LLMs) or Generative AI, you typically need to create a complex orchestration workflow that includes multiple services to manage tasks like text extraction, document chunking, vectorization, search index creation, and prompt engineering. Building and maintaining such a workflow comes with several challenges:
Size and Complexity of Prompts: Managing prompts to accommodate variations can be difficult, resulting in a large number of prompts and associated costs.
Inconsistent Results: Results may vary across multiple runs of the same document, leading to reliability issues.
Grounding: Ensuring that values are accurately extracted and traceable to address issues with hallucination.
Lack of Confidence Scores: Absence of confidence scores makes it challenging to automate downstream processes.
Imagine harnessing the benefits of generative AI without the complexities of developing your own workflow. With the new custom field extraction capability, you simply define your schema, let the model extract the necessary fields, and correct any prediction errors. Once the model is trained, you can integrate it into your document processing workflows with a single API call. This approach provides grounded results and confidence scores, offering guardrails to ensure the extracted values align with your business needs.
Azure AI Document Intelligence
Azure AI Document Intelligence is an AI service offering a streamlined set of APIs and a studio experience to efficiently extract content, structure (such as tables, paragraphs, sections, and figures), and fields – whether predefined for specific document types or custom-defined for any document or form. With the Document Intelligence APIs, you can easily split, classify, and extract fields or content from any document or form at scale, tailored to meet your business needs. The latest Document field extraction model leverages generative AI to extract user-specified fields from documents across a wide variety of visual templates. This custom extraction model combines the power of document understanding with Large Language Models (LLMs) and the rigor and schema from custom extraction capabilities to create a model with high accuracy in minutes.
Why Choose Azure Document Field Extraction?
Accuracy and Reliability: Our AI models are built to deliver accurate data extraction, reducing errors and improving efficiency.
Scalability: Easily scale your document processing capabilities to meet the growing demands of your business.
Customizability: Tailor our extraction models to your specific requirements, ensuring the perfect fit for your unique workflows.
Grounded results: Locate the extracted data within the source documents, ensuring the response is generated from the content and enabling human review workflows.
Confidence scores: Maximize efficiency and minimize costs in automation workflows, leveraging confidence scores.
Cost Efficiency: With our new pricing, enjoy the best-in-class AI technology at a fraction of the cost.
Building a Custom Field Extraction Model
The new field extraction model is available in Azure AI Studio under AI Services – Vision + Document. Start by creating a project to work with your documents.
Once you select the project, you should be in the Define schema window. The files you uploaded are listed, and you can use the drop-down option to select files. Start adding fields by clicking the Add new field button. Enter a name, description, and type for each field to be extracted. Once all the fields are added, select the Save button at the bottom of the screen.
After the schema is saved, all the uploaded training documents are analyzed, and field values are automatically extracted. The auto-extracted fields are tagged as Predicted. Review the predicted values. If a field value is incorrect or isn’t extracted, hover over the predicted field and select the edit button to make the changes. After the labels are reviewed and corrected for all the training documents, proceed to build your model.
On the Build model dialog page, provide a unique model name and, optionally, a description. Select Build to initiate the training process. Generative models train instantly! Refresh the page and select the model once its status changes to succeeded.
Once the model training is complete, you can test your model by selecting the Test button. Upload your test files and select Run Analysis to extract field values from the documents. Validate your model accuracy by evaluating the results for each field.
You can use the REST API or client libraries to submit a document for analysis. The custom generative AI model is highly effective at extracting simple fields from documents without requiring labeled samples. However, providing a few labeled samples can significantly enhance the extraction accuracy for more complex fields and user-defined fields like tables.
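As a hedged sketch of that single API call, the snippet below analyzes a document with a trained custom model using the Document Intelligence Python library; the endpoint, key, model ID, and file name are placeholders, and exact parameter names may differ slightly between library versions.

# Illustrative analysis call against a trained custom field extraction model.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient

client = DocumentIntelligenceClient(
    endpoint=os.environ["DOC_INTEL_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["DOC_INTEL_KEY"]),
)

with open("contract.pdf", "rb") as f:
    poller = client.begin_analyze_document("my-custom-extraction-model", body=f)
result = poller.result()

for doc in result.documents:
    for name, field in doc.fields.items():
        print(name, field.content, field.confidence)  # extracted value plus confidence score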
Business Scenarios
Loan & Mortgage Applications – Automation of loan and mortgage application process enables banks, lenders, and government entities to process loan and mortgage applications quicker.
Financial Services – Analyze complex documents like financial reports and asset management reports, with the new custom field extraction model.
Contract Lifecycle Management – Build a custom field extraction model to extract the fields, clauses, and obligations from a wide array of contract types.
Expense Management – Receipts and invoices from various retailers and businesses need to be parsed to validate the expenses. Custom field extraction can extract expenses across different formats and documents with varying templates.
Get Started!
Custom generative models are available with the 2024-07-31-preview API version and later. To learn how to build and train a custom field extraction model using generative AI, you can follow the instructions here – Use AI Studio to build and train a custom field extraction. Start building your custom document field extraction models today!
Microsoft Tech Community – Latest Blogs –Read More
A better Phi Family is coming – multi-language support, better vision, intelligence MOEs
Since its release at Microsoft Build 2024, Phi-3 has received a great deal of attention, especially for applications of Phi-3-mini and Phi-3-vision on edge devices. In the June update, we improved benchmark results and system role support by adjusting the high-quality training data. In the August update, based on community and customer feedback, we bring multi-language support in Phi-3.5-mini-128k-instruct, multi-frame image input in Phi-3.5-vision-128k, and the newly added Phi-3.5-MoE for AI agent scenarios. Let’s take a look.
Multi-language support
In previous versions, Phi-3-mini had good English corpus support but weak support for non-English languages. When we asked questions in Chinese, the answers were often wrong, for example:
In the new version, the improved Chinese corpus support gives noticeably better understanding and answers.
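To try the multilingual behavior locally, a minimal sketch with Hugging Face transformers is shown below; the generation settings are illustrative and a GPU is assumed for reasonable latency.

# Hedged sketch: asking Phi-3.5-mini-instruct a question in Chinese with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3.5-mini-instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

chat = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [{"role": "user", "content": "请用中文介绍一下你自己。"}]  # "Please introduce yourself in Chinese."
result = chat(messages, max_new_tokens=200)
print(result[0]["generated_text"][-1]["content"])  # the assistant's Chinese reply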
Better vision
# Load 21 extracted video keyframes and build the matching <|image_i|> placeholders
# that Phi-3.5-vision expects for multi-frame input.
from PIL import Image

images = []
placeholder = ""
for i in range(1, 22):
    images.append(Image.open(f"../output/keyframe_{i}.jpg"))
    placeholder += f"<|image_{i}|>\n"
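A continuation sketch of how those frames and placeholders might be passed to Phi-3.5-vision with transformers follows; the prompt text and generation settings are illustrative, and the pattern mirrors the model card's multi-frame usage.

# Continuation sketch: feed the keyframes and <|image_i|> placeholders to Phi-3.5-vision.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3.5-vision-instruct"
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": placeholder + "Summarize what happens across these video frames."}]
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(prompt, images, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=300)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]   # drop the prompt tokens from the output
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])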
Intelligence MOEs
Compared with dense models, a mixture-of-experts (MoE) model has the following characteristics:
Faster pre-training speed than dense models
Faster inference speed than dense models with the same number of parameters
Higher GPU memory requirements, because all experts need to be loaded into memory
Many challenges in fine-tuning, although recent research shows that instruction tuning for mixed expert models has great potential
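To ground the terminology, here is a minimal, toy sketch of the mixture-of-experts idea: a gating network scores the experts for each token and only the top-k experts run. The dimensions are toy values, not Phi-3.5-MoE's actual configuration. The system prompt below then uses Phi-3.5-MoE as a planner that routes work to Blog, Translate, and Final Answer tools.

# Minimal mixture-of-experts sketch (toy dimensions, not Phi-3.5-MoE's configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)            # gate scores per expert
        weights, idx = scores.topk(self.top_k, dim=-1)      # keep only the top-k experts per token
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(10, 64)).shape)  # torch.Size([10, 64])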
sys_msg = """You are a helpful AI assistant, you are an agent capable of using a variety of tools to answer a question. Here are a few of the tools available to you:
- Blog: This tool helps you describe a certain knowledge point and content, and finally write it into Twitter or Facebook style content
- Translate: This is a tool that helps you translate into any language, using plain language as required
- Final Answer: the final answer tool must be used to respond to the user. You must use this when you have decided on an answer.
To use these tools you must always respond in JSON format containing `"tool_name"` and `"input"` key-value pairs. For example, to answer the question, "Build Multi Agents with MOE models" you must use the Blog tool like so:
{
    "tool_name": "Blog",
    "input": "Build Multi Agents with MOE models"
}
Or to translate the question "can you introduce yourself in Chinese" you must use the Translate tool and respond:
{
    "tool_name": "Translate",
    "input": "can you introduce yourself in Chinese"
}
Remember to just output the final result, output in JSON format containing `"agentid"`, `"tool_name"`, `"input"` and `"output"` key-value pairs:
[
    {
        "agentid": "step1",
        "tool_name": "Blog",
        "input": "Build Multi Agents with MOE models",
        "output": "………"
    },
    {
        "agentid": "step2",
        "tool_name": "Translate",
        "input": "can you introduce yourself in Chinese",
        "output": "………"
    },
    {
        "agentid": "final",
        "tool_name": "Result",
        "output": "………"
    }
]
The user's answer is as follows.
"""
We can see that, by telling the model what skills it has and how the tasks should be arranged, we can have Phi-3.5-MoE assign the work to different tools and complete the related steps. The model returns a plan like the following:
[
    {
        "agentid": "step1",
        "tool_name": "Blog",
        "input": "Generative AI with MOE",
        "output": "Generative AI with MOE (Mixture of Experts) is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more."
    },
    {
        "agentid": "step2",
        "tool_name": "Translate",
        "input": "Generative AI with MOE is a powerful approach that combines the strengths of generative models and the flexibility of MOE architecture. This hybrid model can generate high-quality, diverse, and contextually relevant content, making it suitable for various applications such as content creation, data augmentation, and more.",
        "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。"
    },
    {
        "agentid": "final",
        "tool_name": "Result",
        "output": "基于生成AI的MOE(Mixture of Experts)是一种强大的方法,它结合了生成模型的优势和MOE架构的灵活性。这种混合模型可以生成高质量、多样化且上下文相关的内容,使其适用于各种应用,如内容创建、数据增强等。"
    }
]
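Once the model returns such a JSON plan, the application only needs to parse it and dispatch each step to the matching tool. A hedged sketch with placeholder tool implementations:

# Parse the JSON plan and dispatch each step to a matching tool (tools are placeholders).
import json

def blog_tool(text):
    return f"Draft social post about: {text}"          # placeholder implementation

def translate_tool(text):
    return f"[translated] {text}"                      # placeholder implementation

TOOLS = {"Blog": blog_tool, "Translate": translate_tool}

model_output = """[
  {"agentid": "step1", "tool_name": "Blog", "input": "Generative AI with MOE"},
  {"agentid": "step2", "tool_name": "Translate", "input": "..."}
]"""

for step in json.loads(model_output):
    tool = TOOLS.get(step["tool_name"])
    if tool:
        print(step["agentid"], "->", tool(step["input"]))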
Thoughts on SLMs
Resources
Microsoft Tech Community – Latest Blogs –Read More
Bus selector cannot find signals where the name has been changed programmatically?
Hi,
I programmatically need to change signal names in simulink.
I use a code similar to:
lineH = find_system(gcs, 'FindAll', 'on', 'type', 'line');
set(lineH, 'Name', 'NewName');
So far so good, but when the changed signal is fed into a bus and should then be selected with a bus selector block, the selector block does not find the "NewName".
I would need to manually open the bus selector block and select the "NewName". The problem is that there are hundreds of signals to be changed, and even more bus selectors where the change would then need to be made manually.
Interestingly, if a signal name change is done manually in Simulink, the bus selectors are somehow updated and the "NewName" is detected automatically, without the need to select the signal in the respective bus selectors.
Does anyone have a solution for this problem?
It would be nice if there were a built-in MATLAB/Simulink function that could automatically update all bus selectors after a programmatically performed signal name change.
Thanks for your help.
Thomas
MATLAB Answers — New Questions
Converting cell array with elements to numerical or double matrix
I have to import a cell array that is composed of strings and numbers from excel to MATLAB using readcell, and one part of the larger input data that is of interest to me is an array with mostly numbers, and otherwise some strings in the first column and empty spaces in the numerical section that appear as <missing>. Usually, I used to do this using xlsread, but due to the increasing pressure by MATLAB to use its more specific functions for such operations, and also as it seems like the readcell function DOES work faster, I am trying to get familiar with readcell here.
I would like to treat the particular section of interest as a table with a numerical array, and it would be OK for me to e.g. convert the <missing> elements to the number 0 in my final numerical matrix. Also keep in mind that this section is just a small part of a much larger data file, and using readtable will not work.
Below is the code:
clc
InputData=readcell('MATLAB_readcell.xlsx',"Sheet","Sample Data")
InputString=string(InputData)
Numbers=InputString(:,2:end)
[ii,jj]=find(ismissing(Numbers))
Numbers(ii,jj)='0'
But here I have a problem: so far I can only convert the input to a string first, then look for <missing> in the resulting string array using ismissing, then replace those elements with a string zero ('0'), then convert the whole thing to a numerical array using str2num.
First of all, I assume there must be an easier way to do this, but regardless of that, the resulting numerical matrix here has some faults, in that all the values that appear as 70 in the string array are equal to 0 in the final matrix. In other words, although I want the program to only set the string value of every <missing> element at a particular ii & jj position, it actually sets the whole row equal to '0'.
Is there an easier way to do this and, if not, how can I at least solve the problem with the faulty rows?
MATLAB Answers — New Questions
Formula for Excel Online
I am trying to create a formula so that when a specific cell's number is equal to or less than 1, the cell will equal $40. But if the cell is more than 1, it will equal $40 + ($25 * "the cell #") and then subtract $25 from that total.
If a tech works an hour or less they get $40 but if they work more than an hour they get $40 plus ($25 multiplied by the extra time over the hour).
ex: working 1.6 hours would equal $55 (($40+($25×1.6))-$25)
Read More
How to keep the carriage return entered into a Rich Text column type?
I have a Sharepoint list that when a carriage return (hard return) is entered into a Sharepoint custom form the carriage return is ignored in the actual list view of a list library. It is a ‘Multiple lines of text’ column type with ‘Rich text’ enabled. In the SP custom form the carriage returns are there, however in the SP view it ignores the carriage returns. I am using Power Automate to generate a Word doc from the list and would like to keep the carriage returns.
Read More
24 hour format – Onedrive
Good afternoon.
My time format in OneDrive, on the regional site is 24 hours.
But the file history or file change history in OneDrive is shown in 12 hours.
An example
If I make a change to a file at 14:00, OneDrive shows 02:00
Here in Spain we use 24 hours as the preferred format and for our company with more than 100 employees it is important to control this well.
Can you help me?
Regards, Thank you
Read More
Copilot In Windows for Windows Settings Not Working nor Image Upload
I’ve taken a few training courses on this, and from December 2023 the Copilot for Windows, when you wrote “open Excel” as an example, would reply with “Open an App. Sure, would you like me to open the Excel app for you” with Yes and No. Now it doesn’t do this at all and just lists a long series of things you have to do. No Windows settings work now.
I also can’t use the image upload as everything I’ve tried to upload from my local PC just says “The file couldn’t be uploaded. Please try again.”
I’m logged in with a work profile and do not have the copilot license but all of these features are supposed to be included.
Read More
Warranty Surface Laptop 7 with 4 year extended
I have made requests via the web, chat, callback request, and calling direct, and cannot get service at all. Does anyone have a phone number for executive complaints please?
Read More
Enforcing PasswordProtectedTransport for application sso in Entra ID
I need guidance on configuring RequestedAuthnContext in Entra ID for an application that requires re-authn during e-sign process. Currently, the only prompt is username but would like to have both username and password. Specifically, I’m looking for help with modifying the SAML request settings or the application manifest to enforce PasswordProtectedTransport. If anyone has experience with similar configurations or insights on best practices, your assistance would be greatly appreciated.
Read More
Windows deactivated after upgrade to 24H2
Hi,
upgraded from 23H2 to 24H2 26100.1586 and after the upgrade Windows is telling me it cannot activate anymore…
Error code 0XC004F012
Anybody got this problem ?
thank you
Read More
Advanced Time Series Anomaly Detector in Fabric
Introduction
Anomaly Detector, one of the Azure AI services, enables you to monitor and detect anomalies in your time series data. This service is based on advanced algorithms, SR-CNN for univariate analysis and MTAD-GAT for multivariate analysis. The service is being retired by October 2026, and as part of the migration process:
The algorithms were open sourced and published in the new time-series-anomaly-detector package on PyPI.
We offer a time series anomaly detection workflow in Microsoft Fabric data platform.
Time Series Anomaly Detection in Fabric RTI
There are a few options for time series anomaly detection in Fabric RTI (Real-Time Intelligence):
For univariate analysis, KQL contains the native function series_decompose_anomalies() that can perform anomaly detection on thousands of time series in seconds. For further info on using this function, take a look at Time series anomaly detection & forecasting in Azure Data Explorer.
For multivariate analysis, there are a few KQL library functions leveraging well-known multivariate analysis algorithms in scikit-learn, taking advantage of the ADX capability to run inline Python as part of the KQL query. For further info see Multivariate Anomaly Detection in Azure Data Explorer – Microsoft Community Hub.
For both univariate and multivariate analysis you can now use the new workflow, which is based on the time-series-anomaly-detector package, as described below (a minimal standalone sketch of the package follows).
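For a quick feel of the package itself, outside of Fabric, here is a minimal standalone sketch. It assumes the PyPI package exposes the MultivariateAnomalyDetector class with the fit()/predict() calls used in the Fabric notebook below; the exact shape of the predict() output may differ.
# Minimal standalone sketch of the time-series-anomaly-detector package
# (assumes the fit()/predict() interface used in the Fabric notebook below).
import numpy as np
import pandas as pd
from anomaly_detector import MultivariateAnomalyDetector

# Synthetic multivariate series: 500 daily samples of 3 signals, with one injected spike.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 3)), columns=["s1", "s2", "s3"],
                  index=pd.date_range("2023-01-01", periods=500, freq="D"))
df.iloc[400] += 10   # obvious anomaly

model = MultivariateAnomalyDetector()
model.fit(df, params={"sliding_window": 200})
predictions = model.predict(df)                 # assumed to return per-row anomaly flags/scores
print(pd.DataFrame(predictions).tail())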
Using time-series-anomaly-detector in Fabric
In the following example we shall:
Upload the stocks change table to Fabric
Train the multivariate anomaly detection model in a Python notebook using the Spark engine
Predict anomalies by applying the trained model to new data using the Eventhouse (Kusto) engine
Below we briefly present the steps; see Multivariate anomaly detection – Microsoft Fabric | Microsoft Learn for the detailed tutorial.
Creating the environments
Create a Workspace
Create an Eventhouse – to store the incoming streaming data
Enable OneLake availability – so the older data that was ingested into the Eventhouse can be seamlessly accessed by the Spark notebook for training the anomaly detection model
Enable the KQL Python plugin – to be used for real-time predictions of anomalies on the new streaming data. Select the 3.11.7 DL image that contains the time-series-anomaly-detector package
Create a Spark environment that includes the time-series-anomaly-detector package
Training & storing the Anomaly Detection model
Upload the stocks data to the Eventhouse
Create a notebook to train the model
Load the data from the Eventhouse using the OneLake path:
onelake_uri = "OneLakeTableURI"  # Replace with your OneLake table URI
abfss_uri = convert_onelake_to_abfss(onelake_uri)  # helper from the detailed tutorial (hypothetical sketch below)
df = spark.read.format('delta').load(abfss_uri)
df = df.toPandas().set_index('Date')
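The convert_onelake_to_abfss helper isn’t shown in this post; the linked tutorial contains the actual implementation. The following is only a hypothetical sketch of what such a helper might do, assuming the usual mapping from an HTTPS OneLake URI to its ABFSS equivalent.
# Hypothetical sketch of the helper used above (the detailed tutorial has the real one).
# Assumption: https://onelake.dfs.fabric.microsoft.com/<workspace>/<item>/... maps to
#             abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<item>/...
def convert_onelake_to_abfss(onelake_uri: str) -> str:
    if not onelake_uri.startswith("https://"):
        raise ValueError("Expected an https:// OneLake URI")
    host, _, path = onelake_uri[len("https://"):].partition("/")
    workspace, _, rest = path.partition("/")
    return f"abfss://{workspace}@{host}/{rest}"

# Placeholder workspace/item names, for illustration only:
print(convert_onelake_to_abfss(
    "https://onelake.dfs.fabric.microsoft.com/MyWorkspace/MyEventhouse/Tables/demo_stocks_change"))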
View the data:
import plotly.graph_objects as go

fig = go.Figure()
fig.add_trace(go.Scatter(x=df.index, y=df['AAPL'], mode='lines', name='AAPL'))
fig.add_trace(go.Scatter(x=df.index, y=df['AMZN'], mode='lines', name='AMZN'))
fig.add_trace(go.Scatter(x=df.index, y=df['GOOG'], mode='lines', name='GOOG'))
fig.add_trace(go.Scatter(x=df.index, y=df['MSFT'], mode='lines', name='MSFT'))
fig.add_trace(go.Scatter(x=df.index, y=df['SPY'], mode='lines', name='SPY'))
fig.update_layout(
    title='Stock Prices change',
    xaxis_title='Date',
    yaxis_title='Change %',
    legend_title='Tickers'
)
fig.show()
Prepare the data for training:
import pandas as pd

features_cols = ['AAPL', 'AMZN', 'GOOG', 'MSFT', 'SPY']
cutoff_date = pd.to_datetime('2023-01-01')
train_df = df[df.index < cutoff_date]   # 'Date' was set as the index when loading the data
Train the model:
import mlflow
from anomaly_detector import MultivariateAnomalyDetector

model = MultivariateAnomalyDetector()
sliding_window = 200
params = {"sliding_window": sliding_window}
model.fit(train_df, params=params)
Save the model in the Fabric ML model registry:
with mlflow.start_run():
    mlflow.log_params(params)
    mlflow.set_tag("Training Info", "MVAD on 5 Stocks Dataset")
    model_info = mlflow.pyfunc.log_model(
        python_model=model,
        artifact_path="mvad_artifacts",
        registered_model_name="mvad_5_stocks_model",
    )
Extract the model path (to be used by the Eventhouse for prediction):
mi = mlflow.search_registered_models(filter_string="name='mvad_5_stocks_model'")[0]
model_abfss = mi.latest_versions[0].source
print(model_abfss)
Create a KQL queryset and attach the Eventhouse to it
Run the ‘.create-or-alter function’ command to define the predict_fabric_mvad_fl() stored function:
.create-or-alter function with (folder = "Packages\\ML", docstring = "Predict MVAD model in Microsoft Fabric")
predict_fabric_mvad_fl(samples:(*), features_cols:dynamic, artifacts_uri:string, trim_result:bool=false)
{
    let s = artifacts_uri;
    let artifacts = bag_pack('MLmodel', strcat(s, '/MLmodel;impersonate'), 'conda.yaml', strcat(s, '/conda.yaml;impersonate'),
                             'requirements.txt', strcat(s, '/requirements.txt;impersonate'), 'python_env.yaml', strcat(s, '/python_env.yaml;impersonate'),
                             'python_model.pkl', strcat(s, '/python_model.pkl;impersonate'));
    let kwargs = bag_pack('features_cols', features_cols, 'trim_result', trim_result);
    let code = ```if 1:
        import os
        import shutil
        import mlflow
        model_dir = 'C:/Temp/mvad_model'
        model_data_dir = model_dir + '/data'
        os.mkdir(model_dir)
        shutil.move('C:/Temp/MLmodel', model_dir)
        shutil.move('C:/Temp/conda.yaml', model_dir)
        shutil.move('C:/Temp/requirements.txt', model_dir)
        shutil.move('C:/Temp/python_env.yaml', model_dir)
        shutil.move('C:/Temp/python_model.pkl', model_dir)
        features_cols = kargs["features_cols"]
        trim_result = kargs["trim_result"]
        test_data = df[features_cols]
        model = mlflow.pyfunc.load_model(model_dir)
        predictions = model.predict(test_data)
        predict_result = pd.DataFrame(predictions)
        samples_offset = len(df) - len(predict_result)  # this model doesn't output predictions for the first sliding_window-1 samples
        if trim_result:  # trim the prefix samples
            result = df[samples_offset:]
            result.iloc[:,-4:] = predict_result.iloc[:, 1:]  # no need to copy 1st column which is the timestamp index
        else:
            result = df  # output all samples
            result.iloc[samples_offset:,-4:] = predict_result.iloc[:, 1:]
    ```;
    samples
    | evaluate python(typeof(*), code, kwargs, external_artifacts=artifacts)
}
Run the prediction query that will detect multivariate anomalies on the 5 stocks, based on the trained model, and render the result as an anomaly chart. Note that the anomalous points are rendered on the first stock (AAPL), though they represent multivariate anomalies, i.e., anomalies of the vector of the 5 stocks on the specific date.
let cutoff_date=datetime(2023-01-01);
let num_predictions=toscalar(demo_stocks_change | where Date >= cutoff_date | count); // number of latest points to predict
let sliding_window=200; // should match the window that was set for model training
let prefix_score_len = sliding_window/2+min_of(sliding_window/2, 200)-1;
let num_samples = prefix_score_len + num_predictions;
demo_stocks_change
| top num_samples by Date desc
| order by Date asc
| extend is_anomaly=bool(false), score=real(null), severity=real(null), interpretation=dynamic(null)
| invoke predict_fabric_mvad_fl(pack_array('AAPL', 'AMZN', 'GOOG', 'MSFT', 'SPY'),
                                // NOTE: Update artifacts_uri to the model path printed by the notebook
                                artifacts_uri='enter your model URI here',
                                trim_result=true)
| summarize Date=make_list(Date), AAPL=make_list(AAPL), AMZN=make_list(AMZN), GOOG=make_list(GOOG), MSFT=make_list(MSFT), SPY=make_list(SPY), anomaly=make_list(toint(is_anomaly))
| render anomalychart with(anomalycolumns=anomaly, title='Stock Price Changes in % with Anomalies')
Summary
The addition of the time-series-anomaly-detector package makes Fabric a top platform for univariate and multivariate time series anomaly detection. Choose the anomaly detection method that best fits your scenario: the native KQL function for univariate analysis at scale, standard multivariate analysis techniques, or the best-of-breed time series anomaly detection algorithms implemented in the time-series-anomaly-detector package. For more information, see the overview and tutorial.
Microsoft Tech Community – Latest Blogs –Read More
Binary quantization in Azure AI Search: optimized storage and faster search
As organizations continue to harness the power of Generative AI for building Retrieval-Augmented Generation (RAG) applications and agents, the need for efficient, high-performance, and scalable solutions has never been greater. Today, we’re excited to introduce Binary Quantization, a new feature that reduces vector sizes by up to 96% while reducing search latency by up to 40%.
What is Binary Quantization?
Binary Quantization (BQ) is a technique that compresses high-dimensional vectors by representing each dimension as a single bit. This method drastically reduces the memory footprint of a vector index and accelerates vector comparison operations at the cost of recall. The loss of recall can be compensated for with two techniques called oversampling and reranking, giving you tools to choose what to prioritize in your application: recall, speed, or cost.
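To make the idea concrete, here is a minimal illustrative sketch (not Azure AI Search’s internal implementation) of quantizing a vector to one bit per dimension and comparing packed vectors with Hamming distance. Thresholding at zero is used only because popular embedding models tend to be centered around zero, as discussed later in this post.
# Illustrative sketch of binary quantization (not the service's internal code).
import numpy as np

def binarize(vec):
    # One bit per dimension: 1 if the component is >= 0, else 0, packed into bytes.
    bits = (vec >= 0).astype(np.uint8)
    return np.packbits(bits)            # e.g. 1536 float32 dims (6144 bytes) -> 192 bytes

def hamming(a, b):
    # Hamming distance between packed bit vectors; smaller means more similar.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

rng = np.random.default_rng(0)
query, doc = rng.normal(size=1536), rng.normal(size=1536)
print(hamming(binarize(query), binarize(doc)))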
Why should I use Binary Quantization?
Binary quantization is most applicable to customers who want to store a very large number of vectors at a low cost. Azure AI Search keeps vector indexes in memory to offer the best possible search performance. Binary Quantization (BQ) allows you to reduce the size of the in-memory vector index, which in turn reduces the number of Azure AI Search partitions you need to fit your data, leading to cost reductions.
Binary quantization reduces the size of the in-memory vector index by converting 32-bit floating point numbers into 1-bit values, which can achieve up to a 28x reduction in vector index size (slightly less than the theoretical 32x due to overhead introduced by the index data structures). The table below shows the impact of binary quantization on vector index size and storage use.
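A quick back-of-the-envelope check of the 32x figure, using the 1536-dimensional embeddings from the benchmark below:
dims = 1536
uncompressed_bytes = dims * 4     # float32: 6,144 bytes per vector
quantized_bytes = dims // 8       # 1 bit per dimension: 192 bytes per vector
print(uncompressed_bytes / quantized_bytes)   # 32.0 in theory; up to ~28x in practice due to index overhead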
Table 1.1: Vector Index Storage Benchmarks
Compression Configuration | Document Count | Vector Index Size (GB) | Total Storage Size (GB) | % Vector Index Savings | % Storage Savings
Uncompressed              | 1M             | 5.77                   | 24.77                   |                        |
SQ                        | 1M             | 1.48                   | 20.48                   | 74%                    | 17%
BQ                        | 1M             | 0.235                  | 19.23                   | 96%                    | 22%
Table 1.1 compares the storage metrics of three different vector compression configurations: Uncompressed, Scalar Quantization (SQ), and Binary Quantization (BQ). The data illustrates significant storage and performance improvements with Binary Quantization, showing up to 96% savings in vector index size and 22% in overall storage. MTEB/dbpedia was used with default vector search settings and OpenAI text-embedding-ada-002 @ 1536 dimensions.
Increased Performance
Binary Quantization (BQ) enhances performance, reducing query latencies by 10-40% compared to uncompressed indexes. The improvement will vary based on oversampling rate, dataset size, vector dimensionality, and service configuration. BQ is fast for a few reasons: Hamming distance is faster to compute than cosine similarity, and packed bit vectors are smaller, which improves locality. This makes it a great choice where speed is critical and allows moderate oversampling to be applied to balance speed with relevance.
Quality Retention
The reduction in storage use and the improvements in search performance come at the cost of recall when binary quantization is used. However, the tradeoff can be managed effectively using techniques like oversampling and reranking. Oversampling retrieves a larger set of candidate documents to offset the resolution loss due to quantization. Reranking then recalculates similarity scores using the full-resolution vectors. The table below shows, for a subset of the MTEB datasets with OpenAI and Cohere embeddings, the mean NDCG@10 impact of binary quantization with and without reranking/oversampling.
Table 1.2: Impact of Binary Quantization on Mean NDCG@10 Across MTEB Subset
Model                                  | No Rerank (Δ / %) | Rerank 2x Oversampling (Δ / %)
Cohere Embed V3 (1024d)                | -4.883 (-9.5%)    | -0.393 (-0.76%)
OpenAI text-embedding-3-small (1536d)  | -2.312 (-4.55%)   | +0.069 (+0.14%)
OpenAI text-embedding-3-large (3072d)  | -1.024 (-1.86%)   | +0.006 (+0.01%)
Table 1.2 compares the point differences in Mean NDCG@10 when using Binary Quantization relative to an uncompressed index, across different embedding models on a subset of MTEB datasets.
Key takeaways:
BQ+Reranking yields higher retrieval quality compared to no reranking
The impact of reranking is more pronounced in models with lower dimensions, while for higher dimensions, the effect is smaller and sometimes negligible
Strongly consider reranking with full-precision vectors to minimize or even eliminate the recall loss caused by quantization
When to Use Binary Quantization
Binary Quantization is recommended for applications with high-dimensional vectors and large datasets, where storage efficiency and fast search performance are critical. It is particularly effective for embeddings with dimensions greater than 1024. However, for smaller dimensions, we recommend testing BQ’s quality or considering SQ as an alternative. Additionally, BQ performs exceptionally well when embeddings are centered around zero, as seen in popular embedding models like OpenAI and Cohere.
BQ + reranking/oversampling works by searching over a compressed vector index in-memory and reranking using full-precision vectors stored on disk, allowing you to significantly reduce costs while maintaining strong search quality. This approach achieves the goal of efficiently operating on memory-constrained settings by leveraging both memory and SSDs to deliver high performance and scalability with large datasets.
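As a rough illustration of how this might be expressed in an index definition, here is a hypothetical sketch of a vector search configuration with binary quantization, reranking, and oversampling. The property names below are assumptions for illustration only; verify the exact schema against the “Reduce vector size” documentation linked in the getting-started section before using them.
# Hypothetical index snippet (illustrative only; property names are assumptions,
# check the "Reduce vector size" documentation for the exact schema).
vector_search = {
    "compressions": [
        {
            "name": "my-binary-compression",
            "kind": "binaryQuantization",        # assumed kind identifier
            "rerankWithOriginalVectors": True,   # rerank candidates with full-precision vectors
            "defaultOversampling": 2.0,          # 2x oversampling, as in Table 1.2
        }
    ],
    "profiles": [
        {
            "name": "my-vector-profile",
            "algorithm": "my-hnsw-config",
            "compression": "my-binary-compression",
        }
    ],
}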
BQ adds to our price-performance enhancements made over the past several months, offering storage savings and performance improvements. By adopting this feature, organizations can achieve faster search results and lower operational costs, ultimately driving better outcomes and user experiences.
More Functionality now Generally Available
We’re pleased to share several vector search enhancements are now generally available in Azure AI Search. These updates provide users with more control over their retriever in RAG solutions and optimize LLM performance. Here are the key highlights:
Integrated vectorization with Azure OpenAI for Azure AI Search is now generally available!
Support for Binary Vector Types: Azure AI Search supports narrow vector types including binary vectors. This feature enables the storage and processing of larger vector datasets at lower costs while maintaining fast search capabilities.
Vector Weighting: This feature allows users to assign relative importance to vector queries over term queries in hybrid search scenarios. It gives more control over the final result set by enabling users to favor vector similarity over keyword similarity.
Document Boosting: Boost your search results with scoring profiles tailored to vector and hybrid search queries. Whether you prioritize freshness, geolocation, or specific keywords, our new feature allows for targeted document boosting, ensuring more relevant results for your needs.
Getting started with Azure AI Search
To get started with binary quantization, visit our official documentation here: Reduce vector size – Azure AI Search | Microsoft Learn
Learn more about Azure AI Search and about all the latest features.
Start creating a search service in the Azure Portal, Azure CLI, the Management REST API, ARM template, or a Bicep file.
Learn about Retrieval Augmented Generation in Azure AI Search.
Explore our preview client libraries in Python, .NET, Java, and JavaScript, offering diverse integration methods to cater to varying user needs.
Explore how to create end-to-end RAG applications with Azure AI Studio.
Microsoft Tech Community – Latest Blogs –Read More
Navigating Copilot: Viva Glint’s early learnings and best practices
On August 20, the Viva People Science team held the fifth webinar in its AI Empowerment series. This session was focused on driving Copilot and AI adoption at a team level, complementing what we have learned in earlier sessions about driving this change at an organizational level.
I was joined by colleagues from across the Viva Glint team, including Bryan Dobkin (Principal People Scientist), Suni Kasibhatla (Customer Experience Program Manager) and Julie Morris (Program Manager). As a team of internal Copilot Champions, Bryan, Suni and Julie provided an outline of the Copilot adoption initiatives they have been driving within Viva Glint. They discussed the strategy around these initiatives, some practical ideas to reinforce learning, sharing and behavior change when it comes to Copilot, and the importance of measuring impact and reiterating along the way.
The speakers also shared their individual journeys and experiences of using Copilot in their personal and professional lives, including their top tips for teams based on these experiences.
We invite you to watch the recording and access the slides from this event, and those from our earlier events in this series below. Discover more, engage with the content, and let’s embark on this journey together!
AI Empowerment: Introducing our Viva People-Science series for HR
AI Empowerment: Preparing your organization for AI with learnings from Microsoft
Ready, Set, AI: What our People Science research tells us about AI Readiness
Microsoft Tech Community – Latest Blogs –Read More
Why am I getting “BLAS loading error: C:\MATLAB7\bin\win32\atlas_Athlon.dll: The specified module could not be found.”? I installed MATLAB 2024a (on drive D); I previously had another MATLAB installation (on drive C) but deleted it
clear
m0=1;
lamda=532*10^(-9);    %lamda=[417,457,488,532,632.8]*10^(-9); % beam wavelength
z=50;                 %z=eps+[0:5:50];                        % propagation distance
R=0.03;               % radius of the receiving aperture
A0=10;                % beam power constant
det=0.01:0.01:0.15;   % transverse coherence length
L=[1,5,10,20,30];     % beam number index
e=1*10^(-5);          %e=[0.01,0.1,1,10,100]*10^(-5);         % kinetic energy dissipation rate
X=1*10^(-8);          % temperature dissipation rate
eta=0.001;            % inner scale factor
abc=3;                % anisotropy factor
gama=-3;              % temperature-salinity contribution ratio
NUM=10;
m=m0-NUM:1:m0+NUM;
k=2*pi/lamda;
for iiii=1:length(L)
    C0=0;
    for ll=1:L(iiii)
        C0=C0+factorial(L(iiii))/factorial(ll)/factorial(L(iiii)-ll)*(-1)^(ll-1)/ll;
    end
    for iii=1:length(det)
        for ii=1:length(m)
            qtemp=0;
            funt1=pi^2/(k*z)*(factorial(m0)*A0)^2/C0;
            rouc2=8.705*10^(-8)*k^2*(e*eta)^(-1/3)*abc^(-2)*X*z*(1-2.605*gama^(-1)+7.007*gama^(-2));
            for lll=1:L(iiii)
                funt2=factorial(L(iiii))/(factorial(lll)*factorial(L(iiii)-lll))*(-1)^(lll-1)/lll;
                fun=@(r) r.*funt1.*funt2.*(besselj(m0./2,k./4./z.*r.^2)).^2.*exp(-r.^2*(1./(lll.*det(iii).^2)+2.*rouc2))...
                    .*besseli(m(ii)-m0,(1./(lll.*det(iii).^2)+2.*rouc2).*r.^2);
                qtemp=qtemp+integral(fun,0,R);
            end
            q(ii)=qtemp;
        end
        P(iiii,iii)=q(NUM+1)/sum(q);
    end
end
figure
plot(det,P(1,:),'rs-');hold on
plot(det,P(2,:),'b+-');hold on
plot(det,P(3,:),'c*-');hold on
plot(det,P(4,:),'md-');hold on
plot(det,P(5,:),'g^-');
legend('L=1','L=5','L=10','L=20','L=30')
%legend('λ=417nm','λ=457nm','λ=488nm','λ=532nm','λ=632.8nm')
%legend('e=10^{-5}','e=3*10^{-5}','e=5*10^{-5}','e=7*10^{-5}','e=9*10^{-5}')
xlabel('delta/m');ylabel('P_{l_{0}}');
axis([0.01,0.15,0.35,0.85]);hold on
demo1 = 0.35:0.1:0.85;
demo2 = [0.01,0.03,0.06,0.09,0.12,0.15];   %demo2 = 0.01:0.02:0.1;
set(gca,'yTick',demo1)
set(gca,'xTick',demo2)
%xlabel('delta/m');ylabel('P_{l_{0}}');
%axis([0.01,0.1,0.35,0.85]);hold on
%demo1 = 0.35:0.1:0.85;
%demo2 = 0.01:0.02:0.1;
%set(gca,'yTick',demo1)
%set(gca,'xTick',demo2)
% [X0,Y0] = meshgrid(lamda,det);
% figure,surf(X0,Y0,P)
% %figure,bar3(P)
% xlabel('lamda');ylabel('det');zlabel('P_{m_{0}}');hold on
>> MHBew_beamindex_hengxiangxiangganchagndu
BLAS: trying environment BLAS_VERSION...
BLAS: loading C:\MATLAB7\bin\win32\atlas_Athlon.dll
BLAS: unloading libraries
Error using *
BLAS loading error:
C:\MATLAB7\bin\win32\atlas_Athlon.dll: The specified module could not be found.
Error in integralCalc>iterateScalarValued (line 330)
x = NODES*halfh + midpt; % NNODES x nsubs
Error in integralCalc>vadapt (line 148)
[q,errbnd] = iterateScalarValued(u,tinterval,pathlen, ...
Error in integralCalc (line 77)
[q,errbnd] = vadapt(vfunAB,interval, ...
Error in integral (line 87)
Q = integralCalc(fun,a,b,opstruct);
Error in MHBew_beamindex_hengxiangxiangganchagndu (line 36)
qtemp=qtemp+integral(fun,0,R);
MATLAB Answers — New Questions
Intune- Not installing Apps
I have created AVD personal VMs that are Microsoft Entra ID (Azure AD) joined and enrolled in Intune. The devices show in the Intune portal; however, I am not able to install any applications. Error code: 0x87D30068 – Error downloading content.
I need help with troubleshooting steps.
Read More
How to find or identify active users with BookingWithMe
Hi,
Is there any setting or attribute on a user’s profile, mailbox, etc. to identify users who have the personal “BookingWithMe” activated?
We might want to add a booking button to users’ signatures, pointing to their personal booking URL.
However, if the service is not set up and you open the URL, you get an error message.
I have only found articles about finding scheduling mailboxes, but those are shared booking pages, not the personal ones…
Read More