Tag Archives: microsoft
New Blog | Face Check is now generally available
By Ankur Patel
Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of the Microsoft Entra Suite, a complete identity solution that delivers Zero Trust access by combining network access, identity protection, governance, and identity verification capabilities.
Read the full post here: Face Check is now generally available
Work Smarter: Copilot Productivity Tips
No matter if you’re an inbox-zero enthusiast or someone who lets emails pile up, one thing is certain: managing emails can be time-consuming and draining. Whether it’s personal or work-related, we all face the challenge of a busy inbox. Let Copilot in Outlook assist you in organizing your emails, enhancing your communication, and freeing up your time for what truly matters.
As part of a new weekly series that provides Copilot productivity tips, today our team at Microsoft will share with you three specific ways to use Copilot in Outlook. We launched this blog so you can start every week with more ways to save time at work.
Read along for Copilot tips in Outlook!
Tip 1: Organize my inbox
A quick way to tame that wild inbox is to create systems and categories. This not only allows me to tackle emails one category at a time but lets me prioritize which group I should respond to first.
In Outlook, navigate to the upper right corner and select the Copilot icon. A drop-down with four prompts will appear; select the second one, “Organize my inbox.”
The prompt will now appear in the prompt box for you to fill in the details. For example, I want to make sure I catch up with anything that came in directly from my manager. I could ask Copilot to “Create an inbox rule to categorize all emails from Angela Byers as blue.”
From there, Outlook will bring up the rules box to confirm your directive.
I used to find creating rules a bit of a chore but that has since changed. I now have different rules to categorize emails by subject and by sender, and it’s helped me ensure I never miss an important email. The color coding is also visually *chef’s kiss*. Rinse and repeat for that email inbox of your dreams!
Tip 2: Catch up
Having one of those days where your email is bursting at the seams? Use this productivity tip to get a summary of your emails from Copilot. Navigate to the upper right corner and click the Copilot icon.
Once the Copilot chat has opened, key in this prompt: “Catch me up on emails from the past day. Organize and summarize by topic.”
(I can’t show you a screenshot of my inbox or the results, but just give it a try and let me know in the comments what you think).
Tip 3: Draft with Copilot
Now that your inbox is color coded and you’ve received a download of your recent messages, it’s time to save some time actually drafting emails. Copilot helps me work more efficiently by taking what I hope to convey in the prompt and writing a first draft for me.
In Outlook, start a new email (either a fresh message or a reply to an existing thread), navigate to the middle of the menu ribbon, and select the Copilot logo. From the drop-down menu that appears, select Draft with Copilot.
Having Copilot work out a first draft saves me an underrated amount of time. I find decision-making much quicker when I have something to react to than when I have to draft something myself.
We hope you can apply these tips throughout the week to tame your Outlook inbox! Stay tuned for more productivity tips next Monday to learn additional ways to unlock more value with Copilot for Microsoft 365!
Now Available: the Copilot for Microsoft 365 Risk Assessment QuickStart Guide
Copilot for Microsoft 365 is an intelligent assistant designed to enhance user productivity by leveraging relevant information and insights from various sources such as SharePoint, OneDrive, Outlook, Teams, Bing, and third-party solutions via connectors and extensions. Using natural language processing and machine learning, Copilot understands user queries and delivers personalized results, generating summaries, insights, and recommendations.
This QuickStart guide aims to assist organizations in performing a comprehensive risk assessment of Copilot for Microsoft 365. The document serves as an initial reference for risk identification, mitigation exploration, and stakeholder discussions. It is structured to cover:
AI Risks and Mitigations Framework: Outlining the primary categories of AI risks and how Microsoft addresses them at both company and service levels.
Sample Risk Assessment: Presenting a set of real customer-derived questions and answers to assess the service and its risk posture.
Additional Resources: Providing links to further materials on Copilot for Microsoft 365 and AI risk management.
Copilot for Microsoft 365 Risks and Mitigations
Bias
AI technologies can unintentionally perpetuate societal biases. Copilot for Microsoft 365 uses foundation models from OpenAI, which incorporate bias mitigation strategies during their training phases. Microsoft builds upon these mitigations by designing AI systems to provide equitable service quality across demographic groups, implementing measures to minimize disparities in outcomes for marginalized groups, and developing AI systems that avoid stereotyping or demeaning any cultural or societal group.
Disinformation
Disinformation is false information spread to deceive. This QuickStart guide covers Copilot for Microsoft 365 mitigations, which include grounding responses in customer data and web data and requiring explicit user instruction for any action.
Overreliance and Automation Bias
Automation bias occurs when users over-rely on AI-generated information, potentially leading to misinformation. The QuickStart guide discusses methods of mitigating automation bias through measures such as informing users they are interacting with AI, disclaimers about the fallibility of AI, and more.
Ungroundedness (Hallucination)
AI models sometimes generate information not based on input data or grounding data. The QuickStart guide explores various mitigations for ungroundedness, including performance and effectiveness measures, metaprompt engineering, harms monitoring, and more.
Privacy
Data is a critical element for the functionality of an AI system, and without proper safeguards, this data may be exposed to risks. The QuickStart guide talks about how Microsoft ensures customer data remains private and is governed by stringent privacy commitments. Access controls and data usage parameters are also discussed.
Resiliency
Service disruptions can impact organizations. The QuickStart guide discusses mitigations such as redundancy, data integrity checking, uptime SLAs, and more.
Data Leakage
The QuickStart guide explores data loss prevention (DLP) measures, including zero trust, logical isolation, and rigorous encryption.
Security Vulnerabilities
Security is integral to AI development. Microsoft follows Security Development Lifecycle (SDL) practices, which include training, threat modeling, static and dynamic security testing, incident response, and more.
Sample Risk Assessment: Questions & Answers
This section contains a comprehensive set of questions and answers based on real customer inquiries. These cover privacy, security, supplier relationships, and model development concerns. The responses are informed by various Microsoft teams and direct attestations from OpenAI. Some key questions include:
Privacy: How personal data is anonymized before model training.
Security: Measures in place to prevent AI model compromise.
Supplier Relationships: Due diligence resources on OpenAI, a Microsoft strategic partner.
Model Development: Controls for data integrity, access management, and threat modeling.
By utilizing this guide, organizations can efficiently build the understanding of the AI risk landscape that is integral to deploying Copilot for Microsoft 365 across the enterprise. It serves as a foundational tool for risk assessment and frames further dialogue with Microsoft to address specific concerns or requirements.
Additional Resources
In addition to the framework and the sample assessment, the QuickStart guide provides links to a host of resources and materials that offer further detailed insights into Copilot for Microsoft 365 and AI risk management.
Face Check is now generally available
Earlier this year we announced the public preview of Face Check with Microsoft Entra Verified ID – a privacy-respecting facial matching feature for high-assurance identity verifications and the first premium capability of Microsoft Entra Verified ID. Today I’m excited to announce that Face Check with Microsoft Entra Verified ID is generally available. It is offered both by itself and as part of the Microsoft Entra Suite, a complete identity solution that delivers Zero Trust access by combining network access, identity protection, governance, and identity verification capabilities.
Unlocking high-assurance verifications at scale
There’s a growing risk of impersonation and account takeover. Bad actors use insecure credentials in 66% of attack paths. For example, impersonators may use a compromised password to fraudulently log in to a system. With advancements in generative AI, complex impersonation tactics such as deepfakes are growing as well. Many organizations regularly onboard new employees remotely and offer a remote help desk. Without strong identity verification, how can organizations know who is on the other side of these digital interactions? Impersonators can easily bypass common verification methods such as counting bicycles on a CAPTCHA or asking which street you grew up on. As fraud skyrockets for businesses and consumers, and impersonation tactics have become increasingly complex, identity verification has never been more important.
Microsoft Entra Verified ID is based on open standards, enabling organizations to verify the widest variety of credentials using a simple API. Verified ID integrates with some of the leading verification partners to verify identity attributes for individuals (for example, a driver’s license and a liveness match) across 192 countries. Today, hundreds of organizations rely on Verified ID to remotely onboard new users and reduce fraud when providing self-service recovery. For example, using Verified ID, Skype has reduced fraudulent cases of registering Skype Phone Numbers in Japan by 90%.
Face Check with Microsoft Entra Verified ID
Powered by Azure AI services, Face Check adds a critical layer of trust by matching a user’s real-time selfie and the photo on their Verified ID, which is usually from a trusted source such as a passport or driver’s license. By sharing only match results and not any sensitive identity data, Face Check strengthens an organization’s identity verification while protecting user privacy. It can detect and reject various spoofing techniques, including deepfakes, to fully protect your users’ identities.
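For teams wiring verification into their own applications, Face Check is requested through the Verified ID Request Service REST API as part of a presentation request. The fragment below is a hedged sketch of the relevant request section; the credential type, claim name, and threshold value are illustrative and should be checked against the current Face Check documentation:

"requestedCredentials": [
  {
    "type": "VerifiedEmployee",
    "configuration": {
      "validation": {
        "faceCheck": {
          "sourcePhotoClaimName": "photo",
          "matchConfidenceThreshold": 70
        }
      }
    }
  }
]

Consistent with the privacy model described above, the API returns match results rather than the underlying biometric data.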
BEMO, a security solution provider for SMBs, integrated Face Check into its help desk to increase verification accuracy, reduce verification time, and lower costs. The company used Face Check with Microsoft Entra Verified ID to protect its most sensitive accounts which belong to C-level executives and IT administrators.
Face Check not only helps BEMO improve customer security and strengthen user data privacy, but it also created a 90% efficiency improvement in addressing customer issues. BEMO’s help desk now completes a manual identity verification in 30 minutes, down from 5.5 hours before implementing Face Check.
“Security is always great when you apply it in layers, and this verification is an additional layer that we’ll be able to provide to our customers. It’s one more way we can help them feel secure.” – Jose Castelan, Support and Managed Services Team Lead, BEMO
Check out the video below to learn more about how your organization can use Face Check with Microsoft Entra Verified ID:
Jumpstart with partners
Our partners specialize in implementing Face Check with Microsoft Entra Verified ID in specific use cases or verifying certain identity attributes such as employment status, education, or government-issued IDs (with partners like LexisNexis® Risk Solutions, Au10tix, and IDEMIA). These partners extend Verified ID’s capabilities to provide a variety of verification solutions that will work for your business’s specific needs.
Explore our partner gallery to learn more about our partners and how they can help you get started with Verified ID.
Start using Face Check with Microsoft Entra Verified ID
Face Check is a premium feature of Verified ID. After you set up your Verified ID tenant, there are two purchase options to enable Face Check and start verifying:
1. Begin the Entra Suite free trial, which includes 8 Face Check verifications per user per month.
2. Enable Face Check within Verified ID and pay $0.25 per verification.
Visit the Microsoft Entra pricing page for more details.
What’s Next?
Learn more about how Microsoft Entra Verified ID works and how organizations are using it today, and join us for the Microsoft Entra Suite Tech Accelerator on August 14 to learn about the latest identity management and end-to-end security innovations.
Ankur Patel, Head of Product for Microsoft Entra Verified ID
Read more on this topic
Watch the Zero Trust spotlight
Learn about the Microsoft Entra Suite
Learn more about Face Check with Microsoft Entra Verified ID in the FAQ
Learn more about Microsoft Entra
Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.
Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
eDiscovery launches a modern, intuitive user experience
This month, we have launched a redesigned Microsoft Purview eDiscovery product experience in public preview. This improved user experience revolutionizes your data search, review and export tasks within eDiscovery. Our new user-friendly and feature-rich eDiscovery experience is not just about finding and preserving data, it’s about doing it with unprecedented efficiency and ease. The modern user experience of eDiscovery addresses some long-standing customer requests, such as enhanced search capabilities with MessageID, Sensitive Information Types (SITs) and sensitivity labels. It also introduces innovative features like draft query with Copilot and search using audit log. These changes, driven by customer feedback and our commitment to innovation, offer tangible value by saving time and reducing costs in the eDiscovery process.
The new eDiscovery experience is exclusively available in the Microsoft Purview portal. The new Microsoft Purview portal is a unified platform that streamlines data governance, data security, and data compliance across your entire data estate. It offers a more intuitive experience, allowing users to easily navigate and manage their compliance needs.
Unified experience
One of the benefits of the new, improved eDiscovery is a unified, consistent, and intuitive experience across different licensing tiers. Whether your license includes eDiscovery standard or premium, you can use the same workflow to create cases, conduct searches, apply holds, and export data. This simplifies the training and education process for organizations that upgrade their license and want to access premium eDiscovery features. Unlike the previous experience, where Content Search, eDiscovery (Standard), and eDiscovery (Premium) had different workflows and behaviors, the new experience lets you access eDiscovery capabilities seamlessly regardless of your license level. E5 license holders have the option to use premium features such as exporting cloud attachments and Teams conversation threading at the appropriate steps in the workflow. Moreover, users still have access to all existing Content Searches and both Standard and Premium eDiscovery cases on the unified eDiscovery case list page in the Microsoft Purview portal.
The new experience also strengthens the security controls for Content Search by placing them in an eDiscovery case. This allows eDiscovery administrators to control who can access and use existing Content Searches and generated exports. Administrators can add or remove users from the Content Search case as needed. This way, they can prevent unauthorized access to sensitive search data and stop Content Search when it is no longer required. Moreover, this helps maintain the integrity and confidentiality of the investigation process. The new security controls ensure that only authorized personnel can access sensitive data, reducing the risk of data breaches and complying with legal and regulatory standards.
Enhanced data source management
Efficient litigation and investigation workflows hinge on the ability to precisely select data sources and locations in the eDiscovery process. This enables legal teams to swiftly preserve relevant information and minimize the risk of missing critical evidence. The improved data source picking capability allows for a more targeted and effective search, which is essential in responding to legal matters or internal investigations. It enables users to apply holds and conduct searches with greater accuracy, ensuring that all pertinent information is captured without unnecessary data proliferation. This improvement not only enhances the quality of the review, but also reduces the overall costs associated with data storage and management.
The new eDiscovery experience makes data source location mapping and management better as well. You can now perform a user or group search with different identifiers and see their data hierarchy tree, including their mailbox and OneDrive. For example, eDiscovery users can use any of the following identifiers: name, user principal name (UPN), SMTP address, or OneDrive URL. The data source picker streamlines the eDiscovery workflow by displaying all potential matches and their locations, along with related sources such as frequent collaborators, group memberships, and direct reports. This allows for the addition of these sources to search or hold scope without relying on external teams for information on collaboration patterns, Teams/Group memberships, or organizational hierarchies.
The “sync” capability in the new data source management flow is a significant addition that ensures eDiscovery users are always informed about the latest changes in data locations. With this feature, users can now query whether a specific data source has newly provisioned data locations or if any have been removed. For example, if a private channel is created for a Teams group, this feature alerts eDiscovery users to the new site’s existence, allowing them to quickly and easily include it in their search scope, ensuring no new data slips through the cracks. This real-time update capability empowers users to make informed decisions about including or excluding additional data locations in their investigations. This capability ensures that their eDiscovery process remains accurate and up-to-date with the latest data landscape changes. It is a proactive approach to data management that enhances the efficiency and effectiveness of eDiscovery operations, providing users with the agility to adapt to changes swiftly.
Improved integration with Microsoft Information Protection
The new eDiscovery experience now supports querying by Sensitive Information Types (SITs) and sensitivity labels. Labeling, classifying, and encrypting your organization’s data is a best practice that serves multiple essential purposes. It helps to ensure that sensitive information is handled appropriately, reducing the risk of unauthorized access and data breaches. By classifying data, organizations can apply the right level of protection to different types of information, which is crucial for compliance with various regulations and standards. Moreover, encryption adds a layer of security that keeps data safe even if it falls into the wrong hands. It ensures that only authorized users can access and read the information, protecting it from external threats and internal leaks.
The new eDiscovery search functionality supports searches for emails and documents classified by SITs or specific sensitivity labels, facilitating the collection and review of data aligned with its classification for thorough investigations. This capability compresses the volume of evidence required for review, significantly reducing both the time and cost of the process. The support of efficient document location and management by targeting specific sensitivity labels unlocks the ability for organizations to validate and understand how sensitivity labels are utilized. This is exemplified by the ability to conduct collections across locations or the entire tenant for a particular label, using the review set to assess label application. Additionally, combining this with SIT searches helps verify correct data classification. For example, it ensures that all credit card data is appropriately labeled as highly confidential by reviewing items containing credit card data that are not marked as such, thereby streamlining compliance and adherence to security policies.
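As an illustration, such a query in the search condition builder's KQL editor might look like the following (a hedged sketch: SensitiveType is a documented keyword query property, while the label GUID is a placeholder you would copy from your own tenant):

SensitiveType:"Credit Card Number" AND NOT InformationProtectionLabelId:"your-label-guid-here"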
Enhanced investigation capabilities
The new eDiscovery experience introduces a powerful capability to expedite security investigations, particularly in scenarios involving a potentially compromised account. By leveraging the ability to search by audit log, investigators can swiftly assess the account’s activities, pinpointing impacted files. As part of the investigative feature, eDiscovery search can also use an evidence file as search input, enabling a rapid analysis of file content patterns or signatures. This feature is crucial for identifying similar or related content, providing a streamlined approach to discover if sensitive files have been copied or moved, thereby enhancing the efficiency and effectiveness of the security response.
The enhanced search capability by identifier in the new eDiscovery UX is a game-changer for customers, offering a direct route to the exact message or file needed. With the ability to search using a messageID for mailbox items or a path for SharePoint items, users can quickly locate and retrieve the specific item they require. This precision not only streamlines evidence collection but also accelerates the process of purging leaked data for spillage cleanup. It’s a significant time-saver that simplifies the workflow, allowing customers to focus on what matters most – securing and managing their digital environment efficiently, while targeting relevant data.
Building on the data spillage scenario, our search and purge tool for mailbox items, including Teams messages, also received a significant 10x enhancement. Where previously administrators could only purge 10 items per mailbox location, they can now purge up to 100 items per mailbox location. This enhancement is a benefit for administrators tasked with responding to data spills or needing to remediate data within Teams or Exchange, allowing for a more comprehensive and efficient purge process. With all these investigative capability updates, now the security operations team is ready to embrace the expanded functionality and take their eDiscovery operations to the next level.
Microsoft Security Copilot capabilities
The recently released Microsoft Security Copilot’s capabilities in eDiscovery are transformative, particularly in generating KeyQL from natural language and providing contextual summarization and answering abilities in review sets. These features significantly lower the learning curve for KeyQL, enabling users to construct complex queries with ease. Instead of mastering the intricacies of KeyQL, users can simply describe what they are looking for using natural language, and Copilot translates that into a precise KeyQL statement. This not only saves time but also makes the power of eDiscovery accessible to a broader range of users, regardless of their technical expertise.
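As an illustrative pairing (not captured product output, and with property names that should be checked against the review set query documentation), a reviewer might type the first line below and let Copilot produce KeyQL along the lines of the second:

Prompt: Find messages Angela Byers sent about the Fabrikam contract in June 2024
KeyQL: Sender:"Angela Byers" AND Subject:"Fabrikam" AND Date>=2024-06-01 AND Date<=2024-06-30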
Moreover, Copilot’s summarization skills streamline the review process by distilling key insights from extensive datasets. Users can quickly grasp the essence of large volumes of data, which accelerates the review process and aids in identifying the most pertinent information. This is particularly beneficial in legal and compliance contexts, where time is often of the essence, and the ability to rapidly process and understand information can have significant implications.
Additional export options
The new eDiscovery experience introduces a highly anticipated suite of export setting enhancements. The contextual conversation setting is now distinct from the conversation transcript setting, offering greater flexibility in how Teams conversations are exported. The ability to export into a single PST allows for the consolidation of files/items from multiple locations, simplifying the post-export workflow. Export can now give friendly names to each item, eliminating the need for users to decipher item GUIDs, and making identification straightforward. Truncation in export addresses the challenges of zip file path character limits. Additionally, the expanded versioning options empower users to include all versions or select the latest 10 or 100, providing tailored control over the data. These improvements not only meet user expectations but also significantly benefit customers by streamlining the eDiscovery process and enhancing overall efficiency.
Additional enhancements
As part of the new experience, we are introducing the review set query report, which generates a hit-by-term report based on a KQL query. This query report allows users to quickly see the count and volume of items hit on a particular keyword or a list of compound queries, and can be optionally downloaded. By providing a detailed breakdown of where and how often each term appears, it streamlines the review by focusing on the most relevant documents, reducing the volume of data that needs to be manually reviewed, and offers a better understanding of which terms may be too broad or too narrow.
As part of the improved user experience, all long-running processes now show a transparent and informative progress bar. This progress bar provides users with real-time visibility into the status of their searches and exports, allowing eDiscovery practitioners to better plan their workflow and manage their time effectively. This feature is particularly beneficial in the context of legal investigations, where timing is often critical, and users need to anticipate when they can proceed to the next steps. This level of process transparency allows users to stay informed and make decisions accordingly.
In addition to progress transparency, all processes in the new eDiscovery experience will include a full report detailing the information related to completed processes. The defensibility of eDiscovery cases and investigations is paramount. The full reporting capabilities for processes such as exports, searches, and holds provide critical transparency. For example, it allows for a comprehensive audit of what was searched or exported, the specific timing, and the settings used. For customers, this means a significant increase in trust and defensibility of the eDiscovery process. This enhancement not only bolsters the integrity of the eDiscovery process but also reinforces the commitment to delivering customer-centric solutions that meet the rigorous demands of legal compliance and data management.
Hold policy detail view also received an upgrade as part of this new eDiscovery release. Customers can now access the hold policy view with detailed information on all locations and their respective hold status. This detailed view is instrumental in providing a transparent audit of which locations are on hold, ensuring that all relevant data is preserved and that no inadvertent destruction of evidence occurs during the process. Customers can download and analyze the full detailed hold location report, ensuring that all necessary content is accounted for and that legal obligations are met.
As we conclude this exploration of the modernized Microsoft Purview eDiscovery (preview) experience, it’s clear that the transformative enhancements are set to redefine the landscape of legal compliance and security investigations. The new experience, with its intuitive design and comprehensive set of new capabilities, streamlines the eDiscovery process, making it more efficient and accessible than ever before. The new eDiscovery experience is currently in public preview and is expected to be Generally Available by the end of 2024.
Thank you for joining us on this journey through the latest advancements in eDiscovery.
Learn more
We are excited to see how these changes will empower legal and compliance teams to achieve new levels of efficiency and effectiveness in their important work. Check out our interactive guide at https://aka.ms/eDiscoverynewUX to better understand the changes in eDiscovery. As always, we are eager to hear your feedback and continue innovating to improve your experience. We welcome your thoughts via the Microsoft Purview portal’s feedback button.
To learn more about eDiscovery, visit our Microsoft documentation at http://aka.ms/eDiscoveryPremium, or our “Become an eDiscovery Ninja” page at https://aka.ms/ediscoveryninja. If you have yet to try Microsoft Purview solutions, we are happy to share that there is an easy way for eligible customers to begin a free trial within the Microsoft Purview compliance portal. By enabling the trial in the compliance portal, you can quickly start using all capabilities of Microsoft Purview, including Insider Risk Management, Records Management, Audit, eDiscovery, Communication Compliance, Information Protection, Data Lifecycle Management, Data Loss Prevention, and Compliance Manager.
Combinations
Hello. I need some help. I want to know if there is a way to extract all combinations with the values from the following table: Greece, USA = 1000; Greece, UK = 2000; Spain, USA = 400; etc.

        USA    UK
Greece  1000   2000
Spain   400    5000
Italy   450    800
ERROR 18456
EVENTID: 18456 —- Login failed for user ‘sa’. Reason: Password did not match that for the login provided.
This message is being sent hundreds of times daily to my server, tracing back to my computer’s IP address. I’m not sure why it’s trying to log in as “sa”, because I’m using Windows auth to access the server under a different user. Everything seems to work fine, except that very recently my account seems to be creating deadlocks sometimes, even when not actively running any queries.
Any help would be great.
Linked Service to specific ADLS Gen2 container/filesystem
Is it possible to create a Linked Service in Synapse to a specific container/filesystem in an ADLSg2 account?
The ServicePrincipal I’m connecting with is only authorised on a specific container in the storage account, so I’m unable to link to the storage account itself.
If I ignore the Test Connection warning, I can create the LS, and then create a DS towards a file in the container I have permissions to – that works. However, many users give up when they’re unable to test the LS itself, and it would be very nice to be able to browse the container in the Data pane in Synapse, just like you can browse a whole linked ADLSg2 account.
I was hoping there should be an Advanced JSON parameter to define the container/filesystem on LS level, but I’ve been unable to find any in the documentation.
Cannot add custom filters for RefinableString200+
We’re using RefinableStrings 200-219 per our client’s request. We need to create custom filters in M365 Search verticals using these refinables. However, the M365 Search Admin Center only lists RefinableStrings up to 199. I checked for an API or PS script but couldn’t find anything available.
Teams Town Hall – Missing Q&A
We are testing the Town Hall feature of Teams in replacement of Live Events. We are GCC licensed and not finding the Q&A option — either before the meeting starts or after the meeting has started. The option is just not there. We have Q&A turned on in the meetings policy in Teams and it works when we are in Live Events. I found a post that said Viva Engage is needed and then I found another post that said GCC doesn’t have Q&A in Town Hall. Any idea?
In addition, when we end a Town Hall by going to Leave/End Event, the attendee view shows a blank screen with text that reads something like “The host has turned off their camera”. The event doesn’t end until the organizer quits Teams. Any idea on this one?
Best way to remove access to some SharePoint Online Site/Libraries from M365 Admin/Engineer
My organization of ~400 users uses M365 with SharePoint Online. We are hiring a new M365 Engineer/Admin who needs a lot of SPO (and other) Admin access to do all the things he’ll need to do.
My Dir of IT would like us to prevent this new engineer/admin from being able to access a couple SPO sites, like Finance and Exec Team, and prevent access to a couple document libraries in our HR site.
What is the best way (or best practices) to set this up in Entra ID and/or M365? Oh, we are fully cloud-based, so no local Domain Controllers and servers.
Thanks!
Brian
Earn a free Microsoft Power Platform Community Conf Pass!
We are excited to announce the upcoming Power Platform Conference taking place September 18-20th in Las Vegas, where industry experts, thought leaders, and enthusiasts will come together to explore the latest trends, best practices, and success stories related to the Power Platform.
As a valued partner, we invite you to participate in this event as a Demand Generation Agent. By signing up, you’ll have the opportunity to engage with your customers and drive demand for the conference.
Here’s how it works:
Sign Up: Visit the following link to register as a Demand Gen Agent: Partner Demand Gen Registration Form.
Share the Unique Code: Upon completing the registration form, we will send you a unique registration code which includes a $100 discount. Share this code with your customers and encourage them to register for the conference using it. Note: the term for this promotion is June 1 – Aug 31; only customers who register with the unique code within this timeframe will be eligible. See attached terms.
Generate Demand: As your customers register using the unique code, you’ll contribute to the overall success of the conference. If 10 or more customer individuals register using your code, we’ll reward you with one (1) free conference pass!
Don’t miss this opportunity to connect with the Power Platform community, learn from experts, and expand your network. We look forward to having you on board!
Thank you for your continued partnership.
I can’t see my booking pages.
I hope you are very well and you can help me.
Out of nowhere, I can no longer see the bookings I had created. It’s not an access issue, since I created them myself; I don’t know why I can’t see them anymore.
Using Excel Copilot to split columns
Hi Everyone, this is the first in a series of posts to show you some of the things that are possible to do with your workbooks using Copilot. Today I will start with this list of employees:
Table with these columns and first two rows of data:

Name              Address                  City       State
Claude Paulet     123 Main Avenue          Bellevue   Washington
Jatindra Sanyal   1122 First Place Ln N    Corona     California
I would like to have the names in this list separated into 2 columns for the first and last names. To accomplish this, I’ll start by clicking the Copilot button on the right side of the Home tab to show the Copilot pane, and then type the prompt:
Split the name column into first and last name
Excel Copilot looks at the content in the list and then suggests inserting 2 new calculated column formulas to split the first and last names from the Name column.
Picture showing the Excel Copilot pane containing this text:
“Looking at A1:D17, here are 2 formula columns to review and insert in Columns E and F:
1. First name: Extracts the first name of each individual by taking the text before the first space in the “Name” column and removing any extra spaces.
=TRIM(LEFT([@Name],FIND(" ",[@Name])-1))
2. Last name: Extracts the last name of each individual by finding the space in their full name and taking the text that follows it.
=TRIM(MID([@Name],FIND(" ",[@Name])+1,LEN([@Name])))”
Hovering the mouse cursor over the “Insert columns” button in the copilot pane shows a preview of what inserting the new column formulas will look like. From the preview, it looks like it is doing what I wanted.
Picture of the list of employees with a preview of 2 new columns that would be added. First name column is being shown in column E and Last Name in column F.
Clicking on the Insert Columns button will accept the proposed change, inserting 2 new columns with calculated column formulas that split out the first and last names, giving me the result I was looking for!
Picture showing the Excel workbook with copilot pane open. Includes the employee table with 2 new columns added.
Over the coming weeks I will be sharing more examples of what you can do with Copilot in Excel.
Thanks for reading,
Eric Patterson – Product Manager, Microsoft Excel
*Disclaimer: If you try these types of prompts and they do not work as expected, it is most likely due to our gradual feature rollout process. Please try again in a few weeks.
SAP & Teams Integration with Copilot Studio and Generative AI
SAP & Teams Integration with Copilot Studio and Generative AI
Introduction
In this blog, we provide a detailed guide on leveraging AI to optimize SAP workflows within Microsoft Teams. This solution is particularly advantageous for mobile users or those with limited SAP experience, enabling them to efficiently manage even complex, repetitive SAP tasks.
By integrating SAP with Microsoft Teams using Copilot Studio and Generative AI, we can significantly enhance productivity and streamline workflows. This blog will take you through the entire process of setting up a Copilot to interact with SAP data in Teams. We’ll utilize the Power Platform and SAP OData Connector to achieve this integration. By following along you will create and configure a Copilot, test and deploy it within Teams, enable Generative AI, build automation flows, and create adaptive cards for dynamic data representation and even change data in SAP.
1. Overview of the Solution
The solution consists of three main components: Copilot Studio, Power Automate Flow and SAP OData Connector.
Copilot Studio is a web-based tool that allows you to create and manage conversational AI agents, called Copilots, that can interact with users through Microsoft Teams.
Power Automate Flow is a tool that allows you to automate workflows between your applications.
SAP OData Connector is a custom connector that enables you to connect to SAP systems using the OData protocol.
The following diagram illustrates how these components work together to provide a seamless SAP and Teams integration experience.
2. Prerequisites
Before you start building your Copilot, you need to make sure that you have access to the Power Platform and to an SAP system. You can leverage the licenses and SAP systems that are available in your company, or alternatively you can use a trial license for the Power Platform and a public SAP demo system. The following links will guide you on how to obtain these resources if you don’t have them already.
2.1. Power Platform Access
Trial license: https://learn.microsoft.com/en-us/power-apps/maker/signup-for-powerapps
2.2. SAP System Access
Request SAP Gateway Demo System ES5 Login: https://developers.sap.com/tutorials/gateway-demo-signup.html
3. Create a Copilot
Now that you have an overview of the solution and have ensured you meet the prerequisites, it’s time to dive into the hands-on process of creating a Copilot. This section will guide you through the detailed steps to set up and configure your Copilot, enabling it to interact with SAP data within Microsoft Teams. You’ll learn how to leverage Power Automate Flow and the SAP OData Connector to build a robust automated workflow. By the end of this chapter, you will have a fully functional Copilot that can retrieve information about products from the SAP system.
3.1. Create a Copilot
Create a Copilot with the name “SAP Product Copilot”:
And enter the following details:
Activate “Generative AI” Feature in the Settings of the Copilot
3.2. Setup Flow + Connector
Create a new “Instant cloud flow” in Power Automate:
Provide the name and choose “Run a flow from Copilot” as the trigger:
Add an input variable to the trigger action:
Add an SAP OData action. Choose Query OData entities:
Configure the connection:
OData Base URI: https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC
Enter the OData Entity “Product Set”:
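Under the hood, the connector issues plain OData requests against this entity set. If you want to sanity-check the service before building the flow, a query equivalent to the one this section configures can be run directly in a browser (ES5 credentials required; the query string is illustrative):

https://sapes5.sapdevcenter.com/sap/opu/odata/iwbep/GWSAMPLE_BASIC/ProductSet?$filter=Category eq 'Keyboards'&$format=json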
Press “Show all” to enter a filter in the advanced parameters:
In $filter, enter an expression that will filter on the provided Category.
Add this expression:
concat('Category eq ', '''', triggerBody()['text'], '''')
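With the trigger input set to Keyboards, this evaluates to the OData filter Category eq 'Keyboards'. The doubled single quotes ('''') are how a literal quote character is escaped inside a Power Automate expression string.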
Add a final action that will return the found products.
Find the required action by searching for Copilot
Add an output variable
In the output, add an expression:
body('Query_OData_entities')
Finally, the action should look like this:
3.3. Test the flow
Choose Manually and enter the category “Keyboards”:
The successful run will indicate that the flow and the connector work fine.
3.4. Connect the Copilot with the Flow.
Add an action:
Choose the previously created Flow:
Next:
Edit the Input:
And add the following text into the Description. This ensures Gen AI knows how to set the input for the flow:
Product Category. Only one single category can be chosen as input from this list. It is case-sensitive and must be written exactly like below:
Accessories,
Notebooks,
Laser Printers,
Mice,
Keyboards,
Mousepads,
Scanners,
Speakers,
Headsets,
Software,
PCs,
Smartphones,
Tablets,
Servers,
Projectors,
MP3 Players,
Camcorders
Edit the Description of the action output in the same way:
Products found in SAP of a given category.
Present the result as HTML table including following information: ProductID; Name; Category; Description; Supplier; Price; Currency.
3.5. Test the Copilot in the test pane
Open the Test pane and give it a try:
For the first test you need to connect with the SAP OData Connector:
Now you should get the response in the chat window:
3.6. Add the Copilot to Teams
Publish the Copilot first:
Then connect to Teams in the Channels Tab:
Open the Copilot in Teams
Finally test the Copilot in MS Teams
You have successfully built a Copilot that can retrieve up-to-date information of products stored in an SAP system and present it in a table format in Microsoft Teams.
4. Use Adaptive Cards to present SAP information
Now let’s move on by creating adaptive cards to display SAP data dynamically within Microsoft Teams. In this section you will
Create and configure topics.
Activate topics using the appropriate trigger phrases.
Call flows from within topics.
Utilize various entities of the SAP OData Connector.
Handle special situations, such as when no product is found.
Parse JSON data and assign values to topic variables.
Design and implement adaptive cards.
Understand the differences between actions and topics.
By mastering these skills, you will enhance the functionality and interactivity of your Copilot, providing users with a more intuitive and efficient way to interact with SAP data.
4.1. Create a Topic “SAP Product Data”
Create a new topic from blank
In the section “Describe what the topic does”, enter the following:
You can copy/paste this description:
This tool can handle queries like these:
sap product update.
Update SAP product.
Update SAP product data.
Edit product information in SAP.
Edit SAP product data.
Show SAP product details.
Edit SAP product information.
Save the Topic
Open the Topic Details and add the Description
Description: “Show and update information about a product in the SAP system.”
Create the Input variable
Save again.
Add a node that asks for the Product ID.
Question:
Which product do you want to update?
Please provide the Product ID.
Example: HT-1000.
In the Identify field choose “User’s entire response”:
Make sure the response will be saved in the variable ProductID:
Note: With the GenAI feature enabled, the question might not be asked when the ProductID is already known from the context of the conversation, which is very convenient.
Add a message that will help us verify that the topic works as designed up to this point:
This message can be removed later when all is working fine.
4.2. Create a Flow to get SAP product details
Take the flow created in section 3.2 and make a copy
Refresh the page to see the new flow and then turn the flow on
Change the Filter in the new Flow
The new filter must be changed to filter on ProductID:
concat('ProductID eq ', '''', triggerBody()['text'], '''')
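For a Product ID such as HT-1000, this evaluates to the OData filter ProductID eq 'HT-1000'.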
Then update the action and save the flow.
4.3. Call the Flow from the Topic
Add a node “Call Action” and choose the “List SAP product details” flow:
As Input provide the ProductID variable:
Save again.
4.4. Add error handling when no data is found
Add some error handling in case we got the ProductID wrong, and nothing was found in SAP:
Steps are:
Set a condition where Output is equal to “[]” which means no product was found and an empty JSON string was returned.
Send a message to inform the user: “The product with ID “ProductID” was not found.”
Add a route via “Topic Management” -> “Go to step” and select the destination step where the topic asks for the Product ID.
Save again.
4.5. “Parse value” the Flow Output
The flow returns a JSON element that contains all product details. These details must be parsed and assigned to a Table variable.
Select the Output from the Flow as Input.
As data type pick “From sample data”
Get the sample data from a flow test run with a known Product ID
You can copy/paste the sample data from a successful flow run. Take it either from the output of the “Respond to Copilot” action or from the OData query output.
Enter the sample data to create the schema
Save the result into a new variable called Product.
4.6. Adaptive Card with SAP Data
Create a Send message node
Send the message as adaptive card
Enter the following draft adaptive card JSON to start with:
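If the draft JSON from the original post is not visible in your copy, a minimal static card consistent with the later steps could look like this sketch (the ${Topic.ProductID} entry is the placeholder that will be converted to a variable below):
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.5",
  "body": [
    {
      "type": "TextBlock",
      "text": "SAP Product Details",
      "weight": "Bolder",
      "size": "Medium"
    },
    {
      "type": "TextBlock",
      "text": "${Topic.ProductID}"
    }
  ]
}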
The card will look like this:
As a next step, make the adaptive card dynamically show the values from the Product Table. For this you must switch to Formula:
This will change the format slightly, removing all the double quotes around the variable names.
Edit the last entry from
"text": "${Topic.ProductID}"
to:
text: Topic.ProductID
Save and test again. Now you have an adaptive card that dynamically shows a value returned by the flow:
As a next step add the full code from the link below to show all the SAP information in the adaptive card:
Note: You can create your own design and adaptive cards JSON code here: https://adaptivecards.io/designer/
Save and run another test:
5. Changing the data within SAP
To advance the Copilot’s capabilities even further, you can enable changing product information in SAP. To do this, the following steps are required:
Initiate actions from adaptive cards by using the “Ask with adaptive card” node.
Create another flow that updates data using the “Update OData entity” action of the SAP OData Connector.
Configure additional variable handling in the topic.
Update the Copilot to use the “update” flow.
5.1. Add “Ask with adaptive card” Node
Add a node “Ask with adaptive card”
See also:
Ask with Adaptive Cards – Microsoft Copilot Studio | Microsoft Learn
Take the code from the previously created adaptive card.
Add a Submit action at the end of the adaptive card code:
,
actions: [
  {
    type: "Action.Submit",
    title: "Submit Changes",
    horizontalAlignment: "Center",
    data: {
      action: "submitProductChanges"
    }
  }
]
Save this adaptive card.
Delete the previously created adaptive card node, as we don’t need it any longer.
If you get an error about missing properties, the Adaptive Card editor did not automatically create the output structure, and you need to edit the schema manually.
Edit schema on the bottom right.
Enter the variable types into the schema binding:
kind: Record
properties:
  action: String
  actionSubmitId: String
  currencyCode: String
  description: String
  price: Number
  productName: String
Confirm this, save the topic, and test again.
5.2. Create “Update SAP Product Details” Flow
The final step is to run another flow that will update the product information in SAP.
Create another flow with the same copy procedure as before.
Refresh the page and “Turn on” the flow.
In the second position, add an “SAP OData Connector” action with the entity “Update OData entity”.
Set the ProductID Input as shown:
In the advanced parameters, mark the fields where we want to allow updates: Name, Description, Price, and CurrencyCode:
Add the additionally required Flow Parameters. (Note: Price is a number):
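Since the parameter screenshots are not reproduced here, the flow inputs presumably look like this sketch (names and types inferred from the fields used in this lab):
ProductID     (Text)
Name          (Text)
Description   (Text)
Price         (Number)
CurrencyCode  (Text)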
Update the “Update OData entity” action with the relevant variables in the corresponding fields:
Update the last action in the flow with the success message:
5.3. Update the Copilot Topic to call the update flow
After the adaptive card node, add a “Call an action” node and pick the “Update SAP product details” flow:
Open the “variable” pane and activate the required variables.
Fill in the variables in the respective input fields.
The last step is to send the message about the successful product information update:
Save and test in the copilot test pane.
When everything works fine, you can publish again and test the functionality in MS Teams. You’ll need to trigger the Copilot update in MS Teams with the “Start over” trigger phrase.
We hope you enjoyed following along this blog and that you will find it useful for your own SAP projects.
6. Conclusion
With the steps outlined in this blog, you are now equipped to fully leverage the integration of SAP with Microsoft Teams using Copilot Studio and Generative AI. You can use any available SAP OData service or even create your own, enabling seamless access and management of SAP data within Teams. This integration not only simplifies workflows but also transforms simple, repetitive tasks into significant value-adding activities for users.
You can enable even users with little or no SAP know-how to complete SAP-specific tasks, thanks to the built-in Generative AI feature that can assist with questions along the way. As you explore and implement these capabilities, you’ll discover new opportunities to enhance productivity and drive innovation in your SAP-related digital workspace.
The future of integrated, AI-powered collaboration is here, and it hopefully starts with your next SAP and Teams integration scenario.
Unlocking the Potential of Unstructured Data with Microsoft Copilot and Azure Native Qumulo
It has been a true pleasure to see our friends at Qumulo constantly innovating and delivering a service that adds more value with every release. In the post below, authored by Qumulo, you will learn more about the value of their NEW Copilot integration and how to implement it with Azure Native Qumulo. Enjoy!
Principal PM Manager, Azure Storage
Partner post from our friends at Qumulo
Learn more about Azure Native Qumulo here!
When working with unstructured data, employees may spend hours every week looking for information within internal data sources, performing analyses, writing reports, making presentations, finding and creating insights in dashboards, or customizing information for different clients and groups. Microsoft Copilot helps employees to be more creative, data-informed, efficient, ready, and productive when dealing with unstructured data.
Using Microsoft Copilot + the M365 suite of productivity products is seamless, but as organizations become more data-driven and lean into Copilot across their data estate, the need for Copilot integration with full featured file systems is more apparent than ever. Copilot can unlock the value contained in petabyte scale systems, delivering the latent intelligence already earned by an organizational dataset. Azure Native Qumulo (ANQ) Copilot Connector enables an organization to take full advantage of the data typically stored in enterprise scale file systems.
The Challenge: Deep Analysis of Unstructured Data
Using natural language to access the value of their file data is beneficial for customers in all industries. In fact, most data is stored in unstructured formats because files have been the standard structure for applications that do not rely solely on an underlying database. Microsoft Copilot can extract insights rapidly from both legacy documentation and modern application outputs by using a centralized unstructured data store like Azure Native Qumulo.
For example:
In the healthcare industry, medical professionals use imagery technology to aid the diagnostic process. In many cases, these images need to be retained for decades so that radiologists, cardiologists, etc., can perform patient studies across long time horizons. Hospitals with large networks must manage and search across petabytes of data generated from modern and legacy medical imaging applications.
The telecom industry leverages AI to boost network performance and find reliability faults within their infrastructure. This helps to automate operations and use data-driven insights for enhanced network coverage. Multiple petabytes of data are generated monthly and reside in unstructured data storage. Understanding this data requires highly trained technicians to summarize findings.
AI-based self-driving cars have sensors, automotive analytics, and connections to cloud services. These cars use data analytics to make real-time decisions based on the data they gather from in-car sensors. The storage capacity per vehicle could reach 11 terabytes by 2030, putting centralized data stores in the hundreds-of-exabytes range. Summarizing patterns requires complex modeling and data science, but the insights from individual files are also valuable.
Energy companies optimize oil & gas production by forecasting future events and improving flow methods using AI. Combining these complex models with natural language prompts enables greater access across the business unit, reduces the time to insight, and identifies patterns for demand forecasting, oil exploration, and predictive maintenance. These datasets often reach petabyte scale and include multiple different file types.
The above industries often struggle with analyzing billions of unstructured files due to limitations in existing tools:
Legacy Search Tools: Designed for structured data, these tools are inefficient for unstructured data and involve costly, time-consuming data migrations.
Open Source AI Solutions: These require sharing data with unregulated third-party companies, raising significant security concerns and potentially creating a risk of data leakage to competitors.
Manual/OCR Methods: Slow, inefficient, and often failing to provide new insights despite reformatting data.
Finding Insights with Microsoft Copilot
Microsoft Copilot combined with Azure Native Qumulo provides a seamless solution for reading and analyzing unstructured data in place, without requiring duplication or alteration of your data. Custom connectors allow Copilot to handle various file types, from PDFs and spreadsheets to text files. With this integration, Copilot helps integrate large data stores with the daily workflow of high value team members.
Flexibility and Scalability:
Customizable Connectors: Engineers can develop custom connectors for various file types, allowing Copilot to analyze virtually any data type stored in ANQ.
Petabyte-Scale Analysis: Microsoft Copilot is capable of handling extensive data sets, without the need for data movement or migration by using ANQ.
Security and Privacy:
Azure Tenant Integration: Copilot operates entirely within the secure confines of your Azure tenant, ensuring that data remains protected.
Exclusive Organizational Access: Analysis results are accessible only within your organization, maintaining strict data privacy and integrity.
User-Friendly Interface:
Seamless Integration: Works with the standard Copilot search interface, providing a familiar user experience within the applications already most used for daily productivity, e.g., Teams, Outlook, M365, etc.
Natural Language Processing (NLP): Users can submit and execute queries using plain language, making sophisticated data analysis accessible to non-technical staff.
Outcomes
In the industry scenarios described earlier, each customer benefits from giving more decision makers access to their data without requiring data scientists or highly trained personnel to serve as middlemen. Both new and legacy data become useful for comparisons and pattern analysis without spending costly hours trying to find the right files for analysis. Lastly, their Copilot implementation is no longer limited to OneDrive or SharePoint, and each business can take advantage of the data stored across the entire network attached storage estate.
The cost savings in terms of increased productivity are profound.
Getting Started
Qumulo has published their Azure Native Qumulo Copilot Connector on GitHub at: GitHub – Qumulo/QumuloCustomConnector
Launching Azure Native Qumulo in the Azure Portal takes 12-15 minutes, and the connector can be set up within an hour. You can begin securely interacting with data using natural language as soon as the connector is established.
Embrace the Future of Data Management
For technical teams ready to revolutionize their data management strategy, the integration of Microsoft Copilot with Azure Native Qumulo offers a cutting-edge solution. Unlock the full potential of your unstructured data and stay ahead in the data-driven world.
Explore the possibilities today with Microsoft Copilot and Azure Native Qumulo and transform your approach to data analysis and management.
How Glint enriches the Microsoft Viva experience
In July 2023, Glint moved from its home at LinkedIn to join the Microsoft Viva products, providing a comprehensive approach to improving employee engagement and performance for organizations. It’s been a year of incredible accomplishments that have helped Glint legacy customers – and new Viva customers – achieve their business and people goals using a human-centric approach.
We are proud to have so many successes in our first year as Viva Glint:
Completing Microsoft compliance for our annual privacy review so our customers know that the security of their data is our number one priority.
Achieving among the highest levels of product accessibility and inclusivity in our field.
Previewing Copilot in Viva Glint, our AI tool to drive meaningful action on employee feedback by quickly summarizing large quantities of survey comments.
Adding survey items to our taxonomy to measure the impact of Copilot and AI transformation, as well as employee productivity, alongside employee engagement.
What’s to come for Viva Glint?
As you can see on our roadmap, there is much more to come in the next 12 months. Integrations with Microsoft 365 and other Viva apps have begun to preview with rave reviews and thoughtful feedback. Look ahead to these notable features:
Integrate Viva Glint with Viva Insights: View employee engagement survey scores alongside aggregated patterns of how people work to reveal new insights into your teams’ strengths and opportunities and drive meaningful improvements to the employee experience.
360 Feedback: Give employees a deeper understanding of their strengths and areas for development, from multiple viewpoints, leading to personal and business performance enhancements.
Teams Integration: Enhance notifications and Nudge capabilities by enabling easy communications in the daily flow of work.
Dozens of platform upgrades providing timesaving, self-serve experiences for admins and managers: Onboarding and Exit survey templates, Raw Data Export features, retroactive updates, and enhanced control over data – to name just a few!
Copilot, Copilot, Copilot! Use Viva Glint to understand and quantify your AI journey. We will continue to innovate the Copilot in Viva Glint comment summarization capabilities for improved AI-powered employee feedback and action taking experiences.
So many resources to take advantage of
We have the assets you need to support your Viva Glint journey! Thanks to our customers who prompted the debut of this extensive catalog of peer and expert forums and the training and guidance resources available to ensure you get every benefit from your Viva Glint programming:
Join the Viva Glint Product Council: Be part of sessions that bring customers directly into our product-building process. Sign up once for your organization and bring all your people who can be part of an insightful discussion!
Attend our Ask the Experts series: Attend live discussions with Viva Glint experts to learn best practices, engage in peer-to-peer learning, and get your questions answered about the Glint product.
Complete learning paths and earn Viva Glint badges: Use the Microsoft Learn training site to gain deeper expertise for using Glint programs. When completing a learning path, apply for digital badges to showcase your learning achievements.
Use Microsoft Learn to guide your Viva Glint journey: Find technical documentation and guidance to help you through key stages of your Viva Glint journey.
Sign up for our communications to learn about product releases and how to participate in product previews, and to register for upcoming events from the Viva team of experts.
Join our live thought leadership events. Check out our AI Empowerment series as well as our Think Like a People Science series to learn what is new from the Viva People Science research team about employee engagement, productivity, AI transformation, and more.
Join us at Microsoft Viva
For customers joining us from LinkedIn, we encourage you to move up your migration date to take advantage of Copilot in Viva Glint and all the other new and upcoming features mentioned above. Talk to your CxPM, or if you’re not supported by a CxPM, reach out to VivaGlintMigration@microsoft.com to get your migration process started. Review the steps you’ll take: learn about the licensing steps here so you can get started on your journey.
Configuring auto_explain for Advanced Query Analysis
The auto_explain module in PostgreSQL is a powerful tool for diagnosing query performance issues by automatically logging execution plans of slow queries. Properly configuring auto_explain can significantly enhance your ability to troubleshoot and optimize complex queries and stored procedures. Here’s a detailed guide on configuring auto_explain effectively:
Note: Adding auto_explain to the shared_preload_libraries parameter requires a restart of the PostgreSQL server to take effect.
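As a sketch, on a self-managed server the module can be preloaded in postgresql.conf like this (on Azure Database for PostgreSQL, set the equivalent server parameter in the portal instead):
# postgresql.conf -- preload auto_explain at server start
shared_preload_libraries = 'auto_explain'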
Customize behavior with the following settings to tailor the level of detail captured in the logs:
auto_explain.log_min_duration
Purpose: This parameter controls the minimum duration a query must take to be logged by auto_explain.
Usage:
Setting this to a specific value (in milliseconds) means that only queries exceeding this duration will be logged. For instance, setting it to 100 will log only queries that take longer than 100 milliseconds.
If you set it to 0, every query will be logged, regardless of how quickly it executes. This is useful in a development environment where you want to capture all activity, but in a production environment, you may want to set a higher threshold to focus on slower queries.
Notes: The parameter only considers the time spent executing the query (ExecutorRun). It does not include the time taken for query planning or compilation. This means if a query is slow due to planning or compilation issues, those won’t be reflected in the auto_explain logs.
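For a quick, session-scoped experiment, a sketch assuming a self-managed server where you have superuser access and can load the module ad hoc:
LOAD 'auto_explain';                      -- load the module for this session only
SET auto_explain.log_min_duration = 100;  -- log queries that run longer than 100 ms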
auto_explain.log_analyze
Purpose: When this is set to true, the module runs the query as EXPLAIN ANALYZE, which includes actual run-time statistics.
Details:
It provides detailed timing information for each stage of the query, including parsing, planning, and execution.
This helps in identifying where time is being spent within the query execution, offering deeper insights compared to a standard EXPLAIN that only shows the query plan without timing.
Consideration:
Resource Intensity: Enabling log_analyze can be resource-intensive. Since EXPLAIN ANALYZE requires the database to time each operation, it adds overhead to the execution of each query.
All Queries Executed with EXPLAIN ANALYZE: When log_analyze is enabled, PostgreSQL runs all queries with EXPLAIN ANALYZE, regardless of whether they meet the log_min_duration threshold. This is because the system cannot predict upfront how long a query will take. Only after the query completes does PostgreSQL compare its actual duration to the log_min_duration threshold. If the duration exceeds this threshold, the query, along with its runtime statistics, is logged. If not, the statistics are discarded, but the overhead of collecting them has already been incurred.
Overhead of Timing Operations: Collecting runtime statistics involves overhead, particularly due to system clock readings, which can vary depending on the system’s clock source. This can add significant resource consumption, especially on high-traffic systems.
auto_explain.log_buffers
Purpose: When enabled, this logs buffer usage during query execution.
Details:
This setting helps you understand how much data is being read from or written to disk buffers, which is crucial for analyzing I/O performance.
For example, if a query shows high buffer usage, it may indicate that the query is performing a lot of disk I/O, which could be a performance bottleneck, especially if the data is not cached in memory.
auto_explain.log_timing
Purpose: Similar to log_analyze, this setting logs detailed timing information for each phase of the query.
Details:
It captures the time spent on different operations within the query execution, allowing you to pinpoint where delays are occurring.
This is particularly useful for complex queries where the execution time is not evenly distributed across different operations.
Considerations:
Performance Overhead: Detailed timing information introduces some performance overhead. The need to read the system clock frequently during query execution can affect overall query performance, particularly in high-throughput environments.
Increased Log Volume: Enabling this setting will generate more log data, which can impact log management and storage. This is especially relevant for production systems where log volume needs to be carefully managed.
Environment Suitability: It’s best to use this feature in development or testing environments rather than production, where the overhead and increased log volume might be less acceptable.
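A common compromise is to keep the actual row counts from log_analyze while skipping the per-node clock readings; note that log_timing only has an effect when log_analyze is enabled:
SET auto_explain.log_analyze = on;   -- collect actual row counts and loop counts
SET auto_explain.log_timing = off;   -- skip per-node timing to reduce clock-reading overhead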
auto_explain.log_triggers
Purpose: This option logs details about triggers fired during query execution.
Details:
Triggers can have a significant impact on performance, especially if they involve complex operations or if they are fired frequently.
Logging trigger activity can help you understand their role in query performance, making it easier to optimize both the query and the associated triggers.
auto_explain.log_verbose
Purpose: When set to true, this provides a more detailed output of the execution plan.
Details:
The verbose output includes additional information such as join types, exact row estimates, and other details that might be omitted in a standard EXPLAIN output.
This level of detail is particularly useful for diagnosing complex queries where understanding the exact nature of the execution plan is critical.
auto_explain.log_wal
Purpose: This parameter logs information about Write-Ahead Logging (WAL) during query execution.
Details:
Captures details on WAL activity, which can be useful for understanding how much data is being written to the WAL and how it impacts performance.
Helps in diagnosing performance issues related to write operations and understanding the impact of transactions on WAL.
auto_explain.log_settings
Purpose: Logs the settings of the auto_explain module for each query.
Details:
This parameter helps you track the configuration used for each query, making it easier to understand and reproduce the conditions under which specific performance characteristics were observed.
Useful for debugging issues and ensuring that the correct settings are applied consistently.
Example: Optimizing Query Performance Using auto_explain in PostgreSQL
Prerequisites
Log Analytics Workspace: Ensure you have a Log Analytics workspace created. If not, create one in the same region as your PostgreSQL server to minimize latency and costs.
Permissions: Ensure you have the necessary permissions to configure diagnostic settings for the PostgreSQL server and access the Log Analytics workspace.
To create a Log Analytics Workspace in Azure, you can refer to the official Microsoft documentation. Here’s a direct link to the guide that walks you through the process: Create a Log Analytics Workspace
Configure Diagnostic settings:
Navigate to Your PostgreSQL Server
In the left-hand menu, select All services.
Search for and select Azure Database for PostgreSQL servers.
Choose the PostgreSQL server you want to configure.
Configure Diagnostic Settings
In the PostgreSQL server blade, scroll down to the Monitoring section.
Click on Diagnostic settings.
Add a Diagnostic Setting
Click on + Add diagnostic setting.
Provide a name for your diagnostic setting.
Select Logs and Metrics
In the Diagnostic settings pane, you’ll see options to configure logs and metrics.
Logs: Check the logs you want to collect. Typical options include:
PostgreSQLLogs (server logs)
ErrorLogs (error logs)
Metrics: Check the metrics you want to collect if needed.
Send to Log Analytics
Under Destination details, select Send to Log Analytics.
Choose your Log Analytics workspace from the drop-down menu. Make sure it is in the same region as your PostgreSQL server.
If you don’t see your workspace, ensure it’s in the same region and refresh the list.
Review and Save
Review your settings.
Click Save to apply the diagnostic settings.
Download Server Log for offline analysis
-- Create the stores table
CREATE TABLE IF NOT EXISTS stores (
id SMALLINT PRIMARY KEY,
name VARCHAR(50) NOT NULL,
location VARCHAR(100)
);
-- Create the sales_stats table
CREATE TABLE IF NOT EXISTS sales_stats (
store_id INTEGER NOT NULL,
store_name TEXT COLLATE pg_catalog."default" NOT NULL,
sales_time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
daily_sales DOUBLE PRECISION,
customer_count INTEGER,
average_transaction_value DOUBLE PRECISION,
PRIMARY KEY (store_id, sales_time)
);
-- Insert sample data into stores table
INSERT INTO stores (id, name, location)
VALUES
(1, 'Main Street Store', '123 Main St'),
(2, 'Mall Outlet', '456 Mall Rd'),
(3, 'Downtown Boutique', '789 Downtown Ave');
-- Create or replace the procedure to generate sales statistics
CREATE OR REPLACE PROCEDURE public.generate_sales_stats(
IN start_date TIMESTAMP WITHOUT TIME ZONE,
IN end_date TIMESTAMP WITHOUT TIME ZONE
)
LANGUAGE plpgsql
AS $$
BEGIN
INSERT INTO sales_stats (store_id, store_name, sales_time, daily_sales, customer_count, average_transaction_value)
SELECT
stores.id AS store_id,
stores.name AS store_name,
s1.time AS sales_time,
random() * 10000 AS daily_sales,
(random() * 200 + 50)::INTEGER AS customer_count,
random() * 200 + 20 AS average_transaction_value
FROM generate_series(
start_date,
end_date,
INTERVAL '50 second'
) AS s1(time)
CROSS JOIN (
SELECT
id,
name
FROM stores
) stores
ORDER BY
stores.id,
s1.time;
END;
$$;
Configure the following Server Parameters:
auto_explain.log_analyze = ON
auto_explain.log_buffers = ON
auto_explain.log_min_duration = 5000
auto_explain.log_verbose = ON
auto_explain.log_wal = ON
auto_explain.log_timing = ON
Execute Procedure
-- Execute the procedure to generate sales statistics
CALL generate_sales_stats('2010-01-01 00:00:00', '2024-01-01 23:59:59');
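With a 50-second interval over roughly 14 years and three stores, this CALL inserts on the order of 26 million rows (about 8.8 million timestamps × 3 stores), so it should comfortably exceed the 5-second auto_explain.log_min_duration threshold configured above.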
Access Logs
In the PostgreSQL server blade, look for the Monitoring section in the left-hand menu.
Click on Logs to open the Log Analytics workspace associated with your server. This may redirect you to the Log Analytics workspace where logs are stored.
Run KQL Query
// Filter execution plans for queries taking more than 10 seconds
AzureDiagnostics
| where Message contains " plan"
| extend DurationMs = todouble(extract(@"duration: (\d+\.\d+) ms", 1, Message))
| where isnotnull(DurationMs) and DurationMs > 10000
| project TimeGenerated, Message, DurationMs
| order by DurationMs desc
// Filter logs for a specific instance using the LogicalServerName_s
AzureDiagnostics
| where LogicalServerName_s == "myServerInstance" // Filter for the specific instance
| where Message contains " plan"
| extend DurationMs = todouble(extract(@"duration: (\d+\.\d+) ms", 1, Message))
| where isnotnull(DurationMs) and DurationMs > 10000
| project TimeGenerated, Message, DurationMs
| order by DurationMs desc
// Queries with Sort nodes that used disk-based sorting methods, where the disk sort time is over 70% of the total execution time, and that read or wrote at least X temporary blocks.
AzureDiagnostics
| where Message contains "execution plan" // Filter for logs containing execution plans
| extend
    SortMethod = extract(@"Sort Method: (\w+)", 1, Message), // Extract the sort method
    TotalExecutionTimeMs = todouble(extract(@"total execution time: (\d+\.\d+) ms", 1, Message)), // Extract total execution time
    DiskSortTimeMs = todouble(extract(@"disk sort time: (\d+\.\d+) ms", 1, Message)), // Extract disk-based sort time
    TmpBlksRead = todouble(extract(@"tmp_blks_read: (\d+)", 1, Message)), // Extract temporary blocks read
    TmpBlksWritten = todouble(extract(@"tmp_blks_written: (\d+)", 1, Message)) // Extract temporary blocks written
| where
    isnotnull(TotalExecutionTimeMs) and
    isnotnull(DiskSortTimeMs) and
    DiskSortTimeMs > 0 and // Ensure disk-based sorting occurred
    (DiskSortTimeMs / TotalExecutionTimeMs) > 0.70 and // Check if disk sort time is over 70% of total execution time
    (TmpBlksRead > X or TmpBlksWritten > X) // Replace X with the threshold value for temporary blocks
| project
    TimeGenerated,
    Message,
    SortMethod,
    TotalExecutionTimeMs,
    DiskSortTimeMs,
    TmpBlksRead,
    TmpBlksWritten
| order by TotalExecutionTimeMs desc
The first KQL query above filters logs to find entries containing query execution plans, extracts and converts the duration of these queries to milliseconds, and then focuses on those that took longer than 10 seconds. It displays the timestamp, the log message, and the execution duration. For more details on the AzureDiagnostics table, refer to Azure Monitor documentation.
Here is a detailed execution plan extracted from the log analytics workspace:
2024-08-08 20:14:51 UTC-66b52434.3e45-LOG: duration: 194804.905 ms plan:
Query Text: INSERT INTO miner_stats (miner_id, miner_name, stime, cpu_usage, average_mhs, temperature, fan_speed)
SELECT
miners.id AS miner_id,
miners.name AS miner_name,
s1.time AS stime,
random() * 100 AS cpu_usage,
(random() * 4 + 26) * miners.graphic_cards AS average_mhs,
random() * 40 + 50 AS temperature,
random() * 100 AS fan_speed
FROM generate_series(
'2000-10-14 00:00:00',
'2024-08-30 23:59:59',
INTERVAL '59 second') AS s1(time)
CROSS JOIN (
SELECT
id,
name,
graphic_cards
FROM miners
) miners
ORDER BY
miners.id,
s1.time;
Insert on public.miner_stats  (cost=153024.30..204774.30 rows=0 width=0) (actual time=194804.893..194804.897 rows=0 loops=1)
  Buffers: shared hit=39107638 read=97 dirtied=395027 written=399443, temp read=496743 written=497630
  WAL: records=38317668 fpi=1 bytes=4253262922
  ->  Subquery Scan on "*SELECT*"  (cost=153024.30..204774.30 rows=900000 width=76) (actual time=75088.036..113372.899 rows=38317668 loops=1)
        Output: "*SELECT*".miner_id, "*SELECT*".miner_name, "*SELECT*".stime, "*SELECT*".cpu_usage, "*SELECT*".average_mhs, "*SELECT*".temperature, "*SELECT*".fan_speed
        Buffers: shared hit=2, temp read=496743 written=497630
        ->  Result  (cost=153024.30..191274.30 rows=900000 width=100) (actual time=75088.030..103714.013 rows=38317668 loops=1)
              Output: miners.id, miners.name, s1."time", (random() * '100'::double precision), (((random() * '4'::double precision) + '26'::double precision) * (miners.graphic_cards)::double precision), ((random() * '40'::double precision) + '50'::double precision), (random() * '100'::double precision)
              Buffers: shared hit=2, temp read=496743 written=497630
              ->  Sort  (cost=153024.30..155274.30 rows=900000 width=70) (actual time=75088.021..90570.249 rows=38317668 loops=1)
                    Output: miners.id, miners.name, s1."time", miners.graphic_cards
                    Sort Key: miners.id, s1."time"
                    Sort Method: external merge  Disk: 1249832kB
                    Buffers: shared hit=2, temp read=496743 written=497630
                    ->  Nested Loop  (cost=0.00..11281.25 rows=900000 width=70) (actual time=1621.592..12365.782 rows=38317668 loops=1)
                          Output: miners.id, miners.name, s1."time", miners.graphic_cards
                          Buffers: shared hit=2, temp read=28065 written=28065
                          ->  Function Scan on pg_catalog.generate_series s1  (cost=0.00..10.00 rows=1000 width=8) (actual time=1621.564..2913.084 rows=12772556 loops=1)
                                Output: s1."time"
                                Function Call: generate_series('2000-10-14 00:00:00+00'::timestamp with time zone, '2024-08-30 23:59:59+00'::timestamp with time zone, '00:00:59'::interval)
                                Buffers: shared hit=1, temp read=28065 written=28065
                          ->  Materialize  (cost=0.00..23.50 rows=900 width=62) (actual time=0.000..0.000 rows=3 loops=12772556)
                                Output: miners.id, miners.name, miners.graphic_cards
                                Buffers: shared hit=1
                                ->  Seq Scan on public.miners  (cost=0.00..19.00 rows=900 width=62) (actual time=0.016..0.017 rows=3 loops=1)
                                      Output: miners.id, miners.name, miners.graphic_cards
                                      Buffers: shared hit=1
Lack of Support for Cancelled Queries:
One notable limitation of auto_explain is that it does not capture queries that are cancelled before they complete. Here’s what that means:
What It Logs: auto_explain is designed to log the execution plans and performance statistics of queries that finish running and meet specific criteria, such as a minimum execution time.
What It Misses: If a query is interrupted or cancelled—whether due to a timeout, user intervention, or some other reason—it won’t be logged by auto_explain. This means you won’t have detailed information about queries that were terminated prematurely, which can be crucial for diagnosing issues.
Conclusion
Debugging and optimizing stored procedures in PostgreSQL, particularly with complex queries and large datasets, can be a daunting task. However, using tools like the auto_explain module significantly enhances the ability to diagnose performance issues by automatically logging detailed execution plans and runtime statistics. Configuring auto_explain to capture various metrics, such as query duration, buffer usage, and WAL activity, provides deep insights into the performance characteristics of queries.
By combining these detailed logs with additional strategies such as indexing, memory tuning, and using third-party monitoring tools, developers can effectively pinpoint and resolve performance bottlenecks. This methodical approach not only aids in optimizing stored procedures but also in maintaining overall database performance.
For additional insights on optimizing PostgreSQL performance, you can refer to our previous blog on optimizing query performance with work_mem here.
Error while using sharepoint migration tool
Hi All,
I’m getting the above error while migrating files from OneDrive to a SharePoint site; could someone please help.
Edit Permissions for Calendar App only – Read Permissions for other site pages
I have a SharePoint site that 2 users need to be able to only edit the Calendar App. The users DO NOT need to edit other site pages. I’ve looked around and I haven’t found any information on how to do this. SOS