Secure your data to confidently take advantage of Generative AI with Microsoft Purview
By Liz Willets
Security teams often find themselves in the dark when it comes to the data security risks associated with AI usage. An alarming 80% of leaders cite leakage of sensitive data as their primary concern [1], and more than 30% of decision makers say they don’t know where their sensitive, business-critical data is or what it contains [2]. With generative AI producing ever more data, visibility into how sensitive data flows through AI and how your users interact with generative AI applications is essential; without it, organizations struggle to safeguard their assets effectively. Organizations want to get ahead of and minimize the inherent risks of sharing data with generative AI applications, such as data oversharing, data leakage and non-compliant use of GenAI apps. Instead of restricting AI use to avoid these outcomes, security teams can manage risk more effectively by proactively gaining visibility into AI usage within the organization and implementing corresponding protection and governance controls.
On top of that, evolving regulatory environments and rapid technological advancements create complex challenges for customers. Adhering to new regulations, particularly those pertaining to cutting-edge technologies like GenAI, is essential for devising an effective security and compliance strategy. In this era where AI regulations and standards, such as the EU AI Act and NIST AI RMF, are taking shape, it is imperative for organizations to develop and use AI applications in a manner that is safe, transparent and responsible. According to Gartner®, “by 2027 at least one global company will see its AI deployment banned by a regulator for noncompliance with data protection or AI governance legislation.” [3] AI will be a catalyst for regulatory changes, and having secure and compliant AI will become fundamental.
At Microsoft Ignite 2023 and Microsoft Secure 2024, we introduced new capabilities to help organizations discover, protect and govern data in an AI-first world with Microsoft Purview.
Today, we are excited to announce new innovations from Microsoft Purview to help you secure and govern AI:
Microsoft Purview AI Hub, which helps organizations discover how AI applications such as Copilot for M365 and third-party AI apps are being used in their organization and provides ready-to-use policies to protect data, is now available in public preview.
New insights into unlabeled files and SharePoint sites referenced by Microsoft Copilot for Microsoft 365 will also be included in the public preview release of the AI Hub. This helps organizations prioritize the most critical data risks and put in place protection policies to prevent potential oversharing of sensitive data.
New insights into non-compliant and unethical use of AI interactions will also be included in the public preview release of the AI Hub. This helps organizations quickly gain insight into unethical use such as regulatory collusion, money laundering, targeted harassment and more.
New Compliance Manager assessment templates for EU AI Act, NIST AI RMF, ISO/IEC 23894:2023 and ISO/IEC 42001 to help assess, implement and strengthen compliance controls to meet AI regulatory requirements and standards. These insights will also be surfaced in the AI Hub and available in public preview.
Discover how AI applications are being used
To help customers better understand which AI applications are being used, and how, we are announcing the public preview of Microsoft Purview AI Hub. It includes insights such as the sensitive data shared with AI apps (whether Copilot for Microsoft 365 or third-party AI apps), the total number of users interacting with AI apps and their associated risk levels, pulled from Microsoft Purview Insider Risk Management, and more.
As organizations adopt Copilot for Microsoft 365, data security controls become paramount to avoid overexposure of sensitive data or SharePoint sites. Managing and labeling vast amounts of information is challenging, often leaving sensitive data vulnerable to oversharing. Microsoft Purview AI Hub addresses this challenge by surfacing unlabeled files and SharePoint sites referenced by Copilot, helping you prioritize your most critical data risks and prevent potential oversharing of sensitive data.
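To make the triage step concrete, here is a minimal, purely illustrative sketch of how a team might prioritize unlabeled files once they have the AI Hub findings in hand. It assumes a hypothetical CSV export with columns like file_path, site, sensitivity_label and times_referenced; these column names are not the actual AI Hub schema, just stand-ins for this example.

```python
import csv
import io

# Hypothetical export of files Copilot referenced, with each file's
# sensitivity label (empty when the file is unlabeled). The schema here
# is illustrative only, not the real AI Hub report format.
SAMPLE_EXPORT = """\
file_path,site,sensitivity_label,times_referenced
/sites/Finance/Q3-forecast.xlsx,Finance,,14
/sites/Finance/budget.xlsx,Finance,Confidential,9
/sites/HR/salaries.xlsx,HR,,22
/sites/Eng/design.docx,Eng,General,3
"""

def unlabeled_files(report_csv):
    """Return Copilot-referenced files that have no sensitivity label,
    sorted most-referenced first so the riskiest gaps are triaged first."""
    rows = csv.DictReader(io.StringIO(report_csv))
    gaps = [r for r in rows if not r["sensitivity_label"].strip()]
    return sorted(gaps, key=lambda r: int(r["times_referenced"]), reverse=True)

for row in unlabeled_files(SAMPLE_EXPORT):
    print(f'{row["file_path"]} ({row["site"]}): '
          f'referenced {row["times_referenced"]}x, unlabeled')
```

Sorting by reference count is one simple way to rank remediation work: a frequently referenced unlabeled file in a sensitive site is a stronger candidate for immediate labeling than one Copilot rarely touches.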
Read the full post here: Secure your data to confidently take advantage of Generative AI with Microsoft Purview