Enhance the reliability of your generative AI with new hallucination correction capability
Today, we are excited to announce a preview of “correction,” a new capability within Azure AI Content Safety’s groundedness detection feature. With this enhancement, groundedness detection not only identifies inaccuracies in AI outputs but also corrects them, fostering greater trust in generative AI technologies.
What is Groundedness Detection?
Groundedness detection is a feature that identifies ungrounded or hallucinated content in AI outputs, helping developers enhance generative AI applications by pinpointing responses that lack a foundation in connected data sources.
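To make this concrete, here is a minimal sketch of calling the groundedness detection REST API from Python with the `requests` package. The endpoint path and request fields follow the public preview documentation, but the resource endpoint, key, and example strings below are placeholders; verify the exact request shape against the docs linked at the end of this post.

```python
import requests

# Placeholder resource details -- replace with your own Content Safety values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"

url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-02-15-preview"
payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "How much does she earn per hour?"},
    "text": "She earns 12 dollars per hour.",                # the AI output to check
    "groundingSources": ["Her hourly wage is 10 dollars."],  # connected data source
    "reasoning": False,
}
resp = requests.post(url, json=payload, headers={"Ocp-Apim-Subscription-Key": KEY})
print(resp.json())  # e.g. ungroundedDetected, ungroundedPercentage, ungroundedDetails
```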
Since we introduced groundedness detection in March of this year, our customers have asked us: “What else can we do with this information once it’s detected, besides blocking?” This question highlights a significant challenge in the rapidly evolving generative AI landscape: traditional content filters often fall short in addressing the unique risks posed by generative AI hallucinations.
Introducing the Correction Capability
This is why we are introducing the correction capability. Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise.
Building on our existing groundedness detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real time, before users of generative AI applications encounter them.
How Correction Works
To use groundedness detection, a generative AI application must connect to grounding documents, such as those used in document summarization and RAG-based Q&A scenarios.
The application developer then enables the correction capability.
When an ungrounded sentence is detected, a new request is triggered to the generative AI model for a correction.
The LLM assesses the ungrounded sentence against the grounding document.
If the sentence contains nothing related to the grounding document, it may be filtered out completely.
If some of its content is sourced from the grounding document, the model rewrites the sentence so that it aligns with that document.
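Putting the flow together, the sketch below sends a detection request with correction enabled. Note the assumptions: the `correction` flag, the preview `api-version`, and the `llmResource` block (which points the service at your own Azure OpenAI deployment for the rewrite) are based on the correction preview and may change, so treat this as illustrative rather than definitive.

```python
import requests

# Placeholder resource details -- replace with your own values. The
# "correction" flag, the preview api-version, and the "llmResource" shape
# below are assumptions based on the correction preview and may change.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"

url = f"{ENDPOINT}/contentsafety/text:detectGroundedness?api-version=2024-09-15-preview"
payload = {
    "domain": "Generic",
    "task": "Summarization",
    "text": "The company was founded in 1999 in Seattle.",  # the AI output to check
    "groundingSources": ["The company was founded in 2001 in Seattle."],
    "correction": True,  # assumed flag: ask the service to rewrite ungrounded sentences
    # Correction uses your own Azure OpenAI deployment to perform the rewrite.
    "llmResource": {
        "resourceType": "AzureOpenAI",
        "azureOpenAIEndpoint": "https://<your-aoai-resource>.openai.azure.com",
        "azureOpenAIDeploymentName": "<your-gpt-deployment>",
    },
}
resp = requests.post(url, json=payload, headers={"Ocp-Apim-Subscription-Key": KEY})
print(resp.json())
```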
Step-by-Step Guide for Groundedness Detection
Detection: First, Azure AI Content Safety scans AI-generated content for ungrounded material. Hallucination isn’t an all-or-nothing problem; most ungrounded outputs actually contain some grounded content too. Groundedness detection therefore pinpoints the specific segments that are incorrect, irrelevant, or fabricated, rather than flagging the entire response.
Reasoning: Users can enable reasoning. After identifying ungrounded segments, the model generates an explanation for why certain text has been flagged. This transparency is essential because it enables users of Azure AI Content Safety to isolate each point of ungroundedness and assess its severity.
Correction: Users can enable correction. Once ungrounded content is flagged, the system initiates the rewriting process in real time, revising the inaccurate portions to align with connected data sources. This correction happens before the end user ever sees the initial ungrounded content (a minimal handling sketch follows this list).
Output: Finally, the corrected content is returned to the user.
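As a sketch of how an application might wire these steps together, the helper below gates what the end user sees based on the detection response. The `ungroundedDetected` field appears in the detection API’s documented response; the `correctionText` field name is an assumption for the correction preview, so check the response schema in the linked docs.

```python
def safe_answer(raw_answer: str, detection: dict) -> str:
    """Return corrected text when ungrounded content was found; otherwise
    pass the original answer through unchanged.

    `detection` is the JSON response from the groundedness call above.
    NOTE: "correctionText" is an assumed field name for the correction
    preview; verify it against the response schema in the documentation.
    """
    if detection.get("ungroundedDetected"):
        return detection.get("correctionText", raw_answer)
    return raw_answer
```

In a RAG pipeline, this check would sit between the model call and the response returned to the user, so the initial ungrounded text is never shown.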
What are Generative AI Hallucinations?
Hallucinations refer to the generation of content that lacks support in grounding data. This phenomenon is particularly associated with large language models (LLMs), which can unintentionally produce misleading information.
This issue can become critical in high-stakes fields like medicine, where accurate information is essential. While AI has the potential to improve access to vital information, hallucinations can lead to misunderstandings and misrepresentation, posing risks in these important domains.
Why the Correction Capability Matters
The introduction of this correction capability is significant for several reasons.
First, filtering (blocking) is not always the right mitigation, and it can produce a poor user experience when redaction leaves the remaining text incoherent. Correction represents a first-of-its-kind capability to move beyond blocking.
Second, concerns about hallucination have held back many generative AI deployments in higher-stakes domains like medicine. Correction helps to unblock these applications.
Third, hallucination concerns have also held back broader deployment of Copilots to the public. Correction empowers organizations to deploy conversational interfaces to their customers with confidence.
Other Generative AI Grounding Tactics
In addition to using groundedness detection and its new correction capability, there are several steps you can take to enhance the grounding of your generative AI applications. Key actions include adjusting your system message and connecting your generative application to reliable data sources; a brief sketch of two of these tactics follows the list below.
Crafting your System Message
Curating Grounding Documents and Data
Configuring Generation Hyperparameters
Configuring Retrieval Hyperparameters (RAG Applications)
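As an illustration of the first and third tactics, here is a hedged sketch of a grounding-focused system message combined with conservative generation hyperparameters, using the `openai` package’s Azure client. The endpoint, key, deployment name, and placeholder sources are assumptions; adapt them to your own resources.

```python
from openai import AzureOpenAI

# Placeholder resource details -- replace with your own Azure OpenAI values.
client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    api_key="<your-aoai-key>",
    api_version="2024-02-01",
)

# Tactic 1: a system message that instructs the model to stay grounded.
SYSTEM_MESSAGE = (
    "Answer ONLY from the provided sources. If the sources do not contain "
    "the answer, say you don't know. Do not add facts of your own."
)

# Tactic 3: conservative generation hyperparameters. Lower temperature and
# top_p make the model less likely to drift away from the supplied sources.
response = client.chat.completions.create(
    model="<your-gpt-deployment>",  # Azure OpenAI deployment name
    temperature=0.2,
    top_p=0.9,
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": "Sources:\n<retrieved passages>\n\nQuestion: <user question>"},
    ],
)
print(response.choices[0].message.content)
```

The exact hyperparameter values are a starting point rather than a recommendation; what matters is that lower randomness plus an explicit “answer only from the sources” instruction tends to reduce ungrounded output.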
Getting Started with Groundedness Detection
Learn more about Azure AI Content Safety – https://aka.ms/contentsafety
Explore our documentation – https://aka.ms/GroundednessDetectionDocs
Watch our correction video – https://aka.ms/GroundednessCorrection-Video
Read about Microsoft’s Approach to Trustworthy AI – https://aka.ms/MicrosoftTrustworthyAI