Tag Archives: microsoft
Updates for Town Hall in Microsoft Teams and Teams Live Events
Our goal in Teams is to make hybrid work and communication easier and more inclusive than ever before. This pursuit is core to the effort we put into creating meaningful connections between people through our end-to-end events platforms, whether one-to-one meetings or large one-to-many hosted digital events. We introduced our new digital streaming event solution for large events, town hall, in September 2023. Town hall has continued to drive new, exciting experiences for our customers, such as the ability to bring multiple presenters on stage, send out attendee emails, and see real-time health analytics for the event. As we move forward, we are excited to continue to share our latest features with you and let you know what to expect from town hall in the next year.
Additionally, we will not retire Teams Live Events in September 2024, as previously announced. Town hall will continue to be the platform where our new features and value land, and we encourage Teams Live Events users to take advantage of these new innovations by upgrading to town hall when ready. We’ve spoken with customers and understand how important it is to ensure a smooth transition to town hall. We are committed to making it as easy and beneficial as possible for customers to experiment, adopt, and implement town hall as their destination for large-scale digital events, as well as allow customers to upgrade from Live Events to town hall on their own schedule. In the coming days, customers who are still using Teams Live Events, and wish to continue to do so past September 30th, 2024, will be able to schedule Teams Live Events instances beyond this date.
Updates about features that will be rolling out to town hall can be found on our town hall adoption page, and we will communicate future updates about Teams Live Events plans via blogs, MC posts, and any other forums where this announcement is distributed.
Town hall innovations deliver new ways to engage your audience
Town hall adoption continues to grow as we prioritize driving new value for our users. In the last quarter, we saw significant increases in new customers trying town hall, total usage, and the number of hosted events. Our mission is to continue to add new capabilities to town hall that make your streaming digital events more impactful to audiences and more seamless to execute. As we look ahead to the coming year, we will be delivering key features to continue to build on the highly engaging and interactive experiences that town hall delivers. Attendees will soon be able to express their feedback and engagement through live reactions and streaming chat, and presenters will be able to interact with their audience via raised hands. Advanced production experiences such as the producer role, queuing shared content, and preview scene support are also coming to town hall, providing a new level of event execution capabilities.
When we initially announced town hall in September of 2023, we made our users aware that we would continue to release town hall features that provide a similar experience in town hall as Teams Live Events. In the next twelve months, we plan to continue to focus on these areas in town hall to ensure that we provide the same feature effectiveness that customers have come to expect from Teams Live Events. Some key features that will be available in town hall in the next year to help achieve this effectiveness include:
Engagement capabilities (certain Q&A functions: voting, filters, sorting, and archive questions; export questions to CSV; download Q&A report)
Device capabilities (MTR-W support for presenters and attendees and CVI and VDI support)
Advanced production experiences such as producer role, queuing shared content, and preview scenes
The ease of use and growing adoption of town hall, the feature effectiveness it shares with Live Events, and the new additive value that will be exclusive to town hall going forward are all great reasons for current Live Events users to consider upgrading to town hall to take advantage of what we are building.
For the latest updates, feature timelines, and news about what is coming for Teams town hall, please visit our town hall adoption page.
Best Practices to Manage and Mitigate Security Recommendations
In the fast-evolving landscape of cloud security, Microsoft Defender for Cloud (MDC) stands as a robust Cloud Native Application Protection Platform (CNAPP). One of its standout features is the premium Cloud Security Posture Management (CSPM) solution, known as Defender CSPM. Among the many advanced capabilities offered by Defender CSPM, the “Governance Rule” feature is a game-changer: it empowers security teams to streamline and automate the assignment, management, and tracking of security recommendations.
In this blog, we’ll delve into best practices for leveraging Governance Rule to ensure effective, efficient, and timely remediation actions and explore practical use cases for maximizing its potential.
Understanding Governance Rule
Governance Rule in Defender CSPM is designed to simplify the management of security recommendations by enhancing accountability. You can define rules that assign an owner and a due date for addressing recommendations for specific resources. This provides resource owners with a clear set of tasks and deadlines for remediating recommendations. By making the assignment and tracking of these tasks more visible, Governance Rule ensures that critical security issues are promptly addressed, reducing the risk of breaches and enhancing overall security posture.
Best Practices for Utilizing Governance Rule
Define Clear Remediation Ownership
Assigning remediation tasks to specific owners is crucial for accountability. Governance Rule allows you to specify who is responsible for each security recommendation. Ensure that each task is assigned to the most appropriate individual or team with the necessary expertise and authority to address the issue. Clear ownership helps avoid confusion and ensures that remediation actions are taken seriously.
Set Realistic ETAs and Grace Periods
Establishing realistic Estimated Time of Arrival (ETA) and grace periods for remediation tasks is essential for maintaining a balance between urgency and feasibility. Overly aggressive timelines can lead to rushed and potentially ineffective fixes, while overly lenient deadlines may delay critical security improvements. Analyze the complexity and impact of each security finding to set achievable timelines that encourage timely resolution without compromising quality.
Prioritize Based on Risk
Not all security recommendations are created equal. Use severity-based prioritization to determine which issues need immediate attention and which can be scheduled for later remediation. Defender CSPM’s Governance Rule allows you to categorize tasks based on their severity and potential impact on your organization’s security posture. Focus on high-severity findings first to mitigate the most significant threats promptly.
Automate Workflow Integration
Leverage the automation capabilities of Governance Rule to integrate remediation workflows with your existing security tools and processes. Automated notifications, status updates, and task assignments can significantly reduce manual effort and improve coordination across teams. By integrating these workflows, you ensure that security recommendations are seamlessly managed from detection to resolution.
Regularly Monitor and Adjust Rules
The dynamic nature of cloud environments means that security needs can change rapidly. Regularly review and adjust your Governance Rules to ensure they remain aligned with your organization’s security objectives and compliance requirements. Monitor the performance of these rules and gather feedback from your security teams to identify areas for improvement.
Foster a Culture of Continuous Improvement
Encourage a culture where continuous improvement is the norm. Use insights gained from the Governance Rule feature to identify recurring security issues and root causes. Implement lessons learned to refine your security policies and practices, reducing the likelihood of similar issues arising in the future.
Before you begin
The Defender Cloud Security Posture Management (CSPM) plan must be enabled.
You need Contributor, Security Admin, or Owner permissions on the Azure subscriptions.
For AWS accounts and GCP projects, you need Contributor, Security Admin, or Owner permissions on the Defender for Cloud AWS or GCP connectors.
Using Governance Rule Priorities in Microsoft Defender for Cloud: A Practical Use Case
The Governance Rule feature in Microsoft Defender for Cloud (MDC) offers a powerful way to prioritize and manage security recommendations by assigning a Priority value from 1 (highest) to 1000 (lowest). This granularity allows organizations to tailor their remediation efforts based on the criticality of the issues at hand. Let’s explore a practical use case to illustrate how setting multiple rules with different priorities can enhance your security posture.
Multi-Tiered Security Remediation Strategy
Scenario: An organization operates a cloud infrastructure that supports various critical business functions, including financial transactions, customer data management, and internal communication systems. Each of these functions has different security requirements and a potential impact on the business if compromised.
Objective: To implement a multi-tiered security remediation strategy that ensures the most critical security issues are addressed first, while less critical issues are still managed effectively within appropriate timelines.
Step-by-Step Implementation
Identify Security Segments and Their Impact:
Tier 1: High-impact areas such as financial transaction systems and customer data management. Compromise in these areas could lead to significant financial loss and regulatory penalties.
Tier 2: Medium-impact areas such as internal communication systems and non-critical business applications. Breaches here could disrupt operations but with manageable consequences.
Tier 3: Low-impact areas such as development and testing environments. Issues here have a minimal immediate impact on business operations.
Set Governance Rules with Priorities:
Rule 1: High Priority (1-100)
Criteria: Security recommendations related to Tier 1 systems.
Priority Value: 1-100
Description: Assign the highest priority to vulnerabilities and security findings in financial transaction systems and customer data management platforms. These tasks should be addressed immediately to prevent significant damage.
Rule 2: Medium Priority (101-500)
Criteria: Security recommendations related to Tier 2 systems.
Priority Value: 101-500
Description: Assign a medium priority to issues in internal communication systems and non-critical business applications. These should be remediated promptly but can be scheduled after Tier 1 issues are addressed.
Rule 3: Low Priority (501-1000)
Criteria: Security recommendations related to Tier 3 systems.
Priority Value: 501-1000
Description: Assign the lowest priority to findings in development and testing environments. While still important, these issues can be managed with a longer timeline, focusing on addressing them during regular maintenance cycles.
Automate and Monitor:
Use MDC’s Governance Rule automation to assign these tasks to appropriate teams or individuals based on their expertise; a sketch of creating such a rule programmatically follows this list.
Set up automated notifications and tracking to ensure that each priority level is being addressed according to the defined timelines.
Regularly review the progress and adjust priorities as necessary based on new findings, business impact analysis, and changes in the threat landscape.
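To make the automation step concrete, here is a minimal sketch of creating one of these tiered rules through the Azure Resource Manager REST API (Microsoft.Security/governanceRules), written in TypeScript for Node 18+ with the @azure/identity package. The subscription ID, rule GUID, assessment key, tag name, and the api-version shown are placeholders and assumptions for illustration; confirm the exact schema against the current Governance Rules REST reference before relying on it.

import { DefaultAzureCredential } from "@azure/identity";

// Placeholder identifiers for illustration only -- replace with your own values.
const subscriptionId = "00000000-0000-0000-0000-000000000000";
const ruleId = "11111111-1111-1111-1111-111111111111"; // any GUID you pick for the rule

async function createTier1Rule(): Promise<void> {
  // DefaultAzureCredential picks up az login, environment variables, or a managed identity.
  const credential = new DefaultAzureCredential();
  const token = await credential.getToken("https://management.azure.com/.default");

  const url =
    `https://management.azure.com/subscriptions/${subscriptionId}` +
    `/providers/Microsoft.Security/governanceRules/${ruleId}` +
    `?api-version=2022-01-01-preview`; // assumed api-version; verify against current docs

  const body = {
    properties: {
      displayName: "Tier 1 - financial transactions and customer data",
      description: "Highest-priority remediation for Tier 1 systems",
      rulePriority: 50,                   // 1 (highest) to 1000 (lowest)
      remediationTimeframe: "7.00:00:00", // 7-day remediation window
      isGracePeriod: true,                // don't impact Secure Score during the window
      isDisabled: false,
      ruleType: "Integrated",
      sourceResourceType: "Assessments",
      conditionSets: [
        {
          conditions: [
            // Scope the rule to specific recommendations by assessment key (placeholder GUID).
            { property: "$.AssessmentKey", operator: "In", value: '["<assessment-key-guid>"]' },
          ],
        },
      ],
      ownerSource: { type: "ByTag", value: "Owner" }, // owner resolved from the "Owner" resource tag
      governanceEmailNotification: {
        disableManagerEmailNotification: false,
        disableOwnerEmailNotification: false,
      },
    },
  };

  const response = await fetch(url, {
    method: "PUT",
    headers: { Authorization: `Bearer ${token.token}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`Rule creation failed: ${response.status} ${await response.text()}`);
  }
  console.log("Governance rule created/updated");
}

createTier1Rule().catch(console.error);

The same PUT with different rulePriority values (for example, around 300 for Tier 2 and 700 for Tier 3) would implement the three-tier strategy described above.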
Benefits of Multi-Priority Governance Rules
Focused Resource Allocation: Ensures that critical resources are directed towards addressing the most impactful security issues first, optimizing the use of your security team’s time and expertise.
Risk Management: Reduces the risk of severe breaches by prioritizing high-impact areas, thereby protecting essential business functions.
Scalability: As the organization grows and the cloud environment evolves, this prioritization strategy can scale to include new systems and adjust to changing priorities.
Efficiency: Automated workflows and clear prioritization reduce the time spent on manual task assignment and tracking, increasing overall operational efficiency.
Leveraging Governance Rule Conditions for Efficient Remediation
The Governance Rule feature in Microsoft Defender for Cloud allows for detailed configuration of conditions, making it a versatile tool for managing remediation tasks. Here are some key conditions and their valuable use cases (a sketch of how they map to a rule definition follows the list):
Impacted Recommendations: By Severity or By Specific Recommendation
Set Owner: By Resource Tag or By Email Address (one address only)
Set Remediation Timeframe: 7, 14, 30, 90 days with an option to set an equal Grace Period so the recommendation doesn’t affect the Secure Score
Set Email Notifications: Notify owners weekly about open and overdue tasks, notify the owner’s direct manager weekly about open and overdue tasks, and select the day of the week on which the notification emails are sent.
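As a rough illustration of how these portal conditions map onto a rule definition, the fragment below sketches the properties of a compliance-style rule (one specific recommendation, owner set directly by email address, a 30-day timeframe with an equal grace period, and weekly owner and manager notifications), matching Use Case 2 further down. The “Manually” owner source type, the assessment key placeholder, and the exact field names are assumptions drawn from the public REST schema; confirm them against the current documentation. The object would slot into the properties element of the PUT request shown earlier.

// Sketch of the "properties" payload for a compliance-focused governance rule.
const complianceRuleProperties = {
  displayName: "MFA compliance remediation",
  rulePriority: 200,
  remediationTimeframe: "30.00:00:00",  // 30 days
  isGracePeriod: true,                  // equal grace period, so Secure Score is unaffected
  ruleType: "Integrated",
  sourceResourceType: "Assessments",
  conditionSets: [
    {
      conditions: [
        // Target one specific recommendation by its assessment key (placeholder GUID).
        { property: "$.AssessmentKey", operator: "In", value: '["<mfa-assessment-key>"]' },
      ],
    },
  ],
  // Owner assigned directly by email address rather than by resource tag.
  ownerSource: { type: "Manually", value: "compliance.officer@contoso.com" },
  governanceEmailNotification: {
    disableOwnerEmailNotification: false,   // weekly owner emails on
    disableManagerEmailNotification: false, // weekly manager emails on
  },
};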
Use Case 1: Prioritizing High-Severity Recommendations
Condition Configuration:
Impacted Recommendations: By Severity (High)
Set Owner: By Resource Tag (e.g., “HighPriorityTeam”)
Set Remediation Timeframe: 7 days with an equal grace period
Set Email Notifications: Notify owners weekly about open and overdue tasks, email configuration day: Monday
Description: This use case focuses on ensuring that high-severity security recommendations are addressed with utmost urgency. By assigning these tasks to a dedicated high-priority team and setting a tight remediation timeframe, critical vulnerabilities are mitigated quickly. Weekly email notifications keep the owners informed, ensuring accountability and prompt action.
Use Case 2: Managing Specific Recommendations for Compliance
Condition Configuration:
Impacted Recommendations: By Specific Recommendation (e.g., “Enable Multi-Factor Authentication”)
Set Owner: By Email Address (specific compliance officer)
Set Remediation Timeframe: 30 days with an equal grace period
Set Email Notifications: Notify owners weekly about open and overdue tasks, notify the owner’s direct manager weekly about open and overdue tasks, email configuration day: Wednesday
Description: Certain security recommendations are crucial for compliance with regulatory requirements. By targeting specific recommendations, such as enabling multi-factor authentication, and assigning them to a compliance officer, organizations can ensure these critical tasks are completed within a reasonable timeframe. The grace period prevents these tasks from negatively impacting the Secure Score while they are being addressed. Regular notifications keep everyone on track.
Use Case 3: Efficient Resource Tag-Based Assignment
Condition Configuration:
Impacted Recommendations: By Severity (Medium)
Set Owner: By Resource Tag (e.g., “AppTeam”)
Set Remediation Timeframe: 14 days with an equal grace period
Set Email Notifications: Notify owners weekly about open and overdue tasks, email configuration day: Thursday
Description: For medium-severity issues, assigning tasks based on resource tags allows for efficient distribution of remediation efforts among different teams. This use case assigns recommendations to the application development team, ensuring they handle vulnerabilities related to their specific domain. The 14-day remediation period is sufficient to address these issues without overwhelming the team, while weekly notifications help maintain progress.
Use Case 4: Long-Term Low-Severity Management
Condition Configuration:
Impacted Recommendations: By Severity (Low)
Set Owner: By Email Address (general IT team lead)
Set Remediation Timeframe: 90 days with an equal grace period
Set Email Notifications: Notify owners weekly about open and overdue tasks, email configuration day: Friday
Description: Low-severity recommendations, while still important, can be managed over a longer period. This case assigns these tasks to the general IT team lead, allowing for a 90-day remediation period. The extended timeframe ensures that these issues are addressed without pulling attention away from more urgent tasks. Weekly notifications ensure that these tasks are not forgotten and are completed within the set period.
Use Case 5: Weekly Review and Reporting
Condition Configuration:
Impacted Recommendations: By Severity (All)
Set Owner: By Resource Tag (e.g., “SecurityOps”)
Set Remediation Timeframe: 30 days with an equal grace period
Set Email Notifications: Notify owners weekly about open and overdue tasks, email configuration day: Monday
Description: A comprehensive approach to managing all severity levels involves setting a 30-day remediation period for all recommendations and assigning them to the Security Operations team. Weekly notifications sent every Monday keep the team updated on open and overdue tasks, ensuring continuous review and progress on all security recommendations.
Integrating ServiceNow with Governance Rules in Microsoft Defender for Cloud
The integration of ServiceNow with Defender for Cloud allows you to create governance rules that automatically open tickets in ServiceNow for specific recommendations or severity levels. This capability provides significant value by enabling seamless collaboration between the two platforms. With ServiceNow tickets being created, viewed, and linked to recommendations directly from Defender for Cloud, organizations can streamline their incident management process. This integration ensures that security recommendations are promptly addressed, facilitating efficient and effective remediation efforts, and enhancing the overall security posture by providing clear visibility and accountability for each task.
For more detailed instructions, refer to the official documentation.
Conclusion
By configuring Governance Rules with specific conditions tailored to your organization’s needs, you can create a structured and efficient remediation process. Whether it’s prioritizing high-severity issues, managing compliance-related recommendations, or ensuring long-term management of low-severity findings, the flexible configuration options in MDC’s Governance Rule feature allow for a highly effective security strategy. Implementing these use cases will help your organization maintain a strong security posture, ensuring timely and efficient remediation actions across all areas of your cloud infrastructure.
The Governance Rule feature in Microsoft Defender CSPM is a powerful tool that can transform how organizations manage and mitigate security recommendations. By following these best practices, security teams can enhance their efficiency, effectiveness, and responsiveness to security findings. Embrace the capabilities of Governance Rule to stay ahead in the ever-changing world of cloud security, ensuring that your security measures are not only reactive but also proactive and adaptive.
Additional Resources
Watch a demonstration on how to use Governance Rule in this episode of Defender for Cloud in the Field
Download the new Microsoft CNAPP eBook at aka.ms/MSCNAPP
Become a Defender for Cloud Ninja by taking the assessment at aka.ms/MDCNinja
Reviewers
Yuri Diogenes, Principal PM Manager, CxE Defender for Cloud
Tal Rosler, Senior PM lead, Microsoft Defender for Cloud
Can’t see Tags on My Task view or filter by them
I keep seeing posts where, in the new Planner, we should be able to filter by tags. I have to open details on each individual task, and still cannot see my tags without opening each task. It’s making it difficult to see where tasks are in the process. Am I missing something or is this feature coming soon? I’m hoping tags will display on the grid soon.
Moving back email from group I created to my inbox.
Hello everyone,
I created an email group in Outlook and then moved email from my Inbox to the group.
Now I can’t move the same email back from the group to my Inbox!
What am I missing?
Simple automate script mangles number formatting
I have a new laptop and, instead of moving my macro, I thought I would create an Automate script.
I recorded the script with the same steps as the macro:
Select column A > data > text to column > Delimited on | sign > finish.
Attached is the Automation script after my first change: adjust the destination from B1 to A1, because otherwise a whole csv string remained in A1, and the first item of the headers started in B2 instead of A1. Strangely enough the rest of the rows were filled correctly, with the first item in column A.
Later (after taking the screenshot) I also changed the range to A:A rather than a specified number of rows.
After the changes the script seemed to work as far as the Text to Column bit went. However, something strange happened with the numerical data in my ‘price’ column (U). By the way, my Excel regional settings use a comma as the decimal separator.
The flat data in the csv contains this item in the form of 4 decimals. So 0 > 0,0000.
And actually the 0,0000 is the only one that remains the same after running the script.
All other amounts – while also starting out as a number with 4 digits after the comma – end up being translated as follows: 3,5500 becomes 35.500
I don’t understand why the script would cause this deviation, and frankly this experience doesn’t encourage playing around with it.
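One possible workaround, sketched below as an Office Script (an illustration under assumptions, not the recorded script), is to do the split in code and convert the price field to a number explicitly, so Excel never re-interprets the text under a different locale. It assumes the price is the 21st “|”-delimited field (column U) and uses a decimal comma.

function main(workbook: ExcelScript.Workbook) {
  const sheet = workbook.getActiveWorksheet();
  const used = sheet.getUsedRange();
  if (!used) return;
  const rowCount = used.getRowCount();

  // Read the raw "|"-delimited strings from column A.
  const raw = sheet.getRangeByIndexes(0, 0, rowCount, 1).getValues();

  const split = raw.map(row => {
    const parts = String(row[0]).split("|");
    return parts.map((part, i) => {
      // Column U is the 21st field (index 20). Convert "3,5500" to the number 3.55
      // ourselves, so the value is written as a number rather than re-parsed text.
      if (i === 20 && /^\d+,\d+$/.test(part)) {
        return Number(part.replace(",", "."));
      }
      return part;
    });
  });

  // Pad ragged rows so setValues receives a rectangular array, then write it back.
  const columnCount = split.reduce((max, row) => Math.max(max, row.length), 0);
  const rectangular = split.map(row => row.concat(Array(columnCount - row.length).fill("")));
  sheet.getRangeByIndexes(0, 0, rowCount, columnCount).setValues(rectangular);
}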
Intune disables Tamper Protection by default
We noticed a strange quirk about Intune and have repeatedly tested it across multiple tenants with freshly reinstalled workstations running Windows 10.
Normally, Intune, much like AD, should not apply policies unless given a policy to apply. But we noticed that by default Intune will always apply a policy to DISABLE Tamper Protection by group policy when devices are enrolled, unless you specifically create a configuration profile or similar to tell Intune to enable Tamper Protection on end devices.
This seems like strange behavior, and it is not documented anywhere on the Microsoft Learn website.
Also, if you run the Powershell command Get-MpComputerStatus you will see that TamperProtectionSource now gets listed as “Signatures” with no explanation. Again, there is no documentation about this type in Microsoft Learn or any other public KBs. The KBs only had information about other states such as UI, Transition, etc.
Is there a way to request Microsoft to provide documentation to fill in these important gaps in their knowledge base?
Selection column shows strange characters
I have a normal selection column in a Sharepoint list with the following values:
RD Value1
RD value2
RDMD value1
RDMD value2
However, in the list the values are displayed as follows:
;#RD Value1;#
;#RD Value2;#
RDMD value1
RDMD value2
I can’t explain why some values start with ;# and end with ;#. Does anyone know this phenomenon and how can I fix it?
Background of Calendar Item – SharePoint Calendar on Teams Tab
Hi,
I converted a classic SharePoint calendar to use a modern calendar view as the default view (instructions used) and then added it as a tab on Teams. The calendar itself is displayed as expected except for the background of the items. The background color is different from a regular Teams calendar and the contrast between the background and text color is bad. I tried to change the color using view formatting for the day but that is not working. Is this a bug on Teams or do I need to adjust something in SharePoint?
Linked data types broken again
Every once in a while I get the same error message “You need to be online to refresh your linked data types. Check your connection and try again.” See image:
This happens when I click on “Refresh All” for Data when trying to get a stock update. As shown in the next image, the last time it was successful for me was after market close 5/23/2024 19:59 EDT, though it might have worked the next day or a little longer if I wasn’t trying to do a more recent update before it stopped working.
Presumably it broke for everyone at the same time since this error has been reported in previous threads, including one last July with 10 pages of replies, and the consensus is that it was a server error.
Calculated Column based on answer from another field
Hello,
I am trying to create an enable/disable user account list/workflow. This list is tied to a workflow that sends an email out; that email is read by our TrackIT system and then put into a specific queue based on the title, so the title needs to be a specific thing. I’m trying to create a calculated column that equals a specific text, based on the response to another field. Basically, if the user selects “Create New Account” or “Disable New Account” I want the Calculated field text to be either A or B (just for this example).
Does anyone know how I could formulate this? I’ve been trying to do it on my own, but it’s quite difficult. I’m not sure where I’m going wrong.
Thanks,
Zayne
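For reference, a calculated column can return a fixed string per choice with a nested IF formula. Assuming the choice field is named Request Type (a placeholder; adjust the field name and output strings to match the actual list), something along the lines of =IF([Request Type]="Create New Account","A",IF([Request Type]="Disable New Account","B","")) is a common starting point.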
word on my mac
Hi all, can anyone help? When I open Word on my Mac it is very slow to respond; it is sluggish for 10 minutes, then starts to freeze and stops working. I cannot even close Word; I have to force quit it and restart my computer. This has only been happening for a few days, but it is very frustrating.
I’m a bit of an IT dinosaur, so could you be so kind as to explain in simple terms?
Kind regards
Evaluation Flows for Large Language Models (LLM) in Azure AI Studio
Large Language Models (LLMs) are incredibly useful for generating natural language texts for tasks like summarization, translation, question answering, and text generation. However, they aren’t always perfect and can sometimes produce outputs that are inaccurate, irrelevant, biased, or even harmful. That’s why it’s super important to evaluate the outcomes from LLMs to ensure they meet the quality and ethical standards required for their intended use.
Imagine you’re using an LLM to help create content for a website. Without proper evaluation, the model might generate text that doesn’t quite fit the tone you’re looking for, or worse, it might include biased or incorrect information. This is where evaluation flows come in handy. These systematic procedures help you assess and improve the LLM’s outputs, making it easier to spot and fix errors, biases, and potential risks. Plus, evaluation flows can provide valuable feedback and guidance, helping developers and users align the LLM’s performance with business goals and user expectations. By incorporating evaluation steps, you can ensure a more user-friendly and reliable experience for everyone involved.
In this article, we will explain what evaluation flows are, and how we can implement them in Azure AI Studio. We will start by pointing out the motivations for evaluating outcomes from LLMs and provide some examples of business situations where the absence of evaluation can cause problems for the business. Then, we will describe the main components and steps of evaluation flows and show how to use Azure AI Studio to create and execute evaluation flows for LLMs. Finally, we will discuss some best practices and challenges of evaluation flows and provide some resources for further learning.
Motivations for Evaluating Outcomes from LLMs
Evaluating the outcomes from Large Language Models (LLMs) is crucial for several reasons. First and foremost, it ensures the quality and accuracy of the texts they generate. Imagine using an LLM to create product descriptions. Without proper evaluation, the model might produce descriptions that are misleading, inaccurate, or irrelevant to the product’s features. By evaluating the LLM’s outputs, you can catch and correct these errors, improving the overall quality and accuracy of the text.
Another important aspect is ensuring the ethical and social responsibility of the generated texts. LLMs can sometimes produce biased, offensive, harmful, or even illegal content. For instance, if an LLM is used to write news articles, it might inadvertently generate text that is racist, sexist, or defamatory. Evaluating the outputs helps identify and mitigate these biases and risks, ensuring the texts are ethical and socially responsible.
It’s essential to ensure that the LLM’s outputs align with business goals and user expectations. Picture an LLM generating marketing emails. Without evaluation, these emails might come across as too formal, too casual, or just too generic, missing the mark entirely. By assessing the outputs, you can optimize their impact and relevance, making sure they effectively meet the business’s objectives and resonate with the target audience.
Failing to evaluate LLM outputs can lead to serious problems for a business. For instance, if the generated texts are low-quality, unethical, or irrelevant, customers and users may lose trust and interest. Consider an LLM that produces fake or biased product reviews. Customers would likely stop trusting these reviews and might even turn to competitors.
Moreover, if the LLM generates harmful, offensive, or illegal content, the business could face legal, regulatory, or social repercussions. Imagine an LLM generating defamatory or false news articles; the business could end up facing lawsuits, fines, or boycotts, severely damaging its reputation and credibility.
Finally, the effectiveness of LLM outputs directly impacts a business’s competitive advantage and profitability. If the texts aren’t persuasive, personalized, or engaging—like in marketing emails—the business might fail to boost sales, conversions, or retention rates, ultimately losing its edge in the market.
Best Practices and Challenges of Evaluation Flows
Evaluation flows for LLMs are not trivial or straightforward, and they involve various best practices and challenges that users should be aware of and address, such as:
Defining clear and realistic evaluation goals and objectives. Users should specify what they want to evaluate, why they want to evaluate, and how they want to evaluate, and align their evaluation goals and objectives with the business goals and user expectations.
Choosing appropriate and reliable evaluation data and metrics. Users should select data and metrics that are representative, diverse, and sufficient for the evaluation task, and ensure that they are relevant, reliable, and valid for the evaluation task.
Choosing appropriate and robust evaluation methods and actions. Users should select methods and actions that are appropriate, robust, and scalable for the evaluation task, and ensure that they are transparent, explainable, and accountable for the evaluation results and impact.
Conducting iterative and continuous evaluation. Users should conduct evaluation in an iterative and continuous manner, and update and refine their evaluation data, metrics, methods, and actions based on the feedback and findings from the evaluation.
Collaborating and communicating with stakeholders. Users should collaborate and communicate with various stakeholders, such as developers, users, customers, and regulators, and involve them in the evaluation process and outcomes, and address their needs, concerns, and expectations.
Evaluation flows for LLMs are an essential and valuable part of the LLM lifecycle, and they can help users to ensure the quality, ethics, and effectiveness of the LLM outputs, and to achieve the desired outcomes and objectives of the LLM use cases. In general, there are several different ways to implement evaluation flows, but the best strategy will always rely on using proper tools for managing, deploying, and monitoring both the LLM’s behavior and its evaluation flows. And we have the proper tool to do that.
How to evaluate LLM models using Azure AI Studio?
Azure AI Studio is a cloud-based platform that enables users to build, deploy, and manage LLM models and applications, and it provides various features and tools that support evaluation flows for LLMs.
To get started, you just have to provide a name and select the scenario for the evaluation project:
You can also select a Prompt Flow for the evaluation project. Although optional, using prompt flow as an orchestrator is suggested because of its capacity to manage the connections and requirements of each evaluation step, as well as to log and properly decouple each step of the flow. In this example, we use a prompt flow solution to provide the model’s answers to the questions so they can be evaluated against the ground truth. Feel free to use your own assistant here. Basically, the idea is to generate the answer for a given input (you can also pre-process this data and provide it in the dataset):
Next, it’s time to select the dataset for the evaluation project. In the Azure-Samples/llm-evaluation (github.com) repository we provide some methods to synthetically generate the data based on a retrieval index. It’s important to note that, since the task at hand is an evaluation based on reference texts and/or well-defined text outcomes, the data used for the task should be a set of questions and answers that allows the flow to tag what is improper or proper and how well a given response fits the provided reference.
Since these metrics are based on reference values for a well-defined set of texts (for QnA tasks, question-answer pairs), the reference set can be a limited number of texts that still covers a wide range of situations, so a single file containing these reference question-and-answer pairs is all that is needed. Here is an example of a QnA dataset. In this case, we only have question/ground_truth pairs, given that the answer will be provided by the prompt flow assistant. Use Add your dataset to upload the file:
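As a rough illustration of the expected shape (the questions and answers below are made up, and the column names should match whatever you map in the evaluation wizard), a JSON Lines file with one question/ground_truth pair per line might look like this:

{"question": "What is the retention period for audit logs?", "ground_truth": "Audit logs are retained for 90 days."}
{"question": "Which regions support zone redundancy?", "ground_truth": "Zone redundancy is supported in West Europe and East US."}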
In our example, we use a Prompt Flow solution to provide the model’s answers to the questions, which are then evaluated against the ground truth. For this, we select the dataset column $(data.question). As in any evaluation job, you’ll need to define the metrics that you want to apply. Not only can you rely on Azure AI Studio’s built-in metrics, but you can also include extra metrics and strategies, including those that depend on evaluating the embeddings. For that, proceed as follows:
We use GPT-4 as the model for the evaluation project due to its stronger inference and cognitive capabilities, but the proper choice depends on the task at hand. You can select the model from the list of available models. We also added Risk and safety metrics to mitigate any potential risks regarding the model’s misuse.
As mentioned earlier, we use the Prompt Flow solution to provide the model’s answers to the questions. This is optional; you could just use the default engine for the evaluation, but in this case we map the output to the GPT similarity metric to evaluate the model’s answers against the ground truth:
Finally, you can submit the evaluation project and view the results.
In our example dataset we provided two samples with incorrect ground truth answers to exercise the GPT similarity metric. Notice that the results show low scores for these two samples, which is expected since the ground truth answers are incorrect.
We can see that the similarity scores for the two incorrect samples are very low, given the incorrect ground truth labels. In the real world, it is expected that models can produce wrong answers, contrasting with correct ground truth values. Finally, the main dashboard in your Azure AI Studio project gives the average score for each metric.
Resources
Evaluation of generative AI applications
How to evaluate generative AI apps with Azure AI Studio
What to do with document “content” from docx file
Hello,
I can successfully connect to and call a Graph API to retrieve a document from a sharepoint document library using
https://graph.microsoft.com/v1.0/sites/{siteId}/drives/{driveId}/items/{itemId}/content
there is a screenshot below of the returned value using postman for a sample
What exactly is this?
What am I supposed to do with this?
A lot of googling seems to imply I need to write this to a local docx file on my system, but is there no way to convert/parse this to JSON or something so I can query it directly?
Ultimately the end goal will be do something like
“external system places a document in a sp folder”
“use graph to get the document”
“do some magic with the document gotten from graph”
write a test to say “does the document contain the text ‘my string value'”
I am using typescript with playwright to write automated tests
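The /content endpoint returns the raw bytes of the .docx file (which is a ZIP of XML parts), not JSON, which is why Postman shows what looks like binary noise. One way to check for a string without writing a temporary file is sketched below; it assumes a Node 18+ environment with global fetch and the third-party mammoth package for docx-to-text extraction (other libraries would work equally well), and reuses the same Graph URL placeholders from the question.

import mammoth from "mammoth"; // third-party docx-to-text package, assumed installed from npm

const contentUrl =
  "https://graph.microsoft.com/v1.0/sites/{siteId}/drives/{driveId}/items/{itemId}/content";

export async function documentContainsText(accessToken: string, needle: string): Promise<boolean> {
  // /content streams the raw .docx bytes, so read the response as a buffer, not as text/JSON.
  const response = await fetch(contentUrl, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!response.ok) {
    throw new Error(`Download failed: ${response.status}`);
  }
  const buffer = Buffer.from(await response.arrayBuffer());

  // Extract plain text from the .docx in memory, then search it.
  const { value: text } = await mammoth.extractRawText({ buffer });
  return text.includes(needle);
}

In a Playwright test this can then be asserted directly, for example expect(await documentContainsText(token, "my string value")).toBe(true).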
Expand a single row
Hello everyone !
I need to validate something with you.
In a pivot table, I have the same row in multiple sections (e.g., product 1 in 5 different stores). I would like to expand the row for product 1 in store 1 without expanding the rows for product 1 in the other 4 stores.
In my case, when I expand product 1 in store 1, it also expands product 1 in the other stores.
Is it possible ?
Thank you
I have an Issue with event DropDown in combobox in Access’s form
In a form, I have a combo box that changes dynamically, i.e., during execution each time I select it. Everything works fine except for the DropDown, which does not “auto-expand.” I am sending the code I am using.
Thanks
Const MarcaSQL1 As String = "SELECT pres_marca.Marca FROM pres_marca;"
Const MarcaSQL2 As String = "SELECT pres_modelo.Descripcion FROM pres_modelo WHERE (((pres_modelo.marca)=[textomarca]));"
Const MarcaSQL3 As String = "SELECT pres_cisterna.descripcion, pres_cisterna.Marca FROM pres_cisterna WHERE (((pres_cisterna.Marca) = [Textomarca])) ORDER BY pres_cisterna.Cisterna;"
Private Sub Form_Current()
Seleccion = 1
Caracteristicas.RowSource = MarcaSQL1
Caracteristicas.Requery
End Sub
Private Sub Caracteristicas_AfterUpdate()
If Seleccion = 1 Then
Textomarca = Caracteristicas
Seleccion = 2
Caracteristicas.RowSource = MarcaSQL2
Caracteristicas.setfocus
DoCmd.GoToControl "Caracteristicas"
Caracteristicas.Dropdown
ElseIf Seleccion = 2 Then
Textomodelo = Caracteristicas
Seleccion = 3
Caracteristicas.RowSource = MarcaSQL3
Caracteristicas.Requery
Caracteristicas.setfocus
DoCmd.GoToControl "Caracteristicas"
Caracteristicas.Dropdown
ElseIf Seleccion = 3 Then
TextoCisterna = Caracteristicas
Seleccion = 1
Caracteristicas.RowSource = MarcaSQL1
Caracteristicas.Requery
Caracteristicas.setfocus
DoCmd.GoToControl "Caracteristicas"
Caracteristicas.Dropdown
End If
End Sub
Action ‘terminate’- “Enter custom value” not working
Hello,
the action ‘terminate’ provides the possibility to enter a custom value.
But every trial ended up with an error.
Can anyone give me a hint, how to use this?
Br.
Microsoft Graph API: can’t filter by companyName
I am trying to filter users by the ‘companyName’ field and always receive a ‘400 Bad Request’ error. As suggested in many similar issues, I added ‘$count=true’ and the ‘ConsistencyLevel=eventual’ header, but still no luck. Is it possible at all to filter users in my tenant by this field?
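For comparison, the shape that normally works for this property is the advanced-query combination sketched below in TypeScript: the ConsistencyLevel header and $count=true must be on the same request as the $filter. This is only a sketch (token acquisition and escaping of the company value are out of scope, and the company name is a placeholder); if the request already matches this shape, the body of the 400 response usually names the offending part of the query.

export async function usersByCompany(accessToken: string, company: string) {
  // companyName is typically filterable only with advanced query parameters:
  // the ConsistencyLevel: eventual header plus $count=true on the same request.
  const url =
    "https://graph.microsoft.com/v1.0/users" +
    `?$filter=companyName eq '${company}'` + // escape/encode the value in real code
    "&$count=true&$select=id,displayName,companyName";

  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      ConsistencyLevel: "eventual",
    },
  });
  if (!response.ok) {
    throw new Error(`${response.status}: ${await response.text()}`);
  }
  return (await response.json()).value;
}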
Unable to get user details after Notification Bot is app installed
I have a Notification Bot which is currently deployed to my organization, and all the users are able to see it and add it to their Teams apps individually. I want to send a welcome message to all users who have installed the app. I have an endpoint added in the Bot Framework which gives me an update whenever someone adds or removes the app in their Teams app. But whenever someone installs the app, I get this error: “Failed to acquire token”. I have tried to install the app from the Teams admin center to all users, as well as from a particular user by going to “Teams store > Built for your organization > App > click on “Add””. Both options give me the same error.
Here is the Bot permission given by admin in Portal Azure
I’ve added the Application Id as my BotId according to the documentation and BotSecret as well
Please help me in solving this problem.
P.S. I don’t know what this “d6d49420-xxxx-xxxx-xxxx-xxxxxx” represents, while the former one is my BotId, i.e., “28c8b3b1-xxxx-xxxx-xxxx-xxxxxxxx”.
Thanks in advance.
Outlook.live.com inbox not working and sending messages is working fine
What should I do if no messages arrive on outlook.live.com? I don’t have any Microsoft 365 package; is it necessary in order to receive messages in the web version? I can send messages and that works fine, but receiving messages does not. The email address is from an external server. In outlook.live.com it is not possible to set POP and IMAP for an external mailbox. What should I do? These are the only POP and IMAP settings visible in the screenshot.
Global Policy for Channel Edit and Delete Settings
I can’t believe there is not a global policy for this setting. We create many new Teams a month with members of multiple competency levels. Going into each Team individually and disabling the create, edit, and delete options is not effective on multiple levels. As it stands, by default in Teams, any member can delete an entire channel’s worth of information in a single click. The only option around this is to go into every single team and change a setting which affects all users, including Owners. How is there a messaging policy for preventing users from editing and deleting messages, but not one for channels?
Does anyone know if there are plans to add this option? I have read there is a PowerShell approach to disable this for all users, but that defeats the point of policies and adds issues when other admins need to make changes in the future.