Category Archives: Microsoft
Maximizing Performance: Leveraging PTUs with Client Retry Mechanisms in LLM Applications
Introduction
Achieving maximum performance in PTU (provisioned throughput unit) environments requires sophisticated handling of API interactions, especially when dealing with rate limits (429 errors). This blog post introduces a technique for maintaining optimal performance with Azure OpenAI’s API by intelligently managing rate limits. The method strategically switches between PTU and Standard deployments, enhancing throughput and reducing latency.
Initial Interaction
The client initiates contact by sending a request to the PTU model.
Successful Response Handling
If the response from the PTU model is received without issues, the transaction concludes.
Rate Limit Management
When a rate limit error occurs, the script calculates the total elapsed time by summing the elapsed time since the initial request and the ‘retry-after-ms’ period indicated in the error.
This total is compared to a predefined ‘maximum wait time’.
If the total time surpasses this threshold, the script switches to the Standard model to reduce latency.
Conversely, if the total time is below the threshold, the script pauses for the ‘retry-after-ms’ period before reattempting with the PTU model.
This approach not only manages 429 errors effectively but also ensures that your application’s performance is not hindered by unnecessary delays. The sketch below illustrates the decision flow.
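To make the flow above concrete, here is a minimal Python sketch of the pattern. It is not the actual smart_retry.py: the API version, threshold values, and helper name are illustrative assumptions, and the real script reportedly leans on the SDK’s built-in retries (see Key Constants below), whereas this sketch does the bookkeeping explicitly to show the decision.

```python
# Minimal sketch of the PTU-first, fall-back-to-Standard pattern (illustrative only).
import os
import time

from openai import AzureOpenAI, RateLimitError

PTU_MAX_WAIT = 4000   # assumed latency budget in milliseconds
MAX_RETRIES = 3       # assumed retry budget against the PTU deployment

client = AzureOpenAI(
    azure_endpoint=os.environ["OPENAI_API_BASE"],
    api_key=os.environ["OPEN_API_KEY"],
    api_version="2024-02-01",   # assumed API version
    max_retries=0,              # this sketch handles retries itself to apply the wait budget
)

def smart_completion(messages, ptu_deployment, standard_deployment):
    start = time.monotonic()
    for _ in range(MAX_RETRIES):
        try:
            # First choice: the PTU deployment.
            return client.chat.completions.create(model=ptu_deployment, messages=messages)
        except RateLimitError as err:
            # How long the service asks us to wait before retrying.
            retry_after_ms = int(err.response.headers.get("retry-after-ms", 1000))
            elapsed_ms = (time.monotonic() - start) * 1000
            if elapsed_ms + retry_after_ms > PTU_MAX_WAIT:
                # Waiting any longer would exceed the latency budget: fall back.
                break
            time.sleep(retry_after_ms / 1000)
    # Fallback: the Standard (pay-as-you-go) deployment.
    return client.chat.completions.create(model=standard_deployment, messages=messages)
```

The key design choice is that the fallback decision is made per request, using the elapsed time plus the advertised retry-after-ms period, so a request never waits past the latency budget before switching deployments.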
Benefits
Handling Rate Limits Gracefully
Automated Retry Logic: The script handles RateLimitError exceptions by automatically retrying after a specified delay, ensuring that temporary rate limit issues do not cause immediate failure.
Fallback Mechanism: If the rate limit would cause a significant delay, the script switches to a standard deployment, maintaining the application’s responsiveness and reliability.
Improved User Experience
Latency Management: By setting a maximum acceptable latency (PTU_MAX_WAIT), the script ensures that users do not experience excessive wait times. If the latency for the preferred deployment exceeds this threshold, the script switches to an alternative deployment to provide a quicker response.
Continuous Service Availability: Users receive responses even when the primary service (PTU model) is under heavy load, as the script can fall back to a secondary service (standard model).
Resilience and Robustness
Error Handling: The approach includes robust error handling for RateLimitError, preventing the application from crashing or hanging when the rate limit is exceeded.
Logging: Detailed logging provides insights into the application’s behavior, including response times and when fallbacks occur. This information is valuable for debugging and optimizing performance.
Optimized Resource Usage
Adaptive Resource Allocation: By switching between PTU and standard models based on latency and rate limits, the script optimizes resource usage, balancing between cost (PTU might be more cost-effective) and performance (standard deployment as a fallback).
Scalability
Dynamic Adaptation: As the application’s usage scales, the dynamic retry and fallback mechanism ensures that it can handle increased load without manual intervention. This is crucial for applications expecting varying traffic patterns.
Getting Started
To deploy this script in your environment:
Clone this repository to your machine.
Install required Python packages with pip install -r requirements.txt.
Configure the necessary environment variables:
OPENAI_API_BASE: The base URL (endpoint) of your Azure OpenAI resource.
OPEN_API_KEY: Your Azure OpenAI API key.
PTU_DEPLOYMENT: The deployment ID of your PTU model.
STANDARD_DEPLOYMENT: The deployment ID of your standard model.
Adjust the MAX_RETRIES and PTU_MAX_WAIT constants within the script based on your specific needs.
Run the script using python smart_retry.py.
Key Constants in the Script
MAX_RETRIES: This constant governs the number of retries the script will attempt after a rate limit error, utilizing the Python SDK’s built-in retry capability.
PTU_MAX_WAIT: This constant sets the maximum allowable time (in milliseconds) that the script will wait before switching to the Standard deployment to maintain responsiveness.
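As a rough illustration of how these constants and the environment variables from the Getting Started section fit together (the values and API version below are assumptions, not the script’s exact defaults):

```python
# Illustrative wiring of the environment variables and constants (assumed values).
import os

from openai import AzureOpenAI

MAX_RETRIES = 3      # passed to the SDK so it retries transient errors, including 429s
PTU_MAX_WAIT = 4000  # milliseconds; beyond this the script switches to the Standard deployment

client = AzureOpenAI(
    azure_endpoint=os.environ["OPENAI_API_BASE"],
    api_key=os.environ["OPEN_API_KEY"],
    api_version="2024-02-01",   # assumed API version
    max_retries=MAX_RETRIES,    # the SDK's built-in retry capability mentioned above
)

ptu_deployment = os.environ["PTU_DEPLOYMENT"]
standard_deployment = os.environ["STANDARD_DEPLOYMENT"]
```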
By leveraging this smart retry mechanism, you can ensure your application’s performance remains optimal even under varying load conditions, providing a reliable and efficient user experience.
Conclusion
The Python script for Azure OpenAI discussed here is a critical tool for developers looking to optimize performance in PTU environments. By effectively managing 429 errors and dynamically switching between deployments based on real-time latency evaluations, it ensures that your applications remain fast and reliable. This strategy is vital for maintaining service quality in high-demand situations, making it an invaluable addition to any developer’s toolkit.
Windows 11 not capable
We are attempting to identify assets in our environment that are Windows 11 capable. According to our Intune reporting we have 800 where the Sys Req Issues field says “System Drive Size.” The assets in question have more than 200GB of hard drive space available.
Any insight would be greatly appreciated.
Cannot create channel with CreateTeam Graph API cmd
Hello,
Since yesterday, I have been unable to create channels in Teams using the Graph API command.
It only creates the teams with the default General channel, and occasionally it does create the channels specified in the body of the Graph command.
This behavior has been random since I recreated the custom connector in Power Automate. I then completely recreated the custom connector with a new redirection URL, but it continues to function erratically. Do you have any ideas on this matter? I have exhausted all my resources.
Thank you in advance. 🙏
Create team – Microsoft Graph v1.0 | Microsoft Learn
Copy activity in Azure Data Factory silently fails
I am using the ADF Copy Activity to extract an Office365 mailbox to Blob Storage. I have the pipeline configured and running but it seems that it extracts incomplete data:
The source inputs a filter on date ranges. Specifically I am filtering on recievedDateTime.
Whenever I use a time range larger than 1 month, I am getting very little data in the blob storage (less than 20 MB) and the line count is always a suspiciously round number (600, 800, …).
When I use smaller intervals, I get as much as 200 MB per month of data with a ‘regular’ non-round number of a few thousand emails.
I know for sure this is not accidental- there is no reduced activity in the mailbox at the ‘problematic’ time range.
I am new to ADF, so I am reaching out to ask: how can I even troubleshoot this?
I’ve tried enabling logs but I do not see anything interesting there (as if they are not being written).
My backup plan is to perform the extraction in small intervals and perhaps automate this process, but I am looking for something that “just works”.
Teams Auto Attendant – Dial by Extension transfer to Resource Accounts
For the past couple of years, we have been able to populate the Business Phone field of the licensed Resource Account used for auto attendants or call queues, with an extension. Some are the full e.164 +17805551000;ext=1234 or just x1234, and to be clear, this is just in the Business Phone field in Entra, nothing to do with the LineURI.
Up until this week, calling into an AA with Dial by Extension and entering the extension assigned to a resource account, the transfer has worked. Our client can’t have been the only one who has used this feature.
Anyone else using this and suddenly it’s not working?
Partner Alert: FY25 Partner Activities Update
Summary
Partner engagement with Business Applications Partner Activities was at an all-time high in FY24, with a record number of workshops delivered to customers to ensure successful partner-led implementation and adoption of Dynamics 365 and Power Platform. Looking ahead to FY25, we’ll be building on this momentum with improvements to the partner and customer experience. Based on partner feedback, we are making the following changes:
Extending access to partner engagements to the full fiscal year
Aligning the engagement portfolio with solution plays
Providing more consistent access to activities across markets
FY25 updates
Extension of partner engagements to full fiscal year
FY24 engagements will close on June 30, and FY25 engagement nominations will open on July 1. This change eliminates any pre-commit period, providing uninterrupted access to claiming and executing workshops from the first day of the fiscal year to the last day of the fiscal year.
Solution play-aligned engagements
Starting July 1st, we will be transitioning our portfolio of funded engagements to align with our solution plays. New engagements to watch for on July 1 include:
Low Code Vision & Value: Build customer intent to innovate with AI in low-code environments. Provide guidance on how to develop a transformation vision, prioritize scenarios, define value, and accelerate adoption.
ERP Vision & Value: Drive customer intent to modernize their on-premises ERP systems with Dynamics 365. Develop a well-crafted vision of the customer’s future state with clear business outcomes and success metrics.
Customer Engagement Vision & Value: Build customer confidence by showcasing the vision and strategy for an AI-powered transformation through CRM migration to Dynamics 365.
Low Code Solution Accelerator: Accelerate post-sales adoption of low-code solutions and ensure customer value realization. Develop stakeholder buy-in through prioritized scenarios and demonstrated solution value.
*To view all available engagements in FY25, refer to the updated MCI Program Guide launching July 1.
More consistent access across markets
In FY25 we want to ensure customers in all geographies have access to the benefits offered by partner-led workshops. To achieve this, we’ll be implementing caps on the number of workshops eligible for credit per partner in any single country. These caps will be announced on July 1.
Immediate FY24 updates
To help facilitate the transition to the new FY25 activity portfolio, we are making some changes to FY24 engagements. Effective immediately, the following engagements are being retired:
Solution Assessment
Dynamics 365 Sales – Needs Assessment
Dynamics 365 Success Planning
Dynamics 365 Success by Design Performance Check
Dynamics 365 Value Realization
Dynamics 365 Solution Optimization
Power Platform Center of Excellence
Power Platform Pro Dev Success Enablement
Partners will no longer be able to submit new claims for the above workshops. However, these changes do not impact active claims. Partners can complete delivery of any workshops that have already been approved in accordance with the program rules.
We have capped the number of Envisioning and Business Value Assessment workshops that a partner is allowed to complete in any single country. These caps, outlined below, are effective immediately.
Workshop caps per partner, per country:
Business Value Assessment – Variable Payout: 20 (Market A countries), 15 (Market B countries), 10 (Market C countries)
Envisioning – Variable – Variable Payout: 30 (Market A countries), 15 (Market B countries), 10 (Market C countries)
Please review the MCI Incentives Guide for a list of Market Area Countries (page 149)
Next Steps
Thank you for your continued engagement with Business Applications Partner Activities. We’re excited about all the improvements coming in FY25 and look forward to sharing more information in the coming weeks. Please attend the June 14 Partner Office Hours and Partner Activities webinar series happening at the end of June to learn more about the upcoming changes.
In the meantime, you can keep claiming and executing FY24 engagements for the remainder of June. You’ll be able to submit nominations for FY25 engagements starting July 1!
Call to Action
Register for the June office hours and deep-dive webinar series to learn about FY25 updates:
June 14 (8am PST): Partner Activities Office Hours
June 24 (8am PST): Low Code Vision & Value Deep Dive
June 25 (8am PST): Customer Engagement Vision & Value Deep Dive
June 26 (8am PST): ERP Vision & Value Deep Dive
June 27 (8am PST): Low Code Solution Accelerator Deep Dive
Bookmark the Business Applications Partner Activities page to stay updated with the latest resources and program announcements for FY25
Submit a query for Partner Activities Tier 1 support and any other feedback or questions
Bookmark the Partner Center – Microsoft Commerce Incentives (MCI) Engagements Workspace
Review the MCI Program Guide and Resources
Stay Connected with Business Applications Partner Resources
NEW! Sign up for the Dynamics 365 and Power Platform Partner Newsletters
Follow the Dynamics 365 and Power Platform partner LinkedIn channels
Bookmark the Dynamics 365 and Power Platform Partner Hub pages
Join and engage in the Business Applications Microsoft Partner Community
Google Drive Migration Service Not Available
I have come across an error when trying to perform a Drive migration in the SharePoint admin center. I have all the correct permissions in both the Google and Microsoft environments, but after installing the M365 migration app in the Google workspace, I am unable to proceed.
Does anyone else have this error? Is the service actually down? Or is this an error on my side?
Thanks for the help!
Azure App that was working with MS SQL Server database now not working with Azure SQL Database
We had a web page app in Azure that ran a form for data retrieval. When we removed the SQL server and moved the database to Azure SQL, the app no longer works. We tried the ADO.NET (SQL authentication) connection string from the database page, but the app is not working.
The error we are getting is “The page cannot be displayed because an internal server error has occurred.”
We are having trouble trying to figure out if we have a connection issue to the database or this is a problem with the app itself.
Any suggestions would be appreciated please.
Footnote numbers all change to 8 has been solved.
Microsoft indicates that the “Footnote numbers all change to 8” issue has been solved. This problem was fixed by a service change, which fully rolled out on May 29th, 2024.
I have large documents (500+ pages) where footnotes restart on every page. It has never been an issue. Now that Microsoft has fixed the issue, how do the documents get corrected?
I have automatic updates on and updates have been made, just not this one specifically. Thanks!
Usage of surrogate keys
Is it good practice to always use auto-incremented primary keys to identify entities in a relational database? For example, if an order is identified by a composite key of three columns (customer_id, orderdateTime, orderItem), shouldn’t I just make a new surrogate PK order_id which identifies all other attributes?
AMA on client devices
We have followed the guidance outlined below to get AMA installed and working on a few test client devices and they are sending logs to the Event table in our Sentinel workspace.
https://learn.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-windows-client
The problem we face is with the Windows Security Events via AMA connector. Is there a supported way to get client devices to populate security events into the SecurityEvent table? I see the events in the ‘Event’ table but not the SecurityEvent table. It seems like the Sentinel security events connector only sees DCRs that are created in Sentinel; it does not see DCRs that are created outside of Sentinel. Is that a bug or by design?
Any guidance is appreciated, we have had data in SecurityEvent from client devices via MMA for a few years and expected to be able to continue to ingest them properly via AMA.
Cloud security posture and contextualization across cloud boundaries from a single dashboard
Introduction:
Have you ever found yourself in a situation where you wanted to prioritize the riskiest misconfigurations on cloud workloads across Azure, AWS, and GCP? Have you ever wondered how to implement a unified dashboard for cloud security posture across a multicloud environment?
This article covers how you can achieve these scenarios by using Defender Cloud Security Posture Management’s (CSPM) native support for resources inside Azure, and resources in AWS and/or GCP.
For more information about Defender for Cloud’s multicloud support you can start at https://learn.microsoft.com/en-us/azure/defender-for-cloud/multicloud
To help you understand how to use Defender for Cloud to prioritize the riskiest misconfigurations across your multicloud environment, all inside of a single dashboard, this article covers three topics in the following sequence:
Understanding the benefits of Defender CSPM for multicloud environments.
Implementing a unified security dashboard for cloud security posture.
Optimizing security response and compliance reporting.
Understand the benefits of Defender CSPM for multicloud environments:
When it comes to the plethora of cloud services at your disposal, certain resource types can be more at risk than others, depending on how they’re configured and whether they’re exploitable and/or exposed to the Internet. Besides virtual machines, resource types such as storage accounts, Kubernetes clusters, and databases come to mind.
Imagine you have a compute resource, like an EC2 instance, that is publicly exposed, has vulnerabilities, and can access other resources in your environment. Combined, these misconfigurations can represent a serious security risk, because an attacker might potentially use them to compromise your environment and move laterally inside of it.
For organizations pursuing a multicloud strategy, risky misconfigurations can even span public cloud providers. Have you ever found yourself in a situation where you use compute resources in one public cloud provider and databases in another public cloud provider? If an organization is using more than one public cloud provider, this can represent risk of attackers potentially compromising resources inside of one environment, and using those resources to move to other public cloud environments.
Defender CSPM can help organizations close off potential entry points for attackers by helping them understand what misconfigurations in their environment they need to focus on first (figure 1), and by doing that, increase their overall security posture and minimize the risk of their environment getting compromised.
By knowing what they need to focus on first, organizations can remediate misconfigurations faster and essentially do more with less, saving both time and resources. By identifying the organization’s critical assets and the potential threats to those assets, organizations can allocate resources more effectively and prioritize remediation efforts for business-critical resources. This helps them address vulnerabilities more quickly and reduces the overall risk to the organization.
Implement a unified security dashboard for cloud security posture:
Organizations pursuing a multicloud strategy often find themselves operating more than one public cloud environment and managing each in ways that differ across providers. This applies to security as well: you need to take into consideration different security configurations for each resource type in each cloud provider that you’re using.
In large environments, and especially for organizations pursuing a multicloud strategy, this can introduce security risks, particularly if there is a lack of visibility across the entire environment and if security is managed in silos.
This is also where standardization of cloud security posture across a multicloud estate can help. You need to be able to speak the same language across different public cloud providers, for example by using international standards and best practices, which can be a relevant reference point for senior management. Another is metrics or key performance indicators (KPIs): you must be able to measure progress and avoid confusion when reporting security status and vulnerabilities to senior management. One good approach here is to have a centralized CSPM solution (figure 2).
Having CSPM as part of a Cloud Native Application Protection Platform (CNAPP) helps organizations break down security silos and connect the dots between CSPM and other areas of CNAPP to paint a fuller picture.
Optimizing security response and compliance reporting:
Many security teams struggle with the sheer volume of security findings, and prioritization is crucial for effectively minimizing risk in an organization’s environment. I see organizations that are unable to prioritize their remediation efforts spending a lot of time and resources without getting their desired return on investment (ROI).
And ROI is important because it’s used to secure future budget allocations for cybersecurity initiatives. Therefore, it’s critical to have simple KPIs that showcase how efforts have prevented breaches, reduced downtime, and minimized financial losses. Several organizations that I work with mentioned a real need for a simple KPI that breaks complex security metrics down into something easy to understand, both for senior management and for business owners.
This way, management and business owners, who might not be experts in cybersecurity, can quickly understand why these efforts matter for protecting the business, why the remediation process needs to be prioritized, and why it is important to invest budget in this area.
Another struggle that I see is the need to identify the relevant owners in the organization, the people who own the resources on which an issue or security risk is detected. Ensuring workload owners understand the remediation steps and address the issues quickly is another key point that organizations need to consider. Many organizations already have existing processes in place for this, be it change management or an ITSM, so having a way to integrate with existing business processes and ITSM tools can help in this regard (figure 3).
Conclusion:
This article provides food for thought when it comes to prioritizing the riskiest misconfigurations across your multicloud environment, all inside of a single dashboard, by using Defender CSPM.
Reviewers:
Giulio Astori, Principal Product Manager, Microsoft
Better Debuggability with Enhanced Logging in Azure Load Testing
Debuggability of test scripts during load testing is crucial for identifying and resolving issues early in the testing process. It allows you to validate the test configuration, understand the application behavior under load, and troubleshoot any issues that arise. Today, we are excited to introduce Debug mode in Azure Load Testing, which enables running low scale test runs with better debuggability and enhanced logging.
Why Debug Mode?
Debug mode is designed to help you validate your test configuration and application behavior by running a load test with a single engine for up to 10 minutes. It provides debug logs for the test script, and request and response data for every failed request during the test run. This mode is powerful for troubleshooting issues with your test plan configuration.
Here are some key benefits of using Debug mode:
Validation: Debug mode allows you to validate your test configuration and application behavior before running a full-scale load test. This can save time and resources by identifying issues early.
Troubleshooting: With debug logs enabled, you can easily identify issues with your test script. This can be particularly useful when setting up complex test scenarios.
Detailed error analysis: Debug mode includes request and response data for every failed request during the test run. This can help you pinpoint the root cause of any issues and make necessary changes to your test script or application.
Resource efficiency: Tests run in debug mode are executed with a single engine and are limited to a maximum duration of 10 minutes. This can help you identify the number of virtual users that can be generated on one engine by monitoring engine health metrics.
How to Enable Debug Mode?
Enabling debug mode is simple and straightforward. You can enable it for your first test run while creating a new test or when running an existing test. Just select the Debug mode in the Basics tab while creating or running your test and you’re good to go!
Next steps
Debug mode lets you see more information about your load tests, so you can be confident that they run as expected at high scale. It’s recommended to run the first test run in debug mode. Get started with Azure Load Testing here. If you have already been using the service, you can learn more about debug mode here. If you have any feedback, let us know through our feedback forum.
Happy load testing!
can I use Microsoft Project Desktop client or pwa and Planner Premium with 1 p3 license in parallel?
My question is:
Can I utilize 1 license – p3 for example if I want to use both Planner Premium and Project (desktop or online instance)?
Is there any documentation about it? I want to make sure that there will be no problem using both solutions.
Server 2022 KB5037782 Failed Error 8024200B, 8007000D
Greetings!
Been trying to complete the May updates on my newly built Microsoft Windows Server 2022 that is offline. I installed the OS from the disk and I was able to install the latest updates from March 2024 and April 2024, but for some reason this specific update for May 2024 will not install. I have tried installing it from my WSUS server and even from the actual update file from the Microsoft Update Catalog. I have also tried resetting the Windows Update components, the CatRoot2 folder, and the SoftwareDistribution folder, but I still keep getting these errors. I am working on a second server install and I will see if I get the same error. Has anyone else seen this problem in the wild?
2024-05 Cumulative update for Microsoft server operating system version 21H2 for x64-based Systems (KB5037782).
The WindowsUpdateLog shows the following
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 757
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 708
Deployment *FAILED* [8007000D] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3103]
Deployment *FAILED* [8024200B] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3122]
Deployment *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientupdatedeploymentlibupdatedep
Migrating users from on-prem AD to AzureAD only
Hello,
We are in the process of migrating to AzureAD for all users and devices.
Users are currently synced from on-prem AD to AzureAD using the Azure Directory Sync tool.
We don’t have a significant number of users, and so use a manual process, that has problems.
To migrate users, our current process is as follows:
1. Move the user in on-prem AD to an OU that is not part of the Directory Synchronisation
2. Run a delta sync on the Sync Tool
3. In AzureAD, the user is deleted. We manually re-enable them.
The problem is that in carrying out this process, the user is removed from all the Teams Private Channels that they were a member of (they retain the overall team membership).
Is there a better way to break the AD sync for a user, retaining them in AzureAD and also retaining all their private channel memberships?
Thanks in advance.
Windows Device Configuration Profiles
Is it possible to create a Windows Device Configuration Profile that will push a particular wallpaper to the device if the user’s Entra profile > Department value is “MHS”, for example?
Currently, we are sending a District branded lock screen and desktop image based on the device name but are wondering if this can be even more dynamic based on the primary user’s Entra profile, instead? We have different lock screens using this same logic for Staff, Student and Long-Term Subs.
Thank you for considering!
Is there a roadmap for Windows 11 to provide the same full personalization features Windows 10 does?
Is there a roadmap for Windows 11 to provide the same personalization features Windows 10 does?
How to Plot Blood Pressure over date time?
I am trying to plot my husband’s blood pressure over date and time.
So that my doctor can see what date he took his blood pressure and what time of day. I’m going to have to use the top number to do the plot, and I was thinking I’d put the full reading in the label somehow, maybe inside the bar if I did bar charting.
I tried doing this having one column with the Date Time format and then the top blood pressure.
I get weird plots. This is some sample data I am trying to plot. Any help greatly appreciated!!
Date Time          Top
5/25/2024 18:10    144
5/26/2024 8:15     131
5/27/2024 10:00    149
5/27/2024 18:00    167
5/28/2024 5:45     134
5/28/2024 21:00    152
5/29/2024 5:30     132
5/29/2024 20:30    140
5/30/2024 6:00     135
5/30/2024 21:50    149
5/31/2024 5:50     138
5/31/2024 18:00    138
The Evolution of GenAI Application Deployment Strategy: Building Custom Co-Pilot (PoC)
Building Custom Co-Pilot (PoC)
Azure OpenAI is a cloud service that allows developers to access the powerful capabilities of OpenAI while enjoying the security, governance, and other benefits of Azure. When moving on from the initial ideation phase and starting to move towards a Proof of Concept (PoC), Proof of Value (PoV), or Proof of Technology (PoT), there are a number of considerations that need to be made to ensure this phase is a success.
One of the most common applications of a PoC on Azure OpenAI is to build a custom co-pilot, a tool that can assist both internal employees and external users with a wide range of activities, such as summarization, content generation, or more technical tasks such as suggesting relevant code snippets, completing code blocks, or explaining code logic. This “co-pilot” approach is tried and tested at all levels of maturity across enterprises, as it is very low-hanging fruit, but fruit that offers real benefits to those who are both developing and using the application at the PoC phase.
Given the wide scope of technologies that can encompass this phase, I have divided everything into four defined approaches, each with its own pros and cons. To give a quick summary, each path takes a level of code, low-code, or no-code (or a combination of all three), depending on the level of customization you need beyond the degree of black box found within each approach.
That is not to say that one approach is better than another, but rather that for a PoC, where simplicity is appreciated, one could look at the greatest abstractions (such as no-code) as the first option to test, given the limited time sink, albeit at the cost of complexity, and then work through the approaches one by one to find a level of trade-offs that is acceptable. The primary aims of a PoC are to generate excitement between business and technology, to prove hypotheses of technology and value, and to drive towards the next phase; it is therefore really important to be able to iterate quickly and decipher where the current iteration is succeeding or failing, which is where the complexity found in the low-code and code-first approaches provides more value, but again, at more of a time sink.
Let’s talk through some of the approaches
Code-First:
Various packages, such as Microsoft’s Semantic Kernel or LangChain, allow for the orchestration of Azure OpenAI and other microservices to create a co-pilot solution. This allows for the greatest level of complexity through code, albeit with the greatest amount of time to set up and run.
Usually, these frameworks would sit either in the backend of the code, or run as an orchestrator through some level of abstraction/serverless compute offering, such as a function app.
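As a rough code-first sketch (assuming the LangChain Azure OpenAI integration; the deployment name, API version, and prompt are placeholders rather than a reference implementation):

```python
# Minimal code-first sketch: one chat call against an Azure OpenAI deployment via LangChain.
import os

from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",    # assumed API version
    azure_deployment="gpt-4o",   # placeholder deployment name
)

# In a fuller RAG co-pilot, retrieved document chunks would be prepended to the
# user question here; this sketch only shows the bare model call the orchestrator makes.
response = llm.invoke("Summarize our travel expense policy in three bullet points.")
print(response.content)
```

A backend service or a function app, as described above, would wrap calls like this together with retrieval, prompt construction, and any business logic.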
This deployment can be seen as robust and future-proof but could be overcomplicated at an earlier stage than required. The newly launched Azure AI Studio is a trusted platform that enables users to create generative AI applications and custom Copilot experiences. Typical PoCs at this stage explore typical use cases, such as “Chat with Data” or RAG (Retrieval Augmented Generation) patterns, which, given their tried and tested nature, can be comparatively easier to implement through our next pattern: Low-Code.
Low-Code:
This approach takes advantage of some of the “black box” approaches and integrations of Azure, abstracting away some of the difficulty in orchestrating microservices that is found in the purely code-first approach. A number of these are Prompt Flow and Copilot Studio. These offer a more streamlined approach towards a RAG-style co-pilot and allow the goal of a PoC to be achieved that much faster and more efficiently. A great example of this is found here.
Prompt Flow, as the orchestrator, offers special benefits through abstractions and prebuilt “nodes” that can streamline and automate a large amount of the code we would otherwise have to write, and even goes as far as one-click creation of complex data structures through automated embeddings and vector databases, massively speeding up this phase and bringing us closer to real value. A sketch of a custom Prompt Flow node is shown below.
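To show what the low-code path still lets you customize, here is a minimal sketch of a custom Python node for Prompt Flow; the node name, input shapes, and prompt wording are assumptions, and Prompt Flow generates and wires most of the surrounding flow for you.

```python
# Sketch of a custom Prompt Flow Python tool that assembles a RAG prompt from retrieved chunks.
from promptflow import tool


@tool
def build_rag_prompt(question: str, chunks: list) -> str:
    """Combine retrieved document chunks and the user question into a single grounded prompt."""
    # Assumes each retrieved chunk is a dict with a 'content' field.
    context = "\n\n".join(chunk["content"] for chunk in chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```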
No-Code:
Finally, we have a number of no-code accelerators for typical co-pilot scenarios that abstract everything through the GUI and allow us to very quickly adapt a predefined, siloed dataset into the base of knowledge that we need for a co-pilot PoC. The typical one is called “Chat with your Data”, available from both the Copilot Studio and Azure OpenAI portals.
From a PoC point of view, this really allows for the speed and efficiency of this stage to be realised. Without complex code, or specific knowledge around GenAI, this method really allows us to drive and focus on value, before potentially including more complexity in a later stage.
Hybrid:
This approach involves using a combination of the above approaches, depending on the complexity of the co-pilot. For example, a developer in this phase can use the code first approach to write the core logic of the code, and then use the no-code approach to generate additional code features or functionalities. A great example of this is using Prompt Flow, first starting to work on the solution in either a no-code or low-code approach, and then iterating through code subsequently.
The process depicted above shows how the MSFT team is actively involved in assisting our customers in choosing a PoC path, regardless of the PoC development methodology. We will support customers in assessing the strategy, considering factors such as the use case, skills, technology preference, viability, and timeline.
Summary
To summarize, this article describes four different approaches to developing a co-pilot using GenAI:
Code first: This approach involves writing the code manually and then using GenAI to improve it or add new features. This is suitable for developers who have prior experience with coding and want to have more control over the code quality and functionality.
Low-code: This approach uses tools such as Prompt Flow or Copilot Studio to abstract away much of the orchestration while still allowing custom code where needed. This is suitable for teams that want to reach a working RAG-style co-pilot quickly without building the full orchestration themselves.
No-code: This approach involves using a graphical interface or natural language to specify the requirements and then using GenAI to generate the code automatically. This is suitable for non-developers who want to create a co-pilot without writing any code and focus on the value proposition.
Hybrid: This approach involves using a combination of the above approaches, depending on the complexity of the co-pilot. For example, a developer can use the code first approach to write the core logic and then use the no-code approach to generate additional features or functionalities. This is suitable for developers who want to leverage the best of both worlds and iterate quickly.
Series: The next article will discuss considerations and approaches for moving from a GenAI PoC to an MVP.
Author: Morgan Gladwell
Co-Author: @arung
@Paolo Colecchia @Stephen Rhoades @Taonga_Banda @renbafa @morgan Gladwell