Category: News
Skilling snack: Plan, prepare, and deploy Windows 11
If you still haven’t made the move from Windows 10, the good news is that a well-executed upgrade can be seamless and user-friendly, all while carrying forward your investment in Windows 10. Check out the following resources to help you plan, prepare, and deploy Windows 11.
Time to learn: 134 minutes
LEARN
Plan to deploy updates for Windows clients and Microsoft 365 apps
Making a good plan starts with understanding what you’re planning for. In this learning module, you’ll learn all about the Windows servicing process, and how you can use it to optimize your update experience.
(46 mins)
Windows 10 + Microsoft 365 apps + Office + Compatibility
LEARN
Prepare to deploy updates for Windows client and Microsoft 365 apps
Now that you have your plan, you’re ready to make your preparations. This learning module will take you through using workstreams to get your organization ready for deployment.
(42 mins)
Windows 10 + Microsoft 365 apps + Office + Readiness
LEARN
Deploy updates for Windows client and Microsoft 365 apps
Do you know what to expect from your deployment phase? This learning module will equip you with everything you need to implement updates throughout your organization.
(46 mins)
Windows 10 + Microsoft 365 apps + Office + Deployment
BOOKMARK
Tune into this IT podcast, where you’ll hear interviews and discussions on the latest in Windows 11 innovations. Be on the lookout for talks on planning, preparing, and deploying Windows 11.
(time varies)
Windows 11 + Intune + Windows 365 + WUfB
As you work on preparing your IT department, review how you can also prepare people at your organization with the skilling snack on end-user readiness.
Once you deploy Windows 11 after careful planning and preparation, a new journey begins. Look forward to upcoming skilling snacks on how to manage your Windows 11 devices. Leave us a comment below with any suggestions for future skilling snacks.
Continue the conversation. Find best practices. Bookmark the Windows Tech Community, then follow us @MSWindowsITPro on X/Twitter. Looking for support? Visit Windows on Microsoft Q&A.
Microsoft Tech Community – Latest Blogs –Read More
Maximizing Performance: Leveraging PTUs with Client Retry Mechanisms in LLM Applications
Introduction
Achieving maximum performance in PTU environments requires sophisticated handling of API interactions, especially when dealing with rate limits (429 errors). This blog post introduces a technique that exemplifies how to maintain optimal performance using Azure OpenAI’s API by intelligently managing rate limits. This method strategically switches between PTU and Standard deployments, enhancing throughput and reducing latency.
Initial Interaction
The client initiates contact by sending a request to the PTU model.
Successful Response Handling
If the response from the PTU model is received without issues, the transaction concludes.
Rate Limit Management
When a rate limit error occurs, the script calculates the total elapsed time by summing the elapsed time since the initial request and the ‘retry-after-ms’ period indicated in the error.
This total is compared to a predefined ‘maximum wait time’.
If the total time surpasses this threshold, the script switches to the Standard model to reduce latency.
Conversely, if the total time is below the threshold, the script pauses for the ‘retry-after-ms’ period before reattempting with the PTU model.
This approach not only manages the 429 errors effectively but also ensures that the performance of your application is not hindered by unnecessary delays.
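The post describes the logic but does not include the script itself, so here is a minimal, self-contained sketch of the retry-and-fallback decision described above. The `RateLimited` exception class, the injected `call_ptu`/`call_standard` callables, and the 4000 ms budget are illustrative assumptions; in the real script the calls would go through the Azure OpenAI SDK and the wait hint would come from the `retry-after-ms` header on the 429 response.

```python
import time

PTU_MAX_WAIT_MS = 4000  # assumed latency budget (ms) before falling back


class RateLimited(Exception):
    """Stand-in for the SDK's RateLimitError; carries the retry-after-ms hint."""

    def __init__(self, retry_after_ms):
        self.retry_after_ms = retry_after_ms


def smart_retry(call_ptu, call_standard, max_wait_ms=PTU_MAX_WAIT_MS, sleep=time.sleep):
    """Try the PTU deployment first; on 429s, retry while the elapsed time
    plus the suggested wait stays under max_wait_ms, then fall back to the
    Standard deployment to keep latency bounded."""
    start = time.monotonic()
    while True:
        try:
            return call_ptu()
        except RateLimited as err:
            elapsed_ms = (time.monotonic() - start) * 1000
            if elapsed_ms + err.retry_after_ms > max_wait_ms:
                # Waiting would blow the latency budget: switch deployments.
                return call_standard()
            # Budget allows it: honor the suggested wait, then retry the PTU.
            sleep(err.retry_after_ms / 1000)
```

Injecting the two calls (and the `sleep` function) keeps the decision logic testable without a live endpoint; the real script would close over an Azure OpenAI client and two deployment names instead.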
Benefits
Handling Rate Limits Gracefully
Automated Retry Logic: The script handles RateLimitError exceptions by automatically retrying after a specified delay, ensuring that temporary rate limit issues do not cause immediate failure.
Fallback Mechanism: If the rate limit would cause a significant delay, the script switches to a standard deployment, maintaining the application’s responsiveness and reliability.
Improved User Experience
Latency Management: By setting a maximum acceptable latency (PTU_MAX_WAIT), the script ensures that users do not experience excessive wait times. If the latency for the preferred deployment exceeds this threshold, the script switches to an alternative deployment to provide a quicker response.
Continuous Service Availability: Users receive responses even when the primary service (PTU model) is under heavy load, as the script can fall back to a secondary service (standard model).
Resilience and Robustness
Error Handling: The approach includes robust error handling for RateLimitError, preventing the application from crashing or hanging when the rate limit is exceeded.
Logging: Detailed logging provides insights into the application’s behavior, including response times and when fallbacks occur. This information is valuable for debugging and optimizing performance.
Optimized Resource Usage
Adaptive Resource Allocation: By switching between PTU and standard models based on latency and rate limits, the script optimizes resource usage, balancing between cost (PTU might be more cost-effective) and performance (standard deployment as a fallback).
Scalability
Dynamic Adaptation: As the application’s usage scales, the dynamic retry and fallback mechanism ensures that it can handle increased load without manual intervention. This is crucial for applications expecting varying traffic patterns.
Getting Started
To deploy this script in your environment:
Clone this repository to your machine.
Install required Python packages with pip install -r requirements.txt.
Configure the necessary environment variables:
OPENAI_API_BASE: The base URL of the OpenAI API.
OPEN_API_KEY: Your OpenAI API key.
PTU_DEPLOYMENT: The deployment ID of your PTU model.
STANDARD_DEPLOYMENT: The deployment ID of your standard model.
Adjust the MAX_RETRIES and PTU_MAX_WAIT constants within the script based on your specific needs.
Run the script using python smart_retry.py.
Key Constants in the Script
MAX_RETRIES: This constant governs the number of retries the script will attempt after a rate limit error, utilizing the Python SDK’s built-in retry capability.
PTU_MAX_WAIT: This constant sets the maximum allowable time (in milliseconds) that the script will wait before switching to the Standard deployment to maintain responsiveness.
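As a rough illustration of the configuration step, the sketch below wires up the environment variables and constants the post names. The default values are placeholders, and the real `smart_retry.py` presumably passes these settings to the Azure OpenAI client rather than just collecting them:

```python
import os

# Placeholder defaults so the sketch runs standalone; in practice these
# come from your Azure OpenAI resource and deployments.
os.environ.setdefault("OPENAI_API_BASE", "https://example-resource.openai.azure.com/")
os.environ.setdefault("OPEN_API_KEY", "placeholder-key")
os.environ.setdefault("PTU_DEPLOYMENT", "gpt-4-ptu")
os.environ.setdefault("STANDARD_DEPLOYMENT", "gpt-4-standard")

MAX_RETRIES = 3      # SDK-level retries after a rate-limit error (tune to taste)
PTU_MAX_WAIT = 4000  # milliseconds; above this, switch to the Standard deployment


def load_config():
    """Collect the required settings, failing fast if any are missing."""
    required = ("OPENAI_API_BASE", "OPEN_API_KEY",
                "PTU_DEPLOYMENT", "STANDARD_DEPLOYMENT")
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    return {name: os.environ[name] for name in required}
```

Failing fast on missing variables is a small design choice that surfaces misconfiguration at startup instead of as a confusing authentication error mid-request.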
By leveraging this smart retry mechanism, you can ensure your application’s performance remains optimal even under varying load conditions, providing a reliable and efficient user experience.
Conclusion
The Python script for Azure OpenAI discussed here is a critical tool for developers looking to optimize performance in PTU environments. By effectively managing 429 errors and dynamically switching between deployments based on real-time latency evaluations, it ensures that your applications remain fast and reliable. This strategy is vital for maintaining service quality in high-demand situations, making it an invaluable addition to any developer’s toolkit.
I need help with this one, please. I need to add each element of matrix A to the corresponding element in matrix B.
MATLAB Answers — New Questions
Windows 11 not capable
We are attempting to identify assets in our environment that are Windows 11 capable. According to our Intune reporting, we have 800 where the Sys Req Issues field says “System Drive Size.” The assets in question have more than 200 GB of hard drive space available.
Any insight would be greatly appreciated.
Cannot create channel with CreateTeam Graph API cmd
Hello,
Since yesterday, I have been unable to create channels in Teams using the Graph API command.
It only creates the teams with the default General channel, and occasionally it does create the channels specified in the body of the Graph command.
This behavior has been random since I recreated the custom connector in Power Automate. I then completely recreated the custom connector with a new redirection URL, but it continues to function erratically. Do you have any ideas on this matter? I have exhausted all my resources.
Thank you in advance. :folded_hands:
Create team – Microsoft Graph v1.0 | Microsoft Learn
Copy activity in Azure Data Factory silently fails
I am using the ADF Copy Activity to extract an Office365 mailbox to Blob Storage. I have the pipeline configured and running but it seems that it extracts incomplete data:
The source inputs a filter on date ranges. Specifically, I am filtering on receivedDateTime.
Whenever I use a time range larger than 1 month, I get very little data in the blob storage (less than 20 MB), and the line count is always a suspiciously round number (600, 800, …).
When I use smaller intervals, I get as much as 200 MB per month of data, with a ‘regular’ non-round count of a few thousand emails.
I know for sure this is not accidental; there is no reduced activity in the mailbox in the ‘problematic’ time range.
I am new to ADF, so I’m reaching out: how can I even troubleshoot this?
I’ve tried enabling logs, but I don’t see anything interesting there (as if they are not being written).
My backup plan is to perform the extraction in small intervals and perhaps automate that process, but I’m looking for something that “just works.”
Teams Auto Attendant – Dial by Extension transfer to Resource Accounts
For the past couple of years, we have been able to populate the Business Phone field of the licensed Resource Account used for auto attendants or call queues, with an extension. Some are the full e.164 +17805551000;ext=1234 or just x1234, and to be clear, this is just in the Business Phone field in Entra, nothing to do with the LineURI.
Up until this week, calling into an AA with Dial by Extension and entering the extension assigned to a resource account, the transfer has worked. Our client can’t have been the only one who has used this feature.
Anyone else using this and suddenly it’s not working?
Partner Alert: FY25 Partner Activities Update
Summary
Partner engagement with Business Applications Partner Activities was at an all-time high in FY24, with a record number of workshops delivered to customers to ensure successful partner-led implementation and adoption of Dynamics 365 and Power Platform. Looking ahead to FY25, we’ll be building on this momentum with improvements to the partner and customer experience. Based on partner feedback, we are making the following changes:
Extending access to partner engagements to the full fiscal year
Aligning the engagement portfolio with solution plays
Providing more consistent access to activities across markets
FY25 updates
Extension of partner engagements to full fiscal year
FY24 engagements will close on June 30, and FY25 engagement nominations will open on July 1. This change eliminates any pre-commit period, providing uninterrupted access to claiming and executing workshops from the first day of the fiscal year to the last day of the fiscal year.
Solution play-aligned engagements
Starting July 1, we will transition our portfolio of funded engagements to align with our solution plays. New engagements to watch for on July 1 include:
Low Code Vision & Value: Build customer intent to innovate with AI in low-code environments. Provide guidance on how to develop a transformation vision, prioritize scenarios, define value, and accelerate adoption.
ERP Vision & Value: Drive customer intent to modernize their on-premises ERP systems with Dynamics 365. Develop a well-crafted vision of the customer’s future state with clear business outcomes and success metrics.
Customer Engagement Vision & Value: Build customer confidence by showcasing the vision and strategy for an AI-powered transformation through CRM migration to Dynamics 365.
Low Code Solution Accelerator: Accelerate post-sales adoption of low-code solutions and ensure customer value realization. Develop stakeholder buy-in through prioritized scenarios and demonstrated solution value.
*To view all available engagements in FY25, refer to the updated MCI Program Guide launching July 1.
More consistent access across markets
In FY25 we want to ensure customers in all geographies have access to the benefits offered by partner-led workshops. To achieve this, we’ll be implementing caps on the number of workshops eligible for credit per partner in any single country. These caps will be announced on July 1.
Immediate FY24 updates
To help facilitate the transition to the new FY25 activity portfolio, we are making some changes to FY24 engagements. Effective immediately, the following engagements are being retired:
Solution Assessment
Dynamics 365 Sales – Needs Assessment
Dynamics 365 Success Planning
Dynamics 365 Success by Design Performance Check
Dynamics 365 Value Realization
Dynamics 365 Solution Optimization
Power Platform Center of Excellence
Power Platform Pro Dev Success Enablement
Partners will no longer be able to submit new claims for the above workshops. However, these changes do not impact active claims. Partners can complete delivery of any workshops that have already been approved in accordance with the program rules.
We have capped the number of Envisioning and Business Value Assessment workshops that a partner is allowed to complete in any single country. These caps, outlined below, are effective immediately.
Workshop Name | Market A Countries Cap | Market B Countries Cap | Market C Countries Cap
Business Value Assessment – Variable Payout | 20 | 15 | 10
Envisioning – Variable Payout | 30 | 15 | 10
Please review the MCI Incentives Guide for a list of Market Area Countries (page 149).
Next Steps
Thank you for your continued engagement with Business Applications Partner Activities. We’re excited about all the improvements coming in FY25 and look forward to sharing more information in the coming weeks. Please attend the June 14 Partner Office Hours and Partner Activities webinar series happening at the end of June to learn more about the upcoming changes.
In the meantime, you can keep claiming and executing FY24 engagements for the remainder of June. You’ll be able to submit nominations for FY25 engagements starting July 1!
Call to Action
Register for the June office hours and deep-dive webinar series to learn about FY25 updates:
June 14 (8am PST): Partner Activities Office Hours
June 24 (8am PST): Low Code Vision & Value Deep Dive
June 25 (8am PST): Customer Engagement Vision & Value Deep Dive
June 26 (8am PST): ERP Vision & Value Deep Dive
June 27 (8am PST): Low Code Solution Accelerator Deep Dive
Bookmark the Business Applications Partner Activities page to stay updated with the latest resources and program announcements for FY25
Submit a query for Partner Activities Tier 1 support and any other feedback or questions
Bookmark the Partner Center – Microsoft Commerce Incentives (MCI) Engagements Workspace
Review the MCI Program Guide and Resources
Stay Connected with Business Applications Partner Resources
NEW! Sign up for the Dynamics 365 and Power Platform Partner Newsletters
Follow the Dynamics 365 and Power Platform partner LinkedIn channels
Bookmark the Dynamics 365 and Power Platform Partner Hub pages
Join and engage in the Business Applications Microsoft Partner Community
Google Drive Migration Service Not Available
I have come across an error when trying to perform a Drive migration in the SharePoint admin center. I have all the correct permissions in both the Google and Microsoft environments, but after installing the M365 migration app in the Google Workspace, I am unable to proceed.
Does anyone else have this error? Is the service actually down? Or is this an error on my side?
Thanks for the help!
Azure App that was working with MS SQL Server database now not working with Azure SQL Database
We had a web page app in Azure that ran a form for data retrieval. When we removed the SQL server and moved the database to Azure SQL, the app no longer works. We tried the ADO.NET (SQL authentication) connection string from the database page, but the app is not working.
The error we are getting is “The page cannot be displayed because an internal server error has occurred.”
We are having trouble trying to figure out if we have a connection issue to the database or this is a problem with the app itself.
Any suggestions would be appreciated please.
“Footnote numbers all change to 8” has been solved
Microsoft indicates that the “Footnote numbers all change to 8” issue has been solved. The problem was fixed by a service change, which fully rolled out on May 29, 2024.
I have large documents (500+ pages) where footnotes restart on every page. It has never been an issue. Now that Microsoft has fixed the issue, how do the documents get corrected?
I have automatic updates on, and updates have been made, just not this one specifically. Thanks!
Usage of surrogate keys
Is it good practice to always use auto-incremented primary keys to identify entities in a relational database? For example, if an order is identified by a composite key of three columns (customer_id, orderdateTime, orderItem), shouldn’t I just create a new surrogate primary key order_id that identifies all the other attributes?
AMA on client devices
We have followed the guidance outlined below to get AMA installed and working on a few test client devices and they are sending logs to the Event table in our Sentinel workspace.
https://learn.microsoft.com/en-us/azure/azure-monitor/agents/azure-monitor-agent-windows-client
The problem we face is with the Windows Security Events via AMA connector. Is there a supported way to get client devices to populate security events into the SecurityEvent table? I see the events in the ‘Event’ table but not the SecurityEvent table. It seems the Sentinel security events connector only sees DCRs that are created in Sentinel; it does not see DCRs created outside of Sentinel. Is that a bug or by design?
Any guidance is appreciated, we have had data in SecurityEvent from client devices via MMA for a few years and expected to be able to continue to ingest them properly via AMA.
Cloud security posture and contextualization across cloud boundaries from a single dashboard
Introduction:
Have you ever found yourself in a situation where you wanted to prioritize the riskiest misconfigurations on cloud workloads across Azure, AWS, and GCP? Have you ever wondered how to implement a unified dashboard for cloud security posture across a multicloud environment?
This article covers how you can achieve these scenarios by using Defender Cloud Security Posture Management’s (CSPM) native support for resources inside Azure, and resources in AWS and/or GCP.
For more information about Defender for Cloud’s multicloud support you can start at https://learn.microsoft.com/en-us/azure/defender-for-cloud/multicloud
To help you understand how to use Defender for Cloud to prioritize the riskiest misconfigurations across your multicloud environment, all inside a single dashboard, this article covers three topics in the following sequence:
Understanding the benefits of Defender CSPM for multicloud environments.
Implementing a unified security dashboard for cloud security posture.
Optimizing security response and compliance reporting.
Understand the benefits of Defender CSPM for multicloud environments:
When it comes to the plethora of cloud services at your disposal, certain resource types can be more at risk than others, depending on how they’re configured and whether they’re exploitable and/or exposed to the Internet. Besides virtual machines, storage accounts, Kubernetes clusters, and databases come to mind.
Imagine you have a compute resource, like an EC2 instance, that is publicly exposed, has vulnerabilities, and can access other resources in your environment. Combined, these misconfigurations can represent a serious security risk, because an attacker might use them to compromise your environment and move laterally inside of it.
For organizations pursuing a multicloud strategy, risky misconfigurations can even span public cloud providers. Have you ever found yourself using compute resources in one public cloud provider and databases in another? If an organization uses more than one public cloud provider, attackers could potentially compromise resources inside one environment and use them to move into the others.
Defender CSPM can help organizations close off potential entry points for attackers by helping them understand what misconfigurations in their environment they need to focus on first (figure 1), and by doing that, increase their overall security posture and minimize the risk of their environment getting compromised.
By knowing what they need to focus on first, organizations can remediate misconfigurations faster and essentially do more with less, saving the organization both time and resources. By identifying what are the organization’s critical assets and potential threats to those assets, organizations can allocate resources more effectively and prioritize remediation efforts for business critical resources. This helps them address vulnerabilities more quickly and reduces the overall risk to their organization.
Implement a unified security dashboard for cloud security posture:
Organizations pursuing a multicloud strategy often find themselves operating more than one public cloud environment and managing each in ways that differ across providers. This applies to security as well: you need to take into consideration different security configurations for each resource type in each cloud provider that you're using.
In large environments, and especially for organizations pursuing a multicloud strategy, this can introduce security risks, particularly if there is a lack of visibility across the entire environment and if security is managed in silos.
This is where standardizing cloud security posture across a multicloud estate can help. You need to be able to speak the same language across different public cloud providers, for example by using international standards and best practices, which can serve as a relevant reference point for senior management. Metrics and key performance indicators (KPIs) are another example: you must be able to measure progress and avoid confusion when reporting security status, or vulnerabilities, to senior management. One good approach here is a centralized CSPM solution (figure 2).
Having CSPM as part of a Cloud Native Application Protection Platform (CNAPP) helps organizations break down security silos and connect the dots between CSPM and other areas of CNAPP to paint a fuller picture.
Optimizing security response and compliance reporting:
Many security teams struggle with the sheer volume of security findings, and prioritization is crucial for effectively minimizing risk in an organization's environment. Organizations that cannot prioritize their remediation efforts often spend a lot of time and resources without getting their desired return on investment (ROI).
ROI is important because it's used to secure future budget allocations for cybersecurity initiatives. Therefore, it's critical to have simple KPIs to showcase how efforts have prevented breaches, reduced downtime, and minimized financial losses. Several organizations that I work with have mentioned a real need for a simple KPI that breaks down complex security metrics into an easy-to-understand figure, both for senior management and for business owners.
This way, management and business owners, who might not be experts in cybersecurity, can quickly understand why these efforts matter for protecting the business, why they need to prioritize the remediation process, and understand the importance of investing budget in this area.
Another struggle I see is identifying the relevant owners in the organization: the people who own the resources on which an issue or security risk is detected. Ensuring workload owners understand the remediation steps and address issues quickly is another key point that organizations need to consider. Many organizations already have existing processes in place for this, be it change management or an IT service management (ITSM) tool, so having a way to integrate with existing business processes and ITSM tools can help in this regard (figure 3).
Conclusion:
This article provides food for thought on prioritizing the riskiest misconfigurations across your multicloud environment, all inside a single dashboard, by using Defender CSPM.
Reviewers:
Giulio Astori, Principal Product Manager, Microsoft
Microsoft Tech Community – Latest Blogs – Read More
Better Debuggability with Enhanced Logging in Azure Load Testing
Debuggability of test scripts during load testing is crucial for identifying and resolving issues early in the testing process. It allows you to validate the test configuration, understand the application behavior under load, and troubleshoot any issues that arise. Today, we are excited to introduce Debug mode in Azure Load Testing, which enables low-scale test runs with better debuggability and enhanced logging.
Why Debug Mode?
Debug mode is designed to help you validate your test configuration and application behavior by running a load test with a single engine for up to 10 minutes. It provides debug logs for the test script, and request and response data for every failed request during the test run. This mode is powerful for troubleshooting issues with your test plan configuration.
Here are some key benefits of using Debug mode:
Validation: Debug mode allows you to validate your test configuration and application behavior before running a full-scale load test. This can save time and resources by identifying issues early.
Troubleshooting: With debug logs enabled, you can easily identify issues with your test script. This can be particularly useful when setting up complex test scenarios.
Detailed error analysis: Debug mode includes request and response data for every failed request during the test run. This can help you pinpoint the root cause of any issues and make necessary changes to your test script or application.
Resource efficiency: Tests run in debug mode are executed with a single engine and are limited to a maximum duration of 10 minutes. This can help you identify how many virtual users can be generated on one engine by monitoring engine health metrics.
How to Enable Debug Mode?
Enabling debug mode is simple and straightforward. You can enable it for your first test run while creating a new test or when running an existing test. Just select the Debug mode in the Basics tab while creating or running your test and you’re good to go!
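To make the single-engine constraint concrete, here is an illustrative low-scale test configuration using the Azure Load Testing YAML test format. This is a sketch only: the field names follow the public YAML test configuration format, the file names are hypothetical, and the Debug mode toggle itself is selected in the portal's Basics tab as described above rather than in this file.

```yaml
# Illustrative Azure Load Testing configuration for a low-scale run
# (a sketch of the kind of test debug mode targets, not the debug
# toggle itself, which is selected in the Basics tab).
version: v0.1
testId: sample-debug-candidate       # hypothetical test name
displayName: Low-scale validation run
testPlan: sample.jmx                 # JMeter script to validate
engineInstances: 1                   # debug mode always uses a single engine
failureCriteria:
  - avg(response_time_ms) > 500     # fail if average latency exceeds 500 ms
  - percentage(error) > 10          # fail if more than 10% of requests error
```

Once the script and configuration behave as expected at this scale, you can raise `engineInstances` for the full-scale run.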
Next steps
Debug mode lets you see more information about your load tests, so you can be confident that a test runs as expected at high scale. We recommend running your first test run in debug mode. Get started with Azure Load Testing here. If you have already been using the service, you can learn more about debug mode here. If you have any feedback, let us know through our feedback forum.
Happy load testing!
Microsoft Tech Community – Latest Blogs – Read More
libstdc++.so.6: version 'GLIBCXX_3.4.30' not found
Hi all,
I tried to run MATLAB on SUSE Linux SP15.4 and encountered the problem shown in the title. I've updated gcc on my workstation from 7 to 11 and searched for a newer GLIBC. However, the problem still exists. I was wondering if anyone knows how to update GLIBC or can point me in the right direction.
Thank you
linux, libstdc++ MATLAB Answers — New Questions
Problem with the license manager
Debian 10 is installed on the computer. I receive the following error message:
./flexnet.boot.linux start
Error: Cannot run the file /usr/local/MATLAB/R2021a/etc/glnxa64/lmgrd. This may not be an LSB compliant system.
What is the solution to the problem?
lsb MATLAB Answers — New Questions
Create excel file from json variable value
I think there is an easy solution to this but I keep running into the same issue.
I want to create an Excel file from a JSON value. The JSON file is stored in subject-specific folders (all with the same general path).
All JSON variables appear on the same line in the JSON file.
My current code:
clear
subjs = {'SUB2114'
'SUB2116'
'SUB2118'};
subjs = subjs.';
for i = 1:length(subjs)
%cd to folder with json files
cd(['/Volumes/myDirectory/' subjs{i} '/folder/']);
%read AP json file
jsonText = fileread('Fieldmapap.json');
jsonData = jsondecode(jsonText);
ap = jsonData.PhaseEncodingDirection;
ap = ap.';
% write json (not needed?)
encodedJSON = jsonencode(ap);
jsonText2 = jsonencode(jsonData);
fid = fopen('J_script_test.json', 'w');
fprintf(fid, encodedJSON);
fclose(fid);
end %subject loop
%write table with subjects in first column and encodedJSON value in second column
T = cell2table([subjs encodedJSON]);
writetable(T, 'Tester.csv');
I have also tried building a table directly (below), with no positive results.
mytable = table('Size',[3 2],'VariableTypes',{'cellstr';'cellstr'},'VariableNames',{'subjects';'direction'});
mytable{i,'subjects'} = {subjs};
mytable{i,'direction'} = {ap};
I keep getting an output that lists the subjects horizontally with only the last subject's direction value.
I think I am missing something simple (like an indexing step), but do not know what!
Any help would be appreciated!
json, excel MATLAB Answers — New Questions
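For illustration only (this is a sketch, not an official answer): one way to fix the pattern described in the question is to store each subject's value inside the loop, indexed by `i`, and build the table once after the loop. The paths and the `PhaseEncodingDirection` field name below are taken from the question itself.

```matlab
% Sketch: collect one JSON field per subject, then write a two-column table.
subjs = {'SUB2114'; 'SUB2116'; 'SUB2118'};          % keep as a column cell array
direction = cell(numel(subjs), 1);
for i = 1:numel(subjs)
    % Build the full path with fullfile instead of cd-ing into each folder
    fname = fullfile('/Volumes/myDirectory', subjs{i}, 'folder', 'Fieldmapap.json');
    jsonData = jsondecode(fileread(fname));
    direction{i} = jsonData.PhaseEncodingDirection;  % store the i-th value
end
T = table(subjs, direction, 'VariableNames', {'subjects', 'direction'});
writetable(T, 'Tester.csv');                         % one row per subject
```

The key change is indexing `direction{i}` inside the loop, so each iteration's value is kept rather than overwritten, and passing two column cell arrays to `table` so subjects appear vertically.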
Can I use the Microsoft Project desktop client or PWA and Planner Premium in parallel with 1 P3 license?
My question is:
Can I utilize 1 license (P3, for example) if I want to use both Planner Premium and Project (desktop or online instance)?
Is there any documentation about this? I want to make sure there will be no problem using both solutions.
Read More
Server 2022 KB5037782 Failed Error 8024200B, 8007000D
Greetings!
I have been trying to complete the May updates on my newly built Microsoft Windows Server 2022, which is offline. I installed the OS from disk and was able to install the March 2024 and April 2024 updates, but for some reason this specific May 2024 update will not install. I have tried installing it from my WSUS server and from the actual update file from the Microsoft Update Catalog. I have also tried resetting the Windows Update components, the catroot2 folder, and the SoftwareDistribution folder, but I still keep getting these errors. I am working on a second server install and will see if I get the same error. Has anyone else seen this problem in the wild?
2024-05 Cumulative update for Microsoft server operating system version 21H2 for x64-based Systems (KB5037782).
The WindowsUpdateLog shows the following:
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 757
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 708
Deployment *FAILED* [8007000D] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3103]
Deployment *FAILED* [8024200B] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3122]
Deployment *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientupdatedeploymentlibupdatedep
Read More