Month: June 2024
Cloud security posture and contextualization across cloud boundaries from a single dashboard
Introduction:
Have you ever found yourself in a situation where you wanted to prioritize the riskiest misconfigurations on cloud workloads across Azure, AWS, and GCP? Have you ever wondered how to implement a unified dashboard for cloud security posture across a multicloud environment?
This article covers how you can achieve these scenarios by using Defender Cloud Security Posture Management’s (CSPM) native support for resources inside Azure, and resources in AWS and/or GCP.
For more information about Defender for Cloud’s multicloud support you can start at https://learn.microsoft.com/en-us/azure/defender-for-cloud/multicloud
To help you understand how to use Defender for Cloud to prioritize the riskiest misconfigurations across your multicloud environment, all inside of a single dashboard, this article covers three topics in the following sequence:
Understanding the benefits of Defender CSPM for multicloud environments.
Implementing a unified security dashboard for cloud security posture.
Optimizing security response and compliance reporting.
Understand the benefits of Defender CSPM for multicloud environments:
When it comes to the plethora of different cloud services at your disposal, certain resource types can be more at risk than others, depending on how they’re configured and whether they’re exploitable or exposed to the Internet. Besides virtual machines, resource types such as storage accounts, Kubernetes clusters, and databases come to mind.
Imagine you have a compute resource, like an EC2 instance, that is publicly exposed, has vulnerabilities, and can access other resources in your environment. Combined, these misconfigurations can represent a serious security risk to your environment, because an attacker might potentially use them to compromise your environment and move laterally inside of it.
For organizations pursuing a multicloud strategy, risky misconfigurations can even span public cloud providers. Have you ever found yourself in a situation where you use compute resources in one public cloud provider and databases in another? If an organization is using more than one public cloud provider, this creates the risk of attackers compromising resources inside one environment and using those resources to move into other public cloud environments.
Defender CSPM can help organizations close off potential entry points for attackers by helping them understand what misconfigurations in their environment they need to focus on first (figure 1), and by doing that, increase their overall security posture and minimize the risk of their environment getting compromised.
By knowing what they need to focus on first, organizations can remediate misconfigurations faster and essentially do more with less, saving the organization both time and resources. By identifying the organization’s critical assets and the potential threats to those assets, organizations can allocate resources more effectively and prioritize remediation efforts for business-critical resources. This helps them address vulnerabilities more quickly and reduces the overall risk to their organization.
Implement a unified security dashboard for cloud security posture:
Organizations pursuing a multicloud strategy often find themselves in a situation where they need to operate more than one public cloud environment and manage each in ways that can differ across public cloud providers. This applies to security as well, meaning you should take into consideration the different security configurations for each resource type in each cloud provider that you’re using.
When you look at large environments, and especially at organizations pursuing a multicloud strategy, this can introduce security risks, particularly if there is a lack of visibility across the entire environment and if security is managed in silos.
This is also where standardization of cloud security posture across a multicloud estate can help. You need to be able to speak the same language across different public cloud providers, for example by using international standards and best practices, which can be a relevant reference point for senior management. Another example is metrics, or key performance indicators (KPIs): you must be able to measure progress and avoid confusion when reporting security status and vulnerabilities to senior management. One good approach here is to have a centralized CSPM solution (figure 2).
Having CSPM as part of a Cloud Native Application Protection Platform (CNAPP) helps organizations break down security silos and connect the dots between CSPM and other areas of CNAPP to paint a fuller picture.
Optimizing security response and compliance reporting:
Many security teams struggle with the sheer volume of security findings, and prioritization is crucial for effectively minimizing risk in an organization’s environment. Organizations that are not able to prioritize their remediation efforts tend to spend a lot of time and resources without getting their desired return on investment (ROI).
And ROI is important because it’s used to secure future budget allocations for cybersecurity initiatives. Therefore, it’s critical to have simple KPIs that showcase how efforts have prevented breaches, reduced downtime, and minimized financial losses. Several organizations that I work with mentioned a real need for a simple KPI that breaks down complex security metrics into something easy to understand, both for senior management and for business owners.
This way, management and business owners, who might not be experts in cybersecurity, can quickly understand why these efforts matter for protecting the business, why they need to prioritize the remediation process, and understand the importance of investing budget in this area.
Another struggle that I see is the need to identify the relevant owners in the organization: those who own the resources on which an issue or security risk is detected. Ensuring workload owners understand the remediation steps and address the issues quickly is another key point that organizations need to consider. Many organizations already have existing processes in place for this, be it change management or an ITSM, so having a way to integrate with existing business processes and ITSMs can help in this regard (figure 3).
Conclusion:
This article provides food for thought when it comes to prioritizing the riskiest misconfigurations across your multicloud environment, all inside of a single dashboard, by using Defender CSPM.
Reviewers:
Giulio Astori, Principal Product Manager, Microsoft
Microsoft Tech Community – Latest Blogs –Read More
Better Debuggability with Enhanced Logging in Azure Load Testing
Debuggability of test scripts during load testing is crucial for identifying and resolving issues early in the testing process. It allows you to validate the test configuration, understand the application behavior under load, and troubleshoot any issues that arise. Today, we are excited to introduce Debug mode in Azure Load Testing, which enables running low scale test runs with better debuggability and enhanced logging.
Why Debug Mode?
Debug mode is designed to help you validate your test configuration and application behavior by running a load test with a single engine for up to 10 minutes. It provides debug logs for the test script, and request and response data for every failed request during the test run. This mode is powerful for troubleshooting issues with your test plan configuration.
Here are some key benefits of using Debug mode:
Validation: Debug mode allows you to validate your test configuration and application behavior before running a full-scale load test. This can save time and resources by identifying issues early.
Troubleshooting: With debug logs enabled, you can easily identify issues with your test script. This can be particularly useful when setting up complex test scenarios.
Detailed error analysis: Debug mode includes request and response data for every failed request during the test run. This can help you pinpoint the root cause of any issues and make necessary changes to your test script or application.
Resource efficiency: Tests run in debug mode are executed with a single engine and are limited to a maximum duration of 10 minutes. This can help you identify the number of virtual users that can be generated on one engine by monitoring engine health metrics.
How to Enable Debug Mode?
Enabling debug mode is simple and straightforward. You can enable it for your first test run while creating a new test or when running an existing test. Just select the Debug mode in the Basics tab while creating or running your test and you’re good to go!
Next steps
Debug mode lets you see more information about your load tests, so you can be confident that they run as expected at high scale. It’s recommended to run the first test run in debug mode. Get started with Azure Load Testing here. If you have already been using the service, you can learn more about debug mode here. If you have any feedback, let us know through our feedback forum.
Happy load testing!
Microsoft Tech Community – Latest Blogs –Read More
libstdc++.so.6: version `GLIBCXX_3.4.30' not found
Hi all,
I tried to run MATLAB in SUSE Linux SP15.4 and encountered the problem shown in the title. I’ve updated the gcc in my workstation from 7 to 11 and searched for a newer GLIBC. However, the problem still exists. I was wondering if anyone knows how to update GLIBC or can point me to a path.
Thank you
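For reference, this error is often caused not by the system GLIBC but by the older libstdc++ that MATLAB itself bundles and loads in preference to the system copy, which is why upgrading gcc alone may not help. A commonly cited workaround is to rename MATLAB’s bundled libstdc++ so that the newer system library (which provides GLIBCXX_3.4.30) is picked up instead; the bundled copy can be located from within MATLAB:
% List the libstdc++ that ships with MATLAB (a frequent cause of this error):
dir(fullfile(matlabroot, 'sys', 'os', 'glnxa64', 'libstdc++*'))
% If renaming this file resolves the error, MATLAB's bundled copy was shadowing the system library.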
linux, libstdc++ MATLAB Answers — New Questions
problem with the licence manager
Debian 10 is installed on the computer. I receive the following error message:
./flexnet.boot.linux start
Error: Cannot run the file /usr/local/MATLAB/R2021a/etc/glnxa64/lmgrd. This may not be an LSB compliant system.
What is the solution to the problem?
lsb MATLAB Answers — New Questions
Create excel file from json variable value
I think there is an easy solution to this but I keep running into the same issue.
I want to create an excel file from a json value. The json file is stored in subject-specific folders (all with the same general path).
All json variables appear on the same line in the json file.
My current code:
clear
subjs = {'SUB2114'
'SUB2116'
'SUB2118'};
subjs=subjs.';
for i=1:length(subjs)
%cd to folder with json files
cd (['/Volumes/myDirectory/' subjs{i} '/folder/']);
%read AP json file
jsonText = fileread('Fieldmapap.json');
jsonData = jsondecode(jsonText);
ap = jsonData.PhaseEncodingDirection;
ap=ap.';
% write json (not needed?)
encodedJSON = jsonencode(ap);
jsonText2 = jsonencode(jsonData);
fid = fopen('J_script_test.json', 'w');
fprintf(fid, encodedJSON);
fclose(fid);
end %subject loop
%write table with subjects in first column and encodedJSON value in second column
T=cell2table([subjs encodedJSON]);
writetable(T,'Tester.csv');
I have also tried the mytable function (below) with no positive results.
mytable=table('Size',[3 2],'VariableTypes',{'cellstr';'cellstr'},'VariableNames',{'subjects';'direction'});
mytable{i,'subjects'} = {subjs};
mytable{i,'direction'} = {ap};
I keep getting an output that lists subjects horizontally with the last subjects direction value.
I think I am missing something simple (like, i+1 function), but do not know!
Any help would be appreciated!
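For reference, the symptom described (every row showing the last subject’s value) typically happens because encodedJSON is overwritten on each pass through the loop, and the table is only built once afterwards. A minimal sketch of one possible fix, assuming the same folder layout and JSON field as above: collect each subject’s value in a preallocated cell array indexed by i, then build the table after the loop.
subjs = {'SUB2114'; 'SUB2116'; 'SUB2118'};
direction = cell(size(subjs)); % preallocate one slot per subject
for i = 1:numel(subjs)
    jsonFile = fullfile('/Volumes/myDirectory', subjs{i}, 'folder', 'Fieldmapap.json');
    jsonData = jsondecode(fileread(jsonFile)); % full paths avoid the need to cd
    direction{i} = jsonData.PhaseEncodingDirection; % indexed by i, so nothing is overwritten
end
T = table(subjs, direction, 'VariableNames', {'subjects', 'direction'});
writetable(T, 'Tester.xlsx'); % .xlsx produces a true Excel file; .csv also works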
json, excel MATLAB Answers — New Questions
Can I use Microsoft Project desktop client or PWA and Planner Premium with one P3 license in parallel?
My question is:
Can I utilize one license – P3, for example – if I want to use both Planner Premium and Project (desktop or online instance)?
Is there any documentation about it? I want to make sure that there will be no problem using both solutions.
Read More
Server 2022 KB5037782 Failed Error 8024200B, 8007000D
Greetings!
Been trying to complete updates for May on my newly built Microsoft Windows Server 2022 that is offline. I installed the OS from the disk and I was able to install the latest updates from March 2024 and April 2024, but for some reason this specific update for May 2024 will not install. I have tried installing it from my WSUS server and even from the actual update file from the Microsoft Update Catalog. I have also tried resetting the Windows Update components, the CatRoot2 folder, and the SoftwareDistribution folder, but I still keep getting these errors. I am working on a second server install and I will see if I get the same error. Has anyone else seen this problem in the wild?
2024-05 Cumulative update for Microsoft server operating system version 21H2 for x64-based Systems (KB5037782).
The WindowsUpdateLog shows the following:
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 757
Agent *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientenginehandlercbslibuhcbs.cpp line = 708
Deployment *FAILED* [8007000D] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3103]
Deployment *FAILED* [8024200B] Deployment job Id 4BD248AC-579E-4B5F-9C33-62E55C2A26D7 : Installing for Top level update id b857bc87-0cf1-44df-8af8-d365775f96c3.501, bundled update id 475f0455-e871-4375-8116-4cc9d4fd2563b.501 [CUpdateDeploymentJob::DeploySingleUpdateInternal:3122]
Deployment *FAILED* [8024200B] file = onecoreenduserwindowsupdateclientupdatedeploymentlibupdatedep
Read More
Migrating users from on-prem AD to AzureAD only
Hello,
We are in the process of migrating to AzureAD for all users and devices.
Users are currently synced from on-prem AD to AzureAD using the Azure Directory Sync tool.
We don’t have a significant number of users, and so use a manual process, that has problems.
To migrate users, our current process is as follows:
1. Move the user in on-prem AD to an OU that is not part of the Directory Synchronisation
2. Run a delta sync on the Sync Tool
3. In AzureAD, the user is deleted. We manually re-enable them
The problem is that in carrying out this process, the user is removed from all the Teams Private Channels that they were a member of (they retain the overall team membership).
Is there a better way to break the AD sync for a user, retaining them in AzureAD and also retaining all their private channel memberships?
Thanks in advance.
Read More
Windows Device Configuration Profiles
Is it possible to create a Windows Device Configuration Profile that will push a particular wallpaper to the device if the user’s Entra profile > Department value is “MHS”, for example?
Currently, we are sending a District branded lock screen and desktop image based on the device name but are wondering if this can be even more dynamic based on the primary user’s Entra profile, instead? We have different lock screens using this same logic for Staff, Student and Long-Term Subs.
Thank you for considering!
Read More
Is there a roadmap for Windows 11 to provide the same full personalization features Windows 10 does?
Is there a roadmap for Windows 11 to provide the same personalization features Windows 10 does?
Read More
How to Plot Blood Pressure over date time?
I am trying to plot my husband’s blood pressure over date and time.
So that my doctor can see what date he took his blood pressure and what time of day. I’m going to have to use the top number for the plot, and I was thinking I’d put the full reading in the label somehow, maybe inside the bar if I did bar charting.
I tried doing this with one column in the Date Time format and then the top blood pressure.
I get weird plots. This is some sample data I am trying to plot. Any help greatly appreciated!!
Date Time          Top
5/25/2024 18:10    144
5/26/2024 8:15     131
5/27/2024 10:00    149
5/27/2024 18:00    167
5/28/2024 5:45     134
5/28/2024 21:00    152
5/29/2024 5:30     132
5/29/2024 20:30    140
5/30/2024 6:00     135
5/30/2024 21:50    149
5/31/2024 5:50     138
5/31/2024 18:00    138
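For reference, one way to plot readings like these in MATLAB is to parse the timestamps into datetime values, which can be used directly on the x-axis, and label each bar with its reading. A minimal sketch, assuming the two columns above are typed in directly:
dates = {'5/25/2024 18:10','5/26/2024 8:15','5/27/2024 10:00','5/27/2024 18:00', ...
    '5/28/2024 5:45','5/28/2024 21:00','5/29/2024 5:30','5/29/2024 20:30', ...
    '5/30/2024 6:00','5/30/2024 21:50','5/31/2024 5:50','5/31/2024 18:00'};
top = [144 131 149 167 134 152 132 140 135 149 138 138]; % systolic (top) readings
t = datetime(dates, 'InputFormat', 'M/d/yyyy H:mm'); % parse the text into datetime values
bar(t, top) % datetime values plot directly on the x-axis
text(t, top, string(top), 'HorizontalAlignment', 'center', 'VerticalAlignment', 'bottom') % reading above each bar
xlabel('Date and time'); ylabel('Systolic (top) reading')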
Read More
The Evolution of GenAI Application Deployment Strategy: Building Custom Co-Pilot (PoC)
Building Custom Co-Pilot (PoC)
Azure OpenAI is a cloud service that allows developers to access the powerful capabilities of OpenAI while enjoying the security, governance, and other benefits of Azure. When moving on from the initial ideation phase towards a Proof of Concept (PoC), Proof of Value (PoV), or Proof of Technology (PoT), there are a number of considerations to be made to ensure this phase is successful.
One of the most common applications of a PoC on Azure OpenAI is to build a custom co-pilot: a tool that can assist both internal employees and external users with a wide range of activities, such as summarization and content generation, or more technical tasks such as suggesting relevant code snippets, completing code blocks, or explaining code logic. This “co-pilot” approach is tried and tested at all levels of maturity across enterprises, as it is low-hanging fruit, but fruit that offers real benefits to those who are both developing and using the application at the PoC phase.
Given the wide scope of technologies that can encompass this phase, I have divided everything into four defined approaches, each with its own pros and cons. To give a quick summary, each path involves some level of code, low-code, or no-code, or a combination of all three, depending on the level of customization required beyond the black-box elements found within each approach.
That is not to say that one approach is better than another, but rather that for a PoC, where simplicity is appreciated, one could start with the greatest abstraction (such as no-code) as the first option to test, given its limited time sink, albeit at the cost of reduced flexibility, and work through the approaches one by one to find the level of trade-offs that is acceptable. With the primary aims of a PoC being to generate excitement between business and technology, to prove hypotheses of technology and value, and to drive towards the next phase, it is really important to be able to iterate quickly and decipher where the current iteration is succeeding or failing. This is where the added control found in the low-code and code-first approaches provides more value, but again, at more of a time sink.
Let’s talk through some of the approaches:
Code-First:
The inclusion of various packages, such as Microsoft’s Semantic Kernel or LangChain, allows for the orchestration of Azure OpenAI and other microservices to create a copilot solution. This allows for the greatest level of customization, through code, albeit with the greatest amount of time to set up and run.
Usually, these frameworks would sit either in the backend of the application, or run as an orchestrator through some level of abstraction/serverless compute offering, such as a function app.
This deployment can be seen as robust and future-proof, but could be overcomplicating things at an earlier stage than required. The newly launched Azure AI Studio is a trusted platform that enables users to create generative AI applications and custom Copilot experiences. Typical PoCs at this stage explore common use cases, such as “Chat with Data” or RAG (Retrieval Augmented Generation) patterns, which, given their tried and tested nature, can be comparatively easier to implement through our next pattern: Low-Code.
Low-Code:
This approach takes advantage of some of the “black box” integrations of Azure, abstracting away some of the difficulty in orchestrating microservices that is found in the purely code-first approach. Examples include Prompt Flow and Copilot Studio. These offer a more streamlined approach towards a RAG-style co-pilot and allow the goal of a PoC to be achieved that much faster and more efficiently. A great example of this is found here.
Prompt Flow, as the orchestrator, offers particular benefits through abstractions and prebuilt “nodes” that can streamline and automate a large amount of the code we would otherwise have to write. It even goes as far as one-click creation of complex data structures through automated embeddings and vector databases, massively speeding up this phase and bringing us closer to real value.
No-Code:
Finally, we have a number of no-code accelerators for typical co-pilot scenarios that abstract everything through the GUI and allow us to very quickly adapt a predefined, siloed dataset into the base of knowledge that we need for a co-pilot PoC. The typical one is called “Chat with your Data”, available from both the Copilot Studio and Azure OpenAI portals.
From a PoC point of view, this really allows the speed and efficiency of this stage to be realised. Without complex code or specific knowledge around GenAI, this method really allows us to focus on value before potentially introducing more complexity at a later stage.
Hybrid:
This approach involves using a combination of the above approaches, depending on the complexity of the co-pilot. For example, a developer in this phase can use the code-first approach to write the core logic and then use the no-code approach to generate additional features or functionalities. A great example of this is using Prompt Flow: first starting to work on the solution in a no-code or low-code approach, and then iterating through code subsequently.
The process depicted above shows how the MSFT team is actively involved in assisting our customers in choosing a PoC path, regardless of the PoC development methodology. We will support customers in assessing the strategy, considering factors such as the use case, skills, technology preference, viability, and timeline.
Summary
To summarize, this article describes four different approaches to developing a co-pilot using GenAI:
Code first: This approach involves writing the code manually and then using GenAI to improve it or add new features. This is suitable for developers who have prior experience with coding and want to have more control over the code quality and functionality.
Low-code: This approach uses the “black box” abstractions of Azure, such as Prompt Flow or Copilot Studio, to handle much of the orchestration while still allowing targeted customization in code. This is suitable for teams who want a faster, more streamlined route to a RAG-style co-pilot.
No-code: This approach involves using a graphical interface or natural language to specify the requirements and then using GenAI to generate the code automatically. This is suitable for non-developers who want to create a co-pilot without writing any code and focus on the value proposition.
Hybrid: This approach involves using a combination of the above approaches, depending on the complexity of the co-pilot. For example, a developer can use the code first approach to write the core logic and then use the no-code approach to generate additional features or functionalities. This is suitable for developers who want to leverage the best of both worlds and iterate quickly.
Series: The next article will discuss considerations and approaches for moving from a GenAI PoC to an MVP.
Author: Morgan Gladwell,
Co-Author: @arung
@Paolo Colecchia @Stephen Rhoades @Taonga_Banda @renbafa @morgan Gladwell
Microsoft Tech Community – Latest Blogs –Read More
The Evolution of GenAI Application Deployment Strategy: From PoC to MVP
Initiating the Minimum Viable Product Phase:
Proof of Concept (PoC), Proof of Value (PoV), and Proof of Technology (PoT) engagements allow customers to validate and demonstrate the suitability and adaptability of GenAI use cases for their business. However, transitioning from an initial experiment, i.e. building a custom co-pilot (with either a low-code or code-first approach) as a PoC, to the go-to-market phase involves building a Minimum Viable Product (MVP). The MVP serves as the foundation, incorporating core functionalities from the PoC along with additional layers enabled by Microsoft community-built accelerators.
Microsoft offers a variety of accelerators that aid in the development of GenAI-powered applications, effectively addressing use cases and delivering business value. However, it’s crucial to understand that the code from these accelerators, originating from the Proof of Concept (PoC), is not production-ready. This implies that additional safeguards, known as production guardrails, need to be incorporated to protect the applications or products.
These extra layers of components or services are necessary to ensure governance, security, continuous development, and configuration. A practical strategy is to augment the accelerators used during the PoC with these layers. These enhancements can take the form of new features, customized or improved user interfaces, security measures (like enhanced authentication and authorisation), suitable infrastructure and network topology, content filters, and comprehensive logging and monitoring.
MVP Approach:
Let’s delve into the methodology for constructing a conceptual architecture that can progress from the PoC deployment to the MVP stage. The initial Custom Co-pilot’s (low-code or hybrid) conceptual architecture, which is based on our accelerator in the PoC phase, would look like this:
The above Proof of Concept (PoC) reference design includes basic foundational components to demonstrate its value during the PoC phase. However, it does not incorporate all the essential elements needed for a live deployment. All the components or services are deployed as a unified entity to interact with the relevant LLM Models.
Now, let’s progress and refine the PoC reference design into a Minimum Viable Product (MVP) by expanding and incorporating additional layers. These would be:
Components:
The first step involves identifying and logically grouping components that offer similar services. This is a crucial starting point. Let’s begin at the network component level. For example:
Apps VNet: This is a virtual network that houses application-related services, providing isolation from other services.
Backend VNet: This is a virtual network dedicated to backend business orchestration, system integration and workload management.
Data VNet: This is responsible for managing data storage and access to production-grade source data.
Service VNet: This connects and interfaces with LLM and other AI services (such as AI Search, Cognitive Services, and Computer Vision). Orchestration frameworks like LangChain, Semantic Kernel, or AutoGen can be employed to abstract the configuration and manage the services based on the use case.
LLMOps:
At the MVP stage, it’s advisable to incorporate Large Language Model Operations (LLMOps) throughout the entire lifecycle from the outset. LLMOps refers to specialized practices and workflows that facilitate the development, deployment, and management of AI models, specifically large language models (LLMs). Our LLMOps approach articulates how this can be accommodated in the development lifecycle.
The key to success when working with Large Language Models (LLMs) lies in the effective management of the prompt, the application of prompt engineering techniques, and the tracking of versions. Tools like PromptFlow facilitate this process by providing features for effective prompt management and version control.
LLM Models:
The Proof of Concept (PoC) stage allows us to verify the suitability of LLM Models (type and version) for our needs. However, when we move to the Minimum Viable Product (MVP) stage, we have the opportunity to explore beyond the base model (the LLM base model is a standard for everyone). We can fine-tune the models to better understand our domain-specific dataset, which can enhance performance and optimize interactions. Additionally, we also have the option to consider other open-source models through the Azure AI Studio model catalog.
Orchestration Framework:
The selection of orchestration, development tools, and their respective frameworks is contingent on the customer’s preferences (tech stack) and the capabilities required to address specific use cases. While the orchestration approach may need to be tailored for different use cases, the underlying hosting infrastructure can be reused.
Infrastructure & Environment:
During the MVP phase, you could transition from a development to a production subscription, implementing a single-region rollout to cater to a select group of users, both internal and external. To enhance the efficiency of the overall CI/CD process, you might want to consider adopting an Infrastructure as Code (IaC) approach for deployment automation.
Additional Supporting Services:
You might want to think about incorporating security measures such as managed identities, role management, access controls, content monitoring/moderation, content safety, and prompt shields to ensure responsible AI usage.
Patterns:
Azure accelerators offer a comprehensive set of patterns and tools for working with Large Language Models (LLMs). These accelerators cover a wide range of use cases and provide valuable support for technical solutions.
RAG,
Prompt Engineering,
Evaluating model performance,
Caching strategy,
Benchmarking
Keep in mind that moving from a PoC to an MVP entails improving and fine-tuning both the requirements and the product, ensuring they align with market demands and end-user expectations.
You should now contemplate the level of services for the LLM Model that will facilitate deployment and support for your use case. Azure AI essentially offers two distinct levels of service:
Pay As You Go (PAYG): This model allows customers to pay based on their actual usage. It’s ideal for Proof of Concept (PoC) scenarios, situations with high latency, and non-critical use cases.
Provisioned Throughput Unit (PTU): This is a fixed-term commitment pricing model. PTU is recommended for scenarios requiring low latency and critical use cases. It’s also suggested for MVP workloads.
You can also manage your application traffic according to your business requirements by utilizing both PAYG and PTU. For instance, if your traffic exceeds the peak limit and becomes unpredictable, you can divert the excess to PAYG. Check the references below for more info.
Regardless of the level of service you choose, it’s vital to analyse the volumetrics of your workload to accurately estimate costs. At this stage, it’s crucial to gather feedback from a wider user base, including both internal and a limited number of external users.
MVP Criteria: Here are some critical factors to consider during the development of a MVP:
PoC Outcome: Evaluate the success of the Proof of Concept (PoC) and confirm the adoption of LLM Models in the MVP phase.
Actual Production Dataset: Use a valid, curated dataset that reflects real-world conditions.
Use Case: Comprehend the specific requirements, such as queries, analysis, contextual and search criteria.
Feedback Loop: Collect user feedback on features, improvements, and limitations.
LLM Model Accuracy and Performance: Involve your business Subject Matter Expert (SME) to review the results of the LLM Model outcome and validate it with the actual dataset. Achieving a similar outcome/result from the LLM can be done by adopting best practices such as Prompt Engineering, Prompt versioning, Prompt tuning, and curating the dataset.
Token Management (Token Per Minute): Assess and manage the token sizes for efficient processing.
Infrastructure: Ensure the availability of the appropriate infrastructure and supporting components.
Security: Incorporate strong security measures, such as Responsible AI, and address security threats to GenAI apps (jailbreak, prompt injection).
Business Continuity: Plan for continuity during deployment, such as redundant deployment at the region / cross-region level in order to scale the deployment and workloads.
Governance: Implement governance practices for monitoring and logging end to end.
Responsible AI: Monitor and manage AI products in a responsible manner.
Reference:
– Azure/aoai-smart-loadbalancing: Smart load balancing for Azure OpenAI endpoints (github.com)
– Scaling AOAI deployment using PTU and PAYG: Azure/aoai-apim: Scaling AOAI using APIM, PTUs and TPMs (github.com)
– Prompt engineering techniques: Azure PromptFlow tool
– Benchmarking AOAI loads: Azure/azure-openai-benchmark: Azure OpenAI benchmarking tool (github.com)
Conclusion:
Transitioning from the PoC to the MVP stage allows you to demonstrate and validate the business value, as well as define the key criteria for determining the path to live deployment. This assists you in identifying both business and technical dependencies and requirements, and prepares your organization to embrace and adopt the new wave of AI, enhancing your competitiveness in your industry.
Series: The next article will discuss the approach for moving a GenAI application from MVP to Production.
@Paolo Colecchia @Stephen Rhoades @Taonga_Banda @renbafa
Microsoft Tech Community – Latest Blogs –Read More
When running regression tests in Test Manager, how can you tell if all tests have run to completion without skipping any?
I am running Simulink Test, running a lot of regression tests in Simulink Test Manager.
I have been using the "stop simulation" workaround to stop the simulation when the Scenario is complete.
How to Stop Simulation in Test Sequence Block – MATLAB Answers – MATLAB Central (mathworks.com)
What is the best way to make sure that the tests run to completion? Specifically:
how can we tell that the test has run but has not got all the way to the end of the scenario? For example, if one of the tests does not pass, the step will not progress to the next transition.
is there a way to show that the test completes all verify steps and has not missed any?
simulink, simulink-test, test-manager MATLAB Answers — New Questions
Calling linkaxes on uiaxes objects makes plot contents disappear when using uigridlayout
I wish to link the windows of a set of uiaxes. For regular axes objects, I would call the linkaxes function to link their windows together. In the following example, calling the linkaxes function for uiaxes objects makes any plots on these axes disappear:
% Generate UIFigure
ufig = uifigure;
% Apply grid layout to UIFigure
gl = uigridlayout(ufig, 'RowHeight', {'1x', '1x', '1x'}, 'ColumnWidth', {'1x'});
% Create three uiaxes objects; place them in the grid layout
ax1 = uiaxes(gl);
ax1.Layout.Row = 1;
ax2 = uiaxes(gl);
ax2.Layout.Row = 2;
ax3 = uiaxes(gl);
ax3.Layout.Row = 3;
% Create plots for each uiaxes object
x = 1:10;
plot(ax1, x, x);
plot(ax2, x, x.^2);
plot(ax3, x, x.^3);
% Attempt to link the uiaxes together
linkaxes([ax1 ax2 ax3]);
It is the last line of this code (with linkaxes) that makes the plots disappear. Otherwise, they show up as expected.
I believe the use of uigridlayout is contributing to the problem. In the following code snippet, which does not include the use of uigridlayout, linking the uiaxes objects together with linkaxes does not make the plots disappear:
% Generate UIFigure
ufig = uifigure;
% Create three uiaxes objects; place them side by side
ax1 = uiaxes(ufig, 'Position', [ 0 0 200 200]);
ax2 = uiaxes(ufig, 'Position', [200 0 200 200]);
ax3 = uiaxes(ufig, 'Position', [400 0 200 200]);
% Create plots for each uiaxes object
x = 1:10;
plot(ax1, x, x);
plot(ax2, x, x.^2);
plot(ax3, x, x.^3);
% Attempt to link the uiaxes together
linkaxes([ax1, ax2, ax3]);
I would appreciate any assistance that one could offer to help me understand and, if possible, correct this issue.
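For reference, a possible workaround (under the assumption that the issue lies in linkaxes’ interaction with uigridlayout rather than in property linking itself) is to link the axis limits directly with linkprop:
% Link the X and Y limits of the three uiaxes without calling linkaxes:
hlink = linkprop([ax1 ax2 ax3], {'XLim', 'YLim'});
% The link is only active while the linkprop object exists, so store it somewhere persistent:
setappdata(ufig, 'axesLinkHandle', hlink);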
uigridlayout, uiaxes, linkaxes, plot, disappear, appdesigner MATLAB Answers — New Questions
Result in the Sentinel GUI (Incidents) / No results in logs (query)
Hey guys,
I have a problem understanding how Sentinel works. In my Sentinel, I can search for incidents dating back to the year 2022. However, when I try to find the same incidents with a Kusto query, it returns no results. Interestingly, when I attach a tag to one of these old incidents, it pops up in my query search. It feels like there are other tables that we cannot query or some settings are not correctly configured in my instance.
Does anyone know where I can find some information about this issue?
Big thanks,
Joe
Read More
SQL-Server Management Studio: deactivate wildcard expansion
Hello everybody,
I need to save a view on a table with a wildcard, like “SELECT * FROM country”.
However, MS SQL Management Studio automatically expands the wildcard into the fields of the table “country”, giving “SELECT country.id, country.description, country.president FROM country”.
Since the respective table is supposed to be user-defined, the view that I deploy must contain a wildcard rather than named fields.
So: how do I deactivate the automatic expansion of this wildcard?
Hoping for help
Joachim
Read More
High Confidence Phish not fixable
Hi all,
About a month ago we began getting reports from our customers that they were not receiving responses from our helpdesk. With a bit of further digging it transpired that the common factor was all these customers were using Office 365 with Microsoft Defender, and Microsoft Defender was seeing our main product’s portal URL as a phishing scam.
We raised a ticket with support, and have been going backwards and forwards with submissions over the last four weeks, but have made no progress.
Microsoft Defender’s quarantine process seems random – we make submissions, and out of 10 submissions for a URL we make in a week, 9 of them will come back with:
Unknown – We checked but can’t make a decision right now. We were unable to come to a decision regarding the item. This can occur for a variety of reasons, such as different interpretations by different analysts or the item being inaccessible. Please resubmit the item for analysis.
Occasionally, a URL will then come back as No Threats, and maybe for a day we are able to receive an email with said links in it. But then 1 or 2 days later, it reverts back to being a phishing link.
It has mostly been confined to the same sub-domain where our portal lives, and different paths within that domain have alternated between being phishing and being no threat. Even when the URLs themselves are not treated as threats, the URL detonation reputation mechanism has marked the entire email as phishing regardless.
Today we found out that Microsoft Defender is now classing our signup application which deals with new account sign ups of our product, as a phishing URL. This stops anyone from signing up to our product. The sign up web application sits on a different sub-domain altogether from our main portal application.
We have repeatedly scanned all our applications and endpoints, along with our surface management tools, and are sure these are all false positives.
We implemented emergency mitigation procedures by buying a new domain that the sign up process can live on, and changing our entire sign up process to use this new domain.
As soon as the new process was live, we tested it all, the activation link email worked, but as soon as the completion “welcome email” is sent out, that now gets caught as a high confidence phishing scam because Microsoft Defender has, as of this afternoon, decided that our documentation site that is linked from the welcome email, is also a high confidence phishing URL. This is a simple HTML set of pages served by GitHub pages which is totally detached from the rest of our infrastructure.
We have done four submissions so far for the documentation URL, and each time we have had that same “Unknown” result we constantly see.
We offer billing services in a mostly B2B market, so the majority of our customers use Office 365. The impact this problem has had on our business is huge, and now with the problem spreading to other sub-domains within our application including our sign up, this now threatens the commercial viability of our business.
Microsoft Defender submissions appear completely broken, they are not able to analyse or permanently determine the status of a URL.
We have gone backwards and forwards with support, and repeatedly asked to have this matter escalated, but been met with no response.
The threat to our business is huge, but we appear to have no recourse to rectify this problem with Microsoft, where does one go from here?
Read More
Secure your business: Four ways Microsoft 365 for Business can help
Cybersecurity has become a critical concern for businesses of all sizes: 82% of ransomware attacks target small and medium-sized businesses [1]. In today’s digital landscape, businesses face an array of security threats that can compromise sensitive data, disrupt operations, and diminish customer trust. Small businesses need solutions that are effective, scalable, and user-friendly so they can operate securely. Microsoft 365 for Business offers a suite of solutions tailored to meet productivity and security needs, with three distinct plans: Microsoft 365 Business Basic, Microsoft 365 Business Standard, and Microsoft 365 Business Premium.
Business Basic includes key Microsoft cloud services like Microsoft Teams and Microsoft SharePoint, apps like Microsoft Word and Microsoft Excel, and foundational identity, email, and mobile device security that’s essential for protecting your business. Business Standard includes everything in Business Basic plus desktop apps like Microsoft Loop and Microsoft Clipchamp. Business Premium takes security to the next level, offering extensive protection that includes everything in Business Standard, plus comprehensive cybersecurity features from Microsoft Intune, Microsoft Purview, Microsoft Entra ID, Azure Virtual Desktop, and Microsoft Defender. With Business Premium, users benefit from layered protection for devices, email, and collaboration content, as well as data encryption, sensitivity labels, and Data Loss Prevention (DLP) capabilities.
By choosing Microsoft 365 for Business, you can leverage security solutions that are not only effective, but also scalable and user-friendly, allowing you to manage your business with confidence. In this blog, we’ll discuss four ways Microsoft 365 for Business can enhance your company’s security.
Enable your employees to access business data and applications with identity and access management
Your online security and privacy depend on using strong, unique passwords for each of your accounts. However, passwords alone are prone to loss or theft, which cybercriminals can use to access your data. To enhance your protection, additional identity and access controls are necessary.
Both Business Basic and Business Standard plans include Microsoft Entra ID Free, enabling you to set up multi-factor authentication (MFA) for your Microsoft accounts and apps. Beyond entering your password, you can use your phone, email, or an app to receive a code or approve a sign-in request. This extra layer of security can block over 99.9% of account compromise attempts [2].
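As a side note on how those app-generated codes work under the hood: most authenticator apps implement the time-based one-time password algorithm (TOTP, RFC 6238). The minimal sketch below derives a 6-digit code from a shared secret and the current time; the base32 secret is a made-up example, not anything tied to a real account.

```python
# Minimal TOTP (RFC 6238) sketch: the scheme behind the rotating 6-digit
# codes shown by authenticator apps. Uses only the standard library.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period        # 30-second time step
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up example secret
```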
Business Premium takes identity and access controls further by incorporating Microsoft Entra ID P1, offering enhanced identity protection through conditional access policies. These policies grant or block access based on user identity, location, and sign-in method, ensuring business data remains secure by permitting access only under specific criteria. For example, if cybercriminals overseas attempt to breach your company’s data using stolen passwords, conditional access policies in Microsoft Entra ID P1 with Microsoft 365 Business Premium can automatically block the sign-in, or require additional verification via MFA, for login attempts from countries where your business doesn’t operate. This helps ensure that only authorized personnel can access your data, no matter where or when they log in, providing robust protection against unauthorized access.
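To make this concrete, here is a hedged sketch of what such a policy can look like when created through the Microsoft Graph conditional access API. The access token and named-location ID are placeholders for values from your own tenant, and the policy is created in report-only mode so its impact can be observed before enforcement; treat it as an illustration, not a drop-in configuration.

```python
# Sketch: create a Conditional Access policy via Microsoft Graph that blocks
# sign-ins originating outside an allowed named location (e.g., your
# operating countries). TOKEN and ALLOWED_LOCATION_ID are placeholders.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-Policy.ReadWrite.ConditionalAccess>"  # placeholder
ALLOWED_LOCATION_ID = "<named-location-id>"                       # placeholder

policy = {
    "displayName": "Block sign-ins outside operating countries",
    # Report-only mode: log what the policy would do before enforcing it.
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": [ALLOWED_LOCATION_ID],
        },
    },
    # Block access when the conditions match; swapping "block" for "mfa"
    # would require additional verification instead of an outright block.
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy:", resp.json().get("id"))
```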
Keep your business data safe and secure when collaborating
Working in a hybrid environment presents challenges in communicating and collaborating with teams, customers, and partners while protecting your business data. With Business Basic and Business Standard, you can use SharePoint to manage file access and Microsoft Teams for secure collaboration within your organization. For customers deploying Copilot for Microsoft 365 who want to prevent oversharing or the surfacing of sensitive files, Restricted SharePoint Search helps ensure that only content from reviewed and governed sites appears in search results, providing a secure and controlled generative AI experience. Both plans also include audit capabilities, allowing you to track and review user activities, access, and changes to documents, which helps maintain security and compliance.
Business Premium extends the capabilities in Business Basic and Business Standard with additional Microsoft Purview functions, allowing you to label and protect sensitive files and emails, including those used or generated by Copilot for Microsoft 365. For example, imagine you own a retail shop that collects and stores confidential information, like credit card numbers, in an Excel file for future use. The file might be password-protected, but it’s frequently shared via email for company use, which means anyone could download the document and save their own copy. With Business Premium, you get advanced capabilities like DLP and Microsoft Purview Information Protection to help classify and protect sensitive data such as customer or employee information, confidential business data, Social Security numbers, and credit card numbers. By applying the appropriate label and protection policies, such as “encrypt” or “do not forward”, you can help ensure sensitive information stays secured wherever it goes.
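For a feel of what a DLP classifier is doing when it flags a credit card number, here is a small illustrative sketch: find digit runs that look like card numbers, then validate them with the Luhn checksum. This shows the general technique only; it is not how Microsoft Purview is actually implemented.

```python
# Illustration of DLP-style sensitive-data detection: candidate credit card
# numbers are matched with a regex, then validated with the Luhn checksum.
import re

def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2])          # digits in odd positions count as-is
    for d in digits[1::2]:             # every second digit is doubled
        d *= 2
        total += d - 9 if d > 9 else d
    return total % 10 == 0

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits, separators allowed

def find_card_numbers(text: str) -> list[str]:
    hits = []
    for match in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

print(find_card_numbers("Card on file: 4111 1111 1111 1111, expires 12/26"))
# -> ['4111111111111111']  (a well-known test number, not a real card)
```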
Manage your devices to keep them compliant and secure
Device management helps ensure that your devices remain secure, up to date, and compliant with your organizational policies, preventing unauthorized access, security breaches, data loss, and other incidents that could harm your business. Business Basic and Business Standard include Basic Mobility and Security, allowing you to apply essential security policies and access rules to your mobile devices. For example, you can require a PIN or password for device access, or remotely wipe a device if it’s lost or stolen.
Business Premium takes this a step further with Microsoft Intune P1, offering advanced tools to manage Microsoft 365 resources on both personal and company-owned devices. For instance, you can deploy and protect applications across various devices and platforms, enforce compliance policies to ensure devices meet security standards, restrict data sharing to prevent leaks, configure device settings to maintain consistency, and monitor device health and status to proactively address potential issues. In particular, Intune app protection policies can help separate work apps from personal apps, so work documents and files are saved only in authorized and secure locations, like OneDrive for Business, to protect sensitive information.
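As an illustration of how such a policy can be automated, the sketch below creates an iOS app protection policy through the Microsoft Graph device app management API. The property names follow Graph’s iosManagedAppProtection resource, the token is a placeholder, and assigning the policy to specific apps and groups is a separate step not shown here; treat this as a sketch under those assumptions.

```python
# Sketch: create an Intune iOS app protection policy via Microsoft Graph so
# work data stays inside managed apps. TOKEN is a placeholder; targeting the
# policy at apps/groups happens in a separate call.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token-with-DeviceManagementApps.ReadWrite.All>"  # placeholder

policy = {
    "displayName": "Protect work data in managed apps",
    "pinRequired": True,        # require a PIN to open managed apps
    "dataBackupBlocked": True,  # keep work data out of device backups
    "saveAsBlocked": True,      # block saving copies outside managed locations
    # Only allow data transfer into other policy-managed apps.
    "allowedOutboundDataTransferDestinations": "managedApps",
}

resp = requests.post(
    f"{GRAPH}/deviceAppManagement/iosManagedAppProtections",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created app protection policy:", resp.json().get("id"))
```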
Defend your business against cyberthreats like phishing and ransomware
Cybercriminals are constantly seeking ways to deceive you into giving up your information or infecting your systems with malicious software. One of their most common methods is phishing, where they impersonate a trusted entity and send you an email or message with a link or attachment that leads to a fake website or downloads malware onto your device. Phishing is pervasive and effective; the frequency of business email compromise (BEC) attacks has skyrocketed to over 156,000 daily attempts. Microsoft data shows attempted password attacks increased more than tenfold in 2023, from around 3 billion per month to over 30 billion [3]. These attacks can lead to identity theft, data breaches, ransomware, and other serious consequences.
Microsoft 365 for Business offers security solutions to protect against growing threats. Business Basic and Business Standard include Exchange Online Protection (EOP), a cloud-based service that filters out spam, malware, phishing, and other email threats before they reach your inbox. Business Premium offers enhanced protection with Microsoft Defender for Office 365 P1, which extends security features to Microsoft Outlook, Microsoft Teams, Microsoft OneDrive, and Microsoft SharePoint. It includes layered protection with Safe Links and Safe Attachments: Safe Links checks the links in your emails and messages in real time, blocking access to malicious or compromised websites, while Safe Attachments scans your attachments for malware, removing or replacing unsafe files.
Business Premium also includes sophisticated ransomware protection with Microsoft Defender for Business, which helps secure devices across Windows, macOS, iOS, Android, and Linux from a single solution. It offers AI-powered endpoint detection and response (EDR) capabilities, including automatic attack disruption, an industry-first capability that correlates millions of individual signals to identify active ransomware or other sophisticated attacks with high confidence. It then disrupts the attack in real time by automatically containing the compromised assets the attacker has access to, stopping them from going any further in your environment. This capability limits a threat actor’s progress early on and dramatically reduces the overall impact of an attack, from associated costs to lost productivity. Automatic attack disruption is on by default, so you can focus on what really matters: your business. This proactive approach helps ensure your devices are not only protected from known threats but also resilient against emerging ones, significantly reducing the risk of data breaches and security incidents.
Don’t wait to protect your business
Cyberattacks are becoming increasingly sophisticated and are having a greater impact on small and medium-sized businesses. Microsoft 365 for Business offers productivity and security solutions tailored to your business’s specific security needs, helping you run your operations securely and efficiently.
Partner tools to manage Microsoft 365 business customers with Microsoft 365 Lighthouse
Partners play an important role in helping small and medium-sized businesses stay secure. Recognizing this, we built Microsoft 365 Lighthouse, a unified portal for managed service provider (MSP) partners that helps you manage Microsoft 365 business accounts across multiple clients from a single pane of glass. This tool allows partners to streamline security management, monitor compliance, and respond to threats efficiently – helping ensure that all managed businesses maintain robust security standards.
Learn more to help secure your business
Customer resources:
Check out the infographic to see how the Microsoft 365 for Business plans compare: aka.ms/SMB-infographic
Compare all Microsoft 365 plans
Explore how Microsoft Security for Business can help (SMB security page)
Learn about Microsoft 365 for Business Security best practices: https://aka.ms/SMB-best-practices
Partner resources:
Get the Business Premium partner playbook: aka.ms/M365BPPartnerPlaybook
Discover how to grow your business at scale with Microsoft 365 Lighthouse: aka.ms/M365Lighthouse
References:
[2] https://www.microsoft.com/security/blog/2019/08/20/one-simple-action-you-can-take-to-prevent-99-9-percent-of-account-attacks/ (based on a Microsoft internal study)
[3] Microsoft Digital Defense Report 2023 (MDDR)
Microsoft Tech Community – Latest Blogs – Read More
What is the best way to heat up a thermal liquid?
Hello everyone,
In your opinion, what is the best way to heat up a thermal liquid, knowing the heat flow (in W)? I am currently using a pipe with a controlled heat flow source, but the fluid is not heating up correctly. Thank you in advance.
simscape, simulink, simulation, matlab MATLAB Answers — New Questions