Month: August 2024
What’s new: Multi-tenancy in the unified security operations platform experience in Public Preview
Multi-tenancy for Microsoft Sentinel in the Defender portal (unified security operations platform)
Multi-tenancy, with a single workspace per tenant, is now in public preview for customers using Microsoft’s unified security operations (SecOps) platform. This expands the use cases we can support with this innovative experience, which brings together the critical tools a SOC requires into a single experience to improve protection and efficiency. Read on to learn more about what is available now and how to get started.
What is Microsoft’s unified SecOps platform?
The unified security operations platform provides a single experience for Microsoft Sentinel and Defender XDR, along with Copilot for Security, exposure management and threat intelligence, in the Defender portal. The unified SecOps platform is in GA for commercial cloud customers with both Microsoft Sentinel and Defender XDR.
What are we enabling with the public preview of multi-tenancy in the unified security operations (SecOps) platform?
Multi-tenancy, now in public preview, supports managed security service providers (MSSPs) and enterprises in protecting their whole environment. Previously, customers had to manage this separately: in Microsoft Sentinel with Azure Lighthouse, and in Microsoft Defender with Multi-tenant Organization (MTO).
This release does not include multi-tenancy for Copilot for Security, Threat Intelligence, or exposure management.
With this public preview, customers can:
Detect and investigate incidents with better accuracy: Multi-tenant customers can triage incidents and alerts across SIEM and XDR data.
Improve threat hunting experience: Users can now proactively search for data across multiple tenants, including SIEM and XDR data.
Unify management: Customers can now manage the threat protection tools for all of their tenants in a single place.
What value do MSSPs and multi-tenant organizations get from using the unified platform?
Enhanced detection and response: Incidents and alerts are automatically correlated across SIEM and XDR data, providing a comprehensive and accurate picture of multistage attacks. This holistic view improves detection and response times, ensuring threats are identified and mitigated more effectively.
Streamlined investigation: Out-of-the-box enrichments such as device, user, and other entity information from Microsoft Defender simplify the investigation process. These enrichments provide additional context and insights, making it easier to understand and respond to security incidents. It is also possible to hunt for threats across all SIEM and XDR data without ingesting XDR data.
Scalability and flexibility: The unified platform is designed to scale with your business, accommodating the needs of growing customer bases and evolving security landscapes. This flexibility ensures that MSSPs can continue to deliver high-quality security services as their operations expand.
Comprehensive threat intelligence: Access to Microsoft’s extensive threat intelligence network provides MSSPs with up-to-date information on the latest threats and vulnerabilities. This intelligence helps in proactively defending against emerging threats and staying ahead of attackers.
Seamless Integration: The platform integrates seamlessly with existing security tools and workflows, minimizing disruption and maximizing the value of existing investments. This integration ensures a smooth transition and enhances overall security posture.
How many workspaces can I manage through multi-tenancy in the unified SecOps platform?
The unified SecOps platform’s multi-tenant management feature enables the handling of multiple tenants through a unified interface. Currently, each tenant is limited to one workspace. Multi-workspace support is on the way; to participate in our private preview, please join our connected community.
What are the requirements to utilize multi-tenant management in the unified security operations platform?
Customers must be using Microsoft Sentinel and at least one Defender XDR workload.
Users must have delegated access to more than one tenant enrolled in the unified SecOps platform, using Azure B2B collaboration.
To learn more about scalable B2B deployment for Defender, navigate to Secure and govern security operations center (SOC) access in a multitenant organization with Microsoft Defender for Cloud XDR and Microsoft Entra ID Governance – Microsoft Entra ID | Microsoft Learn
Are Azure Lighthouse and GDAP supported?
Not yet.
How do I use multi-tenant management in the unified SecOps platform?
Navigate to mto.security.microsoft.com
Who is the intended user for multi-tenant management within the unified SecOps platform?
Any managed security service provider (MSSP) aiming to handle security for multiple client organizations, or any large, multinational enterprise.
How can I provide feedback?
The best way to provide feedback is in product, as shown here.
To provide feedback on private preview features, you can join Microsoft’s Customer Connection Program. Learn more at https://aka.ms/MSSecurityCCP.
What are the licenses required to use this new feature?
No license is required for this feature itself. However, to access multiple tenants’ data, each tenant must have its own licenses.
Are there any additional ingestion costs?
Multi-tenant management does not incur additional ingestion costs. In fact, there is the potential for cost savings when using the unified security operations platform experience as customers do not need to ingest their Defender XDR data into Microsoft Sentinel in order to correlate incidents or hunt for threats. Ingestion is still required for extended retention.
Learn more and get started now:
https://learn.microsoft.com/en-us/defender-xdr/mto-overview
Microsoft Sentinel in the Microsoft Defender portal | Microsoft Learn
Microsoft Tech Community – Latest Blogs
The issue of optimization (minimization) of the average relative error between experimental and calculated data
Hello,
I want to share a difficulty I ran into. Can someone help?
Problem statement:
There is an ‘x’ column containing the values of the independent variable and a ‘y’ column containing the experimental values of the dependent variable.
The approximation model considered is:
y_calculate = A*x^B + C,
and based on this model an objective function is created, equal to the mean relative deviation between y and y_calculate:
error_function = mean(abs(y - y_calculate) ./ y) = mean(abs(y - (A*x^B + C)) ./ y);
Our goal is to select parameters A,B,C in such a way that ‘error_function’ takes the value of the global minimum.
I calculated the optimal values of A, B, C and got:
A = 85.5880, B = -0.0460, C = 4.8824,
at which the error function value for the optimized parameters is 0.0285.
but I know in advance the specific values of A, B, C:
A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400,
at which the error function value for the specific parameters is 0.002680472178434,
which is much better than with optimization
Below is the code with visualization, which confirms the above.
clear
close all
% Data
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674, 66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348, 124.6608, 132];
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375, 74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990];
% Initial guesses for parameters A, B, C
initial_guess = [1, 1, 1];
% Error function
error_function = @(params) mean(abs(y - (params(1) * x.^params(2) + params(3))) ./ y);
% Optimization of parameters
optimized_params = fminsearch(error_function, initial_guess);
% Results of optimization
A_optimized = optimized_params(1);
B_optimized = optimized_params(2);
C_optimized = optimized_params(3);
% Calculation of the fitted function for optimized parameters
y_calculate_optimized = A_optimized * x.^B_optimized + C_optimized;
% Calculate and display the error function value for optimized parameters
value_error_optimized = error_function(optimized_params);
fprintf('Optimized parameters:\nA = %.4f\nB = %.4f\nC = %.4f\n', A_optimized, B_optimized, C_optimized);
fprintf('error function value for optimized parameters: %.4f\n', value_error_optimized);
% Other specific parameters A, B, C
A_specific = 1005.63358529310;
B_specific = -1.59745963925582;
C_specific = 73.541497447544;
% Calculation of the fitted function for specific parameters
y_calculate_specific = A_specific * x.^B_specific + C_specific;
% Calculate and display the error function value for specific parameters
value_error_specific = error_function([A_specific, B_specific, C_specific]);
fprintf('Specific parameters:\nA = %.10f\nB = %.14f\nC = %.14f\n', A_specific, B_specific, C_specific);
fprintf('error function value for specific parameters: %.4f\n', value_error_specific);
% Visualization
figure;
plot(x, y, 'bo-', 'DisplayName', 'Experimental data');
hold on;
plot(x, y_calculate_optimized, 'r--', 'DisplayName', 'Fitted model (Optimized)');
plot(x, y_calculate_specific, 'g-.', 'DisplayName', 'Fitted model (Specific)');
xlabel('x');
ylabel('y');
legend('Location', 'best');
title('Approximation of experimental data');
grid on;
Obviously, my optimization code does not reach the global minimum of the objective function, since a better approximation exists for the specific values of A, B, C. Perhaps this is caused by an arbitrary choice of the initial parameter values A=1, B=1, C=1, so my code gets stuck in a local minimum?
Who can write code that selects the A, B, C parameters so as to reach the global minimum of the objective function ‘error_function’ for any initial values of A, B, C? A check for testing: the value of ‘error_function’ should be no worse (that is, no larger) than 0.002680472178434, which is obtained with the specific values A = 1005.6335852931, B = -1.59745963925582, C = 73.54149744754400.
optimization MATLAB Answers — New Questions
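The local-minimum suspicion is well founded: the objective is non-convex, and a single fminsearch run from [1, 1, 1] can stall far from the best fit. A standard remedy is a multi-start strategy: run a local search from several spread-out starting points and keep the best result. Below is a hedged, dependency-free Python sketch (not from the original thread); `compass_search` is a hypothetical stand-in for fminsearch that only ever accepts improvements, so refining from a good start can never make the objective worse.

```python
# Multi-start minimization sketch for error_function = mean(|y - (A*x^B + C)| / y).
x = [7.3392, 14.6784, 22.0176, 29.3436, 36.6828, 44.0088, 51.3348, 58.674,
     66, 73.3392, 80.6652, 88.0044, 95.3304, 102.6696, 109.9956, 117.3348,
     124.6608, 132]
y = [115.1079, 87.7698, 80.5755, 78.1611, 76.5743, 75.7074, 74.9375,
     74.9453, 74.59, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990, 74.2990,
     74.2990, 74.2990, 74.2990]

def error_function(p):
    """Mean relative deviation of the model A*x^B + C from y."""
    A, B, C = p
    return sum(abs(yi - (A * xi**B + C)) / yi for xi, yi in zip(x, y)) / len(x)

def compass_search(f, p0, step=1.0, tol=1e-10, max_iter=2000):
    """Tiny derivative-free local search: try +/- step on each coordinate,
    keep only improvements, halve the step when stuck."""
    p, fp = list(p0), f(p0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(p)):
            for d in (step, -step):
                q = list(p)
                q[i] += d
                fq = f(q)
                if fq < fp:
                    p, fp, improved = q, fq, True
        if not improved:
            step *= 0.5
    return p, fp

# The known-good parameters from the question.
specific = [1005.6335852931, -1.59745963925582, 73.54149744754400]
err_specific = error_function(specific)

# Multi-start: a naive start, a mid-range start, and the known-good start.
starts = [[1, 1, 1], [100, -1, 50], specific]
best_p, best_err = min((compass_search(error_function, s) for s in starts),
                       key=lambda r: r[1])
print(err_specific)  # ~0.00268, matching the value quoted in the question
print(best_err)      # by construction, no worse than err_specific
```

In MATLAB the same idea is available out of the box: loop fminsearch over a grid or random set of initial guesses, or use MultiStart/GlobalSearch from the Global Optimization Toolbox.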
Warning: JPEG library error (8 bit), “Invalid SOS parameters for sequential JPEG”.
Hello there!
I am trying to modify the pretrained network, alexnet, for my images, which are in jpg format. The code after creating the datastore is here:
forestzones = imds.Labels;
[trainimgs,testimgs] = splitEachLabel(imds,0.8,'randomized');
foractual = testimgs.Labels;
trainimgs = augmentedImageDatastore([227 227],trainimgs);
testimgs = augmentedImageDatastore([227 227],testimgs);
anet = alexnet;
layers = anet.Layers;
fc = fullyConnectedLayer(2);
layers(23) = fc;
layers(end) = classificationLayer;
opts = trainingOptions('sgdm','InitialLearnRate',0.01);
[fornet,info] = trainNetwork(trainimgs,layers,opts)
I get >> Warning: JPEG library error (8 bit), "Invalid SOS parameters for sequential JPEG".
As a result I am not getting a proper info table.
What is the problem?
Thank you in advance!
deep learning, image processing MATLAB Answers — New Questions
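The warning usually means one or more JPEG files in the datastore are corrupt or saved in an unusual form, and the common fix is to find the offending files and re-save or remove them. As a hedged illustration (not MATLAB, and only a rough screen rather than a full JPEG parser), a small stdlib-only Python script can flag files missing the standard JPEG start/end markers:

```python
# Rough screen for broken JPEGs: flag files lacking the SOI (FF D8) start
# marker or EOI (FF D9) end marker. Flagged files are candidates for
# re-saving with a known-good encoder or removal from the datastore.
import os

def looks_broken(path):
    """Return True if the file is missing the JPEG start/end markers."""
    with open(path, "rb") as f:
        data = f.read()
    # Some writers pad the tail with zero bytes, so strip those first.
    tail = data.rstrip(b"\x00")
    return not (data[:2] == b"\xff\xd8" and tail[-2:] == b"\xff\xd9")

def scan_folder(folder):
    """List JPEG filenames in `folder` that fail the marker check."""
    bad = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg")):
            if looks_broken(os.path.join(folder, name)):
                bad.append(name)
    return bad
```

Note that "Invalid SOS parameters" can also come from files that pass this marker check but use an unusual encoding, so re-saving every image with a known-good encoder (for example, reading and rewriting each file with an image library) is the more thorough fix.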
How can I efficiently save and access large arrays generated in nested loops?
I need to run nested for-loops over the variables J1 and J2. The range for J1 is 1 to 41, and the range for J2 is 1 to 9. Inside these loops, I evaluate 16 functions, each of which returns an array of complex numbers with a size of 500 by 502.
I used the following given method to save the data, and it produced an 11 GB file, which seems very large. Is this normal? What is an efficient way to save this data at the end of the calculation?
What I want to do with this data afterward:
I will need to access the 16 arrays, A1 to A16, within the same J1 and J2 loop to perform other operations. Therefore, I want to store the data in a way that allows easy access to these 16 arrays within the loops.
My method to store data:
all_data = cell(41,9);
for J1 = 1:41
for J2 = 1:9
% evaluate 16 functions to get 16 arrays (A1 to A16) of size 500 x 502:
all_data{J1,J2} = struct("A1", A1, ...
    "A2", A2, ...
    "A3", A3, ...
    "A4", A4, ...
    "A5", A5, ...
    "A6", A6, ...
    "A7", A7, ...
    "A8", A8, ...
    "A9", A9, ...
    "A10", A10, ...
    "A11", A11, ...
    "A12", A12, ...
    "A13", A13, ...
    "A14", A14, ...
    "A15", A15, ...
    "A16", A16);
end
end
save('Saved_Data.mat','-v7.3');
save(‘Saved_Data.mat’,’-v7.3′); storage, data, big data, matlab MATLAB Answers — New Questions
Problems with dynamic tables and charts
Good day,
I have a question with regards to dynamic tables and charts.
I set up three columns for five food types using an ActiveX combobox so that I can switch between different food types. This was done for 12 months for 5 products (per food type).
The first column is Product and has 60 rows. The second column is Sales volume and also has 60 rows and the third column is Month and also has 60 rows populated with data.
So, I want to create a line chart displaying the Months on the x-axis with a line for each product and Sales volume on the y-axis.
The problem is, when I set up the y-axis (vertical), I am extracting data from the Sales volume column which has 60 rows populated with data. This is populated perfectly in the chart.
I created two helper columns: one for the Products (since each product repeats multiple times, I had to extract only the five products that I needed, which updates dynamically) and one for the Months (as there are 60 rows populated with month data, the months repeat multiple times).
However, when I extract data from the Month helper column, Excel gets confused because it is now pulling from only 12 rows (12 months) instead of the 60 rows used for the Sales volume (y-axis).
So, when I try to populate the Months on the x-axis, Excel sees the 48 blank cells as data, and I have tried just about everything to get it to ignore those cells, but nothing works. I also can’t manually delete the blank cells from the horizontal axis data input for the line graph.
So, the months are squashed in so tightly that they are written over each other on the x-axis.
How do I fix this error? And also, how do I populate a line for each product for each food type in the dynamic chart? I am struggling with that as well.
I would really appreciate your help.
Kind regards,
Heinrich
Defender for Endpoint License Consumption
Good day!
I just want to ask how license consumption works in MDE. I have 5 devices onboarded to MDE using a local script. All are AAD joined, and users with an MDE license are logged in.
Upon checking security.microsoft.com > Settings > Endpoints > Licenses, it shows 0/20 licenses used, but Monthly active devices is 5.
Can anyone help me with this?
TIA!
Project Online: SharePoint Custom Script control impact
In February 2024, Message Center post MC 714186 detailed changes coming to Custom Script settings in SharePoint Online. The MC post did not specifically call out Project Web App sites in Project Online, but they are impacted by these changes in a number of ways. There are settings to re-enable these features, which will be explained later, but the typical issues reported by customers are:
Project Web App web parts are no longer listed as a web part Category
Save site as template is no longer available
Script Editor and Content Editor parts no longer available to add to a page (these are often used to add functionality to Project Web App)
Custom Fields added to Project Detail Pages (PDPs) do not ‘stick’ and although it looks like you added them, they are not present when you stop editing
Reports of 3rd party applications that automate steps like those above, have also been reported
Apart from the last bullet point, nothing breaks or stops working; the steps to use these features simply require an additional action to ‘unblock’ these scenarios. If you use custom script today, say a button that is set to publish all your projects, then this will still work, regardless of the settings described here. These settings only apply to changing things – for example, if the code behind the button needed to be edited.
Steps to re-enable above features
Until November 2024, a single PowerShell command can be run to ensure that any unblocking lasts until November 2024. After that time, or if you do not run that command, you will need to unblock for each 24-hour period in which you want to carry out any of the listed actions.
A tenant admin can choose to run a new PowerShell command in the SharePoint Management shell version 16.0.24524.12000 or later, after executing Connect-SPOService:
NOTE: When implementing this make sure you understand the security implications.
Connect-SPOService -Url https://<tenantname>-admin.sharepoint.com
Set-SPOTenant -DelayDenyAddAndCustomizePagesEnforcement $True
Even if the admin chooses to set this option, they will still need to proceed with the next steps to allow any of the changes bulleted at the top of this article to be carried out. The DelayDenyAddAndCustomizePagesEnforcement setting just prevents the unblock from reverting after 24 hours. These steps require a SharePoint Administrator to either run a PowerShell script or make a setting change in the SharePoint Admin Center. Be aware that the PWA admin, even though a site collection admin, may not be a SharePoint Admin.
Via PowerShell, for the PWA site they would need to run (after Connect-SPOService):
Set-SPOSite <SiteURL> -DenyAddAndCustomizePages 0
where <SiteURL> is your PWA site. Once complete, you should be able to carry out the options as usual. A quick check is to see if Save site as a template is present again in Site Settings.
Via SharePoint Admin Center, the admin would navigate to Sites, Active Sites, browse/search for the site and click the site name. In the pane that opens, navigating to the Settings tab will show an option for Custom scripts, which will say Blocked and have an option to Edit underneath.
Clicking Edit shows an option to set to Allowed rather than Blocked, along with some warning text, and a reminder that this will revert in 24 hours. This reminder shows even if the script to stop the reversion has been executed.
Clicking Save brings up further warnings, a link to the security implications article referenced above and requests for the change to be Confirmed.
The Active Sites list now has an additional column to expose the Custom script setting, along with a useful new filter that shows all sites where custom scripts are set to Allowed.
The SharePoint admin could also choose to revert the change once the edits are complete, either through the UI or via PowerShell, rather than wait for the automatic reversion after 24 hours. That probably isn’t a bad idea.
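Putting the pieces together, the whole admin round trip can be sketched as follows. This is a hypothetical sequence, assuming a tenant named contoso and a PWA site at /sites/pwa; adjust the URLs for your environment.

```
# Connect to the SharePoint Online admin endpoint (assumed tenant name: contoso)
Connect-SPOService -Url https://contoso-admin.sharepoint.com

# Unblock custom script on the PWA site (assumed site URL)
Set-SPOSite https://contoso.sharepoint.com/sites/pwa -DenyAddAndCustomizePages 0

# ...carry out the blocked actions (edit PDPs, save site as a template, etc.)...

# Optionally re-block straight away instead of waiting for the 24-hour reversion
Set-SPOSite https://contoso.sharepoint.com/sites/pwa -DenyAddAndCustomizePages 1
```

Re-blocking immediately keeps the window in which custom script is allowed as short as possible.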
This change has been introduced because of concerns over what custom script can do, and the scenarios it affects are not usually ‘every day’ occurrences within PWA. However, I appreciate that needing a SharePoint admin to get involved makes it more than a minor inconvenience. We are still exploring what other options we might have here, but it is best to plan on involving the SharePoint admins when doing these kinds of edits in future. Although the issue uses the term ‘custom script’, many of the blocked options may not look like custom scripting at all; it is more about what you ‘could’ be doing. For example, editing a PDP to just add an Enterprise Custom Field doesn’t add any custom script, but while editing you ‘could’ add a script editor web part that contains script. The block is there to ensure a conscious decision is being made about these actions.
We are also working on a Learn.Microsoft.com article, which should be out shortly. We should certainly have ensured the Message Center post also mentioned Project Online explicitly, so apologies for the lack of communication and the confusion around these changes.
Not applicable to PWA, but for reference.
One point of confusion is that the SharePoint Message Center post and subsequent article start off with the settings that allow users to run custom scripts on OneDrive and self-service created sites. Project Web App does not come under this category of site, so this setting does not need to be enabled for the above scenarios to work; the steps above are all that are needed. The settings page is also changing, with the prevention on personal sites no longer able to be changed. It may look like this:
Or the latest screenshot of SharePoint Classic settings section on Custom Script looks like this one. Either way, this doesn’t apply to Project Online sites.
Microsoft Tech Community – Latest Blogs
License installation key for 2022A
I need a LIK (License Installation Key) for 2022A. MATLAB Answers — New Questions
v24.151.0728.0003 (64 bits) – BUG (old) – cannot start sync a shared folder
The user received permission on a shared folder, she is able to create/rename/delete via web browser – everything is fine.
The problem starts when the user tries to sync that shared folder on her Windows 10 desktop :pouting_face:.
We noticed that the OneDrive app registers a physical folder for it under the profile path, but never starts the sync, even after waiting more than an hour.
If you go to the app settings and then try to ‘stop the sync’, nothing is actually stopped.
If you try to select some folder on that share resource, you can see an error like the image below. :pouting_face:
If we open a ticket with the M365 support team, they ask us to start the ‘OneDrive reset procedure’… We have been doing this almost every month for many users with different computers (new ones with a fresh install and also old computers reinstalled from scratch) and the behavior is the same.
The only option is to instruct the user to access all files via the browser while the computer is resetting the OneDrive settings, which takes more than four hours.
—
Come on Microsoft developers!
We are in 2024; why does OneDrive still use a ‘global.ini’ file instead of the Windows registry? Is Microsoft planning to bring back ‘Windows 3.11’????? :face_with_rolling_eyes:
—-
* M365 Tenant with users with ‘M365 E3’ license and Windows 10 Pro from Dell.
Number of days + or –
I am trying to get the difference in the number of days between two dates and it works if the target completion date is before the completion date using this formula: =DATEDIF(H3,J3,”d”). I do get the correct number of days using this formula, however, I would ideally like is a plus or minus in front of that number (e.g., + if the project went past the target date, and – if the project was completed sooner). I can apply conditional formatting to the rows after to change the color of the number if I get that far.
When the completion date is before the target completion date (e.g., completed project early), I get the #NUM! error, which I expected. But I cannot find a way to get exactly what I am looking for. Hopefully I am explaining this clearly. The formula in L4: =DATEDIF(H4,J4,”d”)
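One way to get a signed result, sketched here under the assumption that column H holds the target completion date and column J the actual completion date, is to skip DATEDIF and subtract the dates directly (Excel stores dates as serial numbers, so subtraction can go negative); a custom number format then supplies the sign:

```
In the result cell:          =J3-H3
Custom number format:        +0;-0;0
```

With this, a project finishing after the target shows a positive number with a leading +, and an early finish shows a negative number, so no #NUM! error occurs either way.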
scheduling issue
My settings are correct, but an appointment was set at a time when I am both marked as unavailable and already have another person scheduled. They set an appointment for 1pm tomorrow, but the system put them in at 6am instead, and I already have someone in the 1pm slot. I have it set up so that clients can’t make appointments for anything before 1pm, so why the system is placing a 1pm appointment at 6am I have no idea. If you hadn’t sent the email about the appointment, I would not have known it was even there. Also, none of my appointments from my Outlook calendar appear on the Bookings page at all.
launch excel from access
Is there a way to launch or jump to excel from a form in access?
Allow use of One Time Password
Hello,
We have set up passwordless authentication using Conditional Access policies, which is working great. My question is how I can allow use of the one-time password (the 6-digit code in the authenticator) when the mobile device is offline and cannot receive the number-matching prompt. For example, the user is on a plane and has purchased Wi-Fi for the laptop, but the phone is offline, and they want to use the 6-digit code from the authenticator.
Trouble updating a single field in a sharepoint list item with automate
Greetings!
Not sure if this is the right forum, but…
I am having a bit of a problem. I have a SharePoint list where, amongst other fields, a field of the multiple “Person or Group” type exists; let us call it fieldA. I have a workflow with a trigger for a created or changed item. There is a condition in it which, when reached, must change a simple string field in the item that triggered it.
I go and get the item and then try to use the Update Item action. Now my problem starts.
I have read, whether it is correct or not, that Update Item MUST be given values for all fields which cannot be empty in the list. Indeed, when I try to do my update without setting values for the mandatory fields, I promptly get an error message when I try to save the workflow. However, when I place a reference to the current value of fieldA, it says that I am trying to alter a read-only property, DisplayName, even though I am just passing back the current value.
Is there not a way to change just a single field without even trying to touch the others? And if there is not, how do I go around this silliness of it apparently detecting that an equal Person means I am trying to change the DisplayName?
Thanks!
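One commonly used workaround, sketched below with hypothetical names (list “MyList”, item ID 123, internal field name “MyStringField”), is to bypass Update Item and call SharePoint’s ValidateUpdateListItem REST endpoint via the “Send an HTTP request to SharePoint” action; it only touches the fields you name, leaving mandatory fields such as fieldA untouched:

```
Method:  POST
Uri:     _api/web/lists/getByTitle('MyList')/items(123)/ValidateUpdateListItem
Headers: { "accept": "application/json;odata=verbose" }
Body:
{
  "formValues": [
    { "FieldName": "MyStringField", "FieldValue": "Done" }
  ]
}
```

Because only the listed form values are validated and written, the person field is never re-submitted and the DisplayName error does not occur.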
Return cursor to commandline after plotting
I just upgraded to R2024a and am using dark mode. Previously, in my R2023b release, if I used the plot function in the command line (e.g. "plot(randn(1, 100))") a figure would pop up, but the cursor would remain active in the command window. This was helpful for rapid data exploration – I could type a plot command, look at the results, then programatically close it in the command line and continue on.
Now, after a plotting function is called, the command window is no longer active. I have to manually select the command window using the cursor if I want to continue entering commands. This really slows me down. Is there some setting or preference that can be flipped to return control back to the command window? I tried using keyboard shortcuts to return to the command window (Ctrl+0, from Use Keyboard Shortcuts to Navigate MATLAB – MATLAB & Simulink (mathworks.com)), but this does nothing. Thanks. MATLAB Answers — New Questions
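One possible workaround (a sketch, not a preference setting) is the documented commandwindow function, which programmatically gives keyboard focus back to the Command Window after a plot call:

```matlab
plot(randn(1, 100));  % figure takes focus in R2024a
commandwindow;        % hand focus back to the Command Window
```

For rapid exploration this can also be chained on one line, e.g. `plot(randn(1, 100)); commandwindow`.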
How can I define the temperature in 2D domain at location x,y [rather than nodal locations] for steady state or transient solution?
I have been able to duplicate the results for the steady state and transient responses for the problem defined at
https://www.mathworks.com/help/pde/ug/heat-transfer-problem-with-temperature-dependent-properties.html
This example includes code to plot the temperature at a specific point in the block, in this case near the center of the right edge, as a function of time.
I would be interested to define the temperature and temperature history at any x,y location for the defined block with the slot in it, for example at the top right corner of the slot. MATLAB Answers — New Questions
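For reference, Partial Differential Equation Toolbox provides interpolateTemperature for exactly this; a minimal sketch, assuming `thermalresults` is the transient solution object from the linked example and `tlist` its time vector (the coordinates below are illustrative, not the actual slot corner):

```matlab
% Temperature history at an arbitrary point (x, y), e.g. a slot corner
xq = 0.05;  % assumed x-coordinate of the point of interest
yq = 0.55;  % assumed y-coordinate
Tq = interpolateTemperature(thermalresults, xq, yq, 1:numel(tlist));
plot(tlist, Tq)
```

For a steady-state solution the same call works without the time-index argument.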
Inside PnP Modern Search Web Part, how i can show the Date Time in the sharepoint local time zone
I am working on a SharePoint Online site collection which has the following regional settings:
And I have a SharePoint column of type Date/Time; inside the SharePoint list view the date/time is shown as follows:
which respects the site regional settings. For the above site column I have a managed property, and when I show this managed property inside the PnP Modern Search Results web part I get the value in UTC, as follows:
So how can I show the date/time inside my search results so that it matches what is shown inside the list view?
Here is how I am rendering the column value inside the Search Results web part settings (resource.fields.RefinableDate01):
Moving Bookmark in Edge in Windows 11
I absolutely love Microsoft Edge. Whoever decided to build it on the Chromium platform gets my positive vote. What I don’t like in Microsoft Edge is how bookmarks are placed in the top toolbar/taskbar that I use most often, such as bank, calendar, and Gmail. Why it was set up in such a frustrating way is beyond me. It is also a waste of time and energy. Why wasn’t moving bookmarks set up like it is in Google Chrome? There, all I have to do is open the page I want to bookmark and move it to the top taskbar/toolbar.
How-To Migrate ConfigMgr Apps to Intune Win32Apps
Hi, Jonas here!
Or as we say in the north of Germany: “Moin Moin!“
I’m a Microsoft Cloud Solution Architect and a while back I was asked how to migrate ConfigMgr apps to Intune. The result is a PowerShell script which can be used to analyze ConfigMgr apps and migrate them to Intune.
This article starts with a general overview about the process and contains an FAQ section instead of long text sections explaining everything in detail.
If you haven’t seen my other articles yet, feel free to check them out here: https://aka.ms/JonasOhmsenBlogs
Preparation
Before running any migration step, it is important to validate if a migration makes sense and saves time and resources compared to new apps created in Intune.
That decision depends on the following factors:
How many apps need to be migrated
If the apps we want to migrate can be migrated at all
If the migration would save time and resources compared to creating new apps in Intune
If there is an automated process in place to create apps in ConfigMgr, it might be better to change the process to automatically create the same apps in Intune and not migrate anything from ConfigMgr
If you are licensed to use the Intune Enterprise App Catalog. Take a look at the catalog and make sure to not migrate already existing applications. Read more about the Intune Enterprise App Catalog here
Migration process and prerequisites
The script I created to migrate apps from ConfigMgr to Intune has three modes, which should be run one after the other.
NOTE: The script does not require admin permissions as long as it has write permissions to the export folder (applies to all three modes). See “Process steps” for more details.
Step1 GetConfigMgrAppInfo
Exports ConfigMgr application metadata to a given folder.
This step will also analyze the metadata for any incompatible settings. See FAQ section for more details.
Requirements for this mode:
At least application read permissions in ConfigMgr
ConfigMgr admin console or ConfigMgr PowerShell CmdLets are not required.
Step1.1 Manually analyze the application metadata
While this is not a step of the script, it is important to validate the script output before going forward. More information can be found in the next section and the FAQ section down below.
Step2 CreateIntuneWinFiles
This step compresses the ConfigMgr app source data to create an intunewin file (required to be able to create Win32 apps in Intune).
Requirements for this mode:
The script will download the IntuneWinAppUtil.exe tool if not already present. See FAQ section for manual instructions.
At least read permissions for ConfigMgr app source path.
Connection to ConfigMgr not required.
Step3 UploadAppsToIntune
Creates a win32app in Intune and uploads the previously created intunewin file.
Requirements for this mode:
The script will install module: “Microsoft.Graph.Authentication” if not already present. See FAQ section for manual instructions.
The script requires the “DeviceManagementApps.ReadWrite.All” permission in the Microsoft Graph PowerShell Entra ID app. The script will ask to approve the permission if not already done. See FAQ section for more details. (Custom app registration possible)
Connection to ConfigMgr not required.
As already mentioned, please also have a look at the FAQ section further down below to better understand the script and its logic.
Get the script from here: Invoke-ConfigMgrAppMigrationToIntune.ps1
Process steps:
Run the script in “Step1GetConfigMgrAppInfo” mode like this:
IMPORTANT: Change all parameter values to match your environment for all commands mentioned in this blog post.
.\Invoke-ConfigMgrAppMigrationToIntune.ps1 -Step1GetConfigMgrAppInfo -Sitecode 'P02' -Providermachinename 'CM02.contoso.local' -ExportFolder 'E:\ExportToIntune'
NOTE: This step will first create the export folder if not already there and some sub-folders explained in the FAQ section.
Select some or all apps to let the script analyze if a migration would be possible.
In most cases it makes sense to select all apps to get as much data as possible.
Metadata for each selected app will then be exported into the defined export folder.
Read more about the folders and files in the FAQ section.
There are multiple options to read the result of the analysis done by the script.
IMPORTANT: It is important to validate the output to understand why it might not be possible to migrate an app. Also, some settings will be ignored by the script and the app might look and behave differently in Intune.
Option 1: Open the exported CSV file in Excel: “<ExportFolder>\AppDetails\AllApps.csv”
Each column starting with “Check” contains information about one of the many checks done by the script.
Read more about the checks in the FAQ section.
Option 2: Open the JSON file “<ExportFolder>\AppDetails\AllApps.json” in a text editor and validate the result there.
Option 3: View the results in a PowerShell Grid-View.
Follow the next instruction to open the result in a PowerShell Grid-View.
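If you just want to browse the exported data outside the script, a quick hypothetical one-liner (assuming the default export folder from the earlier commands) re-imports the XML and pipes it to a Grid-View:

```
Import-Clixml 'E:\ExportToIntune\AppDetails\AllApps.xml' | Out-GridView -Title 'ConfigMgr app analysis'
```

Because AllApps.xml preserves object types, Import-Clixml returns the same objects the script works with internally.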
Now run the script in “Step2CreateIntuneWinFiles” mode with the following parameters to create intunewin files for selected apps.
.\Invoke-ConfigMgrAppMigrationToIntune.ps1 -Step2CreateIntuneWinFiles -ExportFolder 'E:\ExportToIntune'
NOTE: If not already present, this step will download the “IntuneWinAppUtil.exe” tool to “<ExportFolder>\Tools”
NOTE: The export folder can also be copied to another system. Just make sure that the parameter “-ExportFolder” points to a folder with previously exported data.
All the checks are shown in the Grid-View. Scroll to the right in order to see the check results.
Either hit “Cancel” to close the window or “OK” to create intunewin files for selected apps.
NOTE: Only apps marked with “AppImportToIntunePossible = Yes” can be imported to Intune. Validate the check results to find the exact reason in case an import is not possible. Checks are marked with “NO IMPORT” in that case.
The last step is to run the script in “Step3UploadAppsToIntune” mode.
.\Invoke-ConfigMgrAppMigrationToIntune.ps1 -Step3UploadAppsToIntune -ExportFolder 'E:\ExportToIntune'
NOTE: If not already present, this step will install PowerShell module: “Microsoft.Graph.Authentication”.
The “NuGet” package provider will also be installed if it is needed to download “Microsoft.Graph.Authentication”.
NOTE: Only apps marked with “IntunewinFileExists=Yes” and “AppImportToIntunePossible = Yes” can be imported to Intune.
Select one or all apps and hit “OK” to start the app import to Intune.
Repeat the steps as often as needed and also check out the FAQ section down below. 😉
FAQ section
This section should help understand the solution better and what it can and cannot do.
What type of app can the script export from ConfigMgr?
At the moment the script can only handle exe/script apps, not other types such as imported MSI files or APPX packages.
Does the script require admin permissions to run?
No. The script can run as long as it has write permissions to the export folder.
Do I need the ConfigMgr console installed or ConfigMgr PowerShell cmdlets to run the script?
No, the script will make a direct call to the SMS Provider.
Can I download the IntuneWinAppUtil.exe tool manually?
Yes. If you need to run step2 (create intunewin files) on a machine without internet access, download the tool from here: https://github.com/Microsoft/Microsoft-Win32-Content-Prep-Tool and place it in a “Tools” folder in the export folder.
What temp folder does the IntuneWinAppUtil.exe tool use?
The tool compresses the source files into the temp folder of the user running the script.
Currently the tool does not have a parameter to change the path. So, make sure the C: drive has enough space to compress the files.
Example path: “C:\Users\<UserName>\AppData\Local\Temp”
Can I install PowerShell module Microsoft.Graph.Authentication manually?
Yes. Run: “Install-Module -Name Microsoft.Graph.Authentication”
Or consult the documentation here: https://learn.microsoft.com/en-us/powershell/microsoftgraph/installation
What does “AppImportToIntunePossible = No” mean?
The script detected a hard blocker and cannot import the app into Intune.
The reason is visible in one of the check columns and marked with: “NO IMPORT”
See section: “What checks will the script run against ConfigMgr app metadata?” for more details.
What does “AllChecksPassed = No” mean?
The script runs a series of checks to validate whether an app can be migrated without differences.
If one of the checks failed, the app will be marked with “AllChecksPassed = No”.
It is still possible to create the app in Intune, but the app might behave differently.
Make sure to understand the implications of the specific check result.
Either change the app in ConfigMgr or copy and change the new app to avoid any issues with the app.
See section: “What checks will the script run against ConfigMgr app metadata?” for more details.
What does “IntunewinFileExists=No” mean?
“No” means that step 2 of the script has not yet run to create an intunewin file for the app.
Only apps with content can be migrated by the script.
Re-run the script with parameter “-Step2CreateIntuneWinFiles” to create the files for the apps.
What files and folders will the script create?
The following table contains all the folders and files the script will create.
File or folder
Description
<ExportFolder>\AppDetails
Folder containing data related to ConfigMgr apps
AllApps.json
JSON file containing a list of all selected apps with typical ConfigMgr data and Intune related data for a later import in Intune.
It also contains the result of the analyze step to be able to validate if an app can or cannot be imported to Intune.
The JSON format is just there to make the file easier for a human to read.
The file is not used by the script in any other way and is just created for convenience.
AllApps.xml
Contains the same information as in the JSON but also contains object type information.
Therefore the file can easily be “re-used” with command “Import-Clixml” in any PowerShell session.
It is used as the base when the script runs in any of the other modes, for example when the contents of the export folder have been copied to another system.
AllApps.csv
The same info as in the AllApps.json file but in csv format.
That way the data could be analyzed and filtered in Excel.
The file is not used by the script in any other way and is just created for convenience.
Application_<ConfigMgr App GUID>.json
The same info as in the AllApps.json but for a single app.
The file is not used by the script in any other way and is just created for convenience.
Application_<ConfigMgr App GUID>.xml
The same info as in the AllApps.xml but for a single app.
The file is not used by the script in any other way and is just created for convenience.
Application_<ConfigMgr App GUID>-Intune.json
JSON file containing the Intune JSON definition to create a win32app in Intune.
The file is not used by the script in any other way and is just created for convenience.
To be able to troubleshoot or use the file to create the app manually in Graph Explorer https://aka.ms/ge for example.
<ExportFolder>\Icons
Folder containing app icons
Icon_<Icon-ID>.png
If the exported ConfigMgr app has an icon, the icon will be exported to be able to upload the icon to Intune.
<ExportFolder>\Scripts
Folder containing detection scripts
DeploymentType_<DeploymentType-GUID>.ps1
If the app’s deployment type has a detection script set, the script will be exported to the “Scripts” folder.
Note: Script Global Conditions used as requirements are not handled by the script.
<ExportFolder>\Tools
Folder containing tools.
The script will download the IntuneWinAppUtil.exe file to be able to create “.intunewin” files to the tools folder.
<ExportFolder>\Win32Apps
Folder containing created intunewin files.
<ConfigMgr App GUID>\<DeploymentType-GUID>.intunewin
The intunewin file per deployment type created by step two of the script.
Each app has its own folder.
<ConfigMgr App GUID>\<DeploymentType-GUID>.log
Log file created by the IntuneWinAppUtil.exe tool containing the result of the compress operation.
What happens if the script runs again?
That depends on the step/mode the script runs in.
Step1GetConfigMgrAppInfo
If the export folder exists, the script will load the existing AllApps.* files and replace any existing applications. That way the files should always contain accurate data about ConfigMgr apps.
NOTE: That will reset the state: “IntunewinFileExists=Yes” and the content needs to be compressed into an intunewin file again.
Step2CreateIntuneWinFiles
The script will simply replace any previously created intunewin files for the selected applications.
Step3UploadAppsToIntune
The script can be run in this mode as often as needed and does not act differently when run twice or more.
Existing applications will not be overwritten. Instead, a new application with the same name will be created in Intune.
In case of an error, delete the application in Intune and run the script again. Maybe fix the error if you can.
What checks will the script run against ConfigMgr app metadata?
The following list contains the checks made by the script and a list of unsupported scenarios.
“Unsupported” can mean either not implemented in the script or not supported by Intune.
Supersedence and dependencies
While Intune supports supersedence and dependencies in Win32Apps, the script currently does not. Set any dependency or supersedence after an app has been imported in Intune.
Tags
The script currently ignores tags.
Custom return codes
While Intune supports custom return codes, the script will ignore them.
Add them later to the Intune Win32App if required.
Requirements
While Intune supports requirements, the script currently ignores ConfigMgr requirements. But since Intune requires at least a selection for architecture type and minimum operating system version, the script will set requirements for x86 and x64 as well as Windows 10 1607 as the minimum.
Consider the use of a PowerShell script as a replacement for complex ConfigMgr requirements.
More than one deployment type
Intune does not support multiple deployment types, and therefore the script will mark the app with: “AppImportToIntunePossible = No”.
As a workaround, copy the app in ConfigMgr, remove all deployment types except one, and re-run the script to import the app in Intune.
Logon requirement other than: “Whether or not a user is logged on”
There is no option in Intune to wait for a user to log on. The app can still be imported, but the setting is ignored.
Allow Interaction
There is no option to allow interaction in Intune. The app can still be imported, but the setting is ignored. As a workaround the ServicesUI.exe from the Microsoft Deployment Toolkit (MDT) can be used to allow some form of interaction with the end-user.
Program visibility other than “hidden” or “normal”
There is no option for program visibility in Intune. The app can still be imported, but the setting is ignored.
Unknown setup file extension
If the setup command is missing a known file extension, the script will mark the app with: “AppImportToIntunePossible = No”.
Example: “install” instead of “install.exe”
Different uninstall content
If the uninstall content is not the same as the install content, the app can still be imported, but the uninstall content is ignored.
As a workaround, copy the app and copy the uninstall content into the install content folder, then re-run the script to import the app in Intune.
Missing uninstall command line
An uninstall string is required to create the app in Intune. Therefore the script will mark the app with: “AppImportToIntunePossible = No”.
Repair command
Intune currently does not have a repair command option. The app can still be imported, but the setting is ignored.
SourceUpdateProductCode for MSI repairs
Intune currently does not have an option for SourceUpdateProductCode. The app can still be imported, but the setting is ignored.
Exe to close before execution
Intune currently does not have an option to close an exe before running the installation. The app can still be imported, but the setting is ignored.
As a workaround use a script to close an exe before running an installation command and use that as a single script to install the app.
Detection rules with OR operator and limited support for rules with groups
In ConfigMgr it is possible to create detection rules arranged in groups and with the OR operator. While Intune does support multiple detection rules, it does not currently support the OR operator for them.
In that case all detection rules will be created in Intune, but the OR operator will be ignored.
The app should either be corrected before or after the upload.
Consider the use of a detection script with the same check logic instead.
Detection rules with unsupported operators
Some operators, “EndsWith” for example, are not available in Intune at the moment.
The app can still be created, but any detection rule with an unsupported operator will be ignored.
Consider the use of a detection script with the same check logic instead.
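For example, two file-existence detection rules that ConfigMgr combines with OR can be collapsed into a single script check. The sketch below only illustrates that OR logic in Python; an actual Intune detection script would be written in PowerShell, and the paths shown are made up:

```python
from pathlib import Path

def any_rule_matches(candidate_paths):
    """OR-combine simple file-existence detection rules:
    the app counts as installed if ANY of the paths exists."""
    return any(Path(p).is_file() for p in candidate_paths)

# Hypothetical install locations for a 64-bit and a 32-bit install:
installed = any_rule_matches([
    r"C:\Program Files\MyApp\MyApp.exe",
    r"C:\Program Files (x86)\MyApp\MyApp.exe",
])
```

A PowerShell detection script for Intune would apply the same logic with `Test-Path` and signal detection through its output and exit code.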
The script will replace UNC install strings with just the command
If an app runs its install command directly from a UNC path (which typically should not be the case), the script will set the UNC path as the content source path and keep just the command as the install string.
Does the script account for all the possible ConfigMgr app configurations?
Unfortunately not. While I tried to integrate as many options for detection methods and other app settings as possible, I cannot guarantee complete coverage of every combination of settings.
That also means you should always double-check any app imported into Intune.
Is the script meant for large scale app migrations?
The script was designed to migrate a specific set of apps and is not capable of accounting for all the possible ConfigMgr app configurations. It should work even with a large list of ConfigMgr apps, but I have never tested exporting more than 1,000 apps.
It is also not designed to run as a scheduled task, nor to track changes in ConfigMgr and keep apps in Intune up to date.
Can the script run in silent mode in the background?
No, it is designed for user input. But since it is a PowerShell script, it could be changed to run silently if that is what you need.
What Entra ID permission is required to import applications in Intune?
The script needs the “DeviceManagementApps.ReadWrite.All” permission in order to create new win32apps.
What Entra ID app registration will the script use?
The script will use the default Microsoft Graph PowerShell Modules Entra ID app.
Typically called: “Microsoft Graph Command Line Tools” (Other names possible)
AppID = 14d82eec-204b-4c2f-b7e8-296a70dab67e
Can I use a custom Entra ID app registration to run the script?
Yes, create an Entra ID app registration with the following permission: “DeviceManagementApps.ReadWrite.All”
Then run the script with the “-EntraIDAppID” parameter and use the app ID as the value.
You also need to add the tenant ID or domain name via parameter: “-EntraIDTenantID”
The full command would look like this:
.\Invoke-ConfigMgrAppMigrationToIntune.ps1 -Step3UploadAppsToIntune -ExportFolder 'E:\ExportToIntune' -EntraIDAppID '365908cc-fd28-43f7-94d2-f88a65b1ea21' -EntraIDTenantID 'contoso.onmicrosoft.com'
What happens if I change any of the files created by the script manually?
Only the AllApps.xml file will be used by the script to create intunewin files and to upload data to Intune. For example, if you change an app name in that file, a new Intune win32app will later have that name.
All other files are just there for convenience.
Changes to the “.intunewin” content files will not be checked. So make sure to store them safely or create them directly before the upload to Intune.
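As an illustration of that kind of manual edit, the sketch below renames an app inside an XML file. Note that the element names here are assumptions for illustration only; the real AllApps.xml layout is defined by the export script, so adjust the tag names to match your file.

```python
import xml.etree.ElementTree as ET

# Hypothetical structure -- the actual AllApps.xml schema comes from the
# export script; the tag names below are placeholders.
sample = """<Apps>
  <App>
    <Name>7-Zip 22.01</Name>
  </App>
</Apps>"""

root = ET.fromstring(sample)
app_name = root.find("./App/Name")
# Change the name; the Intune win32app created later would use this value.
app_name.text = "7-Zip 22.01 (migrated)"
updated = ET.tostring(root, encoding="unicode")
```

The same edit can of course be done in any text editor; the point is only that the script reads the name from this file at upload time.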
What data is stored in an .intunewin file?
The generated .intunewin file contains all source setup files in a compressed and encrypted form, along with the encryption information itself. Treat it as carefully as your source setup files and keep it in a safe place.
Read more about it here
How can I get more information about the script?
Either open the script in a text editor and have a look at the description or run the script without any parameters.
Disclaimer
Always validate the script output and make sure the Intune applications look and behave the same way as the original applications in ConfigMgr.
Test every application before deploying it to production systems.
Also, this sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.
I hope you enjoyed reading this blog post.
If you have any questions or concerns, please let me know in the comments or create an issue or pull request on GitHub.
Get the script from here: Invoke-ConfigMgrAppMigrationToIntune.ps1
Stay safe!
Jonas Ohmsen
How to preserve transparent edges in images after image processing?
Hello, everyone!
I’m facing a problem related to the processing of .png images with transparent edges obtained from simulation software. Whenever I convert the image (RGB) to grayscale and apply a Gaussian filter, the transparent edges become white, which hinders the reading and training of the neural network model I’m using.
for i = 1:length(files)
    img = imread(fullfile(path, files(i).name));
    [~, name, ext] = fileparts(files(i).name);
    if size(img, 3) == 3
        grayImg = rgb2gray(img);
    else
        grayImg = img;
    end
    noiseMean = 0; noiseVar = 0.01; % avoid shadowing the built-in mean/var functions
    noisedImg = imnoise(grayImg, 'gaussian', noiseMean, noiseVar);
    imgDeformed = elasticDeform(noisedImg, 8, 30);
    file(:, :, 1, i) = imresize(imgDeformed, fileSize);
    separatorIndex = strfind(name, '-');
    depth(i, 1) = round(str2double(name(1:separatorIndex-1)), 2);
    time(i, 1) = round(str2double(name(separatorIndex+1:end)));
end
[train_idx, ~, val_idx] = dividerand(numFiles, 0.7, 0, 0.3);
XTrain = file(:, :, 1, train_idx);
YTrain = depth(train_idx, 1);
XVal = file(:, :, 1, val_idx);
YVal = depth(val_idx, 1);
save('data.mat', 'XTrain', 'XVal', 'YTrain', 'YVal');
Where elasticDeform is:
function imgDeformed = elasticDeform(img, alpha, sigma)
    [rows, cols] = size(img);
    dx = alpha * randn(rows, cols);
    dy = alpha * randn(rows, cols);
    dx = imgaussfilt(dx, sigma);
    dy = imgaussfilt(dy, sigma);
    [X, Y] = meshgrid(1:cols, 1:rows);
    X_new = X + dx; Y_new = Y + dy;
    imgDeformed = interp2(X, Y, double(img), X_new, Y_new, 'linear', 0);
    imgDeformed = uint8(imgDeformed);
end
The product of the image processing is therefore:
You can see that in the images there are white edges.
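One possible workaround (not part of the original question) is to keep the PNG’s alpha channel and composite the image onto a fixed background before the grayscale conversion, so that fully transparent pixels take a defined value instead of turning white. In MATLAB the alpha channel is available as the third output of imread ([img, ~, alpha] = imread(...)). The sketch below shows the same compositing idea in Python with NumPy on a synthetic image:

```python
import numpy as np

def composite_on_background(rgba, bg_value=0):
    """Blend an RGBA image onto a solid background so fully
    transparent pixels take bg_value (e.g. black) instead of white."""
    rgb = rgba[..., :3].astype(float)
    alpha = rgba[..., 3:4].astype(float) / 255.0   # per-pixel opacity in [0, 1]
    out = rgb * alpha + bg_value * (1.0 - alpha)
    return out.astype(np.uint8)
```

After compositing, the grayscale conversion and Gaussian noise can be applied as before; the former transparent regions will stay at the chosen background value through the rest of the pipeline.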