Month: August 2024
Validation Errors Not Being Shown To The User
Last week a few customers reported that they had booked a time without receiving a confirmation email. There is no record of their bookings in the system or in the export data, so it is not clear to us what is happening. It seemed unusual for several people to report the same thing when every test we ran on the Bookings system showed no problems. But if a customer believes they have booked a time, where could the confusion be?
We experimented a little to understand what the problem could be, and we discovered that if the user does not select a time for the service or fill in all the required fields, the Bookings application simply appears to reload the page instead of showing the validation errors on screen. This is clearly a bug, because the application used to show messages guiding the user on how to complete the form correctly. For example, if only one time slot is available for a day, it is not obvious that the customer needs to click on that time, but the validation messages would normally highlight this for the customer. Now, if you click “Book” without selecting a time, you simply see the same booking page again. The customer never sees a confirmation message, which may explain why customers believe they have booked a time with us. The worry is that we cannot know how many people this problem has affected (it currently accounts for about 4% of bookings), and we also don’t know when the validation messages stopped working.
We can’t find anyone else reporting this kind of problem, but we have tried it on two separate Bookings sites and both show the same behavior at the moment: the user sees no validation errors if they fill out the form incorrectly, and because the page reloads without a confirmation message, the customer may believe their booking was successful.
As a temporary measure, we are highlighting on our own website that users will receive a confirmation email if they have booked their time correctly, and that missing details, such as not selecting a time or skipping a required field, will result in the booking not being registered.
There is also a console error saying that the page failed to load a resource, which may or may not be connected to this problem.
Hope that this problem gets fixed soon!
Secure APIM and Azure OpenAI with managed identity
<set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value>
</set-header>
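For context, the "managed-id-access-token" variable above has to be populated earlier in the inbound pipeline. A minimal sketch of that inbound section follows; it uses the built-in authentication-managed-identity policy, and the audience URI is the standard Cognitive Services audience rather than a value confirmed by this post, so treat it as an assumption.

<inbound>
    <base />
    <!-- Acquire a token for Azure OpenAI using APIM's system-assigned managed identity.
         The variable name matches the set-header snippet above. -->
    <authentication-managed-identity resource="https://cognitiveservices.azure.com"
        output-token-variable-name="managed-id-access-token" ignore-error="false" />
    <set-header name="Authorization" exists-action="override">
        <value>@("Bearer " + (string)context.Variables["managed-id-access-token"])</value>
    </set-header>
</inbound>

The Bicep fragments below provision the pieces this policy depends on, starting with the APIM service and its system-assigned identity.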
name: name
location: location
tags: union(tags, { 'azd-service-name': name })
sku: {
  name: sku
  capacity: (sku == 'Consumption') ? 0 : ((sku == 'Developer') ? 1 : skuCount)
}
properties: {
  publisherEmail: publisherEmail
  publisherName: publisherName
  // Custom properties are not supported for Consumption SKU
  customProperties: sku == 'Consumption' ? {} : {
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_RSA_WITH_AES_128_GCM_SHA256': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_RSA_WITH_AES_256_CBC_SHA256': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_RSA_WITH_AES_128_CBC_SHA256': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_RSA_WITH_AES_256_CBC_SHA': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TLS_RSA_WITH_AES_128_CBC_SHA': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Ciphers.TripleDes168': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls10': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Tls11': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Protocols.Ssl30': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls10': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Tls11': 'false'
    'Microsoft.WindowsAzure.ApiManagement.Gateway.Security.Backend.Protocols.Ssl30': 'false'
  }
}
identity: {
  type: 'SystemAssigned'
}
}
name: 'your-openai-resource-name'
location: 'your-location'
sku: {
  name: 'S0'
}
kind: 'OpenAI'
identity: {
  type: 'SystemAssigned'
}
// Merged into a single block: Bicep does not allow two properties blocks on one resource
properties: {
  publicNetworkAccess: 'Disabled'
  networkAcls: {
    defaultAction: 'Deny'
  }
  disableLocalAuth: true
  // Add other necessary properties here
}
}
name: guid(openAI.id, 'cognitive-services-openai-user-role')
properties: {
  roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'c1c469a3-0a2d-4bba-b0e1-0eaf1d3d728b') // Role ID for Cognitive Services OpenAI User
  principalId: openAI.identity.principalId
  principalType: 'ServicePrincipal'
  scope: openAI.id
}
}
name: guid(apimIdentity.id, resourceGroup().id, 'cognitive-services-openai-user-role')
properties: {
  roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'c1c469a3-0a2d-4bba-b0e1-0eaf1d3d728b') // Role ID for Cognitive Services OpenAI User
  principalId: apimIdentity.properties.principalId
  principalType: 'ServicePrincipal'
  scope: resourceGroup().id
}
}
Subscriptions in Azure API Management are a way to control access to APIs. When you publish APIs through APIM, you can secure them using subscription keys. Here’s a quick overview:
Subscriptions: These are containers for a pair of subscription keys (primary and secondary). Developers need a valid subscription key to call the APIs.
Subscription IDs: Each subscription has a unique identifier called a Subscription ID.
How does Subscription relate to the APIM resource though?
Scope of Subscriptions: Subscriptions can be associated with different scopes within an APIM instance:
Product Scope: Subscriptions can be linked to a specific product, which is a collection of one or more APIs. Developers subscribe to the product to access all APIs within it.
API Scope: Subscriptions can also be associated with individual APIs, allowing more granular control over access.
parent: apimService
name: apiName
properties: {
  displayName: apiName
  apiType: 'http'
  path: apiSuffix
  format: 'openapi+json-link'
  value: 'https://raw.githubusercontent.com/Azure/azure-rest-api-specs/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-03-01-preview/inference.json'
  subscriptionKeyParameterNames: {
    header: 'api-key'
  }
}
resource apimDiagnostics 'diagnostics@2023-05-01-preview' = {
  name: 'applicationinsights' // Use a supported diagnostic identifier
  properties: {
    loggerId: '/subscriptions/${subscriptionId}/resourceGroups/${resourceGroup().name}/providers/Microsoft.ApiManagement/service/${apimService.name}/loggers/${apimLogger.name}'
    metrics: true
  }
}
}
// Creating a product for the API. Products are used to group APIs and apply policies to them
resource product 'Microsoft.ApiManagement/service/products@2020-06-01-preview' = {
  parent: apimService
  name: productName
  properties: {
    displayName: productName
    description: productDescription
    state: 'published'
    subscriptionRequired: true
  }
}
// Create the PRODUCT-API association between the API and the product
resource productApi1 'Microsoft.ApiManagement/service/products/apis@2020-06-01-preview' = {
  parent: product
  name: api1.name
}
// Creating a user for the API Management service
resource user 'Microsoft.ApiManagement/service/users@2020-06-01-preview' = {
  parent: apimService
  name: 'userName'
  properties: {
    firstName: 'User'
    lastName: 'Name'
    email: 'user@example.com'
    state: 'active'
  }
}
// Creating a subscription for the API Management service
// NOTE: the subscription is associated with the user and the product, AND the subscription key is what will be used in the request to authenticate the calling client
resource subscription 'Microsoft.ApiManagement/service/subscriptions@2020-06-01-preview' = {
  parent: apimService
  name: 'subscriptionAIProduct'
  properties: {
    displayName: 'Subscribing to AI services'
    state: 'active'
    ownerId: user.id
    scope: product.id
  }
}
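Once deployed, the calling client needs the subscription's primary or secondary key. One way to retrieve it is the listSecrets operation on the subscription resource; the sketch below uses az rest, and the subscription ID, resource group, and APIM service name are placeholders you would replace with your own.

REM Hypothetical names; substitute your own subscription ID, resource group, and APIM instance
az rest --method post --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/subscriptions/subscriptionAIProduct/listSecrets?api-version=2022-08-01" --query primaryKey --output tsv

The key then goes in the api-key header, as in the JavaScript call below.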
const body = {
  "model": "gpt-35-turbo",
  "messages": [
    {
      "role": "system",
      "content": "You're a helpful assistant"
    },
    {
      "role": "user",
      "content": prompt
    }
  ]
};
return fetch(URL_CHAT, {
  method: "POST",
  headers: {
    "api-key": process.env.SUBSCRIPTION_KEY,
    "Content-Type": "application/json"
  },
  body: JSON.stringify(body)
});
How do I plot a graph?
Hello everyone, I need to plot a graph according to the formulas given below. I am also sharing an image of the graph as it should look. I started with the following code. I would be very happy if you could help.
reference:
% Define parameters
epsilon0 = 8.85e-12; % F/m (Permittivity of free space)
epsilon_m = 79; % F/m (Relative permittivity of medium)
CM = 0.5; % Clausius-Mossotti factor
k_B = 1.38e-23; % J/K (Boltzmann constant)
R = 1e-6; % m (Particle radius)
gamma = 1.88e-8; % kg/s (Friction coefficient)
q = 1e-14; % C (Charge of the particle)
dt = 1e-3; % s (Time step)
T = 300; % K (Room temperature)
x0 = 10e-6; % Initial position set to 10 micrometers (10×10^-6 m)
N = 100000; % Number of simulations
num_steps = 1000; % Number of steps (simulation for 1 second)
epsilon = 1e-9; % Small offset to prevent division by zero
k = 1 / (4 * pi * epsilon_m); % Constant for force coefficient
% Generate random numbers
rng(0); % Reset random number generator
W = randn(num_steps, N); % Random numbers from standard normal distribution
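The formulas and the reference image mentioned above are not reproduced in this post, so any continuation has to guess at the dynamics. A minimal sketch of how such a simulation is typically completed and plotted follows; the overdamped Langevin update and the placeholder force function forceFun are assumptions, not the asker's actual formulas.

% Minimal sketch: Euler-Maruyama integration of an overdamped Langevin equation,
% x(t+dt) = x(t) + F(x)/gamma*dt + sqrt(2*k_B*T*dt/gamma)*W.
% forceFun is a hypothetical placeholder; substitute the force from the real formulas.
forceFun = @(x) -q^2 * k ./ (x.^2 + epsilon);   % hypothetical force term
x = x0 * ones(1, N);                            % all trajectories start at x0
meanX = zeros(num_steps, 1);
noiseAmp = sqrt(2 * k_B * T * dt / gamma);
for step = 1:num_steps
    x = x + (forceFun(x) / gamma) * dt + noiseAmp * W(step, :);
    meanX(step) = mean(x);
end
plot((1:num_steps) * dt, meanX * 1e6);
xlabel('Time (s)'); ylabel('Mean position (\mum)');
title('Mean trajectory over N simulations');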
How do I use a for loop for the newton raphson method?
The code I have is where f is a function handle, a is a real number, and n is a positive integer:
function r=mynewton(f,a,n)
syms x
f=@x;
c=f(x);
y(1)=a;
for i=[1:length(n)]
y(i+1)=y(i)-(c(i)/diff(c(i)));
end;
r=y
The errors I am getting are:
Undefined function or variable ‘x’.
Error in mynewton (line 4)
c=f(x);
How do I fix these errors?
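A sketch of one way to repair the function is below. It assumes f really is a numeric function handle (as the question states), so the f=@x line is unnecessary; the derivative is taken symbolically via the Symbolic Math Toolbox, and n is treated as the number of iterations rather than something to take length() of.

function r = mynewton(f, a, n)
% Newton-Raphson: iterate y(i+1) = y(i) - f(y(i))/f'(y(i)), starting from a.
syms x
fp = matlabFunction(diff(f(x)));   % derivative of f as a numeric function handle
y = zeros(1, n + 1);
y(1) = a;
for i = 1:n
    y(i + 1) = y(i) - f(y(i)) / fp(y(i));
end
r = y(end);                        % the final iterate approximates the root
end

Called, for example, as mynewton(@(t) t.^2 - 2, 1, 10), this should converge to sqrt(2).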
How can I include Simulink.Breakpoint in an A2L file?
In the model, a few variables are used as inputs to the ‘Interpolation Using Prelookup’ block. Per requirement, these variables have the class Simulink.Breakpoint. However, this is preventing the variables from being generated in the A2L file.
Is there any possible way to include Simulink.Breakpoint class variables in an Embedded Coder generated A2L file?
Calibrating multiple cameras: How do I get 3D points from triangulation into a worldpointset or pointTracker?
Hi everyone! I am working on a project where I need to calibrate multiple cameras observing a scene, ultimately to be able to get 3D points of an object in later videos collected by the same cameras. The cameras are stationary. Importantly, I need to be able to triangulate the checkerboard points from the calibration and then run sparse bundle adjustment on these points to improve the accuracy of the camera pose estimation and of the 3D checkerboard points. Sparse bundle adjustment (bundleAdjustment) can take either pointTrack objects or worldpointset objects.
I have two calibration sessions (front camera and rear right, and the front camera and rear left – they are in a triangular config) from which I load the stereoParams, I have also stored the useful data in a structure called ‘s’.
I then get the 3D coordinates of the checkerboards using the worldpointset and feature-matching approach. I have included all my code (including the code I used to save the important variables).
The error I get with the bundleAdjustment function is the following:
Error using vision.internal.bundleAdjust.validateAndParseInputs
The number of feature points in view 1 must be greater than or equal to 51.
Error in vision.internal.bundleAdjust.sparseBA (line 39)
vision.internal.bundleAdjust.validateAndParseInputs(optimType, mfilename, varargin{:});
Error in bundleAdjustment (line 10)
vision.internal.bundleAdjust.sparseBA('full', mfilename, varargin{:});
When I investigated using pointTrack, it seems that it is best used for tracking a point through multiple frames in a video, but not great for my application where I want to track one point through 3 different views at once.
AT LAST --> MY QUESTION:
Am I using worldpointset correctly for this application, and if so, can someone please help me figure out where this error in the feature points is coming from?
If not, would pointTrack be better for my application if I change the dimensionality of my problem? If pointTrack is better, I would need to track a point through the frames of each camera and somehow correlate and triangulate points that way.
**Note: the structure ‘s’ contains many images, so it was too large to upload (even when compressed); I uploaded a screenshot of the structure instead. Hopefully my code helps with context. The visualisation runs, though!
load("params.mat","params")
intrinsics1 = params.cam1.Intrinsics;
intrinsics2 = params.cam2.Intrinsics;
intrinsics3 = params.cam3.Intrinsics;
intrinsics4 = params.cam4.Intrinsics;
intrinsicsFront = intrinsics2;
intrinsicsRLeft = intrinsics3;
intrinsicsRRight = intrinsics4;
%% Visualise cameras
load("stereoParams1.mat")
load("stereoParams2.mat")
figure; showExtrinsics(stereoParams1, 'CameraCentric')
hold on;
showExtrinsics(stereoParams2, 'CameraCentric');
hold off;
%initialise camera 1 pose as at 0, with no rotation
front_absolute_pose = rigidtform3d([0 0 0], [0 0 0]);
%% Get 3D Points
load("s_struct.mat","s")
board_shape = s.front.board_shape;
camPoseVSet = imageviewset;
camPoseVSet = addView(camPoseVSet,1,front_absolute_pose);
camPoseVSet = addView(camPoseVSet,2,stereoParams1.PoseCamera2);
camPoseVSet = addView(camPoseVSet,3,stereoParams2.PoseCamera2);
camposes = poses(camPoseVSet);
intrinsicsArray = [intrinsicsFront, intrinsicsRRight, intrinsicsRLeft];
frames = fieldnames(s.front.points);
framesRearRight = fieldnames(s.rearright.points);
framesRearLeft = fieldnames(s.rearleft.points);
wpSet =worldpointset;
wpsettrial = worldpointset;
for i =1:length(frames) %for frames in front
frame_i = frames{i};
pointsFront = s.front.points.(frame_i);
pointsFrontUS = s.front.unshapedPoints.(frame_i);
container = contains(framesRearRight, frame_i);
j = 1;
if ~isempty(container) && any(container)
pointsRearRight = s.rearright.points.(frame_i);
pointsRearRightUS = s.rearright.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearright.imageFile.(frame_i)), pointsRearRightUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams2);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,3,newPointIndices,indexPairs(:,2));
sz = size(s.front.points.(frame_i));
for h =1: sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearRight(h,w,1), pointsRearRight(h,w,2)];
IDs = [1, 3];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearRight(h,w,1), pointsRearRight(h,w,2)], stereoParams1);
% [wpSet,newPointIndices] = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,j,j);
% wpSet = addCorrespondences(wpSet,3,j,j);
pointIn3D = [pointIn3D;triang3D];
j=j+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDright.PT3D.(frame_i) = pointIn3D;
%s.frontANDright.PT3DSBA.(frame_i) = xyzPoints;
end
container = contains(framesRearLeft, frame_i);
m=1;
if ~isempty(container) && any(container)
pointsRearLeft = s.rearleft.points.(frame_i);
pointsRearLeftUS = s.rearleft.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
sz = size(s.front.points.(frame_i));
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearleft.imageFile.(frame_i)), pointsRearLeftUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams1);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,2,newPointIndices,indexPairs(:,2));
for h =1: sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)];
IDs = [1, 2];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)], stereoParams1);
% wpSet = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,m,m);
% wpSet = addCorrespondences(wpSet,2,m,m);
pointIn3D = [pointIn3D;triang3D];
m = m+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDleft.PT3D.(frame_i) = pointIn3D;
%s.frontANDleft.PT3DSBA.(frame_i) = xyzPoints;
end
[wpSetRefined,vSetRefined,pointIndex] = bundleAdjustment(wpsettrial,camPoseVSet,[1,3,2],intrinsicsArray, FixedViewIDs=[1,3,2], ...
Solver="preconditioned-conjugate-gradient")
end
function [img_name, ptsUS,pts, worldpoints] = reformatData(img_name, pts, board_shape, worldpoints)
%method taken from acinoset code
img_name = img_name(1:strfind(img_name,' ')-1);
img_name = replace(img_name, '.','_');
ptsUS = pts;
pts = pagetranspose(reshape(pts, [board_shape, 2]));
pts = pagetranspose(reshape(pts, [board_shape, 2])); %repetition is purposeful
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
end
function pts = reshape3D(points3D, board_shape)
pts = pagetranspose(reshape(points3D, [board_shape, 3]));
pts = pagetranspose(reshape(pts, [board_shape, 3])); %repetition is purposeful
end
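One quick way to chase the "must be greater than or equal to 51" error is to count how many correspondences each view actually has before calling bundleAdjustment; matchFeatures on checkerboard images often returns far fewer pairs than expected. This is a hedged diagnostic sketch against the wpsettrial built above, using findWorldPointsInView from the same toolbox.

% Diagnostic: report the number of 2-D observations registered per view.
% bundleAdjustment needs a minimum count in every view it refines.
for viewId = [1 2 3]
    [~, featureIdx] = findWorldPointsInView(wpsettrial, viewId);
    fprintf('View %d has %d correspondences\n', viewId, numel(featureIdx));
end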
SFTP to SharePoint folder Flow
Hi,
Is it possible to run a flow that brings files from an SFTP folder to a SharePoint folder? I built the one below and it kept returning issues related to connections.
Outbound audio drops midway through Teams calls
Hi all,
We have a user (a director) here whose outbound audio drops approximately 5-10 minutes into Teams calls (both audio-only and video). The incoming audio continues to work without fail. They use a Logitech C270 webcam as their primary microphone, as they won’t use a headset and would prefer not to use an external microphone for desk-space reasons. We’ve replaced the webcam twice, including with different models, which indicates to me it isn’t a webcam hardware issue. We’ve also reinstalled the Teams app and cleared the Teams cache (although I’m open to suggestions here if there is a more thorough way of doing this). I should note that the audio always works in short test calls on Teams, and it works in other applications such as Zoom, so I believe something unique to Teams is happening. Thanks all for any suggestions you can offer.
Can’t convert jpg to webp with the Photos app on Windows 11?
When it comes to basic image conversion, the Photos app is always my favorite. But that is not the case for JPG-to-WebP conversion. When I open a JPG with the Photos app, the Save As menu does not include an option for WebP; only jpg, jpx, png, tiff and bmp are available.
I am building a new website, and the uploaded images have to be in WebP format based on search engine recommendations. How can I bulk convert JPG to WebP on a Windows 11 PC?
Thanks
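Photos does not write WebP, so bulk conversion needs another tool. One common approach, offered here as a hedged suggestion rather than anything from the original post, is ImageMagick, which has a Windows build and converts a whole folder in one command.

REM Convert every JPG in the current folder to WebP (requires ImageMagick 7 on PATH)
magick mogrify -format webp -quality 85 *.jpg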
#N/A error in the INDEX/MATCH functions
I read the article on this topic (How to correct a #N/A error in the INDEX/MATCH functions – Microsoft Support) and did everything it says, but the #N/A error was not fixed.
Same device with Onboarded and Not Onboarded status
Hi,
I’m creating a detection rule to search for servers that are not onboarded to Defender. What’s strange about this query is that I get the same device (same DeviceName but different DeviceId) with both onboarding statuses, “Onboarded” and “Can be onboarded”.
Does anyone know why? This way I get incorrect results from my detection rule.
Thanks
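For reference, duplicate rows like this usually come from stale device records, so queries of this kind are often deduplicated by keeping only the most recent record per device name. A hedged sketch of such a query against the standard DeviceInfo table follows; the exact filter values are assumptions.

// Keep only the latest record per device name, then look for not-onboarded servers
DeviceInfo
| where OSPlatform startswith "WindowsServer"
| summarize arg_max(Timestamp, *) by DeviceName
| where OnboardingStatus != "Onboarded"
| project Timestamp, DeviceName, DeviceId, OnboardingStatus, OSPlatform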
RefinableString Property empty
Hi,
On our tenant I configured RefinableString00 (more than two days ago) with the crawled property OWS_Q_TEXT_MANUFACTURER, but RefinableString00 still returns null.
What steps do we need to follow to see the results?
Thanks
Edit links on classic SharePoint site
Hello!
I’m hoping someone can help me with editing a very old classic SharePoint site please!
I want to edit a URL within the menu along the top, but clicking Edit Links doesn’t let me edit the drop-down menu part (I need to edit the URL at link “B”, located within “A”, which is within “S”). What am I missing? All Edit Links lets me do is edit “S”. I am an Admin of the site.
Hunting for data related to privilege escalation (like app installs)
Hi,
I’m navigating the Defender tables to try to understand how I can hunt for privilege escalation events, benign ones in this case. For example, when our Helpdesk team connects to a computer to install an application, it will request an elevation of privileges, as the local users do not have permissions for it.
I would like to audit this type of privilege escalation event, but I can’t find the data related to it.
Does anyone know in which table I can find this kind of data?
Thanks
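As a starting point, elevation is visible in the DeviceProcessEvents table through its token-elevation columns. The sketch below is a hedged example; the account-name exclusions are assumptions added to filter out service accounts, not values from the original post.

// Processes launched with a fully elevated (UAC "Run as administrator") token
DeviceProcessEvents
| where ProcessTokenElevation == "TokenElevationTypeFull"
| where AccountName !in ("system", "local service", "network service")
| project Timestamp, DeviceName, AccountName, FileName, ProcessCommandLine, ProcessIntegrityLevel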
How to use the new Chats & Channels experience in Teams
The new chats and channels experience in Microsoft Teams introduces several enhancements designed to improve collaboration and streamline communication.
You can turn on this feature in preview for evaluation.
These updates aim to make it easier for teams to collaborate, stay organized, and ensure that no important messages are missed.
#MicrosoftTeams #Teams #NewFeatures #Microsoft365 #Productivity #MPVbuzz
The Importance of Implementing SAST Scanning for Infrastructure as Code
Introduction
As the adoption of Infrastructure as Code (IaC) continues to grow, ensuring the security of your infrastructure configurations becomes increasingly crucial. Static Application Security Testing (SAST) scanning for IaC can play a vital role in identifying vulnerabilities early in the development lifecycle. This blog explores why implementing SAST scanning for IaC is essential for maintaining secure and robust infrastructure.
What is Infrastructure as Code?
Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Common tools for IaC include Azure Resource Manager (ARM) templates, Bicep templates, Terraform, and AWS CloudFormation.
Understanding SAST
Static Application Security Testing (SAST) is a white-box testing methodology that analyzes source code to identify security vulnerabilities. Unlike dynamic analysis, which requires running the application, SAST scans the code at rest, allowing developers to identify and fix vulnerabilities early in the development process.
Why SAST for IaC?
Early Detection of Vulnerabilities
Implementing SAST scanning for IaC allows you to detect vulnerabilities in your infrastructure code before it is deployed. By integrating SAST tools into your CI/CD pipeline, you can identify and remediate security issues during the development phase, significantly reducing the risk of deploying insecure infrastructure.
Compliance and Best Practices
Many industries and organizations have specific compliance requirements that mandate the implementation of security best practices. SAST scanning helps ensure that your IaC adheres to these standards by identifying non-compliant configurations and suggesting best practices.
Reduced Attack Surface
IaC templates often include configurations for networking, storage, compute resources, and more. Misconfigurations in these templates can lead to security vulnerabilities, such as open ports, insecure storage configurations, or excessive permissions. SAST scanning helps identify these issues, reducing the overall attack surface of your infrastructure.
Key Benefits of SAST for IaC
Automated Security
SAST tools can be integrated into your CI/CD pipeline, enabling automated security checks for every code commit. This automation ensures that security is a continuous part of the development process, rather than an afterthought.
Improved Developer Productivity
By identifying vulnerabilities early, SAST scanning reduces the time and effort required to fix security issues. Developers can address vulnerabilities as they write code, rather than having to go back and fix issues after they have been deployed.
Enhanced Security Posture
Regular SAST scanning helps maintain a strong security posture by ensuring that your infrastructure configurations are continuously monitored for vulnerabilities. This proactive approach helps prevent security incidents and ensures that your infrastructure remains secure over time.
Implementing SAST for IaC
Choose the Right Tool
There are several SAST tools available for IaC, each with its own strengths and weaknesses. Some popular options include Trivy, Checkov, Snyk, and Terrascan. Evaluate these tools based on their capabilities, ease of integration, and support for your specific IaC platform.
Integrate into CI/CD Pipeline
Integrate your chosen SAST tool into your CI/CD pipeline to enable automated scanning. This integration ensures that every code change is scanned for vulnerabilities before it is merged and deployed. For example, the Microsoft Security DevOps GitHub action and the Microsoft Security DevOps Azure DevOps extension integrate many of these tools.
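As an illustration, a minimal GitHub Actions workflow using the Microsoft Security DevOps action might look like the sketch below. This assumes the action's published name and SARIF output behave as documented; check the action's own README for current options.

name: iac-sast
on: [push, pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run Microsoft Security DevOps, which wraps IaC scanners such as Checkov and Terrascan
      - name: Run Microsoft Security DevOps
        uses: microsoft/security-devops-action@latest
        id: msdo
      # Publish findings to the repository's code scanning tab
      - name: Upload SARIF results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: ${{ steps.msdo.outputs.sarifFile }}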
Regularly Update and Review
Security is an ongoing process. Regularly update your SAST tools to benefit from the latest vulnerability definitions and scanning capabilities. Additionally, periodically review your scanning policies and configurations to ensure they remain effective.
Conclusion
Implementing SAST scanning for Infrastructure as Code is essential for maintaining secure and compliant infrastructure. By detecting vulnerabilities early, reducing the attack surface, and ensuring adherence to best practices, SAST scanning enhances the security and robustness of your infrastructure. Integrating SAST tools into your CI/CD pipeline automates security checks, improving developer productivity and maintaining a strong security posture. Microsoft Defender for Cloud DevOps security helps integrate these tools into your environment.
By making SAST scanning an integral part of your IaC process, you can confidently build and manage secure infrastructure that meets the demands of modern applications and compliance requirements.
Happy securing!
Season of AI for Developers!
If you’re passionate about Artificial Intelligence and application development, don’t miss the opportunity to watch this amazing series from Microsoft Reactor. Throughout the season, we cover everything from the fundamentals of Azure OpenAI to the latest innovations presented at Microsoft Build 2024, culminating in the powerful Semantic Kernel framework for building intelligent applications. Each session is packed with numerous demos to help you understand every concept and apply it effectively.
Episodes:
Episode 1: Introduction to Azure OpenAI
Explore Azure OpenAI models, their capabilities, and how to integrate them with the Azure SDK.
Episode 2: Considerations for Implementing Models in Azure OpenAI
Learn how to manage service quotas, balance performance and latency, plan cost management, and apply the RAG pattern to optimize your implementations.
Episode 3: What’s New from Microsoft Build: PHI3, GPT-4o, Azure Content Safety, and More
Discover the latest updates from Microsoft Build, including PHI 3, GPT-4o with multimodal capabilities, the new Azure AI Studio, and Azure Content Safety.
Episode 4: Getting Started with Semantic Kernel
Learn about Semantic Kernel, an open-source SDK that allows you to easily integrate advanced LLMs into your applications to create smarter and more natural experiences.
Episode 5: Build Your Own Copilot with Semantic Kernel
Learn how to use Plugins, Planners, and Memories in Semantic Kernel to create copilots that work alongside users, providing intelligent suggestions to complete tasks.
-Don’t miss it! Rewatch each episode to discover how you can take your applications to the next level with Microsoft AI.
-Learn more and enhance your AI skills during this series with this collection of resources from Microsoft Learn: Explore the collection here.
Speakers:
-Luis Beltran – Microsoft MVP – LinkedIn
-Pablo Piovano – Microsoft MVP – LinkedIn
Why Entra ID can Restore Some Types of Deleted Groups and Not Others
Ability to Restore Deleted Groups Depends on Graph APIs
Yesterday, I covered a gap that exists between the Purview development group and the Exchange Online development group when it comes to applying scoped roles to audit log searches. Today, a blog post by ex-MVP Tony Murray-Smith reminds me about another functionality gap that exists in the area of groups. The problem described occurred when a user deleted a security group by mistake only to discover that the Entra admin center doesn’t support a method to restore deleted groups of this type.
In fact, Microsoft 365 groups are the only type of group that Entra supports for restoration via its admin center. There’s no way to restore a deleted distribution list, dynamic distribution list, security group, or mail-enabled security group. Apart from dynamic distribution lists, these objects are recognized by Entra ID and accessible through the Groups API. However, the only group objects supported by the List Deleted Items and Restore Deleted Items (directory objects) APIs remain Microsoft 365 groups. And if a Graph API isn’t available to support restoration, the administrative portals cannot create functionality from thin air.
This situation has persisted since the introduction of cmdlets to restore deleted Microsoft 365 groups in 2017 followed by a GUI option in the Exchange admin center, Microsoft 365 admin center, and Entra admin center. Microsoft subsequently removed the option to restore deleted groups from the new EAC, so the current GUI-based options to restore deleted Microsoft 365 groups are in the Entra admin center and Microsoft 365 admin center. And if you want to use PowerShell, there’s the Restore-MgDirectoryDeletedItem cmdlet.
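For reference, restoring a soft-deleted Microsoft 365 group with the Microsoft Graph PowerShell SDK looks roughly like the sketch below; the object ID is a placeholder, and the scope shown is an assumption about the minimum permission needed.

# List soft-deleted Microsoft 365 groups still inside the 30-day restore window
Connect-MgGraph -Scopes "Group.ReadWrite.All"
Get-MgDirectoryDeletedItemAsGroup -All |
    Select-Object Id, DisplayName, DeletedDateTime

# Restore one of them by its object ID (placeholder shown)
Restore-MgDirectoryDeletedItem -DirectoryObjectId "00000000-0000-0000-0000-000000000000"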
The Gap Between the Exchange DS and Entra ID
The question is why Entra ID only supports the restoration of Microsoft 365 groups. I think the answer lies in two parts. First, the desire within Microsoft to make its brand-new cloud-only Office 365 groups (now Microsoft 365 groups) the “best group for everything” following their launch at the Ignite conference in May 2015.
The infrastructure to fully support Microsoft 365 groups took time to develop, and building the capability to reconnect all the different resources that a group might use made the process more complicated for Microsoft 365 groups. Being able to restore SharePoint Online, Teams, the group mailbox, and so on was a big undertaking that Microsoft quickly discovered needed to be tackled after the launch of Office 365 groups, especially after some early customers discovered that they couldn’t be restored. The functionality duly arrived in 2017. The campaign to make Microsoft 365 groups do everything is far less intense now than it was some years ago, but its legacy is evident sometimes.
The EXODS Objects
The second issue is heritage. Distribution lists and mail-enabled security groups originated in Exchange Server. Exchange Online still has its own directory (EXODS) to store details for mail-enabled objects. Synchronization and dual-write update operations keep Entra ID and EXODS aligned so that updates performed in one directory synchronize immediately to the other. The Graph APIs support distribution lists and security groups, including mail-enabled security groups, but Entra ID and the Graph APIs ignore dynamic distribution lists and can’t update settings for distribution lists and mail-enabled security groups because these objects are homed within Exchange Online.
Good reasons exist for why the differentiation exists. Dynamic distribution lists require Exchange Online to resolve their membership because the membership supports objects like mail-enabled public folders that don’t exist in Entra ID. Dynamic distribution lists also support nested lists. Regular distribution lists and their mail-enabled security group variants have many settings that aren’t supported in Entra ID, like message approval.
As far as I can remember, it has never been possible to restore deleted distribution lists (and some of the online answers are very misleading, like this example). Once an administrator removes a distribution list, it’s gone. The only thing that can be done is to recreate the distribution list from scratch. That might be possible if someone knows the membership and the list settings, but that might not be the case.
Some Work Necessary in This Area
Microsoft should do some work to make it possible to restore all forms of deleted groups. That work will need contributions from teams responsible for Entra ID, the Graph API, and Exchange Online. Mistakes do happen and administrators remove important distribution lists or mail-enabled security groups when they shouldn’t. Being told that it’s necessary to recreate an object from scratch is a royal pain, and it’s something that shouldn’t still be a problem in 2024. Customers assume that if they can restore one type of deleted group, they should be able to restore any type of deleted group.
Then again, other pains exist around distribution list management, like Microsoft’s failure to produce a utility to move distribution lists from on-premises servers to the cloud. Tim McMichael’s DLConversionV2 solution is the best available. He’ll be discussing distribution list management at TEC 2024 in Dallas in October. Maybe I should ask Tim about restoring groups that aren’t Microsoft 365 groups.
Learn about using Exchange Online and the rest of Office 365 by subscribing to the Office 365 for IT Pros eBook. Use our experience to understand what’s important and how best to protect your tenant.
MERGE EXCEL SHEETS INTO ONE MATLAB DATA FILE
Dear All,
I have survey data for six years, with each year containing 26 variables and more than 400 thousand entries per variable. Is it possible to join the data year by year into a single MATLAB .mat file from the Excel file? The data for each year is on a different sheet of the Excel file.
Any help will be appreciated.
Regards
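Yes, this is straightforward by looping readtable over the sheet names and saving the result. A minimal sketch follows; the file name 'survey.xlsx' and the idea that each sheet name is the year are assumptions about the workbook's layout.

% Minimal sketch: stack one sheet per year into a single table, then save as .mat
fname = 'survey.xlsx';                    % hypothetical file name
sheets = sheetnames(fname);               % one sheet per year, e.g. "2018".."2023"
allYears = table();
for k = 1:numel(sheets)
    T = readtable(fname, 'Sheet', sheets(k));
    T.Year = repmat(str2double(sheets(k)), height(T), 1);  % tag rows with their year
    allYears = [allYears; T]; %#ok<AGROW>  % requires identical variables on every sheet
end
save('survey.mat', 'allYears', '-v7.3');  % -v7.3 handles variables larger than 2 GB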