Author: PuTI
How do I plot a graph?
Hello everyone, I need to plot a graph according to the formulas given below. I am also sharing an image of the graph as it should look. I started with the following code. I would be very happy if you could help.
reference:
% Define parameters
epsilon0 = 8.85e-12; % F/m (Permittivity of free space)
epsilon_m = 79; % (dimensionless) Relative permittivity of medium
CM = 0.5; % Clausius-Mossotti factor
k_B = 1.38e-23; % J/K (Boltzmann constant)
R = 1e-6; % m (Particle radius)
gamma = 1.88e-8; % kg/s (Friction coefficient)
q = 1e-14; % C (Charge of the particle)
dt = 1e-3; % s (Time step)
T = 300; % K (Room temperature)
x0 = 10e-6; % Initial position set to 10 micrometers (10×10^-6 m)
N = 100000; % Number of simulations
num_steps = 1000; % Number of steps (simulation for 1 second)
epsilon = 1e-9; % Small offset to prevent division by zero
k = 1 / (4 * pi * epsilon_m); % Constant for force coefficient
% Generate random numbers
rng(0); % Reset random number generator
W = randn(num_steps, N); % Random numbers from standard normal distribution
MATLAB Answers — New Questions
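Since the formulas and the target figure were not included in the post, the continuation below is only a guess: a minimal Euler-Maruyama sketch of free Brownian motion using the parameters already defined above, with the deterministic (dielectrophoretic/Coulomb) force term omitted. The mean position over the N simulations is then plotted against time.

```matlab
% Minimal sketch (assumption: pure diffusion, no deterministic force).
% x holds the positions in metres, one entry per simulated particle.
D = k_B * T / gamma;                    % diffusion coefficient, m^2/s
x = x0 * ones(1, N);                    % all particles start at x0
meanX = zeros(num_steps, 1);
for step = 1:num_steps
    x = x + sqrt(2 * D * dt) * W(step, :);  % stochastic displacement
    meanX(step) = mean(x);
end
t = (1:num_steps) * dt;
plot(t, meanX * 1e6);                   % mean position in micrometres
xlabel('t (s)'); ylabel('mean x (\mum)');
```

Any force term from the post's formulas would be added inside the loop as an extra `(F(x)/gamma)*dt` drift before the stochastic displacement.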
How do I use a for loop for the newton raphson method?
The code I have is where f is a function handle, a is a real number, and n is a positive integer:
function r=mynewton(f,a,n)
syms x
f=@x;
c=f(x);
y(1)=a;
for i=[1:length(n)]
y(i+1)=y(i)-(c(i)/diff(c(i)));
end;
r=y
The errors I am getting are:
Undefined function or variable 'x'.
Error in mynewton (line 4)
c=f(x);
How do I fix these errors?
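The usual fix is to drop `syms` entirely and evaluate the function handle numerically. A sketch of a corrected `mynewton`, using a central finite difference for the derivative since no f' is passed in (the step `h` is an arbitrary illustrative choice), might look like:

```matlab
function r = mynewton(f, a, n)
% Newton-Raphson on function handle f, starting at a, for n iterations.
% The derivative is approximated by a central difference with step h.
h = 1e-7;
y = zeros(1, n + 1);
y(1) = a;
for i = 1:n
    fprime = (f(y(i) + h) - f(y(i) - h)) / (2 * h);
    y(i + 1) = y(i) - f(y(i)) / fprime;
end
r = y;
end
```

Called, for example, as `mynewton(@(x) x^2 - 2, 1, 5)`, the iterates converge toward sqrt(2).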
How can I include Simulink.Breakpoint in A2L file?
In the model, a few variables are used as input to the 'Interpolation Using Prelookup' block. As per the requirement, these variables have the class Simulink.Breakpoint. However, this is preventing these variables from being generated in the A2L file.
Is there any possible way to include Simulink.Breakpoint class variables in the Embedded Coder generated A2L file?
Calibrating multiple cameras: How do I get 3D points from triangulation into a worldpointset or pointTracker?
Hi everyone! I am working on a project where I need to calibrate multiple cameras observing a scene, to ultimately be able to get 3D points of an object in later videos collected by the same cameras. The cameras are stationary. Importantly, I need to be able to triangulate the checkerboard points from the calibration and then do sparse bundle adjustment on these points to improve the accuracy of the camera pose estimation and the 3D checkerboard points from the calibration. Sparse bundle adjustment (bundleAdjustment) can take in either pointTrack objects or worldpointset objects.
I have two calibration sessions (front camera and rear right, and front camera and rear left; they are in a triangular configuration) from which I load the stereoParams. I have also stored the useful data in a structure called 's'.
I then get the 3D coordinates of the checkerboards using the worldpointset and feature-matching approach. I have included all the code (including the code I used to save important variables).
The error I get with the bundleAdjustment function is the following:
Error using vision.internal.bundleAdjust.validateAndParseInputs
The number of feature points in view 1 must be greater than or equal to 51.
Error in vision.internal.bundleAdjust.sparseBA (line 39)
vision.internal.bundleAdjust.validateAndParseInputs(optimType, mfilename, varargin{:});
Error in bundleAdjustment (line 10)
vision.internal.bundleAdjust.sparseBA('full', mfilename, varargin{:});
When I investigated using pointTrack, it seems that it is best used for tracking a point through multiple frames in a video, but not great for my application where I want to track one point through 3 different views at once.
AT LAST --> MY QUESTION:
Am I using worldpointset correctly for this application, and if so, can someone please help me figure out where this error in the feature points is coming from?
If not, would pointTrack be better for my application if I change the dimensionality of my problem? If pointTrack is better, I would need to track a point through the frames of each camera and somehow correlate and triangulate points that way.
**Note: the structure 's' contains many images, so it was too large to upload (even when compressed); I uploaded a screenshot of the structure instead. But hopefully my code helps with context. The visualisation runs, though!
load("params.mat","params")
intrinsics1 = params.cam1.Intrinsics;
intrinsics2 = params.cam2.Intrinsics;
intrinsics3 = params.cam3.Intrinsics;
intrinsics4 = params.cam4.Intrinsics;
intrinsicsFront = intrinsics2;
intrinsicsRLeft = intrinsics3;
intrinsicsRRight = intrinsics4;
%% Visualise cameras
load("stereoParams1.mat")
load("stereoParams2.mat")
figure; showExtrinsics(stereoParams1, 'CameraCentric')
hold on;
showExtrinsics(stereoParams2, 'CameraCentric');
hold off;
%initialise camera 1 pose as at 0, with no rotation
front_absolute_pose = rigidtform3d([0 0 0], [0 0 0]);
%% Get 3D Points
load("s_struct.mat","s")
board_shape = s.front.board_shape;
camPoseVSet = imageviewset;
camPoseVSet = addView(camPoseVSet,1,front_absolute_pose);
camPoseVSet = addView(camPoseVSet,2,stereoParams1.PoseCamera2);
camPoseVSet = addView(camPoseVSet,3,stereoParams2.PoseCamera2);
camposes = poses(camPoseVSet);
intrinsicsArray = [intrinsicsFront, intrinsicsRRight, intrinsicsRLeft];
frames = fieldnames(s.front.points);
framesRearRight = fieldnames(s.rearright.points);
framesRearLeft = fieldnames(s.rearleft.points);
wpSet = worldpointset;
wpsettrial = worldpointset;
for i =1:length(frames) %for frames in front
frame_i = frames{i};
pointsFront = s.front.points.(frame_i);
pointsFrontUS = s.front.unshapedPoints.(frame_i);
container = contains(framesRearRight, frame_i);
j = 1;
if ~isempty(container) && any(container)
pointsRearRight = s.rearright.points.(frame_i);
pointsRearRightUS = s.rearright.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearright.imageFile.(frame_i)), pointsRearRightUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams2);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,3,newPointIndices,indexPairs(:,2));
sz = size(s.front.points.(frame_i));
for h =1: sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearRight(h,w,1), pointsRearRight(h,w,2)];
IDs = [1, 3];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearRight(h,w,1), pointsRearRight(h,w,2)], stereoParams1);
% [wpSet,newPointIndices] = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,j,j);
% wpSet = addCorrespondences(wpSet,3,j,j);
pointIn3D = [pointIn3D;triang3D];
j=j+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDright.PT3D.(frame_i) = pointIn3D;
%s.frontANDright.PT3DSBA.(frame_i) = xyzPoints;
end
container = contains(framesRearLeft, frame_i);
m=1;
if ~isempty(container) && any(container)
pointsRearLeft = s.rearleft.points.(frame_i);
pointsRearLeftUS = s.rearleft.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
sz = size(s.front.points.(frame_i));
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearleft.imageFile.(frame_i)), pointsRearLeftUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams1);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,2,newPointIndices,indexPairs(:,2));
for h =1: sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)];
IDs = [1, 2];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)], stereoParams1);
% wpSet = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,m,m);
% wpSet = addCorrespondences(wpSet,2,m,m);
pointIn3D = [pointIn3D;triang3D];
m = m+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDleft.PT3D.(frame_i) = pointIn3D;
%s.frontANDleft.PT3DSBA.(frame_i) = xyzPoints;
end
[wpSetRefined,vSetRefined,pointIndex] = bundleAdjustment(wpsettrial,camPoseVSet,[1,3,2],intrinsicsArray, FixedViewIDs=[1,3,2], ...
    Solver="preconditioned-conjugate-gradient")
end
function [img_name, ptsUS,pts, worldpoints] = reformatData(img_name, pts, board_shape, worldpoints)
%method taken from acinoset code
img_name = img_name(1:strfind(img_name,' ')-1);
img_name = replace(img_name, '.','_');
ptsUS = pts;
pts = pagetranspose(reshape(pts, [board_shape, 2]));
pts = pagetranspose(reshape(pts, [board_shape, 2])); %repetition is purposeful
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
end
function pts = reshape3D(points3D, board_shape)
pts = pagetranspose(reshape(points3D, [board_shape, 3]));
pts = pagetranspose(reshape(pts, [board_shape, 3])); %repetition is purposeful
end
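As a debugging aid for the "must be greater than or equal to 51" error: that validator counts the correspondences accumulated per view, so it is worth printing those counts just before calling bundleAdjustment. A minimal sketch using the documented findWorldPointsInView method of worldpointset (the 51-point threshold itself is internal to the solver) would be:

```matlab
% Count how many world-point correspondences each view has accumulated
% in wpsettrial; the bundleAdjustment validator requires a minimum
% number of feature points per view (51 in the error above).
for viewId = [1 2 3]
    idx = findWorldPointsInView(wpsettrial, viewId);
    fprintf('view %d: %d correspondences\n', viewId, numel(idx));
end
```

If one view reports far fewer correspondences than the others, matchFeatures is likely discarding most checkerboard points for that camera pair, which is where I would look first.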
SFTP to sharepoint folder Flow
Hi,
Is it possible to run a flow that brings a file from an SFTP folder to a SharePoint folder? I built the one below, and it kept returning issues related to connections.
Outbound audio drops midway through Teams calls
Hi all,
We have a user (a director) here whose outbound audio drops approximately 5-10 minutes into Teams calls (both audio-only and video). The incoming audio continues to work without fail. They use a Logitech C270 webcam as their primary microphone, as they won't use a headset and would prefer not to use an external microphone for desk-space reasons. We've replaced the webcam twice, including different models, which would indicate to me that it isn't a webcam hardware issue. We've also re-installed the Teams app and cleared the Teams cache (although I'm open to suggestions here if there is a more thorough way of doing this). I should note that the audio always works in short test calls on Teams, and it works in other applications such as Zoom, so I believe something unique to Teams is happening. Thanks all for any suggestions you can offer.
Can’t convert jpg to webp with the Photos app on Windows 11?
When it comes to basic image conversion, the Photos app is always my favorite, but that is not the case for JPG-to-WebP conversion. When I open a JPG with the Photos app, the Save As menu does not include an option for WebP; only jpg, jpx, png, tiff and bmp are available.
I am building a new website, and the uploaded images have to be in WebP format based on search engine recommendation. How can I bulk convert JPG to WebP on Windows 11 PC?
Thanks
Same device with Onboarded and Not Onboarded status
Hi,
I'm creating a detection rule to search for servers which are not onboarded to Defender. What's strange about this query is that I get the same device (same devicename but different deviceid) with both onboarding statuses, "Onboarded" and "Can be onboarded".
Does anyone know why? This way I get incorrect results from my detection rule.
Thanks
#N/A error in INDEX/MATCH functions
I reviewed the article on this topic (How to correct a #N/A error in the INDEX/MATCH functions – Microsoft Support) and did everything it describes, but the #N/A error was not fixed.
RefinableString Property empty
Hi,
On our tenant I’ve configured the RefinableString00 managed property (more than 2 days ago) with the crawled property OWS_Q_TEXT_MANUFACTURER. RefinableString00 still returns null.
What steps do we need to follow to see results?
Thanks
Edit links on classic SharePoint site
Hello!
I’m hoping someone can help me with editing a very old classic SharePoint site please!
I want to edit the URL link within the menu along the top, but clicking Edit Links doesn’t let me edit the drop-down menu part (I need to edit the URL at link “B”, located within “A”, within “S”). What am I missing? All Edit Links lets me do is edit “S”. I am an admin of the site.
Hunting for data related to privilege escalation (like app installs)
Hi,
I’m navigating the Defender tables to try to understand how I can hunt for privilege escalation events (benign ones, in this case). For example, when our Helpdesk team connects to a computer to install an application, it requests an elevation of privileges, as local users do not have permission to do this themselves.
I would like to audit this type of privilege escalation event, but I can’t find the data related to it.
Does anyone know in which table I can find this kind of data?
Thanks
How to use the new Chats & Channels experience in Teams
The new chats and channels experience in Microsoft Teams introduces several enhancements designed to improve collaboration and streamline communication.
You can turn on this feature in preview for evaluation.
These updates aim to make it easier for teams to collaborate, stay organized, and ensure that no important messages are missed.
#MicrosoftTeams #Teams #NewFeatures #Microsoft365 #Productivity #MVPbuzz
The Importance of Implementing SAST Scanning for Infrastructure as Code
Introduction
As the adoption of Infrastructure as Code (IaC) continues to grow, ensuring the security of your infrastructure configurations becomes increasingly crucial. Static Application Security Testing (SAST) scanning for IaC can play a vital role in identifying vulnerabilities early in the development lifecycle. This blog explores why implementing SAST scanning for IaC is essential for maintaining secure and robust infrastructure.
What is Infrastructure as Code?
Infrastructure as Code (IaC) is the practice of managing and provisioning computing infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. Common tools for IaC include Azure Resource Manager (ARM) templates, Bicep templates, Terraform, and AWS CloudFormation.
Understanding SAST
Static Application Security Testing (SAST) is a white-box testing methodology that analyzes source code to identify security vulnerabilities. Unlike dynamic analysis, which requires running the application, SAST scans the code at rest, allowing developers to identify and fix vulnerabilities early in the development process.
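To make "analyzing code at rest" concrete, here is a minimal, hypothetical sketch of the idea in Python: the scanner reads source text and flags suspicious patterns without ever executing the code. The regex rule and sample input are illustrative assumptions, not the rules of any real SAST tool.

```python
import re

# Hypothetical rule: flag lines that look like hardcoded credentials.
SECRET_PATTERN = re.compile(
    r'(password|secret|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)

def scan_source(source: str) -> list[str]:
    """Inspect source text at rest and return findings; nothing is executed."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if SECRET_PATTERN.search(line):
            findings.append(f"line {lineno}: possible hardcoded secret")
    return findings

sample = 'db_host = "example.local"\npassword = "hunter2"\n'
print(scan_source(sample))  # ['line 2: possible hardcoded secret']
```

Real SAST tools use far richer analyses (parsing, data flow), but the essential property is the same: findings are produced purely from the code itself.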
Why SAST for IaC?
Early Detection of Vulnerabilities
Implementing SAST scanning for IaC allows you to detect vulnerabilities in your infrastructure code before it is deployed. By integrating SAST tools into your CI/CD pipeline, you can identify and remediate security issues during the development phase, significantly reducing the risk of deploying insecure infrastructure.
Compliance and Best Practices
Many industries and organizations have specific compliance requirements that mandate the implementation of security best practices. SAST scanning helps ensure that your IaC adheres to these standards by identifying non-compliant configurations and suggesting best practices.
Reduced Attack Surface
IaC templates often include configurations for networking, storage, compute resources, and more. Misconfigurations in these templates can lead to security vulnerabilities, such as open ports, insecure storage configurations, or excessive permissions. SAST scanning helps identify these issues, reducing the overall attack surface of your infrastructure.
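The misconfiguration checks described above can be sketched as simple rules over a parsed template. The dictionary shape below is a simplified, hypothetical stand-in for a parsed ARM/Terraform/CloudFormation document, not any tool's real schema:

```python
def check_template(template: dict) -> list[str]:
    """Apply two illustrative misconfiguration rules to a parsed template."""
    issues = []
    for rule in template.get("network_rules", []):
        # Ingress open to the whole internet widens the attack surface.
        if rule.get("source") == "0.0.0.0/0":
            issues.append(f"open ingress on port {rule.get('port')}")
    for perm in template.get("permissions", []):
        # Wildcard actions grant far more than the workload needs.
        if perm.get("actions") == "*":
            issues.append(f"wildcard permissions for {perm.get('principal')}")
    return issues

template = {
    "network_rules": [{"port": 22, "source": "0.0.0.0/0"}],
    "permissions": [{"principal": "app-identity", "actions": "*"}],
}
print(check_template(template))
# ['open ingress on port 22', 'wildcard permissions for app-identity']
```

Production scanners such as Checkov or Trivy ship hundreds of rules of this kind, mapped to compliance benchmarks.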
Key Benefits of SAST for IaC
Automated Security
SAST tools can be integrated into your CI/CD pipeline, enabling automated security checks for every code commit. This automation ensures that security is a continuous part of the development process, rather than an afterthought.
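The pipeline contract behind this automation is simple: the scan step exits nonzero when findings exist, so CI marks the job failed and blocks the merge. A hedged sketch, where `run_scan` is a hypothetical placeholder for invoking a real tool and parsing its report:

```python
import sys

def run_scan(paths: list[str]) -> list[str]:
    # Placeholder: a real implementation would invoke a SAST tool
    # (e.g. Checkov or Trivy) on `paths` and parse its findings.
    return []

def main(paths: list[str]) -> int:
    findings = run_scan(paths)
    for finding in findings:
        print(f"FAIL: {finding}")
    # Exit code 0 lets the pipeline continue; nonzero blocks the merge.
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Because every commit runs through the same gate, security feedback arrives while the change is still on the developer's mind.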
Improved Developer Productivity
By identifying vulnerabilities early, SAST scanning reduces the time and effort required to fix security issues. Developers can address vulnerabilities as they write code, rather than having to go back and fix issues after they have been deployed.
Enhanced Security Posture
Regular SAST scanning helps maintain a strong security posture by ensuring that your infrastructure configurations are continuously monitored for vulnerabilities. This proactive approach helps prevent security incidents and ensures that your infrastructure remains secure over time.
Implementing SAST for IaC
Choose the Right Tool
There are several SAST tools available for IaC, each with its own strengths and weaknesses. Some popular options include Trivy, Checkov, Snyk, and Terrascan. Evaluate these tools based on their capabilities, ease of integration, and support for your specific IaC platform.
Integrate into CI/CD Pipeline
Integrate your chosen SAST tool into your CI/CD pipeline to enable automated scanning. This integration ensures that every code change is scanned for vulnerabilities before it is merged and deployed. For example, there is a Microsoft Security DevOps GitHub action and a Microsoft Security DevOps Azure DevOps extension that integrates many of these features.
Regularly Update and Review
Security is an ongoing process. Regularly update your SAST tools to benefit from the latest vulnerability definitions and scanning capabilities. Additionally, periodically review your scanning policies and configurations to ensure they remain effective.
Conclusion
Implementing SAST scanning for Infrastructure as Code is essential for maintaining secure and compliant infrastructure. By detecting vulnerabilities early, reducing the attack surface, and ensuring adherence to best practices, SAST scanning enhances the security and robustness of your infrastructure. Integrating SAST tools into your CI/CD pipeline automates security checks, improving developer productivity and maintaining a strong security posture. Microsoft Defender for Cloud DevOps security helps integrate these tools into your environment.
By making SAST scanning an integral part of your IaC process, you can confidently build and manage secure infrastructure that meets the demands of modern applications and compliance requirements.
Happy securing!
Microsoft Tech Community – Latest Blogs
Season of AI for Developers!
If you’re passionate about Artificial Intelligence and application development, don’t miss the opportunity to watch this amazing series from Microsoft Reactor. Throughout the season, we cover everything from the fundamentals of Azure OpenAI to the latest innovations presented at Microsoft Build 2024, culminating in the powerful Semantic Kernel framework for building intelligent applications. Each session is packed with numerous demos to help you understand every concept and apply it effectively.
Episodes:
Episode 1: Introduction to Azure OpenAI
Explore Azure OpenAI models, their capabilities, and how to integrate them with the Azure SDK.
Episode 2: Considerations for Implementing Models in Azure OpenAI
Learn how to manage service quotas, balance performance and latency, plan cost management, and apply the RAG pattern to optimize your implementations.
Episode 3: What’s New from Microsoft Build: PHI3, GPT-4o, Azure Content Safety, and More
Discover the latest updates from Microsoft Build, including PHI 3, GPT-4o with multimodal capabilities, the new Azure AI Studio, and Azure Content Safety.
Episode 4: Getting Started with Semantic Kernel
Learn about Semantic Kernel, an open-source SDK that allows you to easily integrate advanced LLMs into your applications to create smarter and more natural experiences.
Episode 5: Build Your Own Copilot with Semantic Kernel
Learn how to use Plugins, Planners, and Memories in Semantic Kernel to create copilots that work alongside users, providing intelligent suggestions to complete tasks.
–Don’t miss it! Rewatch each episode to discover how you can take your applications to the next level with Microsoft AI.
–Learn more and enhance your AI skills during this series with this collection of resources from Microsoft Learn: Explore the collection here.
Speakers:
-Luis Beltran – Microsoft MVP – LinkedIn
-Pablo Piovano – Microsoft MVP – LinkedIn
Windows XP SP2 causes Error 678 or Error 769 when you try to surf the Internet – Microsoft Support
Describes how to troubleshoot the cause of Error 678 and Error 769 which may occur when you try to connect to the Internet after you install Windows XP Service Pack 2 (SP2).
Error message when you try to start Windows Defender on a computer that you recently upgraded to Windows Vista: “Application failed to initialize” – Microsoft Support
Describes that you receive an “Application failed to initialize” error message when you try to start Windows Defender on a Windows Vista-based computer after you uninstall Windows Defender. Provides two resolutions.
Error message when you start a Windows Vista-based computer: “Windows has blocked some startup programs” – Microsoft Support
Describes an issue that occurs in Windows Vista. Windows may block some startup programs and services. A resolution is included.
Mitigation of a large number of block events that collect in the MDATP portal – Microsoft Support
Describes how to mitigate large numbers of block events that collect in the MDATP portal.