Calibrating multiple cameras: How do I get 3D points from triangulation into a worldpointset or pointTrack?
Hi everyone! I am working on a project where I need to calibrate multiple cameras observing a scene, so that I can ultimately get 3D points of an object in later videos collected by the same cameras. The cameras are stationary. Importantly, I need to triangulate the checkerboard points from the calibration and then run sparse bundle adjustment on them, to improve the accuracy of both the camera pose estimates and the 3D checkerboard points. Sparse bundle adjustment (bundleAdjustment) can take either pointTrack objects or a worldpointset object.
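(For context, here is my understanding of the pointTrack-style call, as a minimal sketch with made-up intrinsics, poses and points rather than my real setup:)
intr  = cameraIntrinsics([800 800], [320 240], [480 640]); % [fx fy], [cx cy], [rows cols]
pose1 = rigidtform3d(eye(3), [0 0 0]);       % reference camera at the origin
pose2 = rigidtform3d(eye(3), [100 0 0]);     % hypothetical 100 mm baseline
xyz   = [-50 0 1000; 0 20 1100; 60 -30 900]; % three points in front of both cameras
p1 = world2img(xyz, pose2extrinsics(pose1), intr); % project into view 1
p2 = world2img(xyz, pose2extrinsics(pose2), intr); % project into view 2
tracks = pointTrack.empty;
for k = 1:size(xyz,1) % one track per 3D point, observed in views 1 and 2
    tracks(k) = pointTrack([1 2], [p1(k,:); p2(k,:)]);
end
camPoses = table(uint32([1; 2]), [pose1; pose2], ...
    'VariableNames', {'ViewId','AbsolutePose'});
[xyzRefined, posesRefined] = bundleAdjustment(xyz, tracks, camPoses, intr);
(The worldpointset style instead pairs a worldpointset with an imageviewset, which is what I attempt below.)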
I have two calibration sessions (front camera with rear right, and front camera with rear left – the three cameras are in a triangular configuration) from which I load the stereoParams. I have also stored the useful data in a structure called 's'.
I then get the 3D coordinates of the checkerboards using the worldpointset and feature-matching approach. I have included all my code (including the code I used to save the important variables).
The error I get with the bundleAdjustment function is the following:
Error using vision.internal.bundleAdjust.validateAndParseInputs
The number of feature points in view 1 must be greater than or equal to 51.
Error in vision.internal.bundleAdjust.sparseBA (line 39)
vision.internal.bundleAdjust.validateAndParseInputs(optimType, mfilename, varargin{:});
Error in bundleAdjustment (line 10)
vision.internal.bundleAdjust.sparseBA('full', mfilename, varargin{:});
When I investigated pointTrack, it seemed best suited to tracking a point through multiple frames of a video, and not great for my application, where I want to track one point through 3 different views at once.
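(As far as I can tell from the docs, though, a pointTrack itself doesn't care whether the views are video frames or simultaneous cameras – the view IDs only need to match the IDs in the camera pose table or viewset. A sketch with made-up pixel values:)
viewIDs = [1 2 3];       % front, rear left, rear right
pixels  = [512.3 384.1;  % one checkerboard corner as seen by view 1
           498.7 402.9;  % the same corner in view 2
           505.2 390.4]; % the same corner in view 3
track = pointTrack(viewIDs, pixels);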
AT LAST --> MY QUESTION:
Am I using worldpointset correctly for this application, and if so, can someone please help me figure out where this error in the feature points is coming from?
If not, would pointTrack be better for my application if I change the dimensionality of my problem? If pointTrack is better, I would need to track a point through the frames of each camera and somehow correlate and triangulate points that way.
**Note: because the structure 's' contains many images, it was too large to upload (even compressed), so I uploaded a screenshot of it instead. Hopefully my code provides enough context. The visualisation section does run, though!
load("params.mat","params")
intrinsics1 = params.cam1.Intrinsics;
intrinsics2 = params.cam2.Intrinsics;
intrinsics3 = params.cam3.Intrinsics;
intrinsics4 = params.cam4.Intrinsics;
intrinsicsFront = intrinsics2;
intrinsicsRLeft = intrinsics3;
intrinsicsRRight = intrinsics4;
%% Visualise cameras
load("stereoParams1.mat")
load("stereoParams2.mat")
figure; showExtrinsics(stereoParams1, 'CameraCentric')
hold on;
showExtrinsics(stereoParams2, 'CameraCentric');
hold off;
% initialise camera 1 (front) pose at the origin, with no rotation
front_absolute_pose = rigidtform3d([0 0 0], [0 0 0]);
%% Get 3D Points
load("s_struct.mat","s")
board_shape = s.front.board_shape;
camPoseVSet = imageviewset;
camPoseVSet = addView(camPoseVSet,1,front_absolute_pose);
camPoseVSet = addView(camPoseVSet,2,stereoParams1.PoseCamera2);
camPoseVSet = addView(camPoseVSet,3,stereoParams2.PoseCamera2);
camposes = poses(camPoseVSet);
intrinsicsArray = [intrinsicsFront, intrinsicsRRight, intrinsicsRLeft];
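% NB: this order [front, rear right, rear left] lines up positionally with
% the viewIDs vector [1,3,2] passed to bundleAdjustment below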
frames = fieldnames(s.front.points);
framesRearRight = fieldnames(s.rearright.points);
framesRearLeft = fieldnames(s.rearleft.points);
wpSet = worldpointset;
wpsettrial = worldpointset;
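% wpsettrial is the set actually passed to bundleAdjustment; wpSet only
% feeds the commented-out per-corner path inside the loops below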
for i = 1:length(frames) % for each frame seen by the front camera
frame_i = frames{i};
pointsFront = s.front.points.(frame_i);
pointsFrontUS = s.front.unshapedPoints.(frame_i);
container = contains(framesRearRight, frame_i);
j = 1;
if ~isempty(container) && any(container)
pointsRearRight = s.rearright.points.(frame_i);
pointsRearRightUS = s.rearright.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearright.imageFile.(frame_i)), pointsRearRightUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams2);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,3,newPointIndices,indexPairs(:,2));
sz = size(s.front.points.(frame_i));
for h = 1:sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearRight(h,w,1), pointsRearRight(h,w,2)];
IDs = [1, 3];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearRight(h,w,1), pointsRearRight(h,w,2)], stereoParams2); % front + rear-right stereo pair, consistent with the triangulate call above
% [wpSet,newPointIndices] = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,j,j);
% wpSet = addCorrespondences(wpSet,3,j,j);
pointIn3D = [pointIn3D;triang3D];
j=j+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDright.PT3D.(frame_i) = pointIn3D;
%s.frontANDright.PT3DSBA.(frame_i) = xyzPoints;
end
container = contains(framesRearLeft, frame_i);
m=1;
if ~isempty(container) && any(container)
pointsRearLeft = s.rearleft.points.(frame_i);
pointsRearLeftUS = s.rearleft.unshapedPoints.(frame_i);
pointIn3D = [];
pointIn3Dnew = [];
sz = size(s.front.points.(frame_i));
[features1, validPts1] = extractFeatures(im2gray(s.front.imageFile.(frame_i)), pointsFrontUS);
[features2, validPts2] = extractFeatures(im2gray(s.rearleft.imageFile.(frame_i)), pointsRearLeftUS);
indexPairs = matchFeatures(features1,features2);
matchedPoints1 = validPts1(indexPairs(:,1),:);
matchedPoints2 = validPts2(indexPairs(:,2),:);
worldPTS = triangulate(matchedPoints1, matchedPoints2, stereoParams1);
[wpsettrial,newPointIndices] = addWorldPoints(wpsettrial,worldPTS);
wpsettrial = addCorrespondences(wpsettrial,1,newPointIndices,indexPairs(:,1));
wpsettrial = addCorrespondences(wpsettrial,2,newPointIndices,indexPairs(:,2));
for h = 1:sz(1)
for w = 1:sz(2)
point2track = [pointsFront(h,w,1), pointsFront(h,w,2); pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)];
IDs = [1, 2];
track = pointTrack(IDs,point2track);
triang3D = triangulate([pointsFront(h,w,1), pointsFront(h,w,2)], [pointsRearLeft(h,w,1), pointsRearLeft(h,w,2)], stereoParams1);
% wpSet = addWorldPoints(wpSet,triang3D);
% wpSet = addCorrespondences(wpSet,1,m,m);
% wpSet = addCorrespondences(wpSet,2,m,m);
pointIn3D = [pointIn3D;triang3D];
m = m+1;
end
end
pointIn3D = reshape3D(pointIn3D, board_shape);
%xyzPoints = reshape3D(pointIn3D,board_shape);
s.frontANDleft.PT3D.(frame_i) = pointIn3D;
%s.frontANDleft.PT3DSBA.(frame_i) = xyzPoints;
end
[wpSetRefined,vSetRefined,pointIndex] = bundleAdjustment(wpsettrial, camPoseVSet, [1,3,2], intrinsicsArray, ...
    FixedViewIDs=[1,3,2], Solver="preconditioned-conjugate-gradient")
end
function [img_name, ptsUS,pts, worldpoints] = reformatData(img_name, pts, board_shape, worldpoints)
%method taken from acinoset code
img_name = img_name(1:strfind(img_name,' ')-1);
img_name = replace(img_name, '.', '_');
ptsUS = pts;
pts = pagetranspose(reshape(pts, [board_shape, 2]));
pts = pagetranspose(reshape(pts, [board_shape, 2])); %repetition is purposeful
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
worldpoints = pagetranspose(reshape(worldpoints, [board_shape,2]));
end
function pts = reshape3D(points3D, board_shape)
pts = pagetranspose(reshape(points3D, [board_shape, 3]));
pts = pagetranspose(reshape(pts, [board_shape, 3])); %repetition is purposeful
end