Input image size must be greater than [64 1856] error in lidar labeler app
I have an error in my machine learning algorithm where it displays an issue with the image size. I am using a .pcd file different from Pandaset, and I am wondering why there is an error with the network model. I can't determine what creates netSize, or how I would influence it for a new set of .pcd data.
Error is:
Error using
Input image size must be greater than [64 1856]. The minimum input image size must be equal to or
greater than the input size in image input layer of the network.
Error in ()
iCheckImage(I, netSize);
Error in (line 244)
params = iParseInputs(I, net, varargin{:});
Error in lidar.labeler.LidarSemanticSegmentation/run (line 120)
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
Error in lidar.labeler.AutomationAlgorithm/doRun (line 611)
videoLabels = run(this, frame);
Error in lidar.internal.lidarLabeler.tool.TemporalLabelingTool/runAlgorithm
Error in vision.internal.labeler.tool.AlgorithmTab/setAlgorithmModeAndExecute
Error in vision.internal.labeler.tool.AlgorithmTab
Error in internal.Callback.execute (line 128)
feval(callback, src, event);
Error in matlab.ui.internal.toolstrip.base.Action/PeerEventCallback (line 846)
internal.Callback.execute(this.PushPerformedFcn, this, eventdata);
Error in matlab.ui.internal.toolstrip.base.ActionInterface>@(event,data)PeerEventCallback(this,event,data) (line 57)
this.PeerEventListener = addlistener(this.Peer, 'peerEvent', @(event, data) PeerEventCallback(this, event, data));
Error in hgfeval (line 62)
feval(fcn{1},varargin{:},fcn{2:end});
Error in javaaddlistener>cbBridge (line 52)
hgfeval(response, java(o), e.JavaEvent)
Error in javaaddlistener>@(o,e)cbBridge(o,e,response) (line 47)
@(o,e) cbBridge(o,e,response));
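For context on where the `[64 1856]` comes from: it is the spatial size of the image input layer of the loaded network, not something the Lidar Labeler app computes. A quick diagnostic sketch (assuming the network .mat file is in the same `tempdir` location used by `initialize` below):

```matlab
% Load the same network the automation algorithm loads and inspect its
% first layer. For the Pandaset-trained SqueezeSegV2 model the input
% size is 64 beams x 1856 azimuth bins x 5 channels.
pretrained = load(fullfile(tempdir, 'Pandaset', 'trainedSqueezeSegV2PandasetNet.mat'));
net = pretrained.net;
inputSize = net.Layers(1).InputSize
% semanticseg rejects any image whose first two dimensions are smaller
% than inputSize(1:2), which is exactly the error raised above.
```

So to use a different .pcd dataset, either the converted point-cloud image must be at least that size, or the network must be retrained with an input layer matching the new sensor's resolution.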
classdef LidarSemanticSegmentation < lidar.labeler.AutomationAlgorithm
% LidarSemanticSegmentation Automation algorithm performs semantic
% segmentation in the point cloud.
% LidarSemanticSegmentation is an automation algorithm for segmenting
% a point cloud using SqueezeSegV2 semantic segmentation network
% which is trained on Pandaset data set.
%
% See also lidarLabeler, groundTruthLabeler
% lidar.labeler.AutomationAlgorithm.
% Copyright 2021 The MathWorks, Inc.
% ----------------------------------------------------------------------
% Step 1: Define the required properties describing the algorithm. This
% includes Name, Description, and UserDirections.
properties(Constant)
% Name Algorithm Name
% Character vector specifying the name of the algorithm.
Name = 'Lidar Semantic Segmentation';
% Description Algorithm Description
% Character vector specifying the short description of the algorithm.
Description = 'Segment the point cloud using SqueezeSegV2 network.';
% UserDirections Algorithm Usage Directions
% Cell array of character vectors specifying directions for
% algorithm users to follow to use the algorithm.
UserDirections = {['ROI Label Definition Selection: select one of ' ...
'the ROI definitions to be labeled'], ...
'Run: Press RUN to run the automation algorithm. ', ...
['Review and Modify: Review automated labels over the interval ', ...
'using playback controls. Modify/delete/add ROIs that were not ' ...
'satisfactorily automated at this stage. If the results are ' ...
'satisfactory, click Accept to accept the automated labels.'], ...
['Accept/Cancel: If the results of automation are satisfactory, ' ...
'click Accept to accept all automated labels and return to ' ...
'manual labeling. If the results of automation are not ' ...
'satisfactory, click Cancel to return to manual labeling ' ...
'without saving the automated labels.']};
end
% ---------------------------------------------------------------------
% Step 2: Define properties you want to use during the algorithm
% execution.
properties
% AllCategories
% AllCategories holds the default 'unlabelled', 'Vegetation',
% 'Ground', 'Road', 'RoadMarkings', 'SideWalk', 'Car', 'Truck',
% 'OtherVehicle', 'Pedestrian', 'RoadBarriers', 'Signs',
% 'Buildings' categorical types.
AllCategories = {'unlabelled'};
% PretrainedNetwork
% PretrainedNetwork saves the pretrained SqueezeSegV2 network.
PretrainedNetwork
end
%----------------------------------------------------------------------
% Note: this method needs to be included for lidarLabeler app to
% recognize it as using pointcloud
methods (Static)
% This method is static to allow the apps to call it and check the
% signal type before instantiation. When users refresh the
% algorithm list, we can quickly check and discard algorithms for
% any signal that is not supported in a given app.
function isValid = checkSignalType(signalType)
isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
end
end
%----------------------------------------------------------------------
% Step 3: Define methods used for setting up the algorithm.
methods
function isValid = checkLabelDefinition(algObj, labelDef)
% Only Voxel ROI label definitions are valid for the Lidar
% semantic segmentation algorithm.
isValid = labelDef.Type == lidarLabelType.Voxel;
if isValid
algObj.AllCategories{end+1} = labelDef.Name;
end
end
function isReady = checkSetup(algObj)
% Is there one selected ROI Label definition to automate.
isReady = ~isempty(algObj.SelectedLabelDefinitions);
end
end
%----------------------------------------------------------------------
% Step 4: Specify algorithm execution. This controls what happens when
% the user presses RUN. Algorithm execution proceeds by first
% executing initialize on the first frame, followed by run on
% every frame, and terminate on the last frame.
methods
function initialize(algObj,~)
% Load the pretrained SqueezeSegV2 semantic segmentation network.
outputFolder = fullfile(tempdir, 'Pandaset');
pretrainedSqueezeSeg = load(fullfile(outputFolder,'trainedSqueezeSegV2PandasetNet.mat'));
% Store the network in the 'PretrainedNetwork' property of this object.
algObj.PretrainedNetwork = pretrainedSqueezeSeg.net;
end
function autoLabels = run(algObj, pointCloud)
% Set up a categorical matrix with categories including
% 'Vegetation', 'Ground', 'Road', 'RoadMarkings', 'SideWalk',
% 'Car', 'Truck', 'OtherVehicle', 'Pedestrian', 'RoadBarriers',
% and 'Signs'.
autoLabels = categorical(zeros(size(pointCloud.Location,1), size(pointCloud.Location,2)), ...
0:12,algObj.AllCategories);
% Convert the input point cloud to five channel image.
I = helperPointCloudToImage(pointCloud);
% Predict the segmentation result.
predictedResult = semanticseg(I, algObj.PretrainedNetwork);
autoLabels(:) = predictedResult;
% TODO: this is where the latest result could be continuously updated
% and sent out to the CAN network, or at least made available to
% downstream consumers.
end
end
end
function image = helperPointCloudToImage(ptcloud)
% helperPointCloudToImage converts the point cloud to 5 channel image
image = ptcloud.Location;
image(:,5) = ptcloud.Intensity;
rangeData = iComputeRangeData(image(:,1),image(:,2),image(:,3));
image(:,4) = rangeData;
index = isnan(image);
image(index) = 0;
end
function rangeData = iComputeRangeData(xChannel,yChannel,zChannel)
rangeData = sqrt(xChannel.*xChannel+yChannel.*yChannel+zChannel.*zChannel);
end
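A likely cause of the size error: `helperPointCloudToImage` only yields a 64-by-1856-by-5 image when the input point cloud is *organized* with that row/column layout, as the Pandaset frames are. A generic .pcd file is usually unorganized, so `ptcloud.Location` is an M-by-3 list and the resulting "image" never matches the network's input layer. A hedged sketch of two possible fixes; the sensor preset `'OS1Gen1-64'`, the azimuth resolution `1024`, and the file name `myScan.pcd` are placeholders to be replaced with the actual sensor's parameters:

```matlab
% Option 1: organize the raw cloud to the sensor's scan pattern before
% converting it to a 5-channel image (requires Lidar Toolbox).
ptCloudRaw = pcread('myScan.pcd');              % unorganized M-by-3 cloud
params = lidarParameters('OS1Gen1-64', 1024);   % assumed sensor preset
ptCloudOrg = pcorganize(ptCloudRaw, params);    % Location becomes H-by-W-by-3

% Option 2: if the cloud is already organized but the converted image is
% smaller than the network input, zero-pad it up to the minimum size.
I = helperPointCloudToImage(ptCloudOrg);
netSize = [64 1856];                            % from net.Layers(1).InputSize
padRows = max(0, netSize(1) - size(I,1));
padCols = max(0, netSize(2) - size(I,2));
Ipadded = padarray(I, [padRows padCols], 0, 'post');
```

Note that padding only works around the size check; for a sensor with a very different resolution, retraining SqueezeSegV2 on data from that sensor will give far better labels than feeding it padded images.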
machine learning, neural network MATLAB Answers — New Questions