Conv2d, fully connected layers, and regression – number of predictions and number of channels mismatch
Hello! I'm trying to get a CNN up and running and I think I'm almost there, but I'm still running into a few errors. What I would like is a series of 1D convolutions fed by a featureInputLayer, but that setup throws the following error:
Caused by:
Layer 'conv1d1': Input data must have one spatial dimension only, one temporal dimension only, or one of each. Instead, it has 0 spatial dimensions and 0 temporal dimensions.
According to https://www.mathworks.com/matlabcentral/answers/1747170-error-on-convolutional-layer-s-input-data-has-0-spatial-dimensions-and-0-temporal-dimensions, the workaround is to rebuild the network with 2-D convolutions, treating each observation as an N x 1 "image." I've tried that, and now I have a new and interesting problem:
Error using trainnet (line 46)
Number of channels in predictions (3) must match the number of channels in the targets (1).
Error in convNet_1_edits (line 97)
[trainedNet, trainInfo] = trainnet(masterTrain, net, 'mse', options);
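In case the data layout matters, this is roughly how each observation gets reshaped into an N x 1 "image" for the conv2d version (a sketch only; the sizes and variable names are illustrative, based on my 1 x 1341 feature vectors):
% Sketch: each observation is a 1 x 1341 row vector of features.
% For imageInputLayer it becomes a 1 x 1341 x 1 array (height x width x channels).
x = rand(1, 1341);                  % placeholder feature vector
xImg = reshape(x, [1, 1341, 1]);    % 1 x 1341 "image" with one channel
nFeatures = [1, 1341, 1];           % size passed to imageInputLayer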
This problem has come up several times before (https://www.mathworks.com/matlabcentral/answers/2123216-error-in-deep-learning-classification-code/?s_tid=ans_lp_feed_leaf, among others), but none of the threads I've found use fully connected layers. For reference, my network is the following:
layers = [
    imageInputLayer(nFeatures, "Name", "input")
    convolution2dLayer(f1Size, numFilters1, "Padding", "same", ...
        "Name", "conv1")
    batchNormalizationLayer()
    reluLayer()
    convolution2dLayer(f1Size, numFilters2, "Padding", "same", ...
        "NumChannels", numFilters1, "Name", "conv2")
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer([1, 3])
    convolution2dLayer(f1Size, numFilters3, "Padding", "same", ...
        "NumChannels", numFilters2, "Name", "conv3")
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer([1, 5])
    fullyConnectedLayer(60, "Name", "fc1")
    reluLayer()
    fullyConnectedLayer(30, "Name", "fc2")
    reluLayer()
    fullyConnectedLayer(15, "Name", "fc3")
    reluLayer()
    fullyConnectedLayer(3, "Name", "fc4")
    % regressionLayer()
    ];
net = dlnetwork;
net = addLayers(net, layers);
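As a sanity check on what fc4 emits, a quick forward pass on a dummy observation (a sketch only, assuming nFeatures = [1, 1341, 1]) shows the channel count of the predictions:
% Sketch: initialize the network and push one dummy observation through it.
net = initialize(net);                                   % addLayers leaves the network uninitialized
dummyX = dlarray(rand(1, 1341, 1, "single"), "SSCB");    % spatial x spatial x channel x batch
dummyY = predict(net, dummyX);
size(dummyY)                                             % 3 channels, matching fc4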
I am training with trainnet and a datastore; calling read on the datastore returns the following:
read(masterTest)
ans =
1×4 cell array
{1×1341 double} {[0.6500]} {[6.8000e-07]} {[0.0250]}
So each observation is a 1 x 1341 set of features used to predict three outputs. I thought the three neurons in my final fully connected layer would be the regression outputs, but there seems to be a mismatch between the number of predictions and the number of targets. How can I align the number of predictions and targets when using regression with fully connected layers?
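One thing I have been wondering is whether the three scalar target cells need to be merged into a single 3-channel target before trainnet sees them, along the lines of the transform below (a sketch only, assuming masterTrain is a combined datastore whose read returns the 1 x 4 cell row shown above):
% Sketch: merge the three scalar targets into one 1 x 1 x 3 target per observation,
% so the target channel count matches the 3 neurons in fc4.
mergeTargets = @(c) [c(1), {cat(3, c{2}, c{3}, c{4})}];   % -> {features, 1 x 1 x 3 target}
masterTrainMerged = transform(masterTrain, mergeTargets);
% [trainedNet, trainInfo] = trainnet(masterTrainMerged, net, "mse", options);
Is something along those lines the right way to align them, or should the targets be shaped differently?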