How to fix the error: Error using trainNetwork, Input data indices must be nonnegative integers.
I am working through "Waveform Segmentation Using Deep Learning", which can be found at: https://www.mathworks.com/help/signal/ug/waveform-segmentation-using-deep-learning.html#WaveformSegmentationUsingDeepLearningExample-15
This is a sequence-to-sequence classification problem (e.g., input: (0.5, -5, 3, 10, 40, …); prediction: (P, T, T, T, n/a, …)).
I applied a Transformer encoder based on the code by Ben (MATLAB staff, https://www.mathworks.com/matlabcentral/answers/2014811-is-there-any-documentation-on-how-to-build-a-transformer-encoder-from-scratch-in-matlab ), replacing the LSTM layer with a Transformer encoder. My modified code is given at the bottom.
When I run the network-training section, I get the following error message, and I hope someone can help me fix the problem.
———————————————————————————————————————————————————————————————
Error in waveExtractionTest_TransEnc (line …)
filteredNet = trainNetwork(filteredTrainSignalss,trainLabels,net,options);
Caused by:
Error using nnet.internal.cnn.layer.util.EmbeddingDAGNetworkBaseStrategy/embedData
Input data indices must be nonnegative integers.
——————————————————————————————————————————————————————–
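For context, my understanding (which may be wrong) is that `wordEmbeddingLayer` treats each incoming value as an index into its embedding table, so it only accepts positive integers. A minimal sketch of the two situations on made-up toy data (the values here are hypothetical, not from my dataset):

```matlab
% Hypothetical minimal setup: an embedding layer expects integer token indices.
layers = [
    sequenceInputLayer(1)
    wordEmbeddingLayer(4,10)   % embedding dimension 4, "vocabulary" of 10 tokens
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];

Xok  = {[1 3 2 7]};            % positive integer indices, as the layer expects
Xbad = {[0.5 -5 3.1 10]};      % real-valued samples, like a filtered ECG signal
```

If that reading is right, then feeding the filtered ECG samples (real numbers such as 0.5 or -5) into the embedding layer would trigger exactly this "Input data indices must be nonnegative integers" error.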
%% Download the data
dataURL = 'https://www.mathworks.com/supportfiles/SPT/data/QTDatabaseECGData1.zip';
dirQT = pwd;
datasetFolder = fullfile(dirQT,'QTDataset');
zipFile = fullfile(dirQT,'QTDatabaseECGData.zip');
if ~exist(datasetFolder,'dir')
    websave(zipFile,dataURL);
    unzip(zipFile,dirQT);
end
%%
sds = signalDatastore(datasetFolder,'SignalVariableNames',["ecgSignal","signalRegionLabels"])
%%
rng default
[trainIdx,~,testIdx] = dividerand(numel(sds.Files),0.8,0,0.2);
trainDs = subset(sds,trainIdx);
testDs = subset(sds,testIdx);
%%
trainDs = transform(trainDs,@resizeData);
testDs = transform(testDs,@resizeData);
%%
% Bandpass filter design
hFilt = designfilt('bandpassiir','StopbandFrequency1',0.4215,'PassbandFrequency1',0.5, ...
    'PassbandFrequency2',40,'StopbandFrequency2',53.345, ...
    'StopbandAttenuation1',60,'PassbandRipple',0.1,'StopbandAttenuation2',60, ...
    'SampleRate',250,'DesignMethod','ellip');
% Create tall arrays from the transformed datastores and filter the signals
tallTrainSet = tall(trainDs);
tallTestSet = tall(testDs);
filteredTrainSignals = gather(cellfun(@(x)filter(hFilt,x),tallTrainSet(:,1),'UniformOutput',false));
trainLabels = gather(tallTrainSet(:,2));
filteredTestSignals = gather(cellfun(@(x)filter(hFilt,x),tallTestSet(:,1),'UniformOutput',false));
testLabels = gather(tallTestSet(:,2));
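As a quick sanity check on what actually reaches the embedding layer (using the variable names above), I can inspect the first filtered training signal:

```matlab
% Inspect the first filtered training signal (assumes filteredTrainSignals
% from the section above exists in the workspace).
x = filteredTrainSignals{1};
disp([min(x) max(x)]);               % shows the real-valued range of the samples
isIdx = all(x == floor(x) & x >= 1); % checks whether every sample is a positive integer
disp(isIdx);
```

In my case the filtered samples are real-valued, not positive integer indices in 1..vocabSize.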
%% Create model
% We will use 2 encoder layers.
numHeads = 1;
numKeyChannels = 20;
feedforwardHiddenSize = 100;
modelHiddenSize = 20;
% Note: this comment is left over from Ben's example, where the sequence
% values were integer tokens 1, 2, ..., 10 and the "vocabulary" size was 10.
vocabSize = 100000; % the input sequence of one training sample has 5000 samples
inputSize = 1;
encoderLayers = [
sequenceInputLayer(1,Name="in") % input
wordEmbeddingLayer(modelHiddenSize,vocabSize,Name="embedding") % embedding
positionEmbeddingLayer(modelHiddenSize,vocabSize) % position embedding
additionLayer(2,Name="embed_add") % add the data and position embeddings
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 1
additionLayer(2,Name="attention_add") %
layerNormalizationLayer(Name="attention_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward_add") %
layerNormalizationLayer(Name="encoder1_out") %
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 2
additionLayer(2,Name="attention2_add") %
layerNormalizationLayer(Name="attention2_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward2_add") %
layerNormalizationLayer() %
% indexing1dLayer %
% fullyConnectedLayer(inputSize)
fullyConnectedLayer(4)
softmaxLayer("Name","softmax")
classificationLayer("Name","classification")
]; % output head
%
net = layerGraph(encoderLayers);
net = connectLayers(net,"embed_add","attention_add/in2");
net = connectLayers(net,"embedding","embed_add/in2");
net = connectLayers(net,"attention_norm","feedforward_add/in2");
net = connectLayers(net,"encoder1_out","attention2_add/in2");
net = connectLayers(net,"attention2_norm","feedforward2_add/in2");
% net = initialize(net);
% analyze the network to see how data flows through it
analyzeNetwork(net)
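One variant I have been considering (I am not sure this is the right fix) is to replace the `wordEmbeddingLayer` front end, which expects integer tokens, with a linear projection of the continuous samples into the model dimension, keeping the rest of the encoder unchanged. A sketch of that alternative front end, assuming a fixed sequence length of 5000 for the position embedding:

```matlab
% Untested idea: project each real-valued scalar sample into the model
% dimension with a fullyConnectedLayer instead of embedding integer tokens.
modelHiddenSize = 20;
maxSequenceLength = 5000;  % assumption: length of one resized training signal
frontEnd = [
    sequenceInputLayer(1,Name="in")
    fullyConnectedLayer(modelHiddenSize,Name="embedding")    % linear projection
    positionEmbeddingLayer(modelHiddenSize,maxSequenceLength)
    additionLayer(2,Name="embed_add")];                      % sample + position
```

The projection output would also be connected to `embed_add/in2`, mirroring the `connectLayers` calls above. I would welcome confirmation on whether this is the intended way to feed continuous signals into a Transformer encoder.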
%
%%
options = trainingOptions("adam", ...
    MaxEpochs=10, ...
    MiniBatchSize=50, ...
    Plots="training-progress", ...
    Shuffle="every-epoch", ...
    InitialLearnRate=1e-2, ...
    LearnRateDropFactor=0.9, ...
    LearnRateDropPeriod=3, ...
    LearnRateSchedule="piecewise");
%%
filteredNet = trainNetwork(filteredTrainSignals,trainLabels,net,options);