Category: Matlab
Simulink integrator block reset and integrate at single time step
Hello,
I am working on a system with two state variables, one of which is activated (becomes nonzero) in the middle of the simulation.
I am using the "reset" feature of the Integrator block to model the appearance of this state variable: I feed in the appropriate initial state value and trigger a reset.
I noticed that when the reset is triggered, the Integrator block performs only the reset and skips integration, thus spending one time step without integrating.
I was wondering whether it is possible to perform both the reset and the integration in a single time step when the reset is triggered.
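To make the desired behavior concrete, here is a script-level sketch (forward Euler, with made-up dynamics, step size, and reset time, not the actual Simulink model) in which the reset and the integration both happen within the same step:

```matlab
% Sketch: reset and integrate in the same step (hypothetical example).
% Assumed dynamics xdot = -x, with a reset to x0 = 2 at t = 0.5 s.
dt = 0.01; tEnd = 1;
x = 0; x0 = 2;
t = 0:dt:tEnd;
xHist = zeros(size(t));
for k = 1:numel(t)
    if abs(t(k) - 0.5) < dt/2
        x = x0;            % reset the state first ...
    end
    x = x + dt * (-x);     % ... and still integrate during this step
    xHist(k) = x;
end
plot(t, xHist), grid on, xlabel('t'), ylabel('x')
```

In this sketch the state jumps to its new initial value and immediately continues integrating, which is the behavior being asked about for the Integrator block's external reset.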
Thank you!
integration, simulink, integrator, simulation
MATLAB Answers — New Questions
How to fix the error: Error using trainNetwork, Input data indices must be nonnegative integers.
I am working on "waveform segmentation using deep learning", following the example at: https://www.mathworks.com/help/signal/ug/waveform-segmentation-using-deep-learning.html#WaveformSegmentationUsingDeepLearningExample-15
This is a sequence-to-sequence classification problem (e.g., input: (0.5, -5, 3, 10, 40, ...); prediction: (P, T, T, T, n/a, ...)).
I applied a Transformer encoder based on the code by Ben (MathWorks staff, https://www.mathworks.com/matlabcentral/answers/2014811-is-there-any-documentation-on-how-to-build-a-transformer-encoder-from-scratch-in-matlab ), replacing the LSTM layer with a Transformer encoder. My modified code is given at the bottom.
When I run the network-training section, I get the error message below, and I would appreciate help fixing the problem.
----------------------------------------------------------------
Error in waveExtractionTest_TransEnc (line …)
filteredNet = trainNetwork(filteredTrainSignalss,trainLabels,net,options);
Caused by:
Error using nnet.internal.cnn.layer.util.EmbeddingDAGNetworkBaseStrategy/embedData
Input data indices must be nonnegative integers.
----------------------------------------------------------------
%% Download the data
dataURL = 'https://www.mathworks.com/supportfiles/SPT/data/QTDatabaseECGData1.zip';
dirQT = pwd;
datasetFolder = fullfile(dirQT,'QTDataset');
zipFile = fullfile(dirQT,'QTDatabaseECGData.zip');
if ~exist(datasetFolder,'dir')
websave(zipFile,dataURL);
unzip(zipFile,dirQT);
end
%%
sds = signalDatastore(datasetFolder,'SignalVariableNames',["ecgSignal","signalRegionLabels"])
%%
rng default
[trainIdx,~,testIdx] = dividerand(numel(sds.Files),0.8,0,0.2);
trainDs = subset(sds,trainIdx);
testDs = subset(sds,testIdx);
%%
trainDs = transform(trainDs,@resizeData);
testDs = transform(testDs,@resizeData);
%%
% Bandpass filter design
hFilt = designfilt('bandpassiir','StopbandFrequency1',0.4215,'PassbandFrequency1',0.5, ...
'PassbandFrequency2',40,'StopbandFrequency2',53.345, ...
'StopbandAttenuation1',60,'PassbandRipple',0.1,'StopbandAttenuation2',60, ...
'SampleRate',250,'DesignMethod','ellip');
% Create tall arrays from the transformed datastores and filter the signals
tallTrainSet = tall(trainDs);
tallTestSet = tall(testDs);
filteredTrainSignals = gather(cellfun(@(x)filter(hFilt,x),tallTrainSet(:,1),'UniformOutput',false));
trainLabels = gather(tallTrainSet(:,2));
filteredTestSignals = gather(cellfun(@(x)filter(hFilt,x),tallTestSet(:,1),'UniformOutput',false));
testLabels = gather(tallTestSet(:,2));
%% Create model
% We will use 2 encoder layers.
numHeads = 1;
numKeyChannels = 20;
feedforwardHiddenSize = 100;
modelHiddenSize = 20;
% Since the values in the sequence can be 1, 2, ..., 10 the "vocabulary" size is 10.
vocabSize = 100000; % the input sequence length of one training sample is 5000
inputSize = 1;
encoderLayers = [
sequenceInputLayer(1,Name="in") % input
wordEmbeddingLayer(modelHiddenSize,vocabSize,Name="embedding") % embedding
positionEmbeddingLayer(modelHiddenSize,vocabSize) % position embedding
additionLayer(2,Name="embed_add") % add the data and position embeddings
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 1
additionLayer(2,Name="attention_add") %
layerNormalizationLayer(Name="attention_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward_add") %
layerNormalizationLayer(Name="encoder1_out") %
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 2
additionLayer(2,Name="attention2_add") %
layerNormalizationLayer(Name="attention2_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward2_add") %
layerNormalizationLayer() %
% indexing1dLayer %
% fullyConnectedLayer(inputSize)
fullyConnectedLayer(4)
softmaxLayer("Name","softmax")
classificationLayer("Name","classification")
]; % output head
%
net = layerGraph(encoderLayers);
net = connectLayers(net,"embed_add","attention_add/in2");
net = connectLayers(net,"embedding","embed_add/in2");
net = connectLayers(net,"attention_norm","feedforward_add/in2");
net = connectLayers(net,"encoder1_out","attention2_add/in2");
net = connectLayers(net,"attention2_norm","feedforward2_add/in2");
% net = initialize(net);
% analyze the network to see how data flows through it
analyzeNetwork(net)
%
%%
options = trainingOptions("adam", ...
MaxEpochs = 10, ...
MiniBatchSize = 50, ...
Plots="training-progress", ...
Shuffle="every-epoch", ...
InitialLearnRate=1e-2, ...
LearnRateDropFactor=0.9, ...
LearnRateDropPeriod=3, ...
LearnRateSchedule="piecewise");
%%
filteredNet = trainNetwork(filteredTrainSignals,trainLabels,net,options);
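For context on what the error message refers to: wordEmbeddingLayer performs a table lookup, so it expects positive integer token indices, whereas the band-pass-filtered ECG samples are continuous real values. A sketch of an alternative input stage (an assumption on my part, untested) would project the continuous samples with a learned linear layer instead of looking them up:

```matlab
% Sketch (assumed workaround): replace the lookup-based word embedding
% with a fullyConnectedLayer projection for continuous-valued input.
modelHiddenSize = 20;    % same model width as above
maxSeqLength = 5000;     % assumed maximum sequence length per sample
inputStage = [
    sequenceInputLayer(1,Name="in")
    fullyConnectedLayer(modelHiddenSize,Name="embedding")  % projects real samples to modelHiddenSize channels
    positionEmbeddingLayer(modelHiddenSize,maxSeqLength)   % position embedding over sample index
    additionLayer(2,Name="embed_add")                      % add data and position embeddings
    ];
```

This only illustrates the integer-index requirement of embedding lookups; whether it resolves the training error for this particular network is not confirmed.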
sequence-to-sequence classification, transformer encoder, ecg signal wave segmentation
MATLAB Answers — New Questions
Buoy move with ocean wave
Hello guys,
I have a simulation question. I am trying to model how a buoy moves up and down with ocean waves. The problem is that I couldn't add the sine wave to the figure. Here is the code I wrote:
clear all;clc
radiusofcrank = 1;
angular_velocity = 60; % angular velocity in radians per second
for t = linspace(0, 1, 1000)
x_crank = radiusofcrank * cos(angular_velocity * t); %motion of the crank radius
y_crank = radiusofcrank * sin(angular_velocity * t);
x_buoy = 0; %x axis of buoy
y_buoy = y_crank - 5; % makes buoy move up and down
plot(x_crank, y_crank); % crank
hold on;
plot([0, x_crank], [0, y_crank], 'black'); % crank radius
plot(x_buoy, y_buoy, 'o', 'MarkerSize', 50, 'MarkerFaceColor', 'b'); % buoy
plot([x_crank, x_buoy], [y_crank, y_buoy], '-', 'Color', 'b'); % connecting rod
theta = linspace(0, 2*pi, 100); %plots circle (2pi = 360 degree)
circle_x = radiusofcrank * cos(theta);
circle_y = radiusofcrank * sin(theta);
plot(circle_x, circle_y);
hold off;
title('Buoy Motion');
xlabel('X-coordinate');
ylabel('Y-coordinate');
axis equal;
grid on;
pause(0.01);
end
% Sine Wave
t = 0.1:100;
Wave_amplitude_sine = 3; % Amplitude of the sine wave
wave_frequency = 0.01; % Frequency of the sine wave (in Hz)
% Create the figure and plot initial sine wave
h = plot(t,sin(2 * pi * wave_frequency * t));
title('Ocean Wave');
xlabel('Time (s)');
ylabel('Amplitude');
grid on;
for time = 0:1:1000
phase = 2 * pi * wave_frequency * time; % Update the phase based on time and speed
set(h, ‘YData’, sin(2 * pi * wave_frequency * t + phase));
pause(0.028);
end
I need your help. Thank you!
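A sketch of one way to get the wave and the buoy into the same figure: draw the wave once, keep a handle to the wave line and to the buoy marker, and update both inside a single animation loop. The amplitude, frequency, and axis limits below are assumptions for illustration:

```matlab
% Sketch: buoy riding a travelling sine wave, animated in one figure.
radius = 1; omega = 60;          % crank radius and angular velocity (rad/s)
A = 0.5; f = 0.5;                % assumed wave amplitude and frequency (Hz)
xw = linspace(-5, 5, 200);       % x-range over which the wave is drawn
figure; hold on
hWave = plot(xw, A*sin(2*pi*f*xw), 'b');                    % the wave
hBuoy = plot(0, 0, 'o', 'MarkerSize', 20, 'MarkerFaceColor', 'b'); % the buoy
axis([-5 5 -3 3]); axis manual; grid on
for t = linspace(0, 4, 400)
    yc = radius * sin(omega * t);                 % crank height drives the buoy
    set(hWave, 'YData', A*sin(2*pi*f*(xw - t)));  % wave travels to the right
    set(hBuoy, 'YData', yc);                      % update buoy position
    drawnow limitrate
    pause(0.01)
end
hold off
```

Updating handles with set() instead of calling plot() each iteration keeps both objects alive in one axes, which is what the original loop's hold off was discarding.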
model, sinewave, crank MATLAB Answers — New Questions
Simulation 3D Vehicle Lights
Hello,
When using the Simulation 3D vehicle with Ground Following block in Simulink, the lights appear on when all options are set to off. With the below setup and all lights set to off (all zeros)…
the lights remain on.
How do I turn these lights off?
Thanks!
unreal engine, automated driving scenario MATLAB Answers — New Questions
How to change color of point depending on side of a line?
Hello,
I am producing a scatter plot using two columns of data from a .db file and would like to change the color of each data point based on its location relative to two lines in the figure. That is, if:
1) the point is to the left of some xline, that point should be some color
2) the point is to the right of the xline AND above some yline, let the point be another color
3) the point is to the right of the xline and below some yline, have it be yet another color
I don't think I can include scatter in a for loop and don't know if I should organize each of the points in the table data into separate "bins" to then scatter individually (don't know how to do that either, to be honest).
Any and all help would be greatly appreciated. Thank you!
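No loop is needed: logical indexing can split the data into the three regions, with one scatter call per region. A minimal sketch, where x0, y0, and the random data stand in for the actual line positions and .db columns:

```matlab
% Sketch: color points by region relative to assumed thresholds x0 and y0.
x0 = 5; y0 = 2;                        % hypothetical xline/yline positions
x = 10*rand(200,1); y = 4*rand(200,1); % stand-in for the .db columns
left  = x <  x0;                       % region 1: left of the xline
upper = x >= x0 & y >= y0;             % region 2: right of xline, above yline
lower = x >= x0 & y <  y0;             % region 3: right of xline, below yline
figure; hold on
scatter(x(left),  y(left),  20, 'r', 'filled')
scatter(x(upper), y(upper), 20, 'g', 'filled')
scatter(x(lower), y(lower), 20, 'b', 'filled')
xline(x0); yline(y0);
hold off
```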
scatter, color, figure, xline, yline, .db MATLAB Answers — New Questions
IsField method not returning correct value
The isfield() method is not currently working for me. Check out this output from the console:
K>> TFORM
TFORM =
projective2d with properties:
T: [3×3 double]
Dimensionality: 2
K>> TFORM.T
ans =
0.9889 -0.0068 -0.0000
0.0003 0.9918 -0.0000
3.7452 5.5412 1.0000
K>> isfield(TFORM, 'T')
ans =
0
I can see that my object (TFORM) is there, and that it has a field named 'T'. However, when I run isfield(TFORM, 'T'), I get a 0 result. What gives?
**Edit**
I'm running R2015a (8.5.0.197613), on a 64-bit Mac.
AND
I was able to work around this by using:
any(strcmp(fieldnames(TFORM),'T'))
In place of:
isfield(TFORM, 'T')
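For what it's worth, projective2d is an object, not a struct, so isfield returns 0 even though T is a valid property. isprop is the property-aware check, sketched here (projective2d assumes the Image Processing Toolbox):

```matlab
tform = projective2d(eye(3));  % example object with a T property
isfield(tform, 'T')            % 0: tform is not a struct, so isfield says no
isprop(tform, 'T')             % 1: 'T' is a property of the object
```

The fieldnames() workaround happens to work because fieldnames on an object returns its public property names.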
isfield, bug, not a bug MATLAB Answers — New Questions
Result of equation in Matlab is different from Excel.
As the numbers increase (not very big), digits after the decimal point become different from Excel and a calculator. I get 158486.23 in Excel, and 158486.44 in MATLAB. Which one is more accurate and should I take into account?
equation, matlab, excel, data MATLAB Answers — New Questions
How can I extract the x and y values of the phase- and magnitude graph of a .fig file plotted with bodeplot() or bode()?
I got a myBodeplot.fig file with two subplots and want to extract the x and y values for magnitude and phase.
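One approach, sketched under the assumption that the saved .fig contains the two standard bodeplot axes with ordinary line children:

```matlab
% Sketch: recover x/y data from a saved Bode .fig.
fig = openfig('myBodeplot.fig', 'invisible');
ax = findobj(fig, 'Type', 'axes');        % magnitude axes and phase axes
for k = 1:numel(ax)
    ln = findobj(ax(k), 'Type', 'line');  % response curves in this subplot
    for j = 1:numel(ln)
        x = get(ln(j), 'XData');          % frequency values
        y = get(ln(j), 'YData');          % magnitude (dB) or phase (deg)
    end
end
```

If the system object itself is still available, bode(sys) with output arguments ([mag,phase,w] = bode(sys)) returns the same data without touching the figure.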
matlab, signal processing, digital signal processing, control system toolbox, bode, bodeplot MATLAB Answers — New Questions
Not enough input arguments
Hi, I need help. When I try to run this code,
function [prob, mean_val, one_std, two_std, area] = calcProbs(pdf, a, b, mu1, sigma1, mu2, sigma2)
% Integrate the probability density function over the interval [a, b] to get the probability
prob = integral(pdf, a, b);
An error shows which says,
>> calcProbs(pdf, a, b, mu1, sigma1, mu2, sigma2)
Error using pdf
Requires at least two input arguments.
I'm not sure where I'm going wrong.
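A likely cause, sketched: at the command line, pdf is the Statistics Toolbox function (which requires a distribution name plus parameters), so calcProbs(pdf, ...) evaluates it with no arguments before your function ever runs. Passing a function handle avoids this; normpdf below is just an assumed example density and the trailing argument values are hypothetical:

```matlab
mypdf = @(x) normpdf(x, 0, 1);               % handle to an example density
prob = calcProbs(mypdf, -1, 1, 0, 1, 0, 1);  % hypothetical mu/sigma values
% Inside calcProbs, integral(pdf, a, b) then evaluates the handle as intended.
```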
#pdf MATLAB Answers — New Questions
SOS (TT) HELP WITH THE PDE error (too many input arguments)
T = 3500;
L = 2;
D = 0.00611;
v = 0.012;
lambda1 = 0.000154;
lambda2 = 0.000154;
w=9/5;
theta1 = 0.818;
theta2=0.0818;
c0m = 1;
c0im = 0;
c0 = [c0m;c0im]
cin = 0;
M = 5;
N = 100;
t = linspace (T/M,T,M);
x = linspace (0,L,N);
options = odeset;
c = pdepe(0,@slowsorpde,@slowsorpic,@slowsorpbc,x,t,options,...
D,v,theta1,theta2,lambda1,lambda2,...
c0,cin,w);
plot (t,c(:,:,1))
xlabel (‘time’); ylabel (‘concentration’);
function [c,f,s] = slowsorpde(x,t,u,DuDx,D,v,theta1,theta2,lambda1,lambda2,c0,cin,w)
c = [1;1];
f = [D;0].*DuDx;
s = -[v;0].*DuDx - [lambda1;lambda2].*u - [(w/theta1)*(u(1)-u(2))-lambda2*u(2);(w/theta2)*(u(1)-u(2))];
end
function u0 = slowsorpic(x,d,v,c0)
u0 = c0;
end
function [pl,ql,pr,qr] = slowsorpbc(xl,ul,xr,ur,t,D,v,theta1,theata2,lambda1,lambda2,c0,cin)
pl = [ul(1)-cin;0];
ql = [0;1];
pr = [0;0];
qr = [1;1];
end
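For reference, when extra parameters are appended to the pdepe call this way, pdepe forwards all of them to every helper function, so slowsorpic (declared with only four inputs) and slowsorpbc (declared without w) receive more arguments than they accept. A sketch of the now-recommended fix is to capture the parameters with anonymous functions so pdepe only sees the standard signatures (this assumes slowsorpbc is trimmed to the inputs it actually uses):

```matlab
% Sketch: wrap the helpers so pdepe calls standard-signature handles.
pdefun = @(x,t,u,DuDx) slowsorpde(x,t,u,DuDx,D,v,theta1,theta2,lambda1,lambda2,c0,cin,w);
icfun  = @(x) c0;                                         % IC needs only c0
bcfun  = @(xl,ul,xr,ur,t) slowsorpbc(xl,ul,xr,ur,t,cin);  % assumes slowsorpbc reduced to these inputs
c = pdepe(0, pdefun, icfun, bcfun, x, t, options);
```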
pde too many input arguments error MATLAB Answers — New Questions
Trying to use function ,didnt work?
function [Delta]= finddeflexion(Length)
E= 4.2*(10.^10);
I = 1*(10.^-5);
W = 8500;
prompt = "What is the length of the blade? ";
Length = input(prompt);
Delta = W*(Length^3)/(8*E*I);
When I typed it like this, it gave me an error saying to write the function line like the following, and I don't understand why:
function [Delta]= finddeflexion(~)
the code then works fine
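What seems to be happening, as a sketch: clicking Run calls finddeflexion with no inputs, so declaring Length as an argument triggers "Not enough input arguments"; with (~) the unused argument is ignored and input() supplies the value instead. Declaring the parameter and dropping the prompt lets the function be called normally:

```matlab
function Delta = finddeflexion(Length)
% Blade deflection from the same constants as above.
E = 4.2e10;    % Young's modulus
I = 1e-5;      % second moment of area
W = 8500;      % load
Delta = W*(Length^3)/(8*E*I);
end
% Usage from the command line: Delta = finddeflexion(2)
```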
matlab MATLAB Answers — New Questions
Pass the text of fprintf to the plot’s text
Would it be possible to pass the text of fprintf to the plot's text?
x = rand(1,10);
plot(x)
m = mean(x);
sd = std(x);
a = fprintf('the mean is %1.2f\n',m);
b = fprintf('the standard deviation is %1.2f\n',sd);
text(2,0.5,[a b])
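Not directly: fprintf returns the number of bytes written, so [a b] is a numeric vector, not text. sprintf returns the formatted string itself, which text() accepts; a sketch:

```matlab
x = rand(1,10);
plot(x)
a = sprintf('the mean is %1.2f', mean(x));
b = sprintf('the standard deviation is %1.2f', std(x));
text(2, 0.5, {a; b})   % a cell array produces two lines of text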
text, fprintf, plot MATLAB Answers — New Questions
trying to use syms and i typed in syms(‘y’) says error
y = syms('y')
Error using syms
Using input and output arguments simultaneously is not supported.
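As background, syms creates variables in the workspace by side effect and returns no output; sym is the form that returns a value. A sketch:

```matlab
syms y         % creates symbolic y in the workspace; no output allowed
y2 = sym('y'); % sym returns the symbolic variable as an output
```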
matlab MATLAB Answers — New Questions
Trying to enter a transfer function in simulink (tauD s + 1)
I'm trying to enter a transfer function into MATLAB Simulink, this one: tauD s + 1. I tried to enter it with numerator [tauD 1] and denominator [1]. But then it gives me an error saying that the order of the numerator must not exceed the order of the denominator. Anyone any ideas how to enter this function? Maybe I'm doing this wrong; any help is appreciated!
simulink, matlab MATLAB Answers — New Questions
How to read values from excel file with mobile matlab app?
Hi, I want to read values from an Excel file with the MATLAB mobile app, but it seems it does not read correctly! With the same code on PC it works fine! What can be the reason? filename = 'Acceleration.xls';
sheet = 'Raw Data';
xlRange = 'D:D';
columnD = xlsread(filename,sheet,xlRange)
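One thing worth trying, sketched here: xlsread is an older interface whose support can differ between environments, while readtable is the newer reader and may behave more consistently on MATLAB Mobile. The column index 4 is an assumption that column D holds the data:

```matlab
T = readtable('Acceleration.xls', 'Sheet', 'Raw Data');
columnD = T{:, 4};   % fourth column, matching the 'D:D' range above
```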
read, excel, mobile app MATLAB Answers — New Questions
How to find the minimum difference between the 3 elements of a vector in app designer?
I need to group the elements of a matrix 3 by 3 with the minimum difference between them. I found something like that: min(min(abs(X(1)-X2))), but I have 3 values.
matlab, matrix, appdesigner, app designer MATLAB Answers — New Questions
Is this code suitable for solving a system of ODEs ?
Can i use this code for a system of ODE and in what way ?
x = linspace(0,1,10000)';
inputSize = 1;
layers = [
featureInputLayer(inputSize,Normalization="none")
fullyConnectedLayer(10)
sigmoidLayer
fullyConnectedLayer(1)
sigmoidLayer];
dlnet = dlnetwork(layers);
numEpochs = 15;
miniBatchSize =100;
initialLearnRate = 0.1;
learnRateDropFactor = 0.3;
learnRateDropPeriod =5 ;
momentum = 0.9;
icCoeff = 7;
ads = arrayDatastore(x,IterationDimension=1);
mbq = minibatchqueue(ads,MiniBatchSize=miniBatchSize,MiniBatchFormat="BC");
figure
set(gca,YScale="log")
lineLossTrain = animatedline(Color=[0.85 0.325 0.098]);
ylim([0 inf])
xlabel("Iteration")
ylabel("Loss (log scale)")
grid on
velocity = [];
iteration = 0;
learnRate = initialLearnRate;
start = tic;
% Loop over epochs.
for epoch = 1:numEpochs
% Shuffle data.
mbq.shuffle
% Loop over mini-batches.
while hasdata(mbq)
iteration = iteration + 1;
% Read mini-batch of data.
dlX = next(mbq);
% Evaluate the model gradients and loss using dlfeval and the modelGradients function.
[gradients,loss] = dlfeval(@modelGradients2, dlnet, dlX, icCoeff);
% Update network parameters using the SGDM optimizer.
[dlnet,velocity] = sgdmupdate(dlnet,gradients,velocity,learnRate,momentum);
% To plot, convert the loss to double.
loss = double(gather(extractdata(loss)));
% Display the training progress.
D = duration(0,0,toc(start),Format="mm:ss.SS");
addpoints(lineLossTrain,iteration,loss)
title("Epoch: " + epoch + " of " + numEpochs + ", Elapsed: " + string(D))
drawnow
end
% Reduce the learning rate.
if mod(epoch,learnRateDropPeriod)==0
learnRate = learnRate*learnRateDropFactor;
end
end
ModelGradients
function [gradients,loss] = modelGradients2(dlnet, dlX, icCoeff)
y = forward(dlnet,dlX);
% Evaluate the gradient of y with respect to x.
% Since another derivative will be taken, set EnableHigherDerivatives to true.
dy = dlgradient(sum(y,"all"),dlX,EnableHigherDerivatives=true);
% Define ODE loss.
eq = dy + y/5 - exp(-(dlX / 5)) .* cos(dlX);
% Define initial condition loss.
ic = forward(dlnet,dlarray(0,"CB")) - 0;
% Specify the loss as a weighted sum of the ODE loss and the initial condition loss.
loss = mean(eq.^2,"all") + icCoeff * ic.^2;
% Evaluate model gradients.
gradients = dlgradient(loss, dlnet.Learnables);
end
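In principle yes: the network can output one unit per state variable, and the loss can sum one residual per equation. A sketch of the changes, where f1 and f2 are hypothetical right-hand sides of your system:

```matlab
% Sketch: widen the output layer to two states, then form one residual
% per equation inside the gradients function.
layers = [
    featureInputLayer(1, Normalization="none")
    fullyConnectedLayer(10)
    sigmoidLayer
    fullyConnectedLayer(2)];   % two outputs: y1 and y2
% Inside the gradients function (commented, since it replaces the body above):
%   y   = forward(dlnet, dlX);                                        % 2-by-batch
%   dy1 = dlgradient(sum(y(1,:),"all"), dlX, EnableHigherDerivatives=true);
%   dy2 = dlgradient(sum(y(2,:),"all"), dlX, EnableHigherDerivatives=true);
%   eq1 = dy1 - f1(dlX, y);   eq2 = dy2 - f2(dlX, y);  % hypothetical RHS f1, f2
%   loss = mean(eq1.^2 + eq2.^2, "all") + icCoeff*sum(ic.^2);
```

The initial-condition term likewise becomes a vector, one entry per state, evaluated at the same forward pass through dlarray(0,"CB").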
ode, neural network MATLAB Answers — New Questions