Tag Archives: matlab
ANFIS output extraction to an Excel file
I have a FIS that was saved to the workspace. I want to obtain a matrix or an Excel file that contains all the outputs belonging to the created FIS. I have to normalize the dataset, so I need all the FIS outputs as a matrix. It is somewhat urgent; I hope somebody can help me with this issue. anfis, output, fis output, excel extraction MATLAB Answers — New Questions
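A minimal sketch of one way to do this, assuming the FIS variable in the workspace is named fis and the input data is an N-by-numInputs matrix called inputData (both names are placeholders): evalfis evaluates the FIS row by row, and writematrix exports the resulting output matrix to an Excel file.
out = evalfis(fis, inputData);          % one output row per input row (newer releases use the fis-first argument order)
writematrix(out, 'fis_outputs.xlsx');   % export the output matrix to Excel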
What is the difference between sim (RL Toolbox) and directly clicking "Run" in the Simulink model?
sim - Simulate trained reinforcement learning agents within a specified environment
After training my RL agent, I use TestSim = sim(agent,env) to test my trained agent.
However, I find that the agent's performance is different, and much worse, if I directly click the Run button in the Simulink model panel rather than using the sim command.
I have checked that the RL Agent block in my Simulink model correctly references my trained agent in the MATLAB workspace. simulink, reinforcement learning, agent MATLAB Answers — New Questions
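A minimal sketch of a test simulation with explicit options, assuming env is the rlSimulinkEnv object used for training; making the simulation options explicit helps ensure the sim call and an interactive model run are compared under the same stop time and settings.
simOpts = rlSimulationOptions('MaxSteps', 1000);   % placeholder value; match the model's stop time and sample time
experience = sim(agent, env, simOpts);             % simulate the trained agent in the environment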
What does the weight of the FIR filter mean?
What does the weight of the FIR filter mean?
Is it multiplied by a certain physical quantity of the sound wave?
Like a Taylor series, does it get closer to the true value as the degree increases?
https://kr.mathworks.com/matlabcentral/fileexchange/159311-kalman-filter-for-active-noise-control filter MATLAB Answers — New Questions
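A minimal sketch of what the weights do, assuming a weight (coefficient) vector w and an input signal x: each output sample is a weighted sum of the current and past input samples, which is what filter computes. The fir1 design below needs Signal Processing Toolbox and is only an example.
w = fir1(32, 0.3);    % example 32nd-order lowpass FIR design
x = randn(1000, 1);   % example input signal
y = filter(w, 1, x);  % y(n) = w(1)*x(n) + w(2)*x(n-1) + ... + w(end)*x(n-numel(w)+1)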
How to call functions from another m file
I have two scripts. In the first script I have some functions.
script1.m:
function res = func1(a)
res = a * 5;
end
function res = func2(x)
res = x .^ 2;
end
In the second script I call these functions. How do I include script1.m in the second script and call the functions from script1.m? functions, load MATLAB Answers — New Questions
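A minimal sketch of one common workaround, assuming you can edit script1.m: functions defined inside a script file are local to that script, so either move each function into its own file on the path (func1.m, func2.m), or turn script1.m into a function that returns handles to its local functions, as below (hypothetical layout).
% script1.m rewritten as a function that exposes its local functions
function h = script1
h.func1 = @func1;
h.func2 = @func2;
end
function res = func1(a)
res = a * 5;
end
function res = func2(x)
res = x .^ 2;
end
% script2.m
h = script1;      % get handles to the functions in script1.m
y = h.func1(3);   % calls func1 defined in script1.m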
How to make contourf image semitransparent?
I am using contourf to plot a 2D image with the jet colormap, like below. Now I plan to make it semitransparent, because I may add some more lines on top. I tried all the solutions I found online, but none seem to work. Would anyone help me out? contourf MATLAB Answers — New Questions
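A minimal sketch of one workaround, assuming the data is in a matrix Z (peaks is used here as placeholder data): filled contour patches do not always expose a documented transparency property, so a common approximation is to draw the filled background as an image with AlphaData and overlay plain contour lines on top.
Z = peaks(100);                 % placeholder data
imagesc(Z, 'AlphaData', 0.5);   % semitransparent filled background
axis xy; colormap(jet); hold on
contour(Z, 'k');                % overlay the contour lines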
How to use a custom Storage Class in Code Mappings?
I want to use my custom Storage Class with Code Mappings in R2024a; how can I do that? I tried it in R2018b, where it could be set in the code generation section via the right-click menu. Unfortunately, this option was removed in versions after R2018b. code generation, simulink MATLAB Answers — New Questions
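A minimal sketch of the programmatic route, assuming an Embedded Coder code mappings object and a custom storage class named MyCustomSC that is defined in a package or data dictionary on the path (the model name, category, and storage class name here are placeholders and may differ by release).
cm = coder.mapping.api.get('myModel');                             % code mappings for the model
setDataDefault(cm, 'InternalData', 'StorageClass', 'MyCustomSC');  % set the custom storage class as the default for internal data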
rotate polygon in geographic axes
I have an n-sided polygon with vertices as (lat,lon) coordinates. I plot this on a geographic axes. I want to rotate it and end up with a new, rotated set of (lat,lon) coordinates. I don't want a filled polygon, but perhaps I could adjust the transparency if that's the only way (such as with patch). Here is what I tried, but I am not sure it works accurately because of the oblate Earth surface. I prefer not to use the Mapping Toolbox, just core MATLAB.
geoplot(lat,lon,'k-')
polyin = polyshape([lat',lon']);
refPoint = [refLat, refLon];
polyout = rotate(polyin, angleDeg, refPoint);
lat = [polyout.Vertices(:,1); polyout.Vertices(1,1)];
lon = [polyout.Vertices(:,2); polyout.Vertices(1,2)];
geoplot(lat,lon,'b--') rotate, polygon MATLAB Answers — New Questions
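A minimal sketch of a core-MATLAB approximation, assuming the polygon is small enough that a local tangent plane is acceptable: convert the offsets from the reference point to roughly isotropic units by scaling longitude with cosd(refLat), rotate there, then convert back. This treats the Earth as locally flat, which is usually a better approximation than rotating raw degree coordinates directly.
x = (lon - refLon) .* cosd(refLat);   % eastward offset in latitude-equivalent degrees
y = (lat - refLat);                   % northward offset in degrees
R = [cosd(angleDeg) -sind(angleDeg); sind(angleDeg) cosd(angleDeg)];   % 2-D rotation matrix
xy = R * [x(:)'; y(:)'];              % rotate about the reference point
lonRot = xy(1,:)'./cosd(refLat) + refLon;   % back to longitude
latRot = xy(2,:)' + refLat;                 % back to latitude
geoplot([latRot; latRot(1)], [lonRot; lonRot(1)], 'b--')   % close and plot the rotated polygon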
Is it possible to import a Simulink model into an Unreal Engine simulation?
Hi,
I'm trying to create a simulation of a self-driving car in Unreal Engine, with Simulink modeling the dynamics and control logic. It is easy to use UE scenes in Simulink, but I'm wondering if it's possible to compile the Simulink model into the Unreal Engine project. The simulation would be developed in Unreal Engine, and I need the logic from Simulink to be imported into the UE project.
Is there any easy way to do this, other than compiling the Simulink model to C++ and incorporating it by hand into the UE project?
Thanks for the help. unreal engine, simulink, self-driving MATLAB Answers — New Questions
dicomreadVolume 'Directory was not readable' error
I am trying to load a zip folder of .dcm files into MATLAB. However, when I use the dicomreadVolume function I get an error. I tried unzipping it directly in my file explorer. How can I fix this error?
ctScan = dicomreadVolume('Spine_CT.dcm');
Error using dicomreadVolume>getFilenames
Directory was not readable.
Error in dicomreadVolume (line 11)
filenames = getFilenames(inputSource);
Error in untitled (line 1)
ctScan = dicomreadVolume('Spine_CT.dcm'); dicom, dicomread, dicomreadvolume, error MATLAB Answers — New Questions
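A minimal sketch of one likely fix, assuming the archive is named Spine_CT.zip and contains a folder of .dcm slices: dicomreadVolume expects a folder (or a list of files) rather than a single .dcm file name, so unzip the archive first and pass the extracted folder.
dicomFolder = fullfile(tempdir, 'Spine_CT');   % destination for the extracted files (placeholder path)
unzip('Spine_CT.zip', dicomFolder);            % extract the archive from MATLAB
ctScan = dicomreadVolume(dicomFolder);         % read the whole volume from the folder of slices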
How can I do intermittent logging with real-time File Log blocks to generate multiple log files and SDI runs?
I run my Simulink Real-Time (SLRT) simulation for multiple days to conduct several experiments using my Speedgoat hardware. I want to start and stop file logging and have separate runs for each experiment when importing the file logs into Simulation Data Inspector (SDI). Ideally, I would like to import completed experiment runs while running a new experiment and recording new data. filelog, segmentation, intermittent, trigger, experiments, data, logging, long-term, speedgoat MATLAB Answers — New Questions
Why is this autoencoder only predicting a single output regardless of input when using min-max scaling?
Key questions:
Why does a network predict a specific value for the output regardless of input, as if the input data had no information relevant to the prediction?
Why does replacing min-max scaling with standard scaling fix this, at least occasionally?
The problem background: I am trying to train a simple image autoencoder, but I keep getting networks that only output a single image regardless of the input. Taking the difference between each output image reveals they are all exactly the same. Googling this issue, I saw a Stack Overflow post saying that this often arises with improperly dimensioned loss functions. I also saw folks mentioning issues with using sigmoid in autoencoders, but the explanations as to why never surpass guesswork. I changed the scaling from min-max scaling to standard scaling and was able to obtain a network that breaks out of the single-prediction behavior, but without understanding why, I will have no recourse but trial and error if it breaks again.
Notes on dimensioning loss functions: When calculating the loss between a batch of images of shape [imgDim, imgDim, 1, batchSize], the mse loss function outputs a loss of dimension [1,1,1,batchSize], but this loss function has produced defective results under min-max scaling, such as the aforementioned degeneration to a single output, as well as an initial loss three orders of magnitude above the inputs and outputs scaled to the range [0,1]. To be clear, I don't mean the learning is unstable; I mean that the absolute values of the loss are absurd.
I tried to write my own loss function that reports a scalar value, but I encountered the same degeneration to a single prediction independent of input. I then wrote a version that reports an error tensor of the same shape as @mse, but this threw the error listed below, after the custom loss function in question.
% Version that reports a scalar
function meanAbsErr = myMae(prediction, target)
meanAbsErr = mean(abs(flatten(prediction) - flatten(target)), 'all');
end
% Version that reports [1,1,1,batchSize]
function meanAbsErr = myMae(prediction, target)
inDims = size(prediction);
meanAbsErr = mean(abs(flatten(prediction) - flatten(target)), 1);
outDims = ones(1,length(inDims)); outDims(end) = inDims(end);
meanAbsErr = reshape(meanAbsErr, outDims);
end
Value to differentiate is non-scalar. It must be a traced real dlarray scalar.
Error in mathworksDebug>modelLoss (line 213)
[gradientsE,gradientsD] = dlgradient(loss,netE.Learnables,netD.Learnables);
Error in deep.internal.dlfeval (line 17)
[varargout{1:nargout}] = fun(x{:});
Error in deep.internal.dlfevalWithNestingCheck (line 19)
[varargout{1:nargout}] = deep.internal.dlfeval(fun,varargin{:});
Error in dlfeval (line 31)
[varargout{1:nargout}] = deep.internal.dlfevalWithNestingCheck(fun,varargin{:});
Error in mathworksDebug (line 134)
[loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X,Ztarget);
Notes on scaling
I wrote a custom scaling function that behaves the same as rescale except that it also reports the obtained extrema, to use for scaling and de-scaling unseen data.
% Min-max scaling between [lb, ub]
function [scaled,smin,smax] = myRescale(varargin)
datastruct = varargin{1}; lb = varargin{2}; ub = varargin{3};
if length(varargin) <= 3
smin = min(datastruct(:)); smax = max(datastruct(:));
else
smin = varargin{4}; smax = varargin{5};
end
scaled = (datastruct - smin) / (smax - smin) * (ub - lb) + lb;
end
% Invert scaling
function unscaled = myDescale(scaled, lb, ub, smin, smax)
unscaled = (scaled - lb) * (smax - smin) ./ (ub - lb) + smin;
end
% Converts the data to z-scores
function [standard, center, stddev] = myStandardize(varargin)
datastruct = varargin{1};
if length(varargin) == 1
center = mean(datastruct(:)); stddev = std(datastruct(:));
else
center = varargin{2}; stddev = varargin{3};
end
standard = (datastruct - center) / stddev;
end
% Converts z-scores back to the data's scale
function destandard = myDestandardize(datastruct, center, stddev)
destandard = datastruct * stddev + center;
end
In the following code, I have removed the validation set to reduce bloat.
% I intend to regularize the latent space of this autoencoder to classify images
% once it can accomplish basic reconstruction. I made this note so
% it's clear what's going on with the custom losses and so forth.
xTrain = digitTrain4DArrayData;
xTest = digitTest4DArrayData;
%% Scaling that does not work
% Min-max scaling
xlb = 0; xub = 1;
[xTrain, xTrainMin, xTrainMax] = myRescale(xTrain, xlb, xub);
xTest = myRescale(xTest, xlb, xub, xTrainMin, xTrainMax);
%% Scaling that works, at least occasionally
[xTrain, xTrainCenter, xTrainStd] = myStandardize(xTrain);
xTest = myStandardize(xTest, xTrainCenter, xTrainStd);
ntrain = size(xDev,4);
IMG_DIM = size(xDev, 1); N_CHANNELS = size(xDev, 3);
OUT_CHANNELS = min(size(tTrain,1), 64);
numLatentChannels = OUT_CHANNELS;
imageSize = [28 28 1];
%% Layer definitions
% Encoder layers
layersE = [
imageInputLayer(imageSize,Normalization="none")
convolution2dLayer(3,32,Padding="same",Stride=2)
reluLayer
convolution2dLayer(3,64,Padding="same",Stride=2)
reluLayer
fullyConnectedLayer(numLatentChannels)
tanhLayer(Name='latent')];
% Latent projection
projectionSize = [7 7 64]; enc_dim = projectionSize(1);
numInputChannels = imageSize(3);
% Decoder
layersD = [
featureInputLayer(numLatentChannels)
projectAndReshapeLayer(projectionSize)
transposedConv2dLayer(3,64,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,32,Cropping="same",Stride=2)
reluLayer
transposedConv2dLayer(3,numInputChannels,Cropping="same")
sigmoidLayer(Name='Output')
];
netE = dlnetwork(layersE);
netD = dlnetwork(layersD);
%% Training Parameters
numEpochs = 150;
miniBatchSize = 20;
learnRate = 1e-3;
dsXTrain = arrayDatastore(xTrain,IterationDimension=4);
dstTrain = arrayDatastore(tTrain,IterationDimension=2);
numOutputs = 2;
dsTrain = combine(dsXTrain, dstTrain);
mbq = minibatchqueue(dsTrain,numOutputs, ...
MiniBatchSize = miniBatchSize, ...
MiniBatchFormat=["SSCB", "CB"], ...
MiniBatchFcn=@preprocessMiniBatch,...
PartialMiniBatch="return");
% Initialize the parameters for the Adam solver.
trailingAvgE = [];
trailingAvgSqE = [];
trailingAvgD = [];
trailingAvgSqD = [];
% Calculate the total number of iterations for the training progress monitor.
numIterationsPerEpoch = ceil(ntrain / miniBatchSize);
numIterations = numEpochs * numIterationsPerEpoch;
epoch = 0;
iteration = 0;
% Initialize the training progress monitor.
monitor = trainingProgressMonitor( ...
Metrics=["TrainingLoss"], ...
Info=["Epoch", "LearningRate"], ...
XLabel="Iteration");
%% Training
while epoch < numEpochs && ~monitor.Stop
epoch = epoch + 1;
% Shuffle data.
shuffle(mbq);
% Loop over mini-batches.
while hasdata(mbq) && ~monitor.Stop
iteration = iteration + 1;
% Read mini-batch of data.
[X, Ztarget] = next(mbq);
% Evaluate loss and gradients.
[loss,gradientsE,gradientsD] = dlfeval(@modelLoss,netE,netD,X,Ztarget);
% Update learnable parameters.
[netE,trailingAvgE,trailingAvgSqE] = adamupdate(netE, ...
gradientsE,trailingAvgE,trailingAvgSqE,iteration,learnRate);
[netD, trailingAvgD, trailingAvgSqD] = adamupdate(netD, ...
gradientsD,trailingAvgD,trailingAvgSqD,iteration,learnRate);
updateInfo(monitor, ...
LearningRate=learnRate, ...
Epoch=string(epoch) + " of " + string(numEpochs));
recordMetrics(monitor,iteration, ...
TrainingLoss=loss);
monitor.Progress = 100*iteration/numIterations;
end
end
%% Testing
dsTest = combine(arrayDatastore(xTest,IterationDimension=4),...
arrayDatastore(tTest,IterationDimension=2));
numOutputs = 2;
mbqTest = minibatchqueue(dsTest,numOutputs, ...
MiniBatchSize = miniBatchSize, ...
MiniBatchFcn=@preprocessMiniBatch, ...
MiniBatchFormat="SSCB");
[YTest, ZTest] = modelPredictions(netE,netD,mbqTest);
reconerr = mean(flatten(xTest-YTest),1);
figure
histogram(reconerr)
xlabel("Reconstruction Error")
ylabel("Frequency")
title("Test Data")
numImages = 64;
ndisplay = 10;
figure
I = imtile(YTest(:,:,:,1:numImages));
imshow(I)
title("Reconstructed Images")
%% Functions
function [loss,gradientsE,gradientsD] = modelLoss(netE,netD,X,Ztarget)
% Forward through encoder.
Z = forward(netE,X);
% Forward through decoder.
Xrecon = forward(netD,Z);
% Calculate loss and gradients.
loss = regularizedLoss(Xrecon,X,Z,Ztarget);
[gradientsE,gradientsD] = dlgradient(loss,netE.Learnables,netD.Learnables);
end
function loss = regularizedLoss(Xrecon,X,Z,Ztarget)
% Image reconstruction loss.
reconstructionLoss = mse(Xrecon, X);
% Regularization loss.
%regLoss = mse(Z, Ztarget);
% Combined loss.
loss = reconstructionLoss;% + 0.0*regLoss;
end
function [Xrecon, Zpred] = modelPredictions(netE,netD,mbq)
Xrecon = [];
Zpred = [];
% Loop over mini-batches.
while hasdata(mbq)
X = next(mbq);
% Pass through encoder.
Z = predict(netE,X);
% Pass through decoder to get reconstructed images.
XGenerated = predict(netD,Z);
% Extract and concatenate predictions.
Xrecon = cat(4,Xrecon,extractdata(XGenerated));
Zpred = cat(2,Zpred,extractdata(Z));
end
end
function loss = assessLoss(netE, netD, X, Ztarget)
% Forward through encoder.
Z = predict(netE,X);
% Forward through decoder.
Xrecon = predict(netD,Z);
% Calculate the loss.
loss = regularizedLoss(Xrecon,X,Z,Ztarget);
end
function [X, Ztarget] = preprocessMiniBatch(Xcell, tCell)
% Concatenate the images along the 4th (batch) dimension.
X = cat(4,Xcell{:});
% Concatenate the targets along the 2nd (batch) dimension.
Ztarget = cat(2,tCell{:});
end
autoencoder, deep learning, scaling, activation functions MATLAB Answers — New Questions
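A minimal sketch of a scalar reconstruction loss that avoids the non-scalar dlgradient error, under the assumption that averaging over all pixels and the batch is acceptable: mean with the 'all' option returns a traced dlarray scalar that dlgradient can differentiate.
function loss = scalarMseLoss(Xrecon, X)
% Scalar mean squared reconstruction error over all pixels and the whole batch.
loss = mean((Xrecon - X).^2, 'all');
end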
I can’t model a piezoelectric transducer
Hello to all,
I am currently trying to model a piezoelectric sensor, which transforms mechanical energy into electrical energy, in Simulink, but I am encountering some difficulties.
Here is a snapshot of my current setup (not functional):
Could someone help me solve this problem?
Thank you in advance. matlab, simulink, model, piezo MATLAB Answers — New Questions
How can I create new runs & discard data from SDI while live streaming signals from my Speedgoat target?
I am doing iterative experiments with my Speedgoat hardware where I would like to start and stop signal monitoring during the simulation. Before R2020b, I used Target Scopes to trace and import the data, then removed the Target Scope after every iteration to clear the buffer. Target Scopes were removed in R2020b.
To accomplish the same workflow in R2020b and beyond, I am trying to use instruments to start/stop streaming signals to Simulation Data Inspector (SDI) during a running simulation. However, all the data is shown in the same run in SDI. This makes it hard to visualize the data in SDI, and I cannot clear the data from SDI between iterations.
How can I achieve my desired workflow in Simulink Real-Time R2020b and beyond? slrt, sdi, speedgoat, streaming, start, recording, stop MATLAB Answers — New Questions
Can you create an object in Simulink that can be referenced by multiple MATLAB Fcn blocks?
I've attached an image with my current Simulink setup. I need to replace the Interpreted MATLAB Fcn blocks with MATLAB Fcn blocks in order to make my Simulink model code-generation compatible. However, I am struggling to find a way to mimic this OOP approach, where I am essentially creating a global object that is referenced and manipulated by multiple blocks.
With the MATLAB Fcn block, it only creates a local object for that specific block; it cannot be referenced and manipulated by another MATLAB Fcn block. I run into a similar issue when trying it with MATLAB System blocks. Is there a workaround to get this "global object" behavior? simulink, matlab, code generation, matlab coder, embedded coder MATLAB Answers — New Questions
Error when using the "retime" function in MATLAB R2024a
Hi everyone, I need to use the "retime" function on a large timetable data set. I need to retime the data in order to apply filters and process it in MATLAB. The timetable includes the timestamp column and the column of values at each timestamp (I cannot post it here because of proprietary information). I am trying to retime based on sample rate. Below is the code I am trying to run.
"INV1Spd", "INV1OutTrq", "INVCurrHV" are the timetables I am passing into the command.
"INV1Spd_Ts", "INV1OutTrq_Ts", "INVCurrHV_Ts" are the signal cycle times.
From the main code:
[INV1Spd_Res, INV1OutTrq_Res, INV1CurrHV_Res] = resample(INV1Spd, INV1Spd_Ts, INV1OutTrq, INV1OutTrq_Ts, INV1CurrHV, INV1CurrHV_Ts);
From the function script:
INVSpd_Resampled = retime(INVSpd, 'regular', 'linear', 'SampleRate', 1/INVSpd_Ts);
INVOutTrq_Resampled = retime(INVOutTrq, 'regular', 'linear', 'SampleRate', 1/INVOutTrq_Ts);
INVCurrHV_Resampled = retime(INVCurrHV, 'regular', 'linear', 'SampleRate', 1/INVCurrHV_Ts);
Error I get when running the code:
Error using timetable/retime (line 142)
Interpolation failed for the variable 'INV1Spd' when synchronizing using 'linear':
Values V must be of type double or single.
I'm not sure what "Values V" refers to. I've already tried different interpolation methods, and I've converted the timetables to arrays (which expectedly threw another error), so I am unsure where to go from here. Please ask for any more information needed and I will try to provide it. Thanks! retime, error, signal processing, synchronization MATLAB Answers — New Questions
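A minimal sketch of one likely fix, assuming the failing variable was imported as text or cell data rather than numeric (which is what the "Values V must be of type double or single" message usually indicates): inspect the variable classes, convert the offending variable to double, then retime. The timetable name TT, the variable name INV1Spd, and Ts are placeholders here.
varfun(@class, TT, 'OutputFormat', 'cell')                      % inspect the class of every timetable variable
TT.INV1Spd = str2double(string(TT.INV1Spd));                    % convert a text or cell variable to numeric
TT_Res = retime(TT, 'regular', 'linear', 'SampleRate', 1/Ts);   % retime once the variables are double/single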
How can I stream signals from Speedgoat hardware to Simulation Data Inspector (SDI) using Simulink Real-Time R2020b and beyond?
I want to monitor and visualize signals in Simulation Data Inspector (SDI) while the real-time application is running on the Speedgoat hardware. Ideally, I'd like to dynamically add/remove signals from SDI during the simulation, without the need to rebuild the application. How can I do this in R2020b and beyond? signal, logging, speedgoat, tracing, slrt, sdi, monitoring, instrumentation, visualization, plotting, start, stop MATLAB Answers — New Questions
Graph adjust controls play peekaboo
My App Designer app has a graph on it. My users want to adjust the horizontal axis (zoom in horizontally only).
I found where I can set that with a right click, but the controls in the upper left corner disappear whenever the mouse approaches, so I can't get there to grab one fast enough most of the time. Make them disappear when the mouse is NOT there and appear when it is, please. graph controls MATLAB Answers — New Questions
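A minimal sketch of a programmatic alternative that avoids the hover toolbar entirely, assuming the axes handle is app.UIAxes (a placeholder name): restrict the built-in pan and zoom interactions to the x dimension so users can drag and scroll to zoom horizontally without touching the pop-up controls.
app.UIAxes.Interactions = [panInteraction('Dimensions','x'), ...
    zoomInteraction('Dimensions','x')];   % horizontal-only pan and zoom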
Unhelpful error message: Conversion to matlab.ui.control.Table from table is not possible.
The message
Error using MouseOdor4/startupFcn
Conversion to matlab.ui.control.Table from table is not possible.
is about as unhelpful as it gets. Sometimes there is a suggestion or even a Fix option.
How about fixing the error message to read
Did you mean UIMyTable.Data = readtable('file name');
where UIMyTable.Data = readtable('file name'); is replaced by the actual line of code?
Is there a better place to put suggestions than "Ask"? readtable error message MATLAB Answers — New Questions
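For context, a minimal sketch of what typically triggers this message, assuming an App Designer table component named app.UITable and a file named results.csv (both hypothetical): assigning a table to the component handle itself fails, while assigning it to the component's Data property works.
T = readtable('results.csv');   % hypothetical file name
% app.UITable = T;              % error: a table cannot be converted into the uitable component itself
app.UITable.Data = T;           % assign the table to the component's Data property instead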
Possible to use Codegen options in AudioPlugin generation?
I am trying to write an audio plugin in MATLAB, and while writing it I run into various problems, e.g. variables are considered unbounded, I cannot use FFTW instead of the built-in FFT algorithms, etc. Many of these problems point to changing codegen options as the answer. Yet codegen options appear inaccessible for the generateAudioPlugin function. What is the workaround? codegen, generateaudioplugin, audio MATLAB Answers — New Questions
Error using InputOutputModel/feedback
Hi,
I am new to designing controllers. Here is my code:
m = 4; %mass of the system
g = 9.8; %gravitational force
A = [0, 1, 0, 0;
-g/m, 0, 0, 0;
0, 0, 0, 1;
0, 0, -g/m, 0];
B = [0, 0;
1/m, 0;
0, 0;
0, 1/m];
C = eye(4); %identity matrix
D = [1,0;
0,1;
0,0;
0,0];
Kp = 1;
Ki = 1;
Kd = 1;
inputSignal = pid(Kp,Ki,Kd);
sys = ss(A,B,C,D);
closedLoop = feedback(sys*inputSignal,1);
It gives me the error below:
Error using InputOutputModel/feedback (line 137)
The first and second arguments of the "feedback" command must have compatible I/O sizes.
Error in pidmodel (line 29)
closedLoop = feedback(sys*inputSignal,1);
I am not sure what's wrong. input, output, pid MATLAB Answers — New Questions
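A minimal sketch illustrating the size issue, under the assumption that you want one PID loop per input: sys has 4 outputs and 2 inputs, so sys*inputSignal is a 4-output/2-input open loop, and feedback(...,1) expects the two arguments to have compatible I/O sizes. One way to get compatible sizes is a 2-by-2 block-diagonal PID controller closed around two selected outputs (outputs 1 and 3 are assumed here to be the measured positions).
pidC = pid(Kp, Ki, Kd, 0.01);              % add a derivative filter time constant so the PID is proper
C2 = append(pidC, pidC);                   % 2-by-2 block-diagonal controller, one loop per input
sysMeas = sys([1 3], :);                   % keep only the two measured outputs (assumed: outputs 1 and 3)
openLoop = sysMeas * C2;                   % 2 outputs, 2 inputs: compatible sizes for feedback
closedLoop = feedback(openLoop, eye(2));   % unity feedback on the square 2-by-2 loop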