Tag Archives: matlab
Does a Raspberry Pi 4B support integrators and transfer functions?
Hi everyone,
I made a model in Simulink that I would like to build and compile on my Raspberry Pi. The issue is that, using MATLAB Coder, the .elf file is not built, so the compilation returns the following error:
Model Action Rebuild Reason
===========================================================================
test_continuous Failed Code generation information file does not exist.
0 of 1 models built (0 models already up to date)
Build duration: 0h 0m 29.743s
Error:Cannot identify /home/pi/MATLAB_ws/R2020b/C/Users/../test_continuous.elf. No such file or directory.
My model uses integrators and transfer functions, and I saw in the documentation that this might be an issue when implementing them on small hardware, such as (I supposed) a Raspberry Pi. When I discretized the model using the Model Discretizer from the Control System Toolbox, it returned exactly the same error.
I made a very small Simulink model, attached to this post, and got the same error again.
I verified that the "Generate code only" box was unchecked in the Code Generation tab. By the way, I am using MATLAB R2020b and cannot change this version, as it is the one my company uses.
So my question is: does a Raspberry Pi 4B support integrators and transfer functions at all, or have I misinterpreted the documentation?
Thanks for any help.
Romain
#raspberrypi, #continuous, simulink MATLAB Answers — New Questions
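On the continuous-blocks point: continuous-time blocks generally do support code generation with a fixed-step continuous solver, so an error like the one above is more often a build/toolchain issue than a block limitation. For the discretization route, here is a minimal sketch of the bilinear (Tustin) transform for a first-order transfer function, in Python purely for illustration (MATLAB's c2d does this directly; the function name is hypothetical):

```python
def tustin_first_order(a, T):
    """Discretize H(s) = 1/(s + a) via the bilinear transform
    s -> (2/T) * (z - 1) / (z + 1).

    Returns (b, a_d): coefficients of H(z) = (b[0]*z + b[1]) / (a_d[0]*z + a_d[1]).
    """
    den = 2.0 + a * T
    b = [T / den, T / den]            # numerator: T*(z + 1) / (2 + a*T)
    a_d = [1.0, (a * T - 2.0) / den]  # denominator: z + (a*T - 2)/(2 + a*T)
    return b, a_d

b, a_d = tustin_first_order(a=2.0, T=0.01)
# DC gain of the discrete filter (evaluate at z = 1) should match H(0) = 1/a
dc_gain = (b[0] + b[1]) / (a_d[0] + a_d[1])
```

The Tustin mapping preserves the DC gain exactly, which is a quick sanity check on any hand-rolled discretization.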
stepwise sort combvec result by its sum over one dimension
Take this code, which works for small lists:
% generate a small list (only works for small randi)
list = cell(1,7);
for idx = 1:7
list{idx} = randi(100,1,randi(10,1));
end
%generate all possible combinations and sort them based on their sum
all_combinations = combvec(list{:});
[~, sorted_idx] = sort(sum(all_combinations, 1),'descend');
all_combinations_sorted = all_combinations(:,sorted_idx);
%do some work with the generated list
for idx = 1:size(all_combinations_sorted, 2)
combination = all_combinations_sorted(:,idx);
% do more stuff
if some_criteria
break;
end
end
So first I generate all possible combinations of elements from the 7 lists. Then I calculate the sum of each combination, e.g. the combination a = [1; 1; 1; 1; 1; 1; 1] has sum(a) == 7. Based on this sum I sort all combinations. Afterwards I iterate over the sorted list and do my work until a stopping criterion is met.
The problem is that, unlike in my example, the lists I am dealing with are in the range 100 < size(list{idx}) < 5000. Thus there are too many combinations to generate all of them, let alone sort the result. I usually don’t need to go through the entire list, but rather only the first 1000 or so entries. My question is:
How can I generate this *sorted* list sequentially, so that I start with the biggest element and then gradually step down to the smallest?
sort MATLAB Answers — New Questions
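For a question like this, a lazy best-first search avoids materializing the full product: keep a max-heap of index tuples keyed by their combination sum, pop the current best, and push its one-step successors. A sketch in Python (stdlib heapq) for illustration; the same idea ports to MATLAB with a manual priority queue and a visited set:

```python
import heapq

def best_combinations(lists, k):
    """Return the k index combinations over `lists` with the largest
    element sums, in decreasing order, without enumerating the product.

    Best-first search: sort each list descending, start from the
    all-zeros index tuple (the global maximum), and on each pop push
    the neighbours that advance exactly one index.
    """
    sorted_lists = [sorted(l, reverse=True) for l in lists]
    start = (0,) * len(lists)
    start_sum = sum(l[0] for l in sorted_lists)
    heap = [(-start_sum, start)]        # negate for max-heap behaviour
    seen = {start}
    out = []
    while heap and len(out) < k:
        neg_s, idx = heapq.heappop(heap)
        out.append(([sorted_lists[d][i] for d, i in enumerate(idx)], -neg_s))
        for d in range(len(lists)):
            if idx[d] + 1 < len(sorted_lists[d]):
                nxt = idx[:d] + (idx[d] + 1,) + idx[d + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    delta = sorted_lists[d][idx[d] + 1] - sorted_lists[d][idx[d]]
                    heapq.heappush(heap, (neg_s - delta, nxt))
    return out

top = best_combinations([[3, 1], [5, 2], [4, 4, 0]], 5)
sums = [s for _, s in top]
```

Each pop does O(d log n) work for d lists, so taking the first 1000 entries is cheap even when the full product has 5000^7 combinations.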
Build process stopped at compile stage. Unable to find the following link-only objects that are specified in the build information: LibGCCarm_cortexm7ldfsp_mathlibCMSISD
Hello, I am trying to run the following block diagram in Simulink with the NUCLEO-F746ZG.
When I run code generation, I get this error:
stm32, simulink, code generation MATLAB Answers — New Questions
Tracing a loss function to two outputs of the network in order to train a PINN model
Hello everybody,
I’m trying to recreate a machine learning code from this paper in MATLAB. In Python it is somewhat straightforward, but MATLAB does not seem to have the functionality (that I have found) to perform the command:
torch.optim.Adam(list(Net_u.parameters())+list(Net_v.parameters()), lr=learning_rate)
that is, to update the learnable parameters of two networks simultaneously. I’ve done some tests with the dlgradient function, and as I understand it, it is only capable of tracing the parameters of one function, e.g.:
[U,~] = forward(net_u,XY);
[V,~] = forward(net_v,XY)
du_x = dlgradient(sum(U(1,:),"all"),XY,"EnableHigherDerivatives",true);
du_y = dlgradient(sum(V(1,:),"all"),XY);
…. calculation of the loss function, omitted from the code for brevity ….
gradients_u = dlgradient(loss,net_u.Learnables);
gradients_v = dlgradient(loss,net_v.Learnables);
will give completely different gradients than:
[U,~] = forward(net_u,XY);
[V,~] = forward(net_v,XY);
du_x = dlgradient(sum(ux,"all"),XY);
du_y = dlgradient(sum(uy,"all"),XY,"EnableHigherDerivatives",true);
…. calculations of loss function, exempted from code for brevity…
gradients_u = dlgradient(loss,net_u.Learnables);
gradients_v = dlgradient(loss,net_v.Learnables);
Having higher derivatives traced on for both will produce results identical to the first piece of code, e.g.:
[U,~] = forward(net_u,XY);
[V,~] = forward(net_v,XY);
du_x = dlgradient(sum(ux,"all"),XY,"EnableHigherDerivatives",true);
du_y = dlgradient(sum(uy,"all"),XY,"EnableHigherDerivatives",true);
…. calculation of the loss function, omitted from the code for brevity ….
gradients_u = dlgradient(loss,net_u.Learnables);
gradients_v = dlgradient(loss,net_v.Learnables);
Switching the order of the dlgradient calls will produce results identical to the second piece of code:
[U,~] = forward(net_u,XY);
[V,~] = forward(net_v,XY);
du_y = dlgradient(sum(uy,"all"),XY,"EnableHigherDerivatives",true);
du_x = dlgradient(sum(ux,"all"),XY,"EnableHigherDerivatives",true);
…. calculation of the loss function, omitted from the code for brevity ….
gradients_u = dlgradient(loss,net_u.Learnables);
gradients_v = dlgradient(loss,net_v.Learnables);
I only provide short snippets of code, as the whole script is a little more than 300 lines, most of which I believe is not relevant.
At any rate, neither of these two options is able to solve the PINN problem in the example (gradient descent does not converge on the solution).
Has someone experienced and/or played around with these kinds of problems? Any insight would be greatly appreciated.
machine learning, dlgradient, custom loss function MATLAB Answers — New Questions
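The PyTorch call above simply hands one optimizer the concatenated parameter lists, so both networks are updated from the same scalar loss in a single step. A toy sketch of that idea with two scalar "networks" and hand-computed gradients (pure Python, names hypothetical), just to make the joint-update mechanics concrete:

```python
# Two "networks": u(x) = a*x and v(x) = b*x, trained jointly so that
# u(x) + v(x) matches t*x. The single shared loss is differentiated with
# respect to BOTH parameter sets, and both are updated in the same step,
# mirroring Adam over list(Net_u.parameters()) + list(Net_v.parameters()).
a, b = 0.0, 0.0          # parameters of net_u and net_v
t, x, lr = 3.0, 1.0, 0.1
for _ in range(200):
    residual = (a + b) * x - t * x   # shared forward pass through both nets
    loss = residual ** 2
    grad_a = 2 * residual * x        # dloss/da (analytic, for the sketch)
    grad_b = 2 * residual * x        # dloss/db
    a -= lr * grad_a                 # one "optimizer step" updates both
    b -= lr * grad_b
```

The key point is that there is one loss and one update step covering both parameter sets; in MATLAB the analogue is one dlfeval'd model-loss function that calls both forward passes and returns both dlgradient results, then updates both Learnables tables with the same adamupdate state per network.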
Impulse response from poles and zeros
Find the impulse response h(n) from the pole-zero locations of the system function.
Location of zeros= -0.2, -0.3, -0.4,-0.8
Location of poles= 0.4+0.4j, 0.4-0.4j, 0.5, 0.7
code, —obviously homework— MATLAB Answers — New Questions
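One way to approach this (a sketch, assuming H(z) has equal numerator and denominator degree) is to expand the polynomials from their roots and recover h(n) by polynomial long division, i.e. running the difference equation against a unit impulse. Python for illustration; in MATLAB the equivalents are zp2tf followed by impz or filter:

```python
def poly_from_roots(roots):
    """Expand prod(1 - r*z^-1) over the roots into coefficients [1, c1, c2, ...]."""
    coeffs = [1.0 + 0j]
    for r in roots:
        nxt = [0j] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += c
            nxt[i + 1] -= c * r
        coeffs = nxt
    return [c.real for c in coeffs]  # conjugate pole pairs leave real coefficients

zeros = [-0.2, -0.3, -0.4, -0.8]
poles = [0.4 + 0.4j, 0.4 - 0.4j, 0.5, 0.7]
b = poly_from_roots(zeros)   # numerator of H(z) in powers of z^-1
a = poly_from_roots(poles)   # denominator of H(z) in powers of z^-1

# Impulse response by long division: h[n] = (b[n] - sum_k a[k]*h[n-k]) / a[0]
N = 200
h = []
for n in range(N):
    acc = b[n] if n < len(b) else 0.0
    for k in range(1, min(n, len(a) - 1) + 1):
        acc -= a[k] * h[n - k]
    h.append(acc / a[0])
```

Since all poles lie inside the unit circle, the series converges and sum(h) should equal H(1) = B(1)/A(1), which is a handy check on the expansion.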
Prevent mlint warning for onCleanup like return value
I wrote a function similar to onCleanup. I noticed that MATLAB does not give an mlint warning for the following code.
dummy = onCleanup( @() some_func );
But for my own function
dummy = MyOwnCleanup( @() some_func );
I get the warning Value assigned to variable might be unused, which I need to silence with %#ok<NASGU>. Obviously, MATLAB recognizes the onCleanup call and does not emit a warning. How can I achieve similar behaviour for my own MyOwnCleanup function?
mlint, oncleanup MATLAB Answers — New Questions
How can I display results as a graph in a MATLAB GUI?
Please help, experts: I have entered the inputs but no result or graph line appears in axes3. Thank you.
pathloss MATLAB Answers — New Questions
Big CSV file read in matlab
I have a CSV file with 5 columns and an unknown number of rows; its size is 45 GB.
I want to read it in MATLAB and plot it against the time.
Is there any way to do it?
bigdata MATLAB Answers — New Questions
create a vector of all the odds positive integers smaller than 100 in increasing order to save it into a variable
Hi, I’m a new student in MATLAB. I tried to do this exercise but I don’t understand the size requirement: the test asks for a size of [1 50] and says mine is currently [1 99]. The code I wrote was:
odds = 1:1:99
thank you
odds, colon, homework MATLAB Answers — New Questions
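The test wants 50 elements, one per odd number below 100, so the range needs a step of 2 rather than 1: in MATLAB, odds = 1:2:99. The same construction in Python, for comparison:

```python
# Odd positive integers below 100: start at 1, stop before 100, step by 2
odds = list(range(1, 100, 2))
```

A step of 1 produces all 99 integers from 1 to 99, which is exactly the [1 99] size the test complained about; the step of 2 skips the evens and yields the expected 50 values.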
xmlwrite – Control the order of attributes
Hi,
I wrote the following script to generate an XML file…
xml_doc = com.mathworks.xml.XMLUtils.createDocument('Node');
root = xml_doc.getDocumentElement();
tool_elem = xml_doc.createElement('Tool');
tool_elem.setAttribute('name','me');
tool_elem.setAttribute('defaultValue','1122');
root.appendChild(tool_elem);
disp (xmlwrite(xml_doc));
… and I get the following result:
<?xml version="1.0" encoding="utf-8"?>
<Node>
<Tool defaultValue="1122" name="me"/>
</Node>
I know that the order is irrelevant from the point of view of the XML specification, but I would like to have the attribute "name" before "defaultValue" for readability.
Can I modify the order of the attributes?
xmlwrite, attribute MATLAB Answers — New Questions
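The alphabetical ordering comes from the Java DOM serializer that xmlwrite delegates to, which appears to sort attributes and offers no documented ordering control from MATLAB; one workaround is serializing the document with a tool that preserves insertion order. For illustration, Python's ElementTree (on Python 3.8+) writes attributes in the order they were set:

```python
import xml.etree.ElementTree as ET

root = ET.Element("Node")
tool = ET.SubElement(root, "Tool")
tool.set("name", "me")            # set attributes in the desired output order
tool.set("defaultValue", "1122")  # ElementTree preserves insertion order (3.8+)
xml_text = ET.tostring(root, encoding="unicode")
```

In MATLAB itself, the equivalent workaround is to build the string manually (e.g. with sprintf/fprintf) for the few elements where attribute order matters for human readers.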
I am implementing forward neural network for prediction while taking weights from patternnet trained model
Dir = '.';
outputFile = fullfile(Dir, 'net_test1.mat');
load(outputFile, 'TrainedNet');
%%
ih1w = TrainedNet.IW{ 1, 1 };
h1h2w = TrainedNet.LW{ 2, 1 };
h2ow = TrainedNet.LW{ 3, 2 };
h1b = TrainedNet.b{1};
h2b = TrainedNet.b{2};
ob = TrainedNet.b{3};
%%
maxx = TrainedNet.inputs{1}.processSettings{1,1}.xmax;
minx = TrainedNet.inputs{1}.processSettings{1,1}.xmin;
gain = TrainedNet.inputs{1}.processSettings{1,1}.gain;
rangex = TrainedNet.inputs{1}.processSettings{1,1}.xrange;
offset = TrainedNet.inputs{1}.processSettings{1,1}.xoffset;
TrainedNet.inputs{1}.processSettings{1,1}
%%
function y = tanh(x)
y = (2 / (1 + exp(-2 * x))) - 1;
end
function y = sigmoid(x)
y = 1 / (1 + exp(-x));
end
inputlayer = ones(1,1036);
inputlayer = inputlayer';
inputlayer_normalized = [];
for x = 1:1036
inputlayer_normalized(x) = (inputlayer(x)-offset(x))*gain(x);
end
% Initialize variables
h1size = size(ih1w, 1);
inputsize = size(ih1w, 2);
h2size = size(h1h2w, 1);
outputsize = size(h2ow, 1);
% First hidden layer computation
hl1 = zeros(1, h1size);
for k = 0:h1size-1
sum = 0;
for i = 0:inputsize-1
sum = sum + (ih1w(k+1, i+1) * inputlayer_normalized(i+1));
end
sum = sum + h1b(k+1);
hl1(k+1) = tanh(sum);
end
% Second hidden layer computation
hl2 = zeros(1, h2size);
for k = 0:h2size-1
hl2(k+1) = 0;
for i = 0:h1size-1
hl2(k+1) = hl2(k+1) + (h1h2w(k+1, i+1) * hl1(i+1));
end
hl2(k+1) = hl2(k+1) + h2b(k+1);
hl2(k+1) = tanh(hl2(k+1));
end
% Output layer computation
ol = zeros(1, outputsize);
for k = 0:outputsize-1
ol(k+1) = 0;
for i = 0:h2size-1
ol(k+1) = ol(k+1) + (h2ow(k+1, i+1) * hl2(i+1));
end
ol(k+1) = ol(k+1) + ob(k+1);
ol(k+1) = sigmoid(ol(k+1));
end
Ipred = TrainedNet(inputlayer);
This is the code I am using to implement the neural network above, trained with the built-in patternnet function in MATLAB.
I am using its weights and preprocessing settings,
but I am not getting the same output in the variables ol and Ipred.
patternnet MATLAB Answers — New Questions
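Two defaults worth checking here (assumptions about the standard configuration, not a confirmed diagnosis): patternnet normally preprocesses inputs with mapminmax, which maps to [-1, 1], i.e. y = gain*(x - xoffset) + ymin with ymin = -1, whereas the normalization loop above omits the ymin term; and patternnet's default output transfer function is softmax, not a per-element sigmoid. A sketch of both in Python:

```python
import math

def mapminmax_apply(x, gain, xoffset, ymin=-1.0):
    """MATLAB-style mapminmax apply: y = gain*(x - xoffset) + ymin, element-wise."""
    return [g * (xi - o) + ymin for xi, g, o in zip(x, gain, xoffset)]

def softmax(z):
    """Numerically stable softmax, patternnet's default output transfer."""
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

# With gain = 2/(xmax - xmin) and xoffset = xmin, xmin maps to -1 and xmax to +1
xmin, xmax = [0.0], [10.0]
gain = [2.0 / (xmax[0] - xmin[0])]
lo = mapminmax_apply(xmin, gain, xmin)   # expect [-1.0]
hi = mapminmax_apply(xmax, gain, xmin)   # expect [1.0]
p = softmax([1.0, 1.0, 1.0])             # equal logits -> uniform distribution
```

Without the ymin offset, every normalized input is shifted by +1 relative to what the trained weights expect, which alone would make ol diverge from Ipred.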
problems in modeling a three-phase four-winding transformer YNyn0yn0+d5 with three “multi-winding transformer” in zero sequence component
I want to model a three-phase four-winding transformer YNyn0yn0+d5 with two low-voltage windings (yn0yn0) and a compensation winding (d5) using Simscape Electrical. The transformer is a 5-limb transformer. Since it has four windings, I had to connect three single-phase transformers (Simulink model: multi-winding transformer) in star and delta configurations respectively.
From the transformer test report, I calculated the parameters of the T-equivalent circuit. I had to calculate the longitudinal impedances of the compensation winding from the zero-sequence measurement, because they were not measured in the short-circuit test.
The simulated values of the open-circuit and short-circuit tests in the positive sequence component agree very well with the values from the transformer test report.
My problem:
In the zero-sequence measurements, I only get matching values for the measurement that I used to calculate the compensation winding parameters (HV supply, compensation winding short-circuited). In the further zero-sequence measurements (with, additionally, one LV winding short-circuited), the short-circuit voltage is five times too high.
Questions:
Is there possibly a coupling in the transformer only in the zero sequence component?
Or does anyone already know this problem?
Or does anyone have an idea of how I can model the transformer using other Simulink models?I want to model a three-phase four-winding transformer YNyn0yn0+d5 with two low voltage windings (yn0yn0) and a compensation winding (d5) using Simscape Electrical. The transformer is a 5 limbs transformer. Since the transformer has four windings, I had to connect three single-phase transformers (model in Simulink: multi-winding transformer) in star and delta configurations respectively.
multi-winding transformer, compensation winding, simscape electrical, transformer, transformer coupling, zero sequence MATLAB Answers — New Questions
PID controller, difference when graphing step function with PID control block in matlab and simulink
Hi everyone,
Please tell me: why is there a difference between the step response plotted with this MATLAB code and the one produced by the PID Controller block in Simulink?
s = tf('s');
g = 1.883e5/(s*(s^2 + 4466*s + 6.43e6));
kp = 60;
ki = 63000;
kd = 3;
gpid = pid(kp,ki,kd);
gsys = feedback(g*gpid,1);
step(gsys)
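A likely cause (an assumption, since the Simulink model is not shown): the Simulink PID Controller block applies a first-order filter to the derivative term by default (filter coefficient N = 100), whereas pid(kp,ki,kd) creates an ideal, unfiltered PID. Adding the same filter on the MATLAB side often reconciles the two plots:

```matlab
N = 100;                        % Simulink PID block's default filter coefficient
gpid_f = pid(kp, ki, kd, 1/N);  % Tf = 1/N adds the derivative filter kd*s/(Tf*s + 1)
gsys_f = feedback(g*gpid_f, 1);
step(gsys_f)                    % compare this against the Simulink scope
```

If the curves still differ, also check the Simulink solver settings (a fixed-step solver effectively discretizes the controller, which `step` on a continuous-time model does not).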
pid, graph MATLAB Answers — New Questions
FE Model with function handle
Hello everyone.
Is it possible to use a function handle within an femodel, so that I could change the value of a material property or load? For example:
gm = multicuboid(0.5,0.1,0.1);
pdegplot(gm,FaceLabels="on",FaceAlpha=0.5);

% without function handles
model = femodel(AnalysisType="structuralStatic", ...
    Geometry=gm);
E = 210e3;
P = 1000;
nu = 0.3;
model.MaterialProperties = materialProperties(YoungsModulus=E,PoissonsRatio=nu);
model.FaceLoad(2) = faceLoad(Pressure=P);
model.FaceBC(5) = faceBC("Constraint","fixed");
model = generateMesh(model);
r = solve(model);
pdeplot3D(r.Mesh,"Deformation",r.Displacement,"ColorMap",r.VonMisesStress);

% using function handles
model = femodel(AnalysisType="structuralStatic", ...
    Geometry=gm);
E = @(x) x(1);
P = @(x) x(2);
model.MaterialProperties = materialProperties(YoungsModulus=E,PoissonsRatio=nu);
model.FaceLoad(2) = faceLoad(Pressure=P);
model.FaceBC(5) = faceBC("Constraint","fixed");
model = generateMesh(model);
values = [210e3 1000];
r = solve(model(values)); % I know this is wrong, since the variable model has only one element, but I want to replace the YoungsModulus property with values(1) and FaceLoad(2) with values(2)
pdeplot3D(r.Mesh,"Deformation",r.Displacement,"ColorMap",r.VonMisesStress);
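For what it's worth, a common workaround (a sketch only; the property names and the 210e3/1000 values are taken from the question, and the second parameter pair is hypothetical) is to rebuild the model for each parameter set rather than parameterizing it with function handles:

```matlab
% Rebuild and solve the model for each [E, P] pair.
paramSets = [210e3 1000; 70e3 500];   % rows of [YoungsModulus, Pressure]
for k = 1:size(paramSets, 1)
    model = femodel(AnalysisType="structuralStatic", Geometry=gm);
    model.MaterialProperties = materialProperties( ...
        YoungsModulus=paramSets(k, 1), PoissonsRatio=0.3);
    model.FaceLoad(2) = faceLoad(Pressure=paramSets(k, 2));
    model.FaceBC(5) = faceBC("Constraint", "fixed");
    model = generateMesh(model);
    r = solve(model);  % r now corresponds to parameter set k
end
```

Since generateMesh depends only on the geometry, the mesh could also be generated once and reused across iterations to save time.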
pde, femodel MATLAB Answers — New Questions
STM32H7xx DMA interrupts not working on UART receive
Requirement:
Receive every byte of data over UART, as I need to look for r and then process the numbers before it, using interrupt-driven code for better resource usage.
What I have tried:
Enable the UART in CubeMX and add a DMA request for the UART in circular mode.
Then, in Simulink, I added a Hardware Interrupt block and selected the DMA channel that I set up for the UART as the interrupt source. I am checking only the 'TC' (transfer complete) event as the interrupt source in the Hardware Interrupt block.
Issue:
The code compiles and runs without error, but the triggered subsystem (function-call) connected to the Hardware Interrupt block never runs, since my value counter never increments in the subsystem. I think the DMA is configured but is not started properly by Simulink to generate interrupts.
I have tried using the Hardware Interrupt block with external interrupts from a button push; in that case, my interrupt-driven counter increments. But when I switch the interrupt source to the DMA attached to the UART RX, no interrupt occurs.
Question:
Does anybody have any idea how I can generate interrupts from the DMA when it receives one word (4 bytes) from the UART, and use the Hardware Interrupt block to call my triggered subsystem to process those bytes?
Thanks.
stm32 dma, stm32 uart, stm32 simulink MATLAB Answers — New Questions
wrong motion of SCARA robot in dynamic
I have implemented a SCARA RRP robot in Simulink (MATLAB). The robot's movement is carried out correctly using kinematics, but when I use the output data from the kinematics as the input to the dynamics, the robot performs a rotational and unproductive movement. Should I perform any specific calculations on my trajectories before feeding them to the dynamics input?
scara robot MATLAB Answers — New Questions
Data input and target formatting for Deep Learning Models
I am trying to train an ML model with data from 10 different trials in batches. Right now the input data is stored in a 1×9 cell array (Features), with each cell containing a 3x1x541 dlarray corresponding to the 3 accelerometer channels (C), batch (B), and 541 time steps (T) for all 10 trials. The other cell array (Predictionvalue) contains the corresponding continuous variable we are trying to predict over the 541 time steps. I am getting an error when inputting the data into my model:
Error using trainnet (line 46)
Dimension format of predictions and target values arguments must match.
Are there any suggestions on how I could fix this, or am I formatting my data inputs/targets incorrectly?
Thank you so much in advance!
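One common cause (a guess, since the full training code is not shown): trainnet requires the targets to carry the same dimension format as the network's predictions. A sketch that labels each target with matching 'CBT' dimensions, assuming each cell of Predictionvalue is a hypothetical 1×541 double:

```matlab
% Give each target the same CBT labels as the predictors (assumed shapes).
Targets = cell(size(Predictionvalue));
for i = 1:numel(Predictionvalue)
    t = Predictionvalue{i};                          % hypothetical 1 x 541 double
    Targets{i} = dlarray(reshape(t, 1, 1, []), 'CBT'); % 1(C) x 1(B) x 541(T)
end
```

Checking `dims(...)` on one prediction and one target should confirm whether the formats now agree.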
data formatting, machine learning, dlarray, deep learning MATLAB Answers — New Questions
how to solve coding issue
Hello everyone,
I have faced a problem in applying a code using my data.
I don't know where the problem is: in my code or in my data?
Can anyone help?
solve MATLAB Answers — New Questions
Formatting Data in dlarray for Machine Learning Input
Hello there. I am trying to format my data so that I can input it into my machine learning model. I have input values in XTrain, which is a 1×10 cell containing a 3×540 double in each cell. This corresponds to the 3 channels, 540 time steps, and the 10 trials or "batches". When I run the code below, I get a 3(C) × 540(B) × 1(T) dlarray, which is incorrect. I want to get a 3(C) × 10(B) × 540(T) dlarray corresponding to the 3 channels, 10 batches/trials, and 540 time steps. Is there a way I can fix this, or are there suggestions on how I should format my data in XTrain to get the data in the correct CBT format? Any help is greatly appreciated!
% Data dimensions
% numFeatures = 3
% numTimeSteps = 541
% numTrials = 10
Xtrain = Predictors;
Ytrain = Output;

% Convert cell arrays to dlarray format
for i = 1:numTrials
    XTrain{i} = dlarray(Xtrain{i}, 'CBT'); % 'CBT' = 'Channel', 'Batch', 'Time'
    YTrain{i} = dlarray(Ytrain{i}, 'TB');  % 'TB' = 'Time', 'Batch'
end
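A sketch of one way to get the single 3(C) × 10(B) × 540(T) dlarray described above (assuming each cell of Xtrain is a 3×540 double, as in the question): stack the trials along a second, singleton dimension and label the result once, rather than labeling each 3×540 matrix separately:

```matlab
% Stack the 10 trials into one 3 x 10 x 540 numeric array, then label it CBT.
numTrials = numel(Xtrain);
X = zeros(3, numTrials, 540);
for i = 1:numTrials
    X(:, i, :) = reshape(Xtrain{i}, 3, 1, 540); % trial i -> batch slot i
end
XTrainAll = dlarray(X, 'CBT'); % 3(C) x 10(B) x 540(T)
```

The original loop produces 3×540×1 labeled arrays because dlarray assigns 'C', 'B', 'T' to the existing dimensions in order, and a 3×540 matrix has only two.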
dlarray, data formatting, machine learning MATLAB Answers — New Questions
How do I integrate a DLL generated from Simulink into Excel VBA?
How do I integrate a DLL generated from Simulink into Excel VBA? excel, vba, dll, simulink MATLAB Answers — New Questions