Tag Archives: matlab
nxn matrix as an input argument in function
function x = matrix(b, L, U)
A= L * U
x= inv(A)*b
end
I am trying to write a function that solves A*x = b, given a known vector b and the L and U factors of the matrix A. How can I make the above work? In what form should I pass b, L and U as the function arguments?
For
b = $\begin{pmatrix} a \\ b \\ c \end{pmatrix}$,
L = $\begin{pmatrix} d & e & f \\ g & h & i \\ j & k & l \end{pmatrix}$,
U = $\begin{pmatrix} m & n & o \\ p & q & r \\ s & t & u \end{pmatrix}$,
would my input look like this: matrix([a][b][c], [d,e,f][g,h,i][j,k,l], [m,n,o][p,q,r][s,t,u])?
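For illustration only, here is a minimal sketch of how such a function is usually called: vectors and matrices are built with square brackets, with semicolons separating rows. The numeric values and the name solveLU are hypothetical (and the backslash operator is generally preferred over inv(A)*b):
b = [1; 2; 3];                     % 3x1 column vector
L = [1 0 0; 4 1 0; 7 8 1];         % 3x3 matrix: semicolons separate the rows
U = [1 2 3; 0 5 6; 0 0 9];
x = solveLU(b, L, U)               % pass the arrays directly as arguments
function x = solveLU(b, L, U)
    A = L * U;
    x = A \ b;                     % solves A*x = b without forming inv(A)
end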
matrix MATLAB Answers — New Questions
doubt in the backpropagation algorithm
Hi
I’m studying neural networks and I’m building a NN with 2 hidden layers and one neuron in the output layer.
While studying and coding my NN, I ran into a doubt.
In the backward step, the math behind this is clear:
For the output layer we have:
$\delta^{L} = e \odot f'(z^{L})$, where ⊙ is the Hadamard product and e is the output error,
and for the hidden layers we have:
$\delta^{l} = \left((W^{l+1})^{T} \delta^{l+1}\right) \odot f'(z^{l})$.
My problem is that, when I code these formulas, I need to change the second equation when calculating the gradient of the first hidden layer (the hidden layer next to the input layer) so that the matrix dimensions match, as shown below:
% Backpropagation
delta_saida = erro_estim.*selecionar_funcao(saida_in_estim,ativ_out,sig_a,tanh_a,tanh_b,'True');
delta_h2 = (w_out'*delta_saida).*selecionar_funcao(h2_in_estim,ativ_out,sig_a,tanh_a,tanh_b,'True');
delta_h1 = (w2*delta_h2')'.*selecionar_funcao(h1_in_estim,ativ_h1,sig_a,tanh_a,tanh_b,'True');
%update weights and biases
w_out = w_out + learning_rate*delta_saida*h2_out_estim';
b_out = b_out + learning_rate*delta_saida;
w2 = w2 + learning_rate*(delta_h2'*h1_out_estim)';
b2 = b2 + learning_rate*sum(delta_h2);
w1 = w1 + learning_rate*delta_h1'*enter_estim;
b1 = b1 + learning_rate*sum(delta_h2);
% I wrote this code partially in Portuguese, so let me explain a little.
% 'delta_saida' is the gradient of the output layer
% delta_h2 is the gradient of the second hidden layer
% delta_h1 is the gradient of the first hidden layer
% w_out, w2 and w1 are the weights of the output, second hidden and first hidden layers, respectively.
% b_out, b2 and b1 are the biases of the output, second hidden and first hidden layers, respectively.
% the function selecionar_funcao() just computes the derivative according to the activation function of the layer
% As you can see, I need to change delta_h1 to match the matrix dimensions
Is it right to change the formula the way I do in my code? I ask because, in my mind, the way we calculate the gradient should be the same for all hidden layers, but in my case it isn’t. I will share part of my code here so anyone can check whether I’m making a mistake.
%weights and biases initialization
w1 = randn(num_entradas,n_h1)*sqrt(2/num_entradas);
w2 = randn(n_h1,n_h2) *sqrt(2/n_h1);
w_out = randn(n_h2,n_out) *sqrt(2/n_h2);
b1 = randn(1, n_h1) * sqrt(2/num_entradas);
b2 = randn(1, n_h2) * sqrt(2/n_h1);
b_out = randn(1,n_out) * sqrt(2/n_h2);
%backpropagation
for epoch =1:max_epocas
soma_valid = 0;
soma_estim = 0;
%shuffle the data (embaralhar os dados)
conj_estim = embaralhar(conj_estim);
% conj_valid = embaralhar(conj_valid);
%Validating
for j=1:size(conj_valid,1)
enter_valid = conj_valid(j,2:end);
h1_in_valid = [enter_valid,1]*[w1;b1];
h1_out_valid = selecionar_funcao(h1_in_valid,ativ_h1,sig_a,tanh_a,tanh_b,'False');
h2_in_valid = [h1_out_valid,1]*[w2;b2];
h2_out_valid = selecionar_funcao(h2_in_valid,ativ_h2,sig_a,tanh_a,tanh_b,'False');
saida_in_valid = [h2_out_valid,1]*[w_out;b_out];
saida_out_valid = selecionar_funcao(saida_in_valid,ativ_out,sig_a,tanh_a,tanh_b,'False');
erro_valid = conj_valid(j,1) - saida_out_valid;
soma_valid = soma_valid + (erro_valid^2);
end
erro_atual_valid = (soma_valid/(2*size(conj_valid,1)));
erros_epoca_valid = [erros_epoca_valid;erro_atual_valid];
%training
for i =1:size(conj_estim,1)
enter_estim = conj_estim(i,2:end);
h1_in_estim = [enter_estim,1]*[w1;b1];
h1_out_estim = selecionar_funcao(h1_in_estim,ativ_h1,sig_a,tanh_a,tanh_b,'False');
h2_in_estim = [h1_out_estim,1]*[w2;b2];
h2_out_estim = selecionar_funcao(h2_in_estim,ativ_h2,sig_a,tanh_a,tanh_b,'False');
saida_in_estim = [h2_out_estim,1]*[w_out;b_out];
saida_out_estim = selecionar_funcao(saida_in_estim,ativ_out,sig_a,tanh_a,tanh_b,'False');
erro_estim = conj_estim(i,1) - saida_out_estim;
soma_estim = soma_estim + (erro_estim^2);
% Backpropagation
delta_saida = erro_estim.*selecionar_funcao(saida_in_estim,ativ_out,sig_a,tanh_a,tanh_b,'True');
delta_h2 = (w_out'*delta_saida).*selecionar_funcao(h2_in_estim,ativ_out,sig_a,tanh_a,tanh_b,'True');
delta_h1 = (w2*delta_h2')'.*selecionar_funcao(h1_in_estim,ativ_h1,sig_a,tanh_a,tanh_b,'True');
%update weights and biases (atualizar pesos e biases)
w_out = w_out + learning_rate*delta_saida*h2_out_estim';
b_out = b_out + learning_rate*delta_saida;
w2 = w2 + learning_rate*(delta_h2'*h1_out_estim)';
b2 = b2 + learning_rate*sum(delta_h2);
w1 = w1 + learning_rate*delta_h1'*enter_estim;
b1 = b1 + learning_rate*sum(delta_h2);
end
erro_atual_estim = (soma_estim/(2*size(conj_estim,1)));
erros_epoca_estim = [erros_epoca_estim;erro_atual_estim];
if erros_epoca_estim(epoch) <limiar
break
else
end
end
machine learning, backpropagation MATLAB Answers — New Questions
UDP Communication slowing Simulink Model
G'day all,
For context, I am building a Simulink model that will communicate with the simulator X-Plane through UDP to control an aircraft. The Simulink model takes in a UDP connection and a live video feed. Using an object detection system to identify key features, a command direction will be given and output to X-Plane through UDP communication.
My issue is that the UDP communication input significantly decreases the performance of the model. I have tried every UDP connection block available in Simulink. I have tested the simulation with other blocks, and I'm confident that the UDP communication input block (either of them) is the root of the issue. I haven't had this issue with other models (such as an image segmentation model), nor does the UDP output show performance issues.
Is there a way to fix the performance issues seen in the model? Is there some UDP secret I don't know about? Workarounds? Would the UDP communication block be interfering with any other block (deep learning, PID…)?
Any guidance or tips would be appreciated.
simulink, deep learning MATLAB Answers — New Questions
Having Matlab import data from SharePoint
I have a line in my code where data is imported, and I want to add a directory that can import the data from SharePoint so that anyone who is in the SharePoint can use the same code without having to change the directory. I can't seem to figure out how to do this without adding a shortcut, because not everyone would have the same shortcut. If anyone could help with how I can set up the import so that anyone can use the code, that would be great! Thanks!
data import, sharepoint, import MATLAB Answers — New Questions
How to return a matrix in MATLAB using codegen with no C++ dynamic memory allocation?
I do not want any dynamic memory allocation to be done in the C++ codegen. Here is the relevant Matlab code:
% Calculate cross-correlation matrix from a set of vectors (whose length
% can vary from one call to another). All 4 channels will have the same length.
function [R] = xcorr_matrix(ch0,ch1,ch2,ch3,channelCount)
% channelCount is 2 or 4
% R is either 2×2 or 4×4 complex double matrix.
% ch0 … ch3 are complex single vectors.
N = int32(size(ch0,1));
% Convert ch0, …, ch3 to complex double
x_in = coder.nullcopy(complex(zeros(N,channelCount,'double')));
switch channelCount
case 2
x_in(1:N,1) = ch0(1:N);
x_in(1:N,2) = ch1(1:N);
case 4
x_in(1:N,1) = ch0(1:N);
x_in(1:N,2) = ch1(1:N);
x_in(1:N,3) = ch2(1:N);
x_in(1:N,4) = ch3(1:N);
end
R = (x_in' * x_in); % Compute cross-correlation matrix
Here is the C++ codegen result:
void xcorr_matrix(const creal32_T ch0_data[], const int ch0_size[1], // all 4 ch_size values will be the same
const creal32_T ch1_data[], const int ch1_size[1],
const creal32_T ch2_data[], const int ch2_size[1],
const creal32_T ch3_data[], const int ch3_size[1],
int channelCount,
::coder::array<creal_T, 2U> &R)
{
::coder::array<creal_T, 2U> x_in;
R.set_size(channelCount, channelCount);
x_in.set_size(N, channelCount);
…
}
I think I can eliminate the x_in.set_size by not using x_in, and replace the matrix multiply with nested for-loops and using double casting; but am unsure how to define R (either 2×2 or 4×4) so as to remove the R.set_size allocations.
One idea I had was to try making R a fixed-length 16-element vector and just use 4 elements for the 2×2 R and all 16 for the 4×4 R. It would be nicer to be able to use two indices for rows and columns.
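One possible direction, sketched here under the assumption that a fixed 4×4 output is acceptable: always declare R at its maximum size so the generated C++ can use a static buffer, and only fill the channelCount-by-channelCount block that is used. Whether the code generator accepts the cell array of channel vectors in this form is also an assumption to verify:
% Sketch: fixed-size 4x4 output, no set_size needed for R
function R = xcorr_matrix_fixed(ch0, ch1, ch2, ch3, channelCount)
N = size(ch0, 1);
chs = {ch0, ch1, ch2, ch3};          % channel vectors gathered for indexing
R = complex(zeros(4, 4));            % always 4x4, so the C++ size is constant
for r = 1:channelCount
    for c = 1:channelCount
        acc = complex(0, 0);
        for n = 1:N
            % same sum as (x_in' * x_in)(r,c), done in double precision
            acc = acc + conj(double(chs{r}(n))) * double(chs{c}(n));
        end
        R(r, c) = acc;
    end
end
end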
Thanks in advance for your help.
Paul
matlab, codegen, embedded coder, c++, dynamic memory allocation MATLAB Answers — New Questions
Speed up nested loops with parfor
I’m trying to speed up this part of my code. I have a constraint: the inputs of ReturnFn must all be scalars. If it were not for this restriction, I could easily vectorize the code. So I would like to know if there is a way to make the code below faster while still satisfying this restriction on the inputs of ReturnFn.
Any help is really appreciated!
N_d = 50;
N_a = 300;
N_z = 10;
% ParamCell contains: K_to_L,alpha,delta,pen,gamma,crra
% I need the cell array to handle variable number of inputs in ReturnFn
Fmatrix=zeros(N_d*N_a,N_a,N_z);
parfor i4i5=1:N_z
Fmatrix_z=zeros(N_d*N_a,N_a);
for i3=1:N_a % a today
for i2=1:N_a % a’ tomorrow
for i1=1:N_d % d choice
Fmatrix_z(i1+(i2-1)*N_d,i3)=ReturnFn(d_gridvals(i1),a_gridvals(i2),a_gridvals(i3),z_gridvals(i4i5,1),z_gridvals(i4i5,2),ParamCell{:});
end
end
end
Fmatrix(:,:,i4i5)=Fmatrix_z;
end
function F = f_ReturnFn(d,aprime,a,e,age,K_to_L,alpha,delta,pen,gamma,crra)
% INPUTS (always 5 inputs, plus some extra parameter inputs)
% d: Hours worked
% aprime: Next-period’s assets
% a: Current period assets
% e: Labor efficiency shock
% age: Age of individual: young or old
% TOOLKIT NOTATION
% (d,aprime,a,z), where z = [e;age]
F = -inf;
r = alpha*K_to_L^(alpha-1)-delta;
w = (1-alpha)*K_to_L^alpha;
income = (w*e*d)*(age==1)+pen*(age==2)+r*a;
c = income+a-aprime; % Budget Constraint
if c>0
% NOTE: 0<d<1 is already built into the grid
% WARNING: this will not work if crra=1
inside = (c^gamma)*((1-d)^(1-gamma));
F = inside^(1-crra)/(1-crra);
end
end
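One sketch that keeps every call to ReturnFn strictly scalar is to build the (d, a') grids once with ndgrid and let arrayfun issue the scalar calls column by column; it reuses the asker's variables, and whether it actually beats the explicit loops is not guaranteed:
[D, Aprime] = ndgrid(d_gridvals(1:N_d), a_gridvals(1:N_a));  % N_d x N_a grids
parfor i4i5 = 1:N_z
    z1 = z_gridvals(i4i5,1);
    z2 = z_gridvals(i4i5,2);
    Fmatrix_z = zeros(N_d*N_a, N_a);
    for i3 = 1:N_a
        a_today = a_gridvals(i3);
        % arrayfun still passes one scalar pair (d, a') per call to ReturnFn
        block = arrayfun(@(d, ap) ReturnFn(d, ap, a_today, z1, z2, ParamCell{:}), D, Aprime);
        Fmatrix_z(:, i3) = block(:);   % column-major: the d index varies fastest
    end
    Fmatrix(:, :, i4i5) = Fmatrix_z;
end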
nested loops, parfor MATLAB Answers — New Questions
How to change the axes position in matlab
Hi Everybody!
I want to be able to relocate my axes/the origin (0, 0) of my plot to the middle of the graphics window. I don’t know how to manipulate the set command to do this. There must be a way. Regards
% Code explores advanced graphics properties
clf
x = 0:pi/10:pi;
angle = x.*180/pi;
y = -sind(angle);
h = plot(angle, y)
set(h, 'color', 'red')
set(h, 'marker', 's')
set(h, 'LineWidth', 2)
h_axis = gca; % Manipulate the axis next
set(h_axis, 'LineWidth', 2)
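For reference, one way this is often done in recent releases (a small sketch, with the data shifted just so the origin sits inside the plot) is to set the XAxisLocation and YAxisLocation properties of the axes to 'origin':
angle = 0:18:180;
y = -sind(angle);
plot(angle - 90, y, 'r-s', 'LineWidth', 2)   % shift x so 0 falls mid-range
ax = gca;
ax.XAxisLocation = 'origin';   % x ruler passes through y = 0
ax.YAxisLocation = 'origin';   % y ruler passes through x = 0
ax.LineWidth = 2;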
programming MATLAB Answers — New Questions
How can function argument declaration be introspected?
Is there a way to programmatically access the function argument validation declared in the arguments block? meta.method introspection only allows determining argument names, but I am interested in all of the validation features (dimensions, class, validation functions).
My need especially focuses on functions (not only class methods) and also on output arguments.
introspection, argument validation, function MATLAB Answers — New Questions
Loading workspace variable contents into an array or for loop
I am new to Matlab but am well aware of the bad practice notion associated with dynamically creating workspace variables. Unfortunately, Matlab’s volumeSegmenter only allows saving of segmentations as either MAT-files or workspace variables, and the former creates far too many individual files for the amount I require.
In the next step after creating them, I need to run all the segmentations (workspace vars seg1, seg2, seg3 …) through a for loop. I am currently using who() to try and find all the needed workspace variables, but this doesn’t work as only the names are stored in cells seg_options and cannot be called as variables:
vars = who();
find = contains(vars, 'seg');
seg_options = vars(find);
This is part of the for loop I need to call the segmentation variables for:
for i = 1:length(seg_options);
A = double(seg_options(i));
end
which obviously doesn’t work properly as I need to be calling the actual variable and not just its name.
The code also needs to work for a flexible number of segmentations (ie I cannot initialize an array as a specific size). Is there a way to:
1) load the workspace variable into array, overwrite it, load the next one into the next array cell, etc. (ie saves the first segmentation as seg, loads into seg_array cell 1, saves the next segmentation as seg, load that into seg_array cell 2, and so on)
2) load all the created variables (seg1, seg2…) into an array seg_array
or
3) call and loop through all the workspace variables in the for loop itself – I know this is not ideal
Thanks in advance!
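For what it is worth, a minimal sketch of option 2, assuming the variables are all named with a 'seg' prefix: who accepts a wildcard pattern, and eval (normally discouraged) is confined here to fetching the variables the segmenter already created:
vars = who('seg*');                 % names only, e.g. {'seg1'; 'seg2'; ...}
seg_array = cell(numel(vars), 1);   % sized by however many segmentations exist
for k = 1:numel(vars)
    seg_array{k} = eval(vars{k});   % fetch the variable whose name is vars{k}
end
for k = 1:numel(seg_array)
    A = double(seg_array{k});       % A now holds the k-th segmentation itself
end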
array, cell array, variable, variables, arrays, cell arrays, image segmentation, for loop MATLAB Answers — New Questions
How can I obtain the logical value if I am using the following block diagram?
I have built this PID and I want to obtain the maximum error of the PID, so I used the following blocks. The idea is to compare the previous error and the current error and then obtain a logical 1. However, you can see that the curves cross each other (since I subtracted the tolerance), but even though they cross, the output keeps coming out as a logical 0. Does anyone know how to solve this, please? Thanks in advance.
pid, simulink MATLAB Answers — New Questions
Trouble writing and saving .nii files
I have working MATLAB code that produces 5 T1 maps for 5 different angles from an MRI data set of size (512 512 170 5). When I look at them in MATLAB figures they show the right values and look fine (see attached). However, when I try to save them using the code below, they save like the images shown in FSLeyes, where they are being split into different slices (5 × 34 slices = 170). I can't understand why.
% Loop to save each flip angle as an individual NIfTI file
for i = 1:size(T1, 4)
% Convert the T1 data to double
Data = double(T1(:, :, :, i));
% Define voxel size
VoxSize = [0.6, 0.6, 1];
% Create a name for the NIfTI file
NameForSaving = sprintf('T1_Map_Angle%d.nii', i);
% Define the file location for saving
filelocation = 'location_hidden';
fullpath = fullfile(filelocation, NameForSaving);
% Save the data as a NIfTI file
Nii_Saver(Data, VoxSize, fullpath);
end
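Since the volumes clearly have the right shape in memory, one comparison worth making (a sketch, assuming the Image Processing Toolbox is available) is writing the same data with the built-in niftiwrite instead of the custom Nii_Saver; setting the 0.6 × 0.6 × 1 voxel size would additionally require a niftiinfo-style header, which is omitted here:
for i = 1:size(T1, 4)
    Data = double(T1(:, :, :, i));                     % one 512x512x170 volume
    name = fullfile(filelocation, sprintf('T1_Map_Angle%d', i));
    niftiwrite(Data, name);                            % .nii extension is added
end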
.nii, save MATLAB Answers — New Questions
Setting up communication via USB device with instrument
I am trying to communicate with a device via USB. When I connect the USB cable, it doesn't fall under the COM port section in my Device Manager but under Universal Serial Bus Devices. How do I go about opening the port with the serial command?
serial, writeline MATLAB Answers — New Questions
Unable to resolve the warning on ill conditioned Jacobian
I want to fit my data (1st column: x, 2nd column: y, given in the text file) to a sigmoidal function using the given function file (sigm_fit_base_e.m). The function is the standard MATLAB function, which I modified from base 10 to base e, and I increased maxIter to 10000. My initial guess parameters are:
[0 0.2845 9.88 -1] and there are no fixed parameters.
Relevant code lines are:
fPar = sigm_fit_base_e(x,y,[],[0 0.2845 9.88 -1],0);
I get the following warning:
Warning: The Jacobian at the solution is ill-conditioned, and some model parameters may not be estimated well (they are not identifiable). Use caution in making predictions.
> In nlinfit (line 384)
In sigm_fit_base_e (line 130)
I checked all the related answers in the MATLAB community and tried to play around by modifying the initial guess parameters and fixing the 2nd parameter, but the warning still persists. Could you please help me fix this warning?
jacobian ill conditioned sigmoidal fit nlinfit MATLAB Answers — New Questions
xcpA2L error: Error using numel Bad subscripting index.
Hello, I'm trying to run xcpA2L("test.a2l") on MATLAB 2021a, but it keeps throwing this error. I can parse other ".a2l" files just fine. The only difference I can think of is that this new "test.a2l" is generated by building software from MATLAB 2023a, and the ones before were from MATLAB 2018b. I'm not sure if I'm making the right correlation here that it's not compatible. I tried running xcpA2L("test.a2l") on MATLAB 2023a and it does work.
a2l, error MATLAB Answers — New Questions
How to use Matlab trainnet to train a network without an explicit output layer (R2024a)
I’ve attempted to train a CNN with the goal of assigning N numeric values to different input images, depending on image characteristics. It looked like the network’s output layer could be a fully-connected layer with N outputs (because I have not found a linear output layer in Deep Network Designer). I am not sure if I can use a non-linear output layer instead, because this is fundamentally a regression task.
However, when using a fully-connected layer in place of an output layer, trainnet gives repeated errors indicating that I must have an output layer.
So basically, I have two questions:
1) Is it possible to use trainnet on a network without an output layer? It is difficult to imagine that a built-in training function has an oversight like this. Do I really need to construct a custom training loop for my network?
2) Are there any alternatives? In essence, all I am looking for is an output layer that is either a) linear or b) does not change the previous layer’s output. Just anything that is compatible with a regression task.
If any clarification is needed on my issue or network construction, I would be happy to provide it.
Thank you so much for your help!
Deep Learning Toolbox Version 24.1 (R2024a), trainnet function, MATLAB 2024.
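For context, a rough sketch of the dlnetwork/trainnet style this question is about, under the assumption that the loss is passed to trainnet ("mse" for regression) rather than encoded in an output layer, so the network can simply end in a fully connected layer with N outputs; sizes and data below are made up:
N = 3;                                           % number of regression targets
layers = [
    imageInputLayer([32 32 1], Normalization="none")
    convolution2dLayer(3, 8, Padding="same")
    reluLayer
    fullyConnectedLayer(N)];                     % linear, N outputs, no output layer
net  = dlnetwork(layers);
X    = rand(32, 32, 1, 100, "single");           % 100 toy images
T    = rand(100, N, "single");                   % one row of targets per image
opts = trainingOptions("adam", MaxEpochs=5, Verbose=false);
net  = trainnet(X, T, net, "mse", opts);         % loss given here, not as a layer
Y    = predict(net, dlarray(X(:,:,:,1), "SSCB")) % one prediction with N values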
trainnet output layer MATLAB Answers — New Questions
Export a large table to a pdf file
I have a 16×23 table that I would like to export to a PDF through MATLAB. I have tried to turn the PDF to landscape to fit better, as well as reduce the text size. I have also tried to position it using "position" as an option in the function "uitable". I have also used the "print" function, but it seems like saveas works better.
Essentially this is what I would like the output to look like (check the picture attached), but I want matlab to export and position it automatically after it runs. It can be in data form or table form.
Here is my current code:
fig = uifigure('Name','Value Averages');
t = table([1;2;3;4;5;6;7;8;9;10;11;12;13;14;15;16],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)], ...
[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)], ...
[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)],[rand(16,1)], ...
[rand(16,1)],[rand(16,1)],'VariableNames',{'Number of Values','1st value','2nd value','3rd value','4th value', ...
'5th value','6th value','7th value','8th value','9th value','10th value','11th value','12th value','13th value',...
'14th value','15th value','16th value','17th value','18th value','19th value','20th value','21st value','22nd value'});
export = uitable(fig,"Data",t);
orient(fig,'landscape')
saveas(fig,'Value Averages.pdf','pdf')
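One variant that may behave better for uifigure-based content (a sketch, not a tested layout): size the figure itself to landscape-page proportions, let the uitable fill it, and export with exportapp, which is intended for uifigure contents:
fig = uifigure('Name', 'Value Averages', 'Position', [100 100 1100 500]);
t = array2table([(1:16)', rand(16, 22)]);                   % same 16x23 shape
ut = uitable(fig, 'Data', t, 'Position', [10 10 1080 480]); % fill the figure
ut.FontSize = 8;                                            % shrink text to fit
exportapp(fig, 'Value Averages.pdf')                        % export the contents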
export, table, figure, pdf MATLAB Answers — New Questions
I want to plot sin at 27 degrees and 26.94 degrees with frequency 108.25 MHz; I don't know how to plot it, please help.
I want to plot sin at 27 degrees and at 26.94 degrees, both with frequency 108.25 MHz. I don't know how to plot this; please help. The phases are different and the frequencies are equal.
signal, frequency MATLAB Answers — New Questions
Hello. MATLAB does not start on my PC, and until yesterday it started without any problems. What is going on?
When I try to open the program, it does absolutely nothing. It gives no error code and opens no window; it is just as if I had not touched it. How can I solve this problem?
inicio fallido MATLAB Answers — New Questions
MATLAB extremely slow. How to fix?
Slow to the point that moving the cursor takes upwards of ten seconds each time.
When I started MATLAB up, it asked if I wanted to let it accept incoming network connections, and I said deny. Could that be affecting it? How can I change that back?
performance MATLAB Answers — New Questions
I am trying to set up serial communication with an optical power meter and I am not able to read the returned string.
I am using a Newport power meter (91936-R) with RS232. I set up the communication with the following code below. I am running into a weird situation.
The first readline response is "PM:CALDATE?". However, when I send the readline command again, I get the right response, "30NOV2022". It's almost like readline stops once it reaches the first termination character and doesn't continue. I tried adding a pause in case I am reading the port too fast, but no success.
I am not sure, and MATLAB doesn't have more documentation online on serialport, so I was hoping someone could answer this.
clear s
clear all
s = serialport("COM12",38400);
configureTerminator(s,"CR/LF","CR");
writeline(s,"PM:CALDATE?")
% pause(4)
readline(s)
%readline(s)
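For reference, the behaviour described is consistent with the instrument echoing the command back as its own terminated line before sending the value; under that assumption, a minimal sketch is to read two lines per query, the first being the echo:
writeline(s, "PM:CALDATE?")
echoLine = readline(s);     % first line: the echoed "PM:CALDATE?"
value = readline(s)         % second line: the actual reply, e.g. "30NOV2022"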
serial, readline, writeline MATLAB Answers — New Questions