Tag Archives: matlab
If I change my PC, can I re-install a Matlab license in the new PC (uninstalling from the old)?
I am planning to purchase a MATLAB license, and I plan to change my PC next year.
Is the MATLAB license tied to a single PC?
Once installed on a PC, if I change my PC, can I re-install the license on the new PC (uninstalling the license from the old PC)?
reinstall MATLAB Answers — New Questions
Fitting multiple curves with multiple data sets, partial and globally shared parameters using lsqcurvefit
Hello everyone, I hope you can help me with the following problem. I have 3 measurement datasets, each consisting of x-values (e.g. x1) and 2 corresponding y-data sets (e.g. A1 and B1). The model functions for these curves have partially shared parameters, and a simultaneous fit over both curves of one dataset already works. Now I want to run a global fit over all 3 measurement datasets simultaneously, where all 3 datasets share one parameter (beta(10)). How do I do this? Previously, I obtained a separate value of beta(10) from each fit, but now I want to change this. My real data and (non-linear) functions are very extensive, so I have created a simplified example.
b1 = 1; b2 = 0.85; b3 = 2.5;
b4 = 1.1; b5 = 2.2; b6 = 4.5;
b7 = 1.3; b8 = 7.2; b9 = 9.5;
b10 = 0.5;
%x data
x1 = linspace(0, 10, 20).'; x2 = linspace(0, 10, 23).'; x3 = linspace(0, 10, 14).';
%constants
C1 = 3.7; C2 = 4.2; C3 = 20.2;
%measurement dataset 1
A1 = b1 + b2*x1 + C1 * b10 + rand(20,1);
B1 = b3 - b2*x1 + C1 * b10 + rand(20,1);
%measurement dataset 2
A2 = b4 + b5*x2 + C2 * b10 + rand(23,1);
B2 = b6 - b5*x2 + C2 * b10 + rand(23,1);
%measurement dataset 3
A3 = b7 + b8*x3 + C3 * b10 + rand(14,1);
B3 = b9 - b8*x3 + C3 * b10 + rand(14,1);
mdl1 = @(beta, x) [beta(1) + beta(2).*x + C1 .* beta(10), ...
    beta(3) - beta(2).*x + C1 .* beta(10)];
mdl2 = @(beta, x) [beta(4) + beta(5).*x + C2 .* beta(10), ...
    beta(6) - beta(5).*x + C2 .* beta(10)];
mdl3 = @(beta, x) [beta(7) + beta(8).*x + C3 .* beta(10), ...
    beta(9) - beta(8).*x + C3 .* beta(10)];
beta0 = [0.92, 0.8, 2, 0.7, 2, 4, 1, 7, 9, 1];
lb = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
ub = [15, 15, 15, 15, 15, 15, 15, 15, 15, 15];
options = optimoptions(@lsqcurvefit,'Algorithm','levenberg-marquardt');
beta1 = lsqcurvefit(mdl1,beta0,x1,[A1, B1],lb,ub,options);
beta2 = lsqcurvefit(mdl2,beta0,x2,[A2, B2],lb,ub,options);
beta3 = lsqcurvefit(mdl3,beta0,x3,[A3, B3],lb,ub,options);
A1_fit = beta1(1) + beta1(2)*x1 + C1 * beta1(10);
B1_fit = beta1(3) - beta1(2)*x1 + C1 * beta1(10);
A2_fit = beta2(4) + beta2(5)*x2 + C2 * beta2(10);
B2_fit = beta2(6) - beta2(5)*x2 + C2 * beta2(10);
A3_fit = beta3(7) + beta3(8)*x3 + C3 * beta3(10);
B3_fit = beta3(9) - beta3(8)*x3 + C3 * beta3(10);
figure(1);
subplot(2,1,1);
hold on;
plot(x1, A1,'s');
plot(x1, A1_fit);
plot(x2, A2,'d');
plot(x2, A2_fit);
plot(x3, A3,'p');
plot(x3, A3_fit);
subplot(2,1,2);
hold on;
plot(x1, B1,'s');
plot(x1, B1_fit);
plot(x2, B2,'d');
plot(x2, B2_fit);
plot(x3, B3,'p');
plot(x3, B3_fit);
hold off
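A rough sketch of one way to share beta(10) across all three datasets (reusing the variables defined above and assuming the Optimization Toolbox): stack the residuals of all six curves into a single vector and minimize it with one lsqnonlin call, so beta(10) is estimated only once.
resFun = @(beta) [mdl1(beta, x1) - [A1, B1]; ...   % dataset 1 (columns A and B)
                  mdl2(beta, x2) - [A2, B2]; ...   % dataset 2
                  mdl3(beta, x3) - [A3, B3]];      % dataset 3
resVec = @(beta) reshape(resFun(beta), [], 1);     % lsqnonlin expects a residual vector
betaGlobal = lsqnonlin(resVec, beta0, lb, ub);     % default algorithm handles the bounds
The individual fits (A1_fit, B1_fit, and so on) can then be rebuilt from the single betaGlobal vector instead of beta1/beta2/beta3.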
lsqcurvefit, fitting multiple curves MATLAB Answers — New Questions
Up-sampling in convolutional neural network
Hi everyone,
for a project at university I am trying to rebuild a NN described in a paper. It was originally designed in Keras (I don't have any code, only a rough description) and I'm struggling with one specific layer they're using. To up-sample their data, they use a layer which takes a single entry of its input and replicates it to a 2×2 region of the output. This results in a matrix with doubled dimensions, without zero entries (assuming there were none in the input) and the same entry in each 2×2 block. It approximates the inverse of MATLAB's maxPooling layer. It is similar, but NOT the same as the maxUnpooling layer, which keeps the position of a maximum entry and fills up with zeros. For this specific up-sampling operation, there is no explicit NN layer in MATLAB.
Does someone have an idea how I can do this operation?
An idea I had in mind: just use the given maxUnpooling layer and hope there will be no big difference. I tried this and prepared my maxPooling layers with "HasUnpoolingOutputs", but it seems that the maxUnpooling layer has to follow immediately after the maxPooling layer. I get unused outputs for my maxPooling layers and missing outputs for my maxUnpooling layers (seen via analyzeNetwork) because I use convolution layers in between (see code for example).
layers = [
    imageInputLayer([32 32 1])
    convolution2dLayer(filterSize, 32, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer(2,'Stride',2,'HasUnpoolingOutputs',true) %
    convolution2dLayer(filterSize, 64, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    maxUnpooling2dLayer() %
    convolution2dLayer(filterSize, 32, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    fullyConnectedLayer(32)
    regressionLayer
    ];
So in this case, one has to bring the outputs "indices" and "size" of the maxPooling-layer to the maxUnpooling-layer. But I don’t know how this can be achieved :/
I'd be very thankful for any ideas.
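A possible workaround, sketched under the assumption that the Image Processing Toolbox (R2020b or newer) is available: resize2dLayer with nearest-neighbor interpolation copies each activation into a 2×2 block, which matches the Keras UpSampling2D behaviour described above, with no zeros inserted. filterSize below is an assumed value.
filterSize = 3;   % assumption, not given in the question
layers = [
    imageInputLayer([32 32 1])
    convolution2dLayer(filterSize, 32, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(filterSize, 64, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    resize2dLayer('Scale', 2, 'Method', 'nearest')   % replicate each entry into a 2x2 block
    convolution2dLayer(filterSize, 32, 'Padding', 'same')
    batchNormalizationLayer()
    reluLayer()
    fullyConnectedLayer(32)
    regressionLayer];
If maxUnpooling2dLayer is preferred instead, its indices/size inputs can be routed past the intermediate convolution layers by giving the layers names, building a layerGraph, and calling connectLayers, e.g. connectLayers(lgraph, 'pool1/indices', 'unpool1/indices') and connectLayers(lgraph, 'pool1/size', 'unpool1/size'), where 'pool1' and 'unpool1' are assumed layer names.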
neural network MATLAB Answers — New Questions
Simulation Crash Issue with Current Limiter and Motor Drive System
Hello MATLAB Community,
I am currently working on a Simscape model where a battery is connected to a current limiter and subsequently to a "motor and drive" system block. I am experiencing an issue where the simulation crashes as soon as the current limiter attempts to limit the current.
Error Details:
Error: An error occurred during simulation and the simulation was terminated
Caused by:
[‘Model_Name/Solver’]: Transient initialization at time 0.04421663720467431, solving for consistent states and modes, failed to converge.
Nonlinear solver: failed to converge, residual norm too large.
Here is the set of components with unconverged equations:
‘Model_Name/Current Limiter’
Equation location is:
‘/Applications/MATLAB_R2024a.app/toolbox/physmod/elec/library/m/+ee/+semiconductors/current_limiter_base.sscp'(no line number info)
What I Have Tried
Solver Configuration: I have tried using stiff solvers such as ode15s and ode23t, and adjusted the tolerances.
Initialization: Ensured that all initial conditions are set correctly for the battery, current limiter, and motor system.
Current Limiter Settings: Reviewed and adjusted the current limiter parameters to avoid abrupt changes.
Simulation Step Size: Reduced the step size to capture transient behaviors more accurately.
Zero-Crossing Detection: Enabled zero-crossing detection to handle state changes more smoothly.
Component Parameters: Verified that all component parameters are realistic and consistent.
Protective Measures: Tried introducing capacitors and resistors to smooth out voltage drops and dampen sudden changes.
Despite these efforts, the simulation still crashes. I believe the issue might be related to the current limiter’s interaction with the motor and drive system.
Reference Model
I am using the model configuration similar to the one available here: Battery Electric Vehicle with Motor Cooling in Simscape.
Request for Assistance
I am seeking advice on how to effectively limit the current to the motor and drive system without causing the simulation to crash. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
Thank you in advance for your help!
Best regards, Alex
simscape, battery_system_management MATLAB Answers — New Questions
Hi, why am I getting an error?
it gives me an error on the plot function. The error message is:
Error using LinearModel/plot
Wrong number of arguments.
load ('DATI_PAZ1');
LV_1=LV;
lat_1=lat;
sept_1=sept;
time_1=time;
load ('DATI_PAZ2');
LV_2=LV;
lat_2=lat;
ant_2=ant;
time_2=time;
figure
plot(time_1,LV_1,'k')
hold on
plot(time_1,lat_1,'b')
hold on
plot(time_1,sept_1,'r')
grid on
xlabel('time [s]'),ylabel('colpi/(s*voxel)')
title('Conc. FDG - CASO I')
legend('ventr.sx','laterale','setto')
figure
plot(time_2,LV_2,'k')
hold on
plot(time_2,lat_2,'b')
hold on
plot(time_2,ant_2,'r')
grid on
xlabel('time [s]'),ylabel('colpi/(s*voxel)')
title('Conc. FDG - CASO II')
legend('ventr.sx','laterale','anteriore')
y_lat_1=lat_1./LV_1;
y_sept_1=sept_1./LV_1;
x_1=cumtrapz(time_1,LV_1)./LV_1;
figure
subplot(1,2,1)
plot(x_1,y_lat_1,'*b')
hold on
plot(x_1,y_sept_1,'*r')
title('Patlak graph - CASO I')
F_sept_1=fitlm(x_1(18:24,1),y_sept_1(18:24,1),'poly1');
hold on
plot(F_sept_1,'r') %error
F_lat_1=fitlm(x_1(18:24,1),y_lat_1(18:24,1),'poly1');
hold on
plot(F_lat_1,'b') %error
legend('parete laterale','setto')
xlabel('trapz( C_p(t) dt)/C_p(t) [s]'),ylabel('C_t(t)/C_p(t) [adimensionale]')
m_setto_1=F_sept_1.p1;
m_laterale_1=F_lat_1.p1;
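One way around the error, shown here only as a sketch: plot(F) for a LinearModel takes the model object (plus optional name-value pairs) but not a linespec such as 'r', and a LinearModel has no p1 property; the fitted coefficients live in the Coefficients table. Plotting the fitted line manually avoids both problems.
F_sept_1 = fitlm(x_1(18:24,1), y_sept_1(18:24,1), 'poly1');
xq = linspace(min(x_1(18:24,1)), max(x_1(18:24,1)), 50).';   % query points for the fitted line
plot(xq, predict(F_sept_1, xq), 'r')                         % fitted line, drawn in red
m_setto_1 = F_sept_1.Coefficients.Estimate(2);               % slope (the intercept is entry 1)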
error using linearmodel/plot, wrong number of arguments MATLAB Answers — New Questions
After performing stereo calibration using a checkerboard, the same checkerboard is reconstructed in 3D. I have several questions regarding the results.
Currently, I am facing three issues related to 3D reconstruction. As shown in the first attached image, I take the x-axis as positive to the right and the y-axis as positive upwards (since the checkerboard is planar, I will not consider the z-axis in this question). In that image, the green points are the ground truth and the red points are the reconstructed values.
When considering the detected checkerboard corner points in the same x×y layout as the premise, the ground truth is detected and plotted as 8×11, while the reconstructed values are detected and plotted as 10×7. Why is there a difference of one cell in the detected grid? Please refer to the attached checkerboard image for this situation.
As seen in the attached image, the long side of the ground truth checkerboard is aligned along the y-axis direction, while the long side of the reconstructed checkerboard appears to be aligned along the x-axis direction. Why does it seem like the long side’s position has rotated during reconstruction?
The first detected corner point of the reconstructed values should be located at the same origin as the first detected corner point of the ground truth, but it is located in a different position. What is the cause of this? Is this due to low accuracy in the camera parameters, or is it because the reconstructed values and the ground truth are represented in different coordinate systems? Could there be other reasons for this?
The specific causes for the second and third questions might be the same, but I appreciate your response. For reference, I am attaching the code as well.
% Loading stereo images
I1 = imread('/Users/uchidataisei/dev/uchida/kenkyu_data/2024-06-14/C3/GX010541_frames/frame0052.png'); % left image
I2 = imread('/Users/uchidataisei/dev/uchida/kenkyu_data/2024-06-14/C2/GX010611_frames/frame0052.png'); % right image
% Using pre-obtained calibration parameters
load('/Users/uchidataisei/dev/uchida/MATLAB_data/2024_06_14/calibrationSession_0614_C3_2_.mat');
% Rectification
% [J1, J2, reprojectionMatrix] = rectifyStereoImages(I1, I2, stereoParams_0614_C3_2);
[J1, J2, reprojectionMatrix] = rectifyStereoImages(I1, I2, stereoParams_0614_C3_2_);
% Calculating the disparity map
disparityMap = disparitySGM(rgb2gray(J1), rgb2gray(J2));
% Reconstructing the 3D scene
points3D = reconstructScene(disparityMap, reprojectionMatrix);
% Setting the ground truth of the checkerboard, [mm]
squareSize = 114;
% Detecting the corners of the checkerboard
[imagePoints, boardSize] = detectCheckerboardPoints(J1);
% Calculating the ground truth grid of the checkerboard
[worldX, worldY] = meshgrid(0:squareSize:((boardSize(1)-1)*squareSize), 0:squareSize:((boardSize(2)-1)*squareSize));
worldPoints = [worldX(:), worldY(:), zeros(numel(worldX), 1)]; % Z=0
% Matching the number of detected corners with the number of ground truth points
% if size(worldPoints, 1) > size(imagePoints, 1)
%     worldPoints = worldPoints(1:size(imagePoints, 1), :);
% elseif size(worldPoints, 1) < size(imagePoints, 1)
%     error('The number of ground truth points is less than the number of detected corners. Please check the number of ground truth points.');
% end
% Extracting the 3D points of the reconstructed checkerboard corners
detected3DPoints = zeros(size(imagePoints, 1), 3);
validIndices = true(size(imagePoints, 1), 1);
for i = 1:size(imagePoints, 1)
    x = round(imagePoints(i, 1));
    y = round(imagePoints(i, 2));
    if x > 0 && y > 0 && x <= size(points3D, 2) && y <= size(points3D, 1)
        detected3DPoints(i, :) = points3D(y, x, :);
        if any(isnan(detected3DPoints(i, :)) | isinf(detected3DPoints(i, :)))
            validIndices(i) = false;
        end
    else
        validIndices(i) = false;
    end
end
% Excluding invalid points
% detected3DPoints = detected3DPoints(validIndices, :); % Reconstructed grid
% worldPoints = worldPoints(validIndices, :); % Ground truth grid
% Omitting scale transformation
detected3DPoints_mm = detected3DPoints;
% Comparison of ground truth and reconstructed results
figure;
plot3(worldPoints(:,1), worldPoints(:,2), worldPoints(:,3), 'go');
hold on;
plot3(detected3DPoints_mm(:,1), detected3DPoints_mm(:,2), detected3DPoints_mm(:,3), 'rx');
% Displaying numbers on each plot
for i = 1:size(worldPoints, 1)
    text(worldPoints(i, 1), worldPoints(i, 2), worldPoints(i, 3), num2str(i), 'Color', 'green');
    text(detected3DPoints_mm(i, 1), detected3DPoints_mm(i, 2), detected3DPoints_mm(i, 3), num2str(i), 'Color', 'red');
end
legend('True Points', 'Reconstructed Points');
xlabel('X (mm)');
ylabel('Y (mm)');
zlabel('Z (mm)');
title('Comparison of True and Reconstructed Points');
grid on;
% Calculating the error
errors_xy = sqrt(sum((worldPoints(:, 1:2) - detected3DPoints_mm(:, 1:2)).^2, 2));
meanError_xy = mean(errors_xy);
disp(['Mean Error in XY plane: ', num2str(meanError_xy), ' millimeters']);
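Two checks that often explain symptoms like these, sketched below with the assumptions noted in the comments. First, the ground-truth grid built with meshgrid does not necessarily follow the same point ordering (or the same long/short-side orientation) as the imagePoints returned by detectCheckerboardPoints; generateCheckerboardPoints builds the grid in the matching order. Second, reconstructScene returns points in the rectified camera-1 coordinate frame, not in the checkerboard frame, so a direct overlay with board coordinates is expected to look shifted and rotated unless the two point sets are aligned first.
% 1) Ground-truth grid in the same ordering as detectCheckerboardPoints
worldPoints2D = generateCheckerboardPoints(boardSize, squareSize);   % (boardSize-1) corner grid
worldPoints   = [worldPoints2D, zeros(size(worldPoints2D,1), 1)];    % Z = 0 plane
% 2) Align the reconstructed corners to the board frame before comparing
%    (assumes the rows correspond one-to-one and NaN rows were removed first)
[~, alignedPoints] = procrustes(worldPoints, detected3DPoints_mm, 'Scaling', false);
errors_xy = sqrt(sum((worldPoints(:,1:2) - alignedPoints(:,1:2)).^2, 2));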
#reconstructscene, stereoparameters, detectcheckerboardpoints, 3d reconstruction MATLAB Answers — New Questions
Fill confidence band hexadecimal color
Hello!
I am plotting a confidence band using fill, and get an error message when using a hexadecimal color:
I have tried with both 'Color' and 'FaceColor' before the hexadecimal color, without it helping. It works when I use a default color such as 'b'.
Also, when trying just to plot the line (not filling), it works with the hex code.
fill([0:hmax, fliplr(0:hmax)], [upper_bounds, fliplr(lower_bounds)], '#7E2F8E', 'FaceAlpha', 0.2, 'EdgeColor', 'none');
Thanks!
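One workaround, shown as a sketch: convert the hexadecimal code to an RGB triplet and pass that to fill. Numeric color triplets are accepted by fill in every release, whereas hexadecimal character vectors are only accepted in newer releases (and not by every property).
hex = '#7E2F8E';
rgb = sscanf(hex(2:end), '%2x%2x%2x', [1 3]) / 255;   % -> approx. [0.494 0.184 0.557]
fill([0:hmax, fliplr(0:hmax)], [upper_bounds, fliplr(lower_bounds)], rgb, ...
     'FaceAlpha', 0.2, 'EdgeColor', 'none');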
#plot #fill #hexadecimal #confidenceband MATLAB Answers — New Questions
Working with modified code -2024A
I am working with a slightly modified version of ode113. Until now I was using the same code that I copied, pasted, and modified a few releases ago. In R2024a I got an error message, so I copied the ode113 code again into a new file and modified it again. However, a new problem has arisen: the R2024a code depends on private functions, and I am now getting "unrecognized function or variable" errors. A workaround I found is copying all the relevant private folders into a new folder. Is that the best way to solve the issue?
many thanks
Nathan
private, private functions, modified MATLAB Answers — New Questions
Why is this matlab program not able to solve accurately?
B=[E_b*I_b*(-beta^3*cos(beta*l)-beta^3*cosh(beta*l))+m2*omega^2*(sin(beta*l)-sinh(beta*l)), E_b*I_b*(beta^3*sin(beta*l)-beta^3*sinh(beta*l))+m2*omega^2*(cos(beta*l)-cosh(beta*l));
   E_b*I_b*(-beta^2*sin(beta*l)-beta^2*sinh(beta*l))-J*omega^2*(sin(beta*l)-sinh(beta*l)), E_b*I_b*(-beta^2*cos(beta*l)-beta^2*cosh(beta*l))-J*omega^2*(cos(beta*l)-cosh(beta*l))]
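A rough sketch of one way to attack det(B) = 0; since the rest of the script is not shown, every numeric value and the beta-omega relation below are placeholders to be replaced. Working symbolically and calling vpasolve is usually more robust than evaluating det(B) in double precision, because cosh(beta*l) grows so fast that the terms cancel catastrophically.
syms beta real
E_b = 2.1e11; I_b = 1e-6; m2 = 1.0; J = 0.01; l = 1.0;    % placeholder values only
omega = beta^2*sqrt(E_b*I_b);    % assumed beta-omega relation; substitute the real one
B = [E_b*I_b*(-beta^3*cos(beta*l)-beta^3*cosh(beta*l))+m2*omega^2*(sin(beta*l)-sinh(beta*l)), ...
     E_b*I_b*( beta^3*sin(beta*l)-beta^3*sinh(beta*l))+m2*omega^2*(cos(beta*l)-cosh(beta*l)); ...
     E_b*I_b*(-beta^2*sin(beta*l)-beta^2*sinh(beta*l))-J*omega^2*(sin(beta*l)-sinh(beta*l)), ...
     E_b*I_b*(-beta^2*cos(beta*l)-beta^2*cosh(beta*l))-J*omega^2*(cos(beta*l)-cosh(beta*l))];
betaRoot = vpasolve(det(B) == 0, beta, [0.1 10])          % the search interval is also a guess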
solve the determinant MATLAB Answers — New Questions
Training agent in reinforcement learning: reproducibility of the code
I get two different results from running this water-tank system example for reinforcement learning made by Mathworks:
https://uk.mathworks.com/help/reinforcement-learning/ug/create-simulink-environment-and-train-agent.html
This example has fixed the random number generator seed with rng(0), so I expected the result to be the same on all computers. However, I ended up with two different agents on two computers:
Computer A finished training the agent after 86 episodes (just like the published example) and gave me an identical agent to the example.
Computer B needed 182 episodes to train the agent and gave me a different agent.
Both computers run MATLAB R2023b 64-bit on MS Windows 10. The code is unchanged from the example (except for changing doTraining = false to doTraining = true).
Computer A has an 8-core i7 processor. Computer B has a 6-core i7 processor.
I'm writing a tutorial for a university-level course, so reproducibility is necessary so that students can follow the example. Any tip on how to facilitate this is also much appreciated.
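A small sketch of the settings that usually matter for repeatability; note that bit-exact agreement across different CPUs is still not guaranteed, because multithreaded floating-point arithmetic can order operations differently on different machines.
rng(0, 'twister');                                     % fix the global random number generator
trainOpts = rlTrainingOptions('UseParallel', false);   % parallel training is non-deterministic
% Some agents keep their own exploration/noise state, so re-create the agent
% after calling rng(0) rather than reusing one left over from a previous run.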
reinforcement learning, agent, training, random number generator MATLAB Answers — New Questions
Colormap a plot based on value of (x,y)
Hello,
I have a table T of dimensions x,y where each point has a value between -100 and 100.
I want to graph this data such that T(1,1) is point 1,1 on the graph and the color of that point is determined by the value of T(1,1)
I included a picture of something similar to what I want my plot to look like, along with some code that generates an example table for graphing
Thanks in advance for any help you can provide.
clear
clc
close all
%This will generate a table with example data.
x=100
y=50
data=zeros(y,x);
data(1,1)=50;
dchartdx=25/x;
dchartdy=25/y;
for ii=1:x
data(1,ii+1)=data(1,ii)+dchartdx;
end
for ii=1:x+1
for ij=1:y
data(ij+1,ii)=data(ij,ii)+dchartdy;
end
end
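A minimal sketch of one way to get such a plot: imagesc draws element (1,1) of the matrix at coordinate (1,1) and colors every cell by its value; the color limits below are pinned to the stated -100 to 100 range.
imagesc(data);
set(gca, 'YDir', 'normal');   % row 1 at the bottom, like a normal x-y plot
colormap(jet);                % any colormap works here
clim([-100 100]);             % use caxis([-100 100]) on releases before R2022a
colorbar;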
colormap plot MATLAB Answers — New Questions
i am working on Image Compression Using Run Length Encoding
I am getting an error in the zigzag scanning step.
The error is as follows:
Undefined function or variable 'toZigzag'.
Error in rlc_haar (line 21)
ImageArray=toZigzag(QuantizedImage);
Please help me.
%% Matlab code for Image Compression Using Run Length Encoding
clc;
clear;
close all;
%% Set Quantization Parameter
quantizedvalue=10;
%% Read Input Image
InputImage=imread('cameraman.tif');
[row col p]=size(InputImage);
%% Wavelet Decomposition
[LL LH HL HH]=dwt2(InputImage,'haar');
WaveletDecomposeImage=[LL,LH;HL,HH];
imshow(WaveletDecomposeImage,[]);
%uniform quantization
QuantizedImage= WaveletDecomposeImage/quantizedvalue;
QuantizedImage= round(QuantizedImage);
% Convert the Two dimensional Image to a one dimensional Array using ZigZag Scanning
ImageArray=toZigzag(QuantizedImage);
%% Run Length Encoding
j=1;
a=length(ImageArray);
count=0;
for n=1:a
b=ImageArray(n);
if n==a
count=count+1;
c(j)=count;
s(j)=ImageArray(n);
elseif ImageArray(n)==ImageArray(n+1)
count=count+1;
elseif ImageArray(n)==b
count=count+1;
c(j)=count;
s(j)=ImageArray(n);
j=j+1;
count=0;
end
end
%% Calculation Bit Cost
InputBitcost=row*col*8;
InputBitcost=(InputBitcost);
c1=length(c);
s1=length(s);
OutputBitcost= (c1*8)+(s1*8);
OutputBitcost=(OutputBitcost);
%% Run Length Decoding
g=length(s);
j=1;
l=1;
for i=1:g
v(l)=s(j);
if c(j)~=0
w=l+c(j)-1;
for p=l:w
v(l)=s(j);
l=l+1;
end
end
j=j+1;
end
ReconstructedImageArray=v;
%% Inverse ZigZag
ReconstructedImage=invZigzag(ReconstructedImageArray)
%% Inverse Quantization
ReconstructedImage=ReconstructedImage*quantizedvalue;
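toZigzag and invZigzag are not built into MATLAB (they usually come from File Exchange JPEG examples), which is why the call is undefined. Below is a rough sketch of compatible replacements; save them as toZigzag.m and invZigzag.m on the MATLAB path. This invZigzag assumes a square matrix, which matches the 256×256 wavelet image used here.
function out = toZigzag(in)
% Flatten matrix IN along its anti-diagonals, alternating direction (JPEG zigzag).
[r, c] = size(in);
out = zeros(1, r*c);
k = 1;
for s = 2:(r + c)                      % s = row index + column index
    if mod(s, 2) == 0                  % even sums run bottom-left to top-right
        rows = min(s-1, r):-1:max(1, s-c);
    else                               % odd sums run top-right to bottom-left
        rows = max(1, s-c):min(s-1, r);
    end
    for i = rows
        out(k) = in(i, s - i);
        k = k + 1;
    end
end
end

function out = invZigzag(v)
% Rebuild a square matrix from the zigzag vector produced by toZigzag.
n = sqrt(numel(v));
out = zeros(n);
k = 1;
for s = 2:2*n
    if mod(s, 2) == 0
        rows = min(s-1, n):-1:max(1, s-n);
    else
        rows = max(1, s-n):min(s-1, n);
    end
    for i = rows
        out(i, s - i) = v(k);
        k = k + 1;
    end
end
end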
MATLAB Answers — New Questions
Mex-file not being found for Data Translation hardware
Hey everyone,
I recently moved from MATLAB R2023b to MATLAB R2024a and had to reinstall some packages.
So I reinstalled the Data Acquisition Toolbox and the Data Acquisition Support Package for Data Translation to pursue a project I'm working on.
Everything was working fine in the previous MATLAB version, but now whenever I try to acquire a signal, for example, I receive the following error message:
'The required MEX file to communicate with Data Translation hardware could not be loaded.
The attempt gave the Error ID of MATLAB:mex:ErrInvalidMEXFile and the message
Invalid MEX-file 'C:\Users\"Username"\AppData\Roaming\MathWorks\MATLAB Add-Ons\Toolboxes\Data Acquisition Toolbox Support Package for Data Translation Hardware\adaptor\win64\mexOldaApi.mexw64': Das angegebene Modul wurde nicht gefunden.'
which says the indicated module was not found. But the file mexOldaApi.mexw64 is right there.
I checked the available vendors and found that 'dt' is listed as not operational, although the drivers are installed in MATLAB and both MATLAB and the computer have been restarted.
Thanks for your suggestions to fix this.
Best regards,
Galvani
Galvani dt_package_mexfile MATLAB Answers — New Questions
Removing outliers from the data creates gaps. Filling these gaps with missing values or the median of surrounding values does not address the issue. Why?
I am analyzing EMG data in windows. In each window, I apply z-score normalization to identify and remove outliers. To address the gaps created by removing these outliers, I attempt to fill the empty spaces with the median of the surrounding values. Additionally, I have experimented with MATLAB built-in functions such as movmedian for this purpose.
here is my function:
function data_clean = remove_outliers_and_fill(data)
    % Calculate z-scores for each column
    z_scores = zscore(data);
    % Define outlier threshold
    threshold = 3;
    % Identify outliers
    outliers = abs(z_scores) > threshold;
    % Copy data to preserve original shape
    data_clean = data;
    % Loop through each column
    [num_rows, num_cols] = size(data);
    for col = 1:num_cols
        for row = 1:num_rows
            if outliers(row, col)
                range_start = max(1, row-10);
                range_end = min(num_rows, row+10);
                neighbors = data(range_start:range_end, col);
                % Exclude the outlier from median calculation
                filtered_neighbors = neighbors(neighbors ~= data(row, col));
                median_value = median(filtered_neighbors);
                data_clean(row, col) = median_value;
            end
        end
    end
end
here is the plot where it creates gaps after applying the above function.
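As a side note, filloutliers can do the detect-and-fill step in one call; a minimal sketch (the 21-sample window mirrors the ±10-sample neighbourhood above and is an assumption, not a recommendation):
data_clean = filloutliers(data, 'linear', 'movmedian', 21);   % operates per column by default
% or, closer to the global z-score rule used in the function above:
data_clean = filloutliers(data, 'nearest', 'mean', 'ThresholdFactor', 3);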
outliers, matlab function, emg signal MATLAB Answers — New Questions
How can I change the font size of the XTick and YTick labels (x-axis and y-axis) in the histogram of an image?
I have an image, lena.jpg, from which I am trying to obtain a histogram.
x = imread('lena.jpg');
imhist(x);
set(gca,'FontSize',15);
With this code I am able to change the font size of the YTick labels only, but I want to change the font size of both. How can I do that?
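One possible workaround, a minimal sketch that assumes imhist places the gray-level strip in a second axes of the same figure (which is why gca only reaches the stem plot):
imhist(x);
set(findall(gcf, 'Type', 'axes'), 'FontSize', 15);   % applies to every axes, not just gca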
image processing, matlab, histogram MATLAB Answers — New Questions
How to output a capacity degradation curve from the Simscape Battery (Table-Based) block
I am using the Simscape Battery toolbox for a battery calendar-aging simulation. With the Battery (Table-Based) block, I enabled the calendar-aging option, which I understand means my cell capacity will now decrease over time.
However, I cannot find any port or setting that lets me extract the battery capacity data. If I run a 100-week calendar-aging simulation, how can I obtain the capacity degradation curve (like the figure below; I show cycle aging as an example, with cycle life or time on the x-axis and remaining capacity on the y-axis)?
How could I extract the battery capacity information from the Battery (Table-Based) block? If someone can help me, I will be very grateful, thanks.
(By the way, I have reviewed the help documentation for Battery (Table-Based) and Simscape Battery, and I also checked the "Simscape Battery Essentials, Parts 1-7" videos on YouTube, but could not find a solution.)
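One route worth trying is Simscape data logging, which exposes the block's internal variables after the run; a sketch only, since the capacity variable's actual name depends on the release and the names below are placeholders:
set_param(bdroot, 'SimscapeLogType', 'all');   % enable logging of all Simscape variables
sim(bdroot);
sscexplore(simlog)   % browse to the Battery (Table-Based) node, look for an aged/instantaneous
                     % capacity variable, then plot its time series over the 100 weeks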
calendar aging, simscape battery, battery table-based block MATLAB Answers — New Questions
In stereo calibration, does the R and T output as PoseCamera2 match the actual camera position, or is the sign of the x component of T reversed?
I am currently calibrating four cameras (Camera1, Camera2, Camera3, Camera4). To do this, I have created pairs (Camera1 & Camera2, Camera2 & Camera3, Camera1 & Camera4) and performed calibration to determine the relative positions of all cameras in the coordinate system of Camera1. For Camera1 and Camera2, I used about 80 images of a checkerboard with the stereo calibration feature of the calibrator app.
As a result, I obtained the following:
R = [0.794, -0.0318, 0.605; 0.0226, 0.999, 0.0228; -0.606, -0.00446, 0.795]
T = [-2793, 44.86, 483.2] (units in mm).
The visual output, which I have attached as an image, shows that rotating Camera2 by R and translating it by T to align with the coordinate system of Camera1 makes it coincide with Camera1. Therefore, R and T correspond with the visual output.
However, the actual relative position of Camera2 to Camera1 in the coordinate system of Camera1 should be [2793, 44.86, 482.3]. Thus, I suspect that the sign of the x component of T obtained through stereo calibration might be reversed compared to the actual T. Is my understanding incorrect?
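If it helps, here is a minimal way to compare the two conventions directly; a sketch that assumes a recent Computer Vision Toolbox release where stereoParameters exposes a PoseCamera2 rigidtform3d (the variable name stereoParams is an assumption):
pose2 = stereoParams.PoseCamera2;   % transformation of camera 2 relative to camera 1
loc2  = pose2.Translation           % translation component, in camera-1 coordinates
extr2 = pose2extr(pose2);           % corresponding extrinsics, if the other convention is needed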
image processing, calibratrion, stereocalibration MATLAB Answers — New Questions
A problem using imhist to display the histogram of an indexed image
First I convert the image to an indexed image, using only 5 colors to show the problem.
clear
img = imread('peppers.png');
[x,map] = rgb2ind(img,5);
figure;
imhist(x,map)
(attached screenshot: /matlabcentral/answers/uploaded_files/44260/imhist.PNG)
The colorbar doesn't match the histogram bars. This is my colormap:
map =
    0.2784    0.1373    0.2353
    0.7608    0.1686    0.1373
    0.8902    0.7255    0.6353
    0.4275    0.3765    0.2235
    0.8471    0.5569    0.1020
The first entry isn't white; it seems that the colorbar is shifted by one.
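As a workaround, the counts and colors can be drawn directly, sidestepping imhist's colorbar; a minimal sketch (it assumes x holds indices 0 to size(map,1)-1, as rgb2ind returns):
counts = histcounts(x(:), -0.5:1:size(map,1)-0.5);   % one bin per colormap entry
figure;
b = bar(0:size(map,1)-1, counts, 'FaceColor', 'flat');
b.CData = map;            % color each bar with its own colormap entry
colormap(map); colorbar;  % colorbar now shows exactly the 5 map colors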
imhist, colorbar, indexed image MATLAB Answers — New Questions
Reinforcement learning with action updated once every few (say 100) time steps
Hello,
I am trying to learn a controller in a Simulink environment. I am trying to use reinforcement learning where the action determined by the agent is updated only once every few time steps, i.e., an action, once determined by the agent, is held and used by Simulink for several time steps before it is updated again. Please provide me with suggestions on this. Thank you.
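One common approach, sketched below under the assumption of a fixed-step model and a DDPG-style agent (Ts, N, and the agent type are placeholders), is to give the agent a sample time that is a multiple of the solver step, so each action is held between agent updates:
Ts = 0.01;    % Simulink fixed solver step (assumed)
N  = 100;     % hold each action for 100 solver steps
agentOpts = rlDDPGAgentOptions('SampleTime', N*Ts);   % agent acts only every N*Ts seconds
% The RL Agent block then emits a new action every N*Ts and Simulink holds the
% previous value in between.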
reinforcement learing, simulink MATLAB Answers — New Questions
What's the best way to visualize position/zscore over multiple subjects?
I've been struggling to find a way to visualize the correlation between z-score (zscore.csv) and position (xcord.csv) from electrophysiological recordings. I only have one dimension of position, but because the recordings were done over time I end up with a 17940×2 array per animal.
I tried to average z-scores into a new set of bins (new_bins.csv) that spans all the positions of all the animals, but I don't know if that would make a good visualization. I also tried to plot all the values as the average of x values and the average of z-scores, but because the animal clearly prefers one side, I end up with too many data points on one end.
Is there a way to average the z-score values with respect to the x values that fall into the new bins? Or is there a better way to visualize this altogether? The best example I found so far was panel D of this figure, but I don't think it made it to their GitHub.
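On the binning question, one minimal sketch (the variable names and bin edges are assumptions; replace them with the values in new_bins.csv):
% xcord and zvals are column vectors of matched position / z-score samples (assumed)
edges   = linspace(min(xcord), max(xcord), 41);            % 40 position bins
binIdx  = discretize(xcord, edges);
meanZ   = accumarray(binIdx, zvals, [numel(edges)-1, 1], @mean, NaN);
centers = edges(1:end-1) + diff(edges)/2;
plot(centers, meanZ, '-o');                                % mean z-score per position bin
xlabel('position'); ylabel('mean z-score');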
visualization, heatmaps, zscore MATLAB Answers — New Questions