Tag Archives: matlab
Is there a variant of nlfilter for color images?
I need to perform a fairly complex operation on a color image using a sliding-window approach. Is there a variant of nlfilter for RGB images?
Tags: sliding window, nlfilter, image processing
MATLAB Answers — New Questions
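There is no built-in RGB variant of nlfilter that I know of; a common workaround is to apply the window function per channel, or to hand it the full color patch when the operation couples the channels. A NumPy sketch of the general idea (the image and the window function are illustrative placeholders, not the asker's actual operation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

rng = np.random.default_rng(0)
img = rng.random((8, 10, 3))          # H x W x 3 "RGB" image (placeholder)
win = (3, 3)

# Pad so the output has the same size as the input, as nlfilter does
pad = (win[0] // 2, win[1] // 2)
padded = np.pad(img, (pad, pad, (0, 0)))

# All 3x3 patches per channel: shape (H, W, 3, 3, 3)
patches = sliding_window_view(padded, win, axis=(0, 1))

# Apply the window function; here a per-channel window mean as a stand-in
out = patches.mean(axis=(-2, -1))
print(out.shape)                      # same H x W x 3 as the input
```

For a channel-coupled operation, reshape `patches` to expose the full 3-channel patch to the function instead of reducing per channel.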
How to Fix Polyspace CodeProver Orange Overflow errors
Hello,
I am getting the Orange Overflow check below on the * operator.
How can I resolve these overflow checks, given that we know this operation cannot produce a result larger than an int32 can hold?
Tags: codeprover, overflow, * operator
Why are the final values for velocity and acceleration from bsplinepolytraj() always equal to zero?
When creating splines using bsplinepolytraj(), the last values of the x and y components of velocity and acceleration are always zero. Here's an example from the documentation:
% Interpolate with B-Spline
% Create waypoints to interpolate with a B-Spline.
wpts1 = [0 1 2.1 8 4 3];
wpts2 = [0 1 1.3 .8 .3 .3];
wpts = [wpts1; wpts2];
L = length(wpts) - 1;
% Form matrices used to compute interior points of control polygon
r = zeros(L+1, size(wpts,1));
A = eye(L+1);
for i = 1:(L-1)
    A(i+1,(i):(i+2)) = [1 4 1];
    r(i+1,:) = 6*wpts(:,i+1)';
end
% Override end points and choose r0 and rL.
A(2,1:3) = [3/2 7/2 1];
A(L,(L-1):(L+1)) = [1 7/2 3/2];
r(1,:) = (wpts(:,1) + (wpts(:,2) - wpts(:,1))/2)';
r(end,:) = (wpts(:,end-1) + (wpts(:,end) - wpts(:,end-1))/2)';
dInterior = (A\r)';
% Construct a complete control polygon and use bsplinepolytraj to compute a polynomial with the new control points
cpts = [wpts(:,1) dInterior wpts(:,end)];
t = 0:0.01:1;
[q, dq, ddq, ~] = bsplinepolytraj(cpts, [0 1], t);
The values shown by
disp(dq(:,end))
and
disp(ddq(:, end))
are zero. I feel like this is wrong. Why are these values zero, and how can I get a non-zero answer?
Tags: bsplinepolytraj, spline, curve fitting, robotics, trajectory
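For reference, a clamped cubic B-spline's end derivative is proportional to the difference of its last two control points, so it is not zero in general; whatever bsplinepolytraj does internally, the zero end velocity is not forced by B-spline mathematics. A SciPy sketch as an independent check (the control-point values here are illustrative, not taken from bsplinepolytraj):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                            # cubic
c = np.array([0.0, 1.0, 2.5, 7.0, 4.0, 3.0])     # illustrative control points
n = len(c)

# Clamped knot vector on [0, 1]: endpoint knots repeated k+1 times
interior = np.linspace(0, 1, n - k + 1)[1:-1]
t = np.concatenate([np.zeros(k + 1), interior, np.ones(k + 1)])

spl = BSpline(t, c, k)
vel = spl.derivative(1)

# End derivatives: k*(c[1]-c[0])/(t[k+1]-t[1]) at the start and
# k*(c[-1]-c[-2])/(t[n+k-1]-t[n-1]) at the end -- nonzero here
print(vel(0.0), vel(1.0))
```

If the zero endpoint derivatives from bsplinepolytraj are undesired, comparing its output against such a hand-built clamped spline on the same control points can isolate whether the function applies extra endpoint conditions.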
How do I get run time or system time of my Speedgoat target computer in R2020b?
I'm upgrading from R2019a to R2020b and can't find blocks analogous to the "Elapsed Time" and "Time Stamp Delta" blocks for getting run time or clock time from my Simulink Real-Time (SLRT) target computer.
Training a neural network for different operating points
Hello,
I want to train a neural network to predict the temperature of an electrical machine at different operating points.
I have input data in the form of:
a 4x1 cell array, each cell containing a 101x3 matrix,
where the first cell contains the data for the first operating point, the second for the second, and so on.
And target data:
a 4x1 cell array, each cell containing a 101x1 vector,
where again the first cell contains the data for the first operating point, the second for the second, and so on.
My question is: which input layer should I use so that the data is treated correctly?
Tags: matlab, neural network, deep learning
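For orientation: a cell per operating point is the usual batch-of-sequences layout for sequence networks (in MATLAB this would typically mean a sequenceInputLayer with 3 features; note that, if I recall correctly, MATLAB expects each cell as numFeatures-by-numTimeSteps, so 101x3 cells would need transposing to 3x101). A NumPy sketch of the equivalent array shapes:

```python
import numpy as np

# Four hypothetical operating points, each a 101-step sequence with
# 3 input features and a 1-dimensional temperature target per step
X_cells = [np.random.rand(101, 3) for _ in range(4)]
y_cells = [np.random.rand(101, 1) for _ in range(4)]

X = np.stack(X_cells)   # (batch=4, time=101, features=3)
y = np.stack(y_cells)   # (batch=4, time=101, targets=1)
print(X.shape, y.shape)
```

Sequence layers (LSTM/GRU-style) consume exactly this batch x time x feature layout; a plain feature input layer would instead treat each row independently and lose the per-operating-point sequence structure.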
How to fix Polyspace CodeProver Orange warnings due to + operator
Hello,
I am getting a Polyspace Code Prover Orange Overflow check on the + operator in the attached code.
How can we resolve these checks, given that we are sure the expression cannot produce a result that exceeds the int32 data type?
Tags: codeprover, orange, overflow, + operator
Call to inv() function seems to have (undesired) impact on Thread pool or maxNumCompThreads()
I tried to parallelize parts of my code via parpool("Threads"). I also use maxNumCompThreads to limit the maximum CPU utilization. I use a parfor loop, which works as expected, meaning that the defined number of cores corresponds (more or less) to the total CPU utilization shown in the Windows Task Manager.
However, if a call to the inv() function appears somewhere in the code before the thread pool is started, the CPU utilization of the thread pool is unexpectedly higher, although neither the number of cores nor maxNumCompThreads is changed. This happens reproducibly, until MATLAB is restarted (and inv() is not called).
To obtain the unexpected behavior, the input to inv() must exceed a certain size: after inv(rand(10)) nothing happens, but after inv(rand(1000)) the CPU utilization of the following parfor loop is unexpectedly high.
A simple script to reproduce the described behavior (in MATLAB R2023b):
maxNumCompThreads(12);
nCores = 12;
%% random parallel code
fprintf("Before inv function call:\n");
pp = parpool("Threads", nCores);
for j = 1:3
    tic;
    parfor (i = 1:100)
        A = rand(1000) / rand(1000);
    end
    toc
    pause(2);
end
delete(pp);
%% matrix inverse
Minv = inv(rand(5000));
pause(5);
%% same random parallel code as before --> CPU utilization goes up to 100%
fprintf("\n\nAfter inv function call:\n");
pp = parpool("Threads", nCores);
for j = 1:3
    tic;
    parfor (i = 1:100)
        A = rand(1000) / rand(1000);
    end
    toc
    pause(2);
end
delete(pp);
On a 56-core machine, the first parallel block runs with < 20% CPU utilization, while the second block has ~50%.
I get the following output:
Before inv function call:
Starting parallel pool (parpool) using the 'Threads' profile ...
Connected to parallel pool with 12 workers.
Elapsed time is 5.852217 seconds.
Elapsed time is 2.475874 seconds.
Elapsed time is 2.447292 seconds.
Parallel pool using the 'Threads' profile is shutting down.
After inv function call:
Starting parallel pool (parpool) using the 'Threads' profile ...
Connected to parallel pool with 12 workers.
Elapsed time is 23.414892 seconds.
Elapsed time is 24.350276 seconds.
Elapsed time is 23.297744 seconds.
Parallel pool using the 'Threads' profile is shutting down.
The increased core utilization for thread pools stays present until MATLAB is closed and restarted. With parpool("Processes") I did not observe this behavior.
Am I missing anything here?
Tags: maxnumcompthreads, parpool, threads, inv
Polyspace Orange Scalar Overflow error
Attached is a snippet of the orange Scalar Overflow check reported by Polyspace in the project we are working on.
How can we overcome this orange check, given that we are sure the reported operation's result stays within the sint16 data type?
Tags: codeprover, overflow
Why does my code only save the Phase 1 results to the CSV file, and not the other results?
When I call and run this code, it only saves the Phase 1 results to the CSV file, not the other results.
Tags: matlab code, matlab coder
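The code in question is not attached, but a common cause of this symptom is reopening the output file in write mode for every phase, so each phase overwrites the one before and only the last write survives (or, if Phase 1 writes last or an error stops the loop, only Phase 1 appears). A Python sketch of the safe pattern, writing all phases through a single writer (names are illustrative):

```python
import csv
import io

# Hypothetical per-phase results; in the failing pattern, each phase
# would reopen the file with mode "w" and clobber earlier rows.
phases = {"Phase1": [[1.0, 2.0]], "Phase2": [[3.0, 4.0]], "Phase3": [[5.0, 6.0]]}

buf = io.StringIO()          # stands in for the CSV file on disk
writer = csv.writer(buf)     # one writer for the whole run
for name, rows in phases.items():
    for row in rows:
        writer.writerow([name, *row])

print(buf.getvalue().splitlines())
```

In MATLAB the analogous fix is usually either writing everything in one call or appending per phase (e.g. writematrix with 'WriteMode','append') instead of overwriting.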
Does the NR HDL Downlink Receiver work on real raw data?
Hi, I have run the Downlink Receiver code with simulations (generated from the 5G Waveform Generator in MATLAB) and there are no issues there. However, when I try to run the same code on collected raw data, it has never worked (it says PSS not found). The collected data is confirmed to contain relevant information by other members of my research team, so the problem does not lie with the data itself. Has anyone faced a similar issue?
Do note that both the SSB detection code and the cell search code fail.
Thank you so much for your help!
Here is the spectrogram of the data for reference: [spectrogram image not included]
Here is my code snippet for reference:
loaded_data = load("srsRAN_octoclock_samprate_2304_10MHz_scscommon_15khz_b200_fdd_n71_pci_1_2phones_onevoice.mat");
num_entries = 2e6; % number of entries to consider
rxWaveform = loaded_data.transposedData(1:num_entries);
minChanBW = 5;
Lmax = 100;
FoCoarse = 0;
rxSampleRate = 10e6;
%% Plot the spectrogram of the waveform.
scsSSB = 15;
figure(2); clf;
nfft = round(rxSampleRate/(scsSSB*1e3));
spectrogram(rxWaveform(:,1),ones(nfft,1),0,nfft,'centered',rxSampleRate,'yaxis','MinThreshold',-110);
title('Spectrogram of the Received Waveform (15 kHz)')
%% Detect SSBs
scsSSB = 15;
[pssList,diagnostics] = nrhdlexamples.ssbDetect(rxWaveform,FoCoarse,scsSSB);
% Check if any PSS have been detected
if isempty(pssList)
    disp('No PSS found during SSB detection.');
    return;
end
disp('Detected PSS list:')
disp(struct2table(pssList));
%% Search for Cells
% Define the frequency range endpoints and subcarrier spacing search space
% and call the |nrhdlexamples.cellSearch| function. The function displays
% information on the search progress as it runs.
% The frequency range endpoints must be multiples of half the
% maximum subcarrier spacing.
frequencyRange = [-120 120];
subcarrierSpacings = [15 30];
[ssBlockInfo,ssbGrid] = nrhdlexamples.cellSearch(rxWaveform,frequencyRange,subcarrierSpacings,struct(...
    'DisplayPlots',false,...
    'DisplayCommandWindowOutput',true));
% Check cell search successfully found and demodulated SSB.
if isempty(ssBlockInfo)
    disp('Cell search failed to find or demodulate SSB.');
    return;
end
Tags: 5g, signal processing, ssb
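One thing worth noting: FoCoarse = 0 assumes the capture has no carrier frequency offset, and on real hardware (free-running or imperfectly disciplined oscillators) an uncompensated offset can destroy the PSS correlation peak even when the signal is clearly present in the spectrogram. A generic NumPy sketch of the effect (the reference sequence here is a stand-in, not the actual NR PSS):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10e6                      # sample rate, as in the question
N = 127                        # NR PSS length (time-domain sketch here)

# Stand-in constant-modulus reference sequence (NOT the real NR PSS)
ref = np.exp(1j * 2 * np.pi * rng.random(N))

# "Received" signal: reference embedded at a known sample offset
sig = np.concatenate([np.zeros(500, complex), ref, np.zeros(500, complex)])

def corr_at(x, lag):
    """Magnitude of the matched-filter output at a given lag."""
    return abs(np.vdot(ref, x[lag:lag + N]))

# Apply a 100 kHz uncorrected carrier frequency offset
k = np.arange(len(sig))
cfo = np.exp(1j * 2 * np.pi * 100e3 * k / fs)

print(corr_at(sig, 500))        # full correlation peak
print(corr_at(sig * cfo, 500))  # much smaller: CFO smears the peak
```

Beyond CFO, common culprits with captured data include a sample-rate mismatch between the capture and what the receiver assumes, and the SSB not actually being centered where the search expects it; sweeping a coarse frequency offset instead of fixing FoCoarse = 0 may be worth trying.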
MATLAB file on opening showing weird symbols or letters
I had saved a .m file, and today when I tried opening it, it just shows some symbols and letters like:
MATLAB 5.0 MAT-file, Platform: PCWIN64, Created on: Fri Aug 9 16:22:11 2024
‰ì=³÷<öã{ÏÌdÙ„·ý^Ÿë÷=×õ<ës]çÜÎýñ|¾vÐÑѱÐÑ1ýï‘XLÿ»#nŒÄb%Ãÿ=ðÿߦ£ÿ¿×ŠwM
Tags: file opening
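The header shown ("MATLAB 5.0 MAT-file") indicates the file on disk is actually a binary MAT-file that ended up with a .m extension, not a text script. A Python sketch of checking the first bytes (the simulated contents stand in for the user's file):

```python
import io

def looks_like_matfile(stream):
    """A Level 5 MAT-file begins with a 116-byte descriptive text header
    that starts with the string 'MATLAB'. A .m script is plain text."""
    return stream.read(6) == b"MATLAB"

# Simulated file contents standing in for the file on disk
fake = io.BytesIO(b"MATLAB 5.0 MAT-file, Platform: PCWIN64, ...")
print(looks_like_matfile(fake))
```

If the check is positive, renaming the file to .mat and using load may recover the saved variables; the original script text, however, is likely gone (presumably overwritten by a save call given the same name).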
Temperature increase after increasing flowrate
Hello community,
I am modelling a liquid cooling network, which I will parameterize based on a real liquid cooling network that I built.
It consists of a pump, radiator, reservoir, and a cooler element, besides the tubing and all additional sensors.
In this particular scenario, the pump is turned on after 10 minutes and reaches its full flow rate after 10 seconds; from there it works constantly until stopped.
The cooler is modelled as a tube element hooked up to a thermal network which applies 10 W of power. The simple tubes before and after it should model the natural convection that I see in the real measurements while powering the heater with the pump turned off.
When the pump is turned on at 600 s, we see a temperature increase at the sensor located after the cooler, which I cannot see in the real measurements; the real data rather looks like the decreasing cooler surface temperature.
I would like to understand how this behavior arises and, if possible, how to avoid it. I can imagine that, after the liquid has been heated for that time, the mass of liquid inside the cooler virtually passes the temperature sensor and takes some time to cool down to the average coolant temperature. But as a novice in Simscape modelling, I am having a hard time understanding the mechanisms behind that.
I greatly appreciate any suggestions and explanations; looking forward to them. Thanks much!
Tags: simscape, simulink, scope, model, parameter
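The asker's intuition (a heated slug of liquid being flushed past the downstream sensor at pump start) can be reproduced with a toy well-mixed-volume energy balance; all parameter values below are illustrative, not taken from the actual model:

```python
# Toy model: a well-mixed cooler volume is heated while flow is off,
# then flushed with cool liquid once the pump starts at t = 600 s.
V = 0.1e-3          # cooler internal volume, m^3 (illustrative)
Q = 2.0e-5          # pump flow rate once on, m^3/s (illustrative)
rho_cp = 4.18e6     # volumetric heat capacity of water, J/(m^3*K)
P = 10.0            # heater power, W (as in the question)
T_in = 25.0         # coolant temperature entering the cooler, degC
dt, t_on = 0.1, 600.0

T = T_in
trace = []
for k in range(int(1200 / dt)):
    t = k * dt
    q = Q if t >= t_on else 0.0
    # Energy balance on the mixed volume: heating plus advective exchange
    T += (P + q * rho_cp * (T_in - T)) / (V * rho_cp) * dt
    trace.append(T)

# A sensor downstream of the cooler first sees the hot accumulated slug
# right after pump start, then an exponential decay (time constant V/Q)
# toward the steady value T_in + P/(Q*rho_cp).
print(max(trace), trace[-1])
```

This reproduces the puzzling transient: before pump start the downstream sensor sits near ambient, then briefly reads the hot cooler contents as they pass, then settles. Whether this matches the real rig depends on how well-mixed the real cooler volume is; Simscape's lumped tube elements behave like this mixed volume, whereas a real cooler may stratify.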
ousterFileReader give no correct result
The ousterFileReader function does not work correctly in R2023b with the lidar's firmware 2.4: there is a negative shift in the JSON, and the intensity values are shifted within a row... Bug.
Tags: lidar toolbox, ouster lidar, matlab
Unrecognized method, property, or field for generated protobuf message
Hi all,
We have a project where we communicate with a device using gRPC. On the computer, we run Python in a virtual environment. We load that environment in MATLAB using pyenv. We use MATLAB R2023b (23.2.0.2599560, Update 8) and Python 3.10.11.
Connecting to the device and querying some device information works in both Python and MATLAB. However, for the past 2-3 months, we have been unable to access members of protobuf messages. Below is an example of the code we execute in Python and the equivalent code executed in MATLAB:
Python:
% C:workdatakingfisher-py.venvScriptspython.exe "…"
import kingfisher_py.lib as kgfLib
scn = kgfLib.device.Scanner('10.10.1.1', '8081')
device_info = scn.device_info.get_info()
device_info.sw_rev
items {
key: "mcu"
value: "v1.0.1"
}
items {
key: "kingfisher"
value: "5.2.0-cam-cal-third-party-rc.1-5-gd6f7161"
}
% response continues…
MATLAB:
>> kgfLib = py.importlib.import_module('kingfisher_py.lib');
>> scn = kgfLib.device.Scanner('10.10.1.1', '8081');
>> deviceInfo = scn.device_info.get_info();
>> deviceInfo.HasField('sw_rev')
ans =
logical
1
>> deviceInfo.sw_rev
Unrecognized method, property, or field 'sw_rev' for class 'py.kf.api.messages.system_pb2.GetDeviceInfoResponse'.
>> deviceInfo
deviceInfo =
Python GetDeviceInfoResponse with properties:
DESCRIPTOR: [1×1 py.google._upb._message.Descriptor]
manufact_rev {
device_name: "BLK360-2060047"
serial_number: "2060047"
}
sw_rev {
items {
key: "mcu"
value: "v1.0.1"
}
items {
key: "kingfisher"
value: "5.2.0-cam-cal-third-party-rc.1-5-gd6f7161"
}
% response continues…
Note that the same virtual environment is active in both cases. MATLAB even says that the field sw_rev exists, but still cannot access it. We also checked different versions and combinations of MATLAB and Python, specifically:
MATLAB 9.10.0.2198249 (R2021a) Update 8 + Python 3.8.10
MATLAB 9.13.0.2193358 (R2022b) Update 5 + Python 3.8.10
MATLAB 9.13.0.2193358 (R2022b) Update 5 + Python 3.10.11
MATLAB 23.2.0.2599560 (R2023b) Update 8 + Python 3.10.11
The behaviour is the same with all versions. As the whole communication with the device is set up using protobuf, there is not much we can do with the device at this point, as we run into this problem all across our MATLAB code base.
Is this a known issue, e.g. with a newer protobuf version? As mentioned, we did not have any issues like this until 2-3 months ago.
Hope to get some help or at least an explanation. Thank you 🙂
python, grpc, protobuf MATLAB Answers — New Questions
How to set Categories consistent when using “signalMask” and “plotsigroi” ?
I want to plot using the code below with the attached MAT-file loaded.
For example, when m1 = 200 I obtain the attached figure.
However, the issue is that the categories are not in the same order in the two subplots.
I want the categories to appear in the same order, e.g., "n/a", "N", "V", "A", in both subplots.
I would appreciate your help with how to do this.
Thank you,
%%
load qNa.mat
m1 = 200;
figure;
M = signalMask(tl{m1}); subplot(2,1,1);
p1 = plotsigroi(M, G2{m1});
ls = p1.Children;
for i2 = 1:size(ls,1)
    ls(i2).LineWidth = 2.0;
end
srt = sprintf('N only w/GT, data-%d ', m1);
title(srt)
M = signalMask(pl{m1}); subplot(2,1,2);
p2 = plotsigroi(M, G2{m1});
ls = p2.Children;
for i2 = 1:size(ls,1)
    ls(i2).LineWidth = 2.0;
end
srt2 = sprintf('N only w/Est, data-%d', m1);
title(srt2)
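One approach that may help, sketched under the assumption that the label tables (tl{m1}, pl{m1}) store their labels in a variable named Value: signalMask derives its category set, and hence the display order, from the categories of the label data, so forcing both label sets to the same explicit categorical order before building the masks should make the subplots consistent.

```matlab
% Hedged sketch: impose an identical, explicit category order on both label
% sets before creating the masks. The variable name 'Value' is an assumption
% about the layout of the ROI tables.
cats = ["n/a" "N" "V" "A"];                        % desired display order
tl{m1}.Value = categorical(tl{m1}.Value, cats);    % ground-truth labels
pl{m1}.Value = categorical(pl{m1}.Value, cats);    % estimated labels
M1 = signalMask(tl{m1});
M2 = signalMask(pl{m1});
```

Because both masks now carry the same category list in the same order, plotsigroi should assign colors and legend entries consistently across the two subplots.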
plotsigroi MATLAB Answers — New Questions
How to get metrics during yolov4 detector training process and how to solve data overfitting?
When I use a YOLOv4 detector for a single-class object detection task, I can only get the training loss and validation loss after the training process. I tried to use "options" to request metrics, but it raised an error: "Error using nnet.cnn.TrainingOptions/validateDataFormatsAndMetricsAreEmpty: trainYOLOv4ObjectDetector does not support the 'Metrics' training option". My code and the error are below:
Then how can I get metrics during the YOLOv4 detector training process?
----------------------------------------------------------------------------------------------
Meanwhile, after fine-tuning the options parameters I have trained the detector many times; the final training loss stays around 0.1 while the validation loss only reaches 2.0. How can I avoid this overfitting? I've tried many methods; any suggestions would be very helpful.
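As the error says, trainYOLOv4ObjectDetector does not report detection metrics during training, but they can be computed after (or between) training runs on held-out data. A hedged sketch, assuming a trained detector and a validation datastore valData in the usual {image, boxes, labels} format (evaluateObjectDetection requires R2023b or later):

```matlab
% Hedged sketch: compute detection metrics on held-out data after training.
% 'detector' and 'valData' are assumed to exist from the training script.
detectionResults = detect(detector, valData, 'MiniBatchSize', 8);
metrics = evaluateObjectDetection(detectionResults, valData);
disp(metrics.ClassMetrics)   % per-class average precision and related metrics
```

For the train/validation gap, the usual levers to experiment with are stronger data augmentation, a smaller backbone, and more (or more varied) training data; which one applies depends on the dataset.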
yolov4 detector, metrics, overfitting MATLAB Answers — New Questions
Simscape 2P Condenser: Fluid Temp lower than ambient air
I'm attempting to simulate a thermosiphon in Simulink using a 2P Condenser operating in ambient, moist air at 20 C. I've had a lot of problems with the simulation's initial values converging, but I think I've been able to knock those out by adjusting the downstream reservoir properties to atmospheric-temperature and -pressure water.
The problem I am having is that the outlet temperature of the condenser somehow comes back lower than the ambient temperature of the cooling air flowing in, consistently by about 1 or 2 degrees. For 80 C water vapour (quality 1) entering the heat exchanger, the temperature of the liquid (quality 0) leaving the condenser is 18.92 C. Why is this occurring? Clearly it isn't physical; what in the model is breaking?
I have attached the Simulink file and a photo of the T_out scope and the model as it appears after the simulation has completed.
I am not very familiar with Simulink, so please be thorough with your explanations. Thanks!
simulink, simscape MATLAB Answers — New Questions
Error using rlDeterministicActorRepresentation: Observation names must match the names of the deep neural network's input layers.
% Create Environment
env = MYEnv();
% Define State and Action Specifications
stateSpec = env.getObservationInfo();
actionSpec = env.getActionInfo();
stateName = stateSpec.Name;
% Create Actor Network
actorNetwork = [
    featureInputLayer(stateSpec.Dimension(1), 'Name', stateName)
    fullyConnectedLayer(400, 'Name', 'ActorHiddenLayer1')
    reluLayer('Name', 'ActorReLU1')
    fullyConnectedLayer(300, 'Name', 'ActorHiddenLayer2')
    reluLayer('Name', 'ActorReLU2')
    fullyConnectedLayer(actionSpec.Dimension(1), 'Name', 'ActorOutputLayer')
    tanhLayer('Name', 'ActorTanh')
    ];
% Create Critic Network
criticNetwork = [
    featureInputLayer(stateSpec.Dimension(1), 'Name', stateName)
    fullyConnectedLayer(400, 'Name', 'CriticHiddenLayer1')
    reluLayer('Name', 'CriticReLU1')
    fullyConnectedLayer(300, 'Name', 'CriticHiddenLayer2')
    reluLayer('Name', 'CriticReLU2')
    fullyConnectedLayer(1, 'Name', 'CriticOutputLayer')
    ];
% Create Actor Representation
actorOpts = rlRepresentationOptions('LearnRate', actorLearningRate);
actor = rlDeterministicActorRepresentation(actorNetwork, stateSpec, actionSpec, actorOpts);
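A hedged note on the error itself: rlDeterministicActorRepresentation needs to know which network input layer carries the observation, either by matching stateSpec.Name against the input layer names or via the explicit 'Observation' (and 'Action') name-value arguments. If stateSpec.Name is empty or is not a valid layer name, the implicit match fails with exactly this message. A sketch of the explicit form; the layer name 'state' is an assumption for illustration:

```matlab
% Hedged sketch: give the observation input layer an explicit, known name
% and pass it to the representation via the 'Observation' argument.
obsName = 'state';
% ...use featureInputLayer(stateSpec.Dimension(1), 'Name', obsName) when
% defining actorNetwork, then:
actor = rlDeterministicActorRepresentation(actorNetwork, stateSpec, actionSpec, ...
    'Observation', {obsName}, actorOpts);
```

Checking what stateSpec.Name actually contains (it may be empty for a custom environment) is a quick way to confirm this is the cause.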
observation names, rldeterministicactorrepresentation MATLAB Answers — New Questions
How to export AFM image to a data matrix?
I have exported a .mi file of AFM data to .jpeg format using the Gwyddion software. I have attached the image. In this image, each pixel encodes a z-axis (surface topography) value, color-coded. The maximum height of the particles is 18 nm and the minimum is 0. I would like to export this image into a 3D data matrix containing the X, Y, and Z-axis values. Can anyone help me find a way? afm, image processing, image analysis MATLAB Answers — New Questions
Problem within App Designer when selecting Label and UITable (Multiple Selection)
When selecting a Label and then a UITable (multiple selection) in App Designer (R2024a), Table selection does not work properly until a full App Designer restart (not MATLAB). Closing and re-opening the project does not fix it.
More specifically, the Table can be selected and moved, but the Properties section remains on the previously selected component.
It affects all UITable components, not only the one used for the multiple selection.
This can be reproduced with a new app containing a UITable and a Label. See screenshots.
app designer, uitable, label, multiple selection, bug MATLAB Answers — New Questions