Tag Archives: matlab
How to make a binary image grayscale
Dear All,
I created the image; my code is below.
Z = zeros(99); % create square matrix of zeroes
origin = [round((size(Z,2)-1)/2+1) round((size(Z,1)-1)/2+1)]; % "center" of the matrix
radius = round(sqrt(numel(Z)/(2*pi))); % radius for a circle that fills half the area of the matrix
[xx,yy] = meshgrid((1:size(Z,2))-origin(1),(1:size(Z,1))-origin(2)); % create x and y grid
Z(sqrt(xx.^2 + yy.^2) <= radius) = 1; % set points inside the radius equal to one
imshow(Z); % show the "image"
Right now my pixel values are just 1 and 0 (binary).
Do you know how to make the pixel values grayscale? I mean, I want the pixel values to descend in a gradient from the center to the edge.
image analysis, image processing, image acquisition, image segmentation, digital image processing MATLAB Answers — New Questions
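One possible approach (a sketch, not from the original thread): instead of thresholding the distance map into 0/1, use the distance itself to build a linear radial ramp, so intensity is 1 at the center and falls to 0 at the circle's edge:

```matlab
Z = zeros(99);                                  % square matrix of zeros
origin = [round((size(Z,2)-1)/2+1) round((size(Z,1)-1)/2+1)];  % "center" of the matrix
radius = round(sqrt(numel(Z)/(2*pi)));          % same radius as before
[xx,yy] = meshgrid((1:size(Z,2))-origin(1),(1:size(Z,1))-origin(2));
d = sqrt(xx.^2 + yy.^2);                        % distance of each pixel from the center
Z = max(0, 1 - d/radius);                       % 1 at the center, descending to 0 at the edge
imshow(Z);                                      % grayscale radial gradient
```

Any monotone function of `d` (e.g. a Gaussian `exp(-(d/radius).^2)`) gives a different falloff profile.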
How to change the graph of the frequency domain
Dear all,
I have code that converts from the spatial domain to the frequency domain; it is below.
Z = zeros(99); % create square matrix of zeroes
origin = [round((size(Z,2)-1)/2+1) round((size(Z,1)-1)/2+1)]; % "center" of the matrix
radius = round(sqrt(numel(Z)/(2*pi))); % radius for a circle that fills half the area of the matrix
[xx,yy] = meshgrid((1:size(Z,2))-origin(1),(1:size(Z,1))-origin(2)); % create x and y grid
Z(sqrt(xx.^2 + yy.^2) <= radius) = 1; % set points inside the radius equal to one
Z = im2double(Z);
imshow(Z); % show the "image"
%spatial domain
figure, imtool(Z)
%Frequency Domain
j = fftshift(fft2(Z)); % centered 2-D FFT
figure, imshow(abs(j),[]) % j is complex, so display its magnitude
j1 = log(1+abs(j));
figure ,imshow(j1)
j2 = bar(j1);
My frequency-domain graph currently looks like this.
How can I make the graph look like the one below?
image processing, image acquisition, image analysis, image segmentation, digital image processing MATLAB Answers — New Questions
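A filled surface plot of the log-magnitude spectrum usually looks closer to textbook frequency-domain figures than `bar`. A minimal sketch, reusing `Z` from the code above:

```matlab
F = fftshift(fft2(Z));            % centered 2-D FFT
mag = log(1 + abs(F));            % log-scaled magnitude spectrum
figure
surf(mag, 'EdgeColor', 'none');   % 3-D surface of the spectrum
colormap jet; colorbar; view(3);
```

`mesh(mag)` or `imagesc(mag)` are alternatives depending on the style of the target figure.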
How to detect the center of an object in an image and then crop the original image, using the original image and a green-outlined region?
Hello.
I need to detect an object (a suspicious area in an image), find its centroid, and then crop the image to 256×256 pixels using the centroid as the centre of the bounding box.
img = imread('1_245_original.jpg');
% colorspace
jmg = rgb2gray(img);
jm = mat2gray(jmg(:,:,1));
jm = imcomplement(jm);
% thresh
bw = imbinarize(jm, graythresh(jm));
% filter noise
bw = imopen(bw, strel('disk', 5));
bw = imfill(bw, 'holes');
% label every target
[L,num] = bwlabel(bw);
stats = regionprops(L);
figure; imshow(img, []);
for i = 1 : num
    % get rect
    recti = stats(i).BoundingBox;
    % get cen
    ceni = stats(i).Centroid;
    % crop image
    imi = imcrop(img, round(recti));
    ims{i} = imi;
    % rect and cen
    hold on; rectangle('Position', recti, 'EdgeColor', 'c', 'LineWidth', 2);
    plot(ceni(1), ceni(2), 'yp', 'MarkerFaceColor', 'y', 'MarkerSize', 20);
end
I have tried this code, but the cropped area is not 256×256 pixels. I also want to save each crop to a folder after detecting the bounding box. Where should I put that code? Thanks.
centroid, crop, automatic, duplicate post MATLAB Answers — New Questions
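A possible sketch (the folder name and file-naming scheme are assumptions, not from the thread): ignore each region's own bounding box and instead crop a fixed 256×256 window centred on its centroid, then save with `imwrite`. This can replace the `imcrop` inside the existing loop once `stats` is available:

```matlab
outDir = 'crops';                            % assumed output folder
if ~exist(outDir, 'dir'), mkdir(outDir); end
for i = 1:num
    c = round(stats(i).Centroid);            % [x y] centroid in pixel coordinates
    rect = [c(1)-128, c(2)-128, 255, 255];   % width/height 255 -> imcrop returns 256 pixels
    crop = imcrop(img, rect);                % note: the crop is clipped at image borders
    imwrite(crop, fullfile(outDir, sprintf('crop_%d.png', i)));
end
```

If a centroid lies within 128 pixels of the border, the crop comes out smaller than 256×256; padding the image first (e.g. with `padarray`) avoids that.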
How to reconstruct SPECT image from sinogram using iterative technique?
I have raw sinogram data in a DICOM file and need to reconstruct a SPECT image. I tried reconstructing the SPECT image from the sinogram using the inverse Radon transform (iradon) in MATLAB, but this technique adds artifacts to the image. I would like to reconstruct with an iterative technique instead. Could anyone help me with code for iterative reconstruction?
Please advise me. Thank you.
iterative reconstruction technique, spect image, sinogram MATLAB Answers — New Questions
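One standard iterative method is MLEM (maximum-likelihood expectation maximization). A hedged sketch using `radon`/`iradon` as forward and back projectors; the Shepp-Logan phantom stands in for the real DICOM sinogram, and the angle set is an assumption:

```matlab
theta = 0:179;                                  % assumed projection angles
trueImg = phantom(128);                         % stand-in for the real object
sino = radon(trueImg, theta);                   % replace with your measured sinogram
img = ones(128);                                % uniform initial estimate
% sensitivity (normalization) image: unfiltered back projection of all-ones data
sens = iradon(ones(size(sino)), theta, 'linear', 'none', 1, 128);
for it = 1:20
    fp = radon(img, theta);                     % forward-project current estimate
    ratio = sino ./ max(fp, eps);               % measured / estimated projections
    bp = iradon(ratio, theta, 'linear', 'none', 1, 128);  % unfiltered back projection
    img = img .* bp ./ max(sens, eps);          % multiplicative MLEM update
end
imshow(img, []);
```

Real SPECT reconstruction also models attenuation and the collimator response, which this sketch omits.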
contour plot required for this code
K = 0.5; M = 0.5; p1 = 0.01; p2 = 0.01; p3 = 0.0; Pr = 2; Ec = 0.05; Q = 0.05; D = 10; b = 0.05; Bi = 0.15;
p2v = linspace(0,0.2,101);Mv = [1 3 5];
for k = 1:numel(Mv)
M = Mv(k);
for i = 1:length(p2v)
p2 = p2v(i);
Cpf = 4179; rhof = 997; kf = 0.613; sgf = 5.5e-5;
Cps1 = 765; rhos1 = 3970; ks1 = 40; sis1 = 1e-10;
Cps2 = 5315; rhos2 = 6320; ks2 = 76.5; sis2 = 2.7e-8;
Cps3 = 686.2; rhos3 = 4250; ks3 = 8.9538; sis3 = 6.27e-5;
H1 = ((1-p1)*(1-p2)*(1-p3))^-2.5;
H2 = (1-p3)*( (1-p2)*( 1-p1 + p1*rhos1/rhof ) + p2*rhos2/rhof ) + p3*rhos3/rhof;
H3 = (1-p3)*( (1-p2)*(1-p1 + p1*rhos1*Cps1/(rhof*Cpf)) + p2*rhos2*Cps2/(rhof*Cpf) ) + p3*rhos3*Cps3/(rhof*Cpf);
C2 = (sis1+2*sgf-2*p1*(sgf-sis1))/(sis1+2*sgf+p1*(sgf-sis1));
C3 = (sis2+2*C2-2*p2*(C2-sis2))/(sis2+2*C2+p2*(C2-sis2));
A3 = (sis3+2*C3-2*p3*(C3-sis3))/(sis3+2*C3+p3*(C3-sis3));
B1 = (ks1+2*kf-2*p1*(kf-ks1))/(ks1+2*kf+p1*(kf-ks1));
B2 = (ks2+2*B1-2*p2*(B1-ks2))/(ks2+2*B1+p2*(B1-ks2));
H4 = (ks3+2*B2-2*p3*(B2-ks3))/(ks3+2*B2+p3*(B2-ks3));
ODE = @(x,y)[y(2); y(3); y(4); ...
    M*(x+K).^2*(A3/H1).*(y(2) + (x+K).*y(3)) - 2*y(4)./(x+K) + y(3)./(x+K).^2 - y(2)./(x+K).^3 ...
    - (H2/H1)*K*((x+K).^2.*(y(1)*y(4) - y(2)*y(3))) - y(1)*y(2) + (x+K).*(y(1)*y(3)-y(2)^2); ...
    y(6); ...
    -(Pr/H4)*( Q*(y(5) + exp(-D*x)) + H3*K*y(1)*y(6) + M*Ec*A3*y(2)^2 ) - y(6)];
BC = @(ya,yb)[ya(1); ya(2)-1-b*(ya(3)-ya(2)/K); ya(6)-Bi*(ya(5)-1); yb([2;3;5])];
xa = 0; xb = 6; x = linspace(xa,xb,101);
solinit = bvpinit(x,[0 1 0 1 0 1]);
sol = bvp5c(ODE,BC,solinit);
S = deval(sol,x);
[X,Y] = meshgrid(p2,S(1,:)); psi(i,:) = X.*Y;
end
end
figure(42)
contourf(X,Y,psi,50,'ko','ShowText','on','LineWidth',1.5), hold on
ax = gca; ax.XColor = 'blue'; ax.YColor = 'blue';
ax.XAxis.FontSize = 12; ax.YAxis.FontSize = 12; ax.FontWeight = 'bold';
xlabel('\bfp_2','Color','blue','FontSize',14);
ylabel('\bf\psi','Color','blue','FontSize',14); % colormap hot
%% I need a contourf plot of p2 versus psi for different values of M = [1 3 5].
contour plot required for this code MATLAB Answers — New Questions
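A hedged sketch of one way to show all three M cases: store one psi matrix per value of M (here a hypothetical 3-D array `psiAll`, filled inside the outer `k` loop as `psiAll(:,:,k) = psi`), then draw one filled-contour panel per M:

```matlab
Mv = [1 3 5];
figure(43)
for k = 1:numel(Mv)
    subplot(1,3,k)
    contourf(X, Y, psiAll(:,:,k), 50)   % psiAll is assumed to hold psi for each M
    title(sprintf('M = %g', Mv(k)))
    xlabel('p_2'), ylabel('\psi')
end
```

As written, the original outer loop overwrites `psi` on every pass, so only the M = 5 result survives; storing per-M slices fixes that.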
How to open VR Sink simulation,with a button in app designer ?
Hello everyone, I have a question.
When I click on the VR Sink block, I can see its block parameter properties.
I need to open the VR Sink simulation from App Designer. How can I do it?
And can I change the "open viewer automatically" option from App Designer with code?
simulink, vr MATLAB Answers — New Questions
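A hedged sketch for the App Designer callback (the model and block names, and the viewer parameter name, are assumptions; inspect your block's actual parameter list first):

```matlab
% inside e.g. a button-pushed callback
mdl = 'myVRModel';                    % assumed model name
blk = [mdl '/VR Sink'];               % assumed block path
load_system(mdl);
get_param(blk, 'DialogParameters')    % lists the block's settable parameters
% once the right parameter name is known, it can be set programmatically:
% set_param(blk, '<ViewerParamName>', 'on');   % hypothetical parameter name
sim(mdl);                             % run the simulation from the app
```

`open_system(blk)` opens the block dialog itself if that is what is needed.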
Why does Matlab not recognize fieldnames function?
Hi,
In the past I used the 'fieldnames' function to retrieve field names, but suddenly it's not working anymore.
Strangely, the function works as console input before I start my script, but as soon as the script starts, the function is no longer recognized. It also works in other scripts. See the code below.
Thank you in advance!
if length(other_data.digitalMap) == 1 || length(results.digitalMap) == 1
    fieldnames = fieldnames(results.digitalMap);
    for y = 1:length(other_data.digitalMap)
        for z = 1:length(fieldnames)-1
            field_name = fieldnames{z};
            results.digitalMap(y).(field_name) = other_data.digitalMap(y).(field_name);
        end
    end
end
%Strangely, the function works as console input before starting my script:
% Create a structure with some fields
myStruct.name = 'John';
myStruct.age = 30;
myStruct.occupation = 'Engineer';
% Get the field names of the structure
fields = fieldnames(myStruct);
matlab, help, toolbox, function MATLAB Answers — New Questions
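The likely cause is the line `fieldnames = fieldnames(results.digitalMap)`: it creates a *variable* named `fieldnames` that shadows the built-in function for the rest of the script, so later calls fail. Renaming the variable avoids the clash; a minimal sketch:

```matlab
s.a = 1; s.b = 2;            % stand-in for results.digitalMap
fn = fieldnames(s);          % different variable name, no shadowing
for z = 1:numel(fn)
    disp(fn{z})              % a, then b
end
clear fieldnames             % removes any shadowing variable already created
```

`which fieldnames` shows whether the name currently resolves to a variable or to the built-in.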
C-RNN dual output regression
Hi. I am writing C-RNN regression training code with a single matrix input and two scalar outputs. The file "paddedData2.mat" contains a variable paddedData stored as an N×3 cell array, as shown in the attached image. The training input is the third column of paddedData, where each entry is a [440 5] double, and the regression targets are the values in the first and second columns. From this I plan to create features of size [436 1] using two [3 3] convolution kernels and train with an LSTM. The code is below, but it fails with the error: "trainnet (line 46): Error forming mini-batch of targets for network output "fc_1". Data interpreted with format "BC". To specify a different format use the TargetDataFormats option."
How can I modify the code?
clc;
clear all;
load("paddedData2.mat","-mat")
XTrain = paddedData(:,3);
YTrain1 = cell2mat(paddedData(:,1));
YTrain2 = cell2mat(paddedData(:,2));
dsX = arrayDatastore(XTrain, 'OutputType', 'same');
dsY1 = arrayDatastore(YTrain1, 'OutputType', 'same');
dsY2 = arrayDatastore(YTrain2, 'OutputType', 'same');
net = dlnetwork;
tempNet = [
    sequenceInputLayer([440 5 1],"Name","sequenceinput")
    convolution2dLayer([3 3],8,"Name","conv_A1")
    batchNormalizationLayer("Name","batchnorm_A1")
    reluLayer("Name","relu_A1")
    convolution2dLayer([3 3],8,"Name","conv_2")
    batchNormalizationLayer("Name","batchnorm_2")
    reluLayer("Name","relu_2")
    flattenLayer("Name","flatten")
    fullyConnectedLayer(100,"Name","fc")
    lstmLayer(100,"Name","lstm","OutputMode","last")];
net = addLayers(net,tempNet);
tempNet = fullyConnectedLayer(1,"Name","fc_1");
net = addLayers(net,tempNet);
tempNet = fullyConnectedLayer(1,"Name","fc_2");
net = addLayers(net,tempNet);
clear tempNet;
net = connectLayers(net,"lstm","fc_1");
net = connectLayers(net,"lstm","fc_2");
net = initialize(net);
options = trainingOptions('adam', ...
    'MaxEpochs', 2000, ...
    'MiniBatchSize', 100, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
lossFcn = @(Y1,Y2,dsY1,dsY2) crossentropy(Y1,dsY1) + 0.1*mse(Y2,dsY2);
net = trainnet(dsX, net, lossFcn, options);
deep learning, regression, multiple output MATLAB Answers — New Questions
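A hedged guess at the fix (untested against this data): `trainnet` needs the targets in the same datastore as the inputs, and with two outputs the loss function receives `(Y1, Y2, T1, T2)`. Combining the three datastores, and using `mse` for both regression heads (crossentropy is for classification), may resolve the target-format error:

```matlab
dsTrain = combine(dsX, dsY1, dsY2);                   % each read yields {X, T1, T2}
lossFcn = @(Y1,Y2,T1,T2) mse(Y1,T1) + 0.1*mse(Y2,T2); % both heads are regression
net = trainnet(dsTrain, net, lossFcn, options);
```

If the format complaint persists, the `TargetDataFormats` option named in the error message is where to declare how each target should be interpreted.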
Import into DIgSILENT a DLL generated with Embedded Coder in Simulink
Good morning,
I read that MathWorks has developed a solution specifically for PowerFactory regarding DLL import.
Do you have any guidelines?
Could you help me?
Thank you for your time.
Regards,
Andrea
dll, digsilent, embedded coder, software interface MATLAB Answers — New Questions
Can I call a Simulink generated DLL file in a Simulink model (Matlab 2018b)?
I have created a .dll file (see fig: PID_win64.dll) and associated headers (see fig: in PID_ert_shrlib_rtw) with the aid of Simulink (see fig: PID.slx). I now want to call it in a Simulink model (see fig: test_dll.slx) where I will test it. I have read in older posts that I have to use an S-Function block. Please let me know if this is the proper route to follow and, if so, the exact steps (where should I put the name of the DLL, and which headers)?
The final aim is to import the created .dll file into DIgSILENT PowerFactory. Any further information on this would be highly appreciated.
dll, s-function, 2018b, simulink, digsilent, powerfactory, import, call MATLAB Answers — New Questions
Generate 3D model from a 2D image
Hi friends,
I would like to generate a 3D model from a 2D image, but I don't have a clue how.
Following some good instructions, I have successfully generated a binary image with my defined masks. The reason I want a binary image is that the 3D printer only accepts binary slices. I would like to extrude my pixels into a 3D model (all 1-valued pixels should get the same height; 0-valued pixels need no height, or a negligible one) and slice the model with my printer software. The first section below generates a good binary image. I know there are other ways to reconstruct a 3D model from a 2D image, but I want to do it in MATLAB.
3d plots, grayscale, binary image, digital image processing MATLAB Answers — New Questions
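A hedged sketch of the extrusion step (the height value and the stand-in image are assumptions; writing an actual STL for the printer would need additional tooling, e.g. from the File Exchange):

```matlab
bw = imbinarize(im2gray(imread('coins.png')));  % stand-in for your binary image
h = 5;                       % assumed extrusion height for 1-pixels
height = double(bw) * h;     % 1-pixels get height h, 0-pixels stay at 0
figure
surf(height, 'EdgeColor', 'none');  % view the extruded model in 3-D
axis tight, view(3);
```

Since the printer slices the model itself, a stack of identical copies of `bw` (one per slice up to the desired height) may be all the slicer actually needs.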
Error applying Differential Evolution
When applying Differential Evolution, this error appeared and I was unable to resolve it. Does anyone know how to solve it, please?
matlab, differential evolution MATLAB Answers — New Questions
Modelling anisotropic materials in PDE Toolbox
Hi I’m using the PDE toolbox (unified workflow) to model electromagnetics (DC conduction). I’m working with a material that is anisotropic in its conductivity, ie. it has a conductivity of = 0.6 S/m in the x-direction and = 0.087 S/m in the y-direction. Right now it seems I can only set isotropic conductivity using the code below, where the conductivity is set to 0.6 S/m in all directions:
model.MaterialProperties(1) = materialProperties(ElectricalConductivity=0.6,RelativePermittivity=4.96e4);
I know you can use function handles to alter the way that the material property is applied spatially, but how would someone do this for properties that depend on the direction (x or y)?
Thanks for the help. anisotropic, material property, pde toolbox MATLAB Answers — New Questions
How can I make the layout in the attached image with tiledlayout
I am able to get plot 1 and plot 2 in, but not 3, 4, and 5. I can also get 3, 4, and 5 in without plots 1 and 2, but that is not what I want, per the attached image. tiledlayout MATLAB Answers — New Questions
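Since the attached image is not available here, the sketch below assumes the target layout is two wide plots on top and three narrower ones below. A 2-by-6 grid with tile spans gives mixed widths in one tiledlayout:

```matlab
% Sketch: mixed-width layout with tile spans (layout assumed from context).
t = tiledlayout(2, 6, 'TileSpacing', 'compact');
nexttile([1 3]); plot(rand(10,1)); title('1')   % top row: two tiles, span 3 each
nexttile([1 3]); plot(rand(10,1)); title('2')
nexttile([1 2]); plot(rand(10,1)); title('3')   % bottom row: three tiles, span 2 each
nexttile([1 2]); plot(rand(10,1)); title('4')
nexttile([1 2]); plot(rand(10,1)); title('5')
```

Choosing a grid whose column count is a common multiple of the per-row plot counts (here 6 = lcm(2,3)) is what makes the spans line up.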
How to fix java.lang.ClassNotFoundException: com.mathworks.toolbox.javabuilder.MWException?
I have a RESTful API project using Spring Boot and Maven. I’m also doing processing in MATLAB via a jar file. I packaged the project as a .jar file, but running it with java -jar demo.jar makes it close right after starting, with some errors. However, since it is a RESTful API, it needs to stay running so that I can access the APIs.
In VS Code Java: Java 11
Matlab: R2018a, JRE 1.8
Errors that are related to Matlab also, after java -jar demo.jar:
Error starting ApplicationContext. To display the conditions report re-run your application with ‘debug’ enabled.
2023-10-02 11:21:51.075 ERROR 11484 — [ main] o.s.boot.SpringApplication : Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name ‘beso3D_PD’: Lookup method resolution failed; nested exception is java.lang.IllegalStateException: Failed to introspect Class [peridynamics.Beso3D_PD] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@1ed6993a]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.determineCandidateConstructors(AutowiredAnnotationBeanPostProcessor.java:298) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.determineConstructorsFromBeanPostProcessors(AbstractAutowireCapableBeanFactory.java:1302) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1219) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:582) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:208) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:955) ~[spring-beans-5.3.30.jar!/:5.3.30]
at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:921) ~[spring-context-5.3.30.jar!/:5.3.30]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:583) ~[spring-context-5.3.30.jar!/:5.3.30]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:731) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1303) ~[spring-boot-2.7.16.jar!/:2.7.16]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1292) ~[spring-boot-2.7.16.jar!/:2.7.16]
at peridynamics.demoApp.main(demoApp.java:11) ~[classes!/:0.0.1-SNAPSHOT]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:108) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:58) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
Caused by: java.lang.IllegalStateException: Failed to introspect Class [peridynamics.Beso3D_PD] from ClassLoader [org.springframework.boot.loader.LaunchedURLClassLoader@1ed6993a]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:485) ~[spring-core-5.3.30.jar!/:5.3.30]
at org.springframework.util.ReflectionUtils.doWithLocalMethods(ReflectionUtils.java:321) ~[spring-core-5.3.30.jar!/:5.3.30]
at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.determineCandidateConstructors(AutowiredAnnotationBeanPostProcessor.java:276) ~[spring-beans-5.3.30.jar!/:5.3.30]
… 26 common frames omitted
Caused by: java.lang.NoClassDefFoundError: com/mathworks/toolbox/javabuilder/MWException
at java.base/java.lang.Class.getDeclaredMethods0(Native Method) ~[na:na]
at java.base/java.lang.Class.privateGetDeclaredMethods(Class.java:3402) ~[na:na]
at java.base/java.lang.Class.getDeclaredMethods(Class.java:2504) ~[na:na]
at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:467) ~[spring-core-5.3.30.jar!/:5.3.30]
… 28 common frames omitted
Caused by: java.lang.ClassNotFoundException: com.mathworks.toolbox.javabuilder.MWException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445) ~[na:na]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587) ~[na:na]
at org.springframework.boot.loader.LaunchedURLClassLoader.loadClass(LaunchedURLClassLoader.java:151) ~[demo1-0.0.1-SNAPSHOT.jar:0.0.1-SNAPSHOT]
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520) ~[na:na]
… 32 common frames omitted
How to solve this problem? matlab, spring boot, mwexception MATLAB Answers — New Questions
Conditional array accumulation inside parfor
I have a situation where I am testing a condition inside a parfor loop and, if it is true, appending the results of a computation to an array. A simplified example is as follows:
ary = [];
parfor n=1:N
for m = 1:M
if (f(m,n)>0) % do some test, this is not easily vectorizable
ary = [ary; n m];
end
end
end
I would like, however, to avoid growing arrays in the loop.
I could estimate an upper bound for the size of ary and try to do it this way,
ary = zeros(ubound,2);
ind = 0;
parfor n=1:N
for m = 1:M
if (f(m,n)>0) % do some test, this is not easily vectorizable
ind = ind + 1;
ary(ind,:) = [n m]; % such indexing will not work within parfor
end
end
end
but that wouldn’t work as shown in the comment.
Another idea I had was using a logical array to keep track of the conditional result.
condary = false(N*M,1); % column vector (false(N*M) would allocate an N*M-by-N*M matrix)
for k = 1:N*M % flatten the loop
% get n and m from k; k = (n-1)*M+m, therefore
m = mod(k,M); if m == 0, m = M; end
n = (k-m)/M+1;
if (f(m,n)>0)
condary(k) = true;
end
end
The desired array, ary, can then be back-constructed from the logical array in a second loop. In fact, ary, can be preallocated at this point. Or the operations meant to be performed using ary can be performed based on condary in a second loop. But this involves flattening the loop.
I was wondering if there are any better ways to do this. parfor, array, preallocation MATLAB Answers — New Questions
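One common pattern that avoids both growing arrays and the indexing restriction: let each parfor iteration fill a local, preallocated slice, store it in a sliced cell output, and concatenate once at the end (f, M, and N are the asker's and assumed defined):

```matlab
% Sketch: per-iteration local accumulation with a sliced cell output.
results = cell(N, 1);
parfor n = 1:N
    local = zeros(M, 2);           % upper bound for this iteration's hits
    cnt = 0;
    for m = 1:M
        if f(m, n) > 0             % the asker's non-vectorizable test
            cnt = cnt + 1;
            local(cnt, :) = [n m];
        end
    end
    results{n} = local(1:cnt, :);  % trim; results{n} is parfor-safe (sliced)
end
ary = vertcat(results{:});         % single concatenation after the loop
```

Each worker only grows nothing and writes to its own cell, so parfor's variable-classification rules are satisfied without flattening the loop.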
Mean shift for an image
Please, I need code for a mean-shift algorithm to segment a grayscale image. If anyone can help me, thanks in advance. mean shift, segmentation, grayscale image MATLAB Answers — New Questions
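A minimal sketch of mean shift on gray levels (the image, bandwidth, and iteration count are assumed placeholders). Running the shift on the 256-bin intensity histogram instead of on every pixel keeps the cost manageable:

```matlab
% Sketch: 1-D mean shift over gray levels, then map pixels to their modes.
I = im2double(imread('cameraman.tif'));    % any grayscale image (assumed)
levels = (0:255) / 255;
counts = histcounts(I(:), [levels, 1 + eps]);
h = 0.08;                                  % bandwidth (tune for your image)
modes = levels;
for iter = 1:50
    for k = 1:numel(levels)
        w = abs(levels - modes(k)) <= h;   % flat kernel window
        if any(counts(w))
            modes(k) = sum(levels(w) .* counts(w)) / sum(counts(w));
        end
    end
end
idx = 1 + round(I * 255);                  % nearest gray-level index per pixel
seg = modes(idx);                          % each pixel replaced by its mode
imshow(reshape(seg, size(I)))
```

Nearby gray levels converge to the same mode, so the result is a piecewise-constant segmentation of the intensity range.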
identify faces of a 3D geometry
I have two 3D geometries composed of nodes and faces.
file_im = importdata("f_mm.mat");
nodes_e = file_im.nodes_e;
faces_e = file_im.faces_e;
g_P_sez = file_im.g_P_sez;
figure
trimesh(faces_e(:,:),nodes_e(:,1),nodes_e(:,2),nodes_e(:,3),'EdgeColor','k','Linewidth',0.1,'Facecolor',[255 0 0]/255,'FaceAlpha',1)
hold on
plot3(g_P_sez(:,1),g_P_sez(:,2),g_P_sez(:,3),'k.','Markersize',15)
hold off
axis equal
xlabel('x')
ylabel('y')
zlabel('z')
I want to locate the faces of this geometry (the yellow box) among those listed in 'faces_e'.
As a reference I have the node 'g_P_sez', so I could select the faces within some distance X of that node.
There would be the nearestFace function but it is not suitable for my case. Are there alternatives? faces, geometry, 3d, 3d plots, select MATLAB Answers — New Questions
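One possible alternative, sketched below: compute each triangular face's centroid and keep the faces whose centroid lies within the distance X of the reference node (X is an assumed threshold you would tune; nodes_e, faces_e, and g_P_sez are the asker's variables):

```matlab
% Sketch: select faces by centroid distance from a reference node.
X = 5;  % distance threshold, same units as the node coordinates (assumed)
% centroid of each triangular face
C = (nodes_e(faces_e(:,1),:) + nodes_e(faces_e(:,2),:) + nodes_e(faces_e(:,3),:)) / 3;
d = vecnorm(C - g_P_sez(1,:), 2, 2);   % Euclidean distance to the node
sel = d <= X;                           % logical index of nearby faces
faces_near = faces_e(sel, :);           % the selected subset of faces
```

If the selection should follow the surface rather than straight-line distance, the same logical index could seed a region-growing step over face adjacency instead.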
Physics-informed NN for parameter identification
Dear all,
I am trying to use the physics-informed neural network (PINN) for an inverse parameter identification for ODE or PDE.
I referenced the example in this link to write the code: https://ww2.mathworks.cn/matlabcentral/answers/2019216-physical-informed-neural-network-identify-coefficient-of-loss-function#answer_1312867
Here’s the program I wrote:
clear; clc;
% Specify training configuration
numEpochs = 500000;
avgG = [];
avgSqG = [];
batchSize = 500;
lossFcn = @modelLoss;
lr = 1e-5;
% Inverse PINN for d2x/dt2 = mu1*x + mu2*x^2
mu1Actual = -rand;
mu2Actual = rand;
x = @(t) cos(sqrt(-mu1Actual)*t) + sin(sqrt(-mu2Actual)*t);
maxT = 2*pi/sqrt(max(-mu1Actual, -mu2Actual));
t = dlarray(linspace(0, maxT, batchSize), "CB");
xactual = dlarray(x(t), "CB");
% Specify a network and initial guesses for mu1 and mu2
net = [
featureInputLayer(1)
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(1)];
params.net = dlnetwork(net);
params.mu1 = dlarray(-0.5);
params.mu2 = dlarray(0.5);
% Train
for i = 1:numEpochs
[loss, grad] = dlfeval(lossFcn, t, xactual, params);
[params, avgG, avgSqG] = adamupdate(params, grad, avgG, avgSqG, i, lr);
if mod(i, 1000) == 0
fprintf("Epoch: %d, Predicted mu1: %.3f, Actual mu1: %.3f, Predicted mu2: %.3f, Actual mu2: %.3f\n", ...
i, extractdata(params.mu1), mu1Actual, extractdata(params.mu2), mu2Actual);
end
end
function [loss, grad] = modelLoss(t, x, params)
xpred = forward(params.net, t);
dxdt = dlgradient(sum(real(xpred)), t, 'EnableHigherDerivatives', true);
d2xdt2 = dlgradient(sum(dxdt), t);
% Modify the ODE residual based on your specific ODE
odeResidual = d2xdt2 - (params.mu1 * xpred + params.mu2 * xpred.^2);
% Compute the mean square error of the ODE residual
odeLoss = mean(odeResidual.^2);
% Compute the L2 difference between the predicted xpred and the true x.
dataLoss = l2loss(real(x), real(xpred)); % Ensure real part is used
% Sum the losses and take gradients
loss = odeLoss + dataLoss;
[grad.net, grad.mu1, grad.mu2] = dlgradient(loss, params.net.Learnables, params.mu1, params.mu2);
end
When I run the script no errors are reported, but the two parameters learned are not getting closer to the true values as the number of iterations increases:
I would like to know the reason for this situation and the corresponding solution, if you can help me to change the code I will be very grateful!Dear all,
I am trying to use the physics-informed neural network (PINN) for an inverse parameter identification for ODE or PDE.
I referenced the example in this link to write the code:https://ww2.mathworks.cn/matlabcentral/answers/2019216-physical-informed-neural-network-identify-coefficient-of-loss-function#answer_1312867
Here’s the program I wrote:
clear; clc;
% Specify training configuration
numEpochs = 500000;
avgG = [];
avgSqG = [];
batchSize = 500;
lossFcn = @modelLoss;
lr = 1e-5;
% Inverse PINN for d2x/dt2 = mu1*x + mu2*x^2
mu1Actual = -rand;
mu2Actual = rand;
x = @(t) cos(sqrt(-mu1Actual)*t) + sin(sqrt(-mu2Actual)*t);
maxT = 2*pi/sqrt(max(-mu1Actual, -mu2Actual));
t = dlarray(linspace(0, maxT, batchSize), "CB");
xactual = dlarray(x(t), "CB");
% Specify a network and initial guesses for mu1 and mu2
net = [
featureInputLayer(1)
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(100)
tanhLayer
fullyConnectedLayer(1)];
params.net = dlnetwork(net);
params.mu1 = dlarray(-0.5);
params.mu2 = dlarray(0.5);
% Train
for i = 1:numEpochs
[loss, grad] = dlfeval(lossFcn, t, xactual, params);
[params, avgG, avgSqG] = adamupdate(params, grad, avgG, avgSqG, i, lr);
if mod(i, 1000) == 0
fprintf("Epoch: %d, Predicted mu1: %.3f, Actual mu1: %.3f, Predicted mu2: %.3f, Actual mu2: %.3fn", …
i, extractdata(params.mu1), mu1Actual, extractdata(params.mu2), mu2Actual);
end
end
function [loss, grad] = modelLoss(t, x, params)
xpred = forward(params.net, t);
dxdt = dlgradient(sum(real(xpred)), t, ‘EnableHigherDerivatives’, true);
d2xdt2 = dlgradient(sum(dxdt), t);
% Modify the ODE residual based on your specific ODE
odeResidual = d2xdt2 – (params.mu1 * xpred + params.mu2 * xpred.^2);
% Compute the mean square error of the ODE residual
odeLoss = mean(odeResidual.^2);
% Compute the L2 difference between the predicted xpred and the true x.
dataLoss = l2loss(real(x), real(xpred)); % Ensure real part is used
% Sum the losses and take gradients
loss = odeLoss + dataLoss;
[grad.net, grad.mu1, grad.mu2] = dlgradient(loss, params.net.Learnables, params.mu1, params.mu2);
end
When I run the script no errors are reported, but the two learned parameters do not move toward the true values as the number of iterations increases.
I would like to know why this happens and how to fix it; if you can help me correct the code, I will be very grateful!
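Before debugging the network, it can help to check that the identification problem itself is well posed on clean data. The sketch below is an editorial aside, not the asker's code: it uses Python/NumPy with made-up values (x(t) = cos(2t), so the true coefficients are mu1 = -4, mu2 = 0), and it replaces the network and autodiff with a finite-difference second derivative, recovering the coefficients of x'' = mu1*x + mu2*x^2 by linear least squares.

```python
import numpy as np

# Synthetic data: x(t) = cos(2t) satisfies x'' = -4x, so mu1 = -4, mu2 = 0.
t = np.linspace(0, 2*np.pi, 2001)
dt = t[1] - t[0]
x = np.cos(2*t)

# Central-difference second derivative (interior points only).
d2x = (x[2:] - 2*x[1:-1] + x[:-2]) / dt**2
xi = x[1:-1]

# Solve [x, x^2] @ [mu1; mu2] = x'' in the least-squares sense.
A = np.column_stack([xi, xi**2])
mu, *_ = np.linalg.lstsq(A, d2x, rcond=None)
print(mu)  # approximately [-4, 0]
```

If this direct fit recovers the coefficients but the PINN does not, the issue is likely in the training setup rather than the identifiability of the ODE; note in particular that `sqrt(-mu2Actual)` in the question's data generator is imaginary whenever `mu2Actual > 0`, so the training data are complex-valued.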
deep learning, pinn, physics-informed nn MATLAB Answers — New Questions
Why Does dcm2angle Work Like This?
Suppose I have a direction cosine matrix, brought to my attention by a colleague
C = round(angle2dcm(-pi/2,-pi/2,0,'ZYX'))
Extract the angles with @doc:dcm2angle (Aerospace Toolbox) using the Default option
[a1,a2,a3] = dcm2angle(C,'ZYX','Default'); [a1,a2,a3]
Because the middle angle is -pi/2, the angle extraction has multiple valid solutions, but the default result isn't one of them.
Sometime after R2019b and no later than R2022a, an additional optional argument became available, though I could find nothing about this new argument in the release notes or the bug fixes.
[a1,a2,a3] = dcm2angle(C,'ZYX','Robust'); [a1,a2,a3]
Now we get a correct answer.
Trying with @doc:rotm2eul that’s used in other toolboxes (Robotics, Navigation, UAV) we see that it returns a correct result without any optional arguments.
eul = rotm2eul(C.','ZYX')
round(angle2dcm(eul(1),eul(2),eul(3),'ZYX'))
@doc:dcm2angle with the Robust option actually computes two sets of angles, rebuilds the DCM from each set, compares the recomputed DCMs to the input DCM, and returns the set whose DCM is closest to the input. @doc:rotm2eul uses a different approach altogether, though it is limited to only three axis sequences.
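To see concretely what happens at the singularity, here is a small self-contained sketch (Python/NumPy; the function names are my own, not MATLAB's) of a rotm2eul-style ZYX extraction with an explicit gimbal-lock branch. It uses the rotation-matrix convention, i.e. the transpose of the aerospace DCM, matching the C.' passed to rotm2eul above.

```python
import numpy as np

def rotx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def roty(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def eul_to_rotm(a1, a2, a3):
    # ZYX sequence: yaw a1, pitch a2, roll a3
    return rotz(a1) @ roty(a2) @ rotx(a3)

def rotm_to_eul(R, eps=1e-9):
    if abs(R[2, 0]) < 1 - eps:
        # Regular case: pitch from -R[2,0], yaw and roll from atan2
        a2 = np.arcsin(-R[2, 0])
        a1 = np.arctan2(R[1, 0], R[0, 0])
        a3 = np.arctan2(R[2, 1], R[2, 2])
    else:
        # Gimbal lock: only the sum/difference of yaw and roll is
        # determined, so pick roll = 0 (one valid convention).
        a2 = -np.sign(R[2, 0]) * np.pi / 2
        a1 = np.arctan2(-R[0, 1], R[1, 1])
        a3 = 0.0
    return a1, a2, a3

R = eul_to_rotm(-np.pi/2, -np.pi/2, 0)       # the singular case from the question
angles = rotm_to_eul(R)
print(angles)                                 # one valid triplet
print(np.allclose(eul_to_rotm(*angles), R))   # round trip recovers R -> True
```

The point of the sketch is that the gimbal-lock branch must choose a convention (here roll = 0), and any choice that reproduces the input matrix on the round trip is a correct answer; what's puzzling about dcm2angle's Default is that it returns a triplet that does not.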
To be sure, the Default option of @doc:dcm2angle is considerably faster than Robust, but Robust appears to take about the same amount of time as @doc:rotm2eul:
timeit(@() dcm2angle(repmat(C,1,1,1e5),'ZYX','Default'),3)
timeit(@() dcm2angle(repmat(C,1,1,1e5),'ZYX','Robust'),3)
timeit(@() rotm2eul(repmat(C.',1,1,1e5),'ZYX'),1)
Does anyone have a thought as to why dcm2angle is implemented as it is with Default and forces the user to use Robust? And why wouldn’t that same reason apply to rotm2eul?
Once MathWorks realized that dcm2angle with Default was returning incorrect results in some cases, why patch it with Robust and keep Default instead of just fixing the bug?
dcm2angle, rotm2eul MATLAB Answers — New Questions