Physics-informed NN for parameter identification
Dear all,
I am trying to use a physics-informed neural network (PINN) for inverse parameter identification of an ODE or PDE.
I referenced the example at this link to write the code: https://ww2.mathworks.cn/matlabcentral/answers/2019216-physical-informed-neural-network-identify-coefficient-of-loss-function#answer_1312867
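To make the objective explicit, the loss I intend to minimize is the usual inverse-PINN objective: the mean squared residual of the ODE plus a data-mismatch term (written here up to the normalization factor that l2loss applies),

\[
\mathcal{L}(\theta,\mu_1,\mu_2)
= \frac{1}{N}\sum_{i=1}^{N}\Bigl(\ddot{x}_\theta(t_i) - \mu_1\, x_\theta(t_i) - \mu_2\, x_\theta(t_i)^2\Bigr)^2
+ \frac{1}{N}\sum_{i=1}^{N}\bigl(x_\theta(t_i) - x(t_i)\bigr)^2
\]

where \(x_\theta\) is the network output, \(\ddot{x}_\theta\) is its second time derivative obtained with dlgradient, and \(x(t_i)\) are the synthetic measurements.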
Here’s the program I wrote:
clear; clc;
% Specify training configuration
numEpochs = 500000;
avgG = [];
avgSqG = [];
batchSize = 500;
lossFcn = @modelLoss;
lr = 1e-5;
% Inverse PINN for d2x/dt2 = mu1*x + mu2*x^2
mu1Actual = -rand;
mu2Actual = rand;
x = @(t) cos(sqrt(-mu1Actual)*t) + sin(sqrt(-mu2Actual)*t);
maxT = 2*pi/sqrt(max(-mu1Actual, -mu2Actual));
t = dlarray(linspace(0, maxT, batchSize), "CB");
xactual = dlarray(x(t), "CB");
% Specify a network and initial guesses for mu1 and mu2
net = [
    featureInputLayer(1)
    fullyConnectedLayer(100)
    tanhLayer
    fullyConnectedLayer(100)
    tanhLayer
    fullyConnectedLayer(1)];
params.net = dlnetwork(net);
params.mu1 = dlarray(-0.5);
params.mu2 = dlarray(0.5);
% Train
for i = 1:numEpochs
    [loss, grad] = dlfeval(lossFcn, t, xactual, params);
    [params, avgG, avgSqG] = adamupdate(params, grad, avgG, avgSqG, i, lr);
    if mod(i, 1000) == 0
        fprintf("Epoch: %d, Predicted mu1: %.3f, Actual mu1: %.3f, Predicted mu2: %.3f, Actual mu2: %.3f\n", ...
            i, extractdata(params.mu1), mu1Actual, extractdata(params.mu2), mu2Actual);
    end
end
function [loss, grad] = modelLoss(t, x, params)
    xpred = forward(params.net, t);
    dxdt = dlgradient(sum(real(xpred)), t, 'EnableHigherDerivatives', true);
    d2xdt2 = dlgradient(sum(dxdt), t);
    % Modify the ODE residual based on your specific ODE
    odeResidual = d2xdt2 - (params.mu1 * xpred + params.mu2 * xpred.^2);
    % Compute the mean square error of the ODE residual
    odeLoss = mean(odeResidual.^2);
    % Compute the L2 difference between the predicted xpred and the true x.
    dataLoss = l2loss(real(x), real(xpred)); % Ensure real part is used
    % Sum the losses and take gradients
    loss = odeLoss + dataLoss;
    [grad.net, grad.mu1, grad.mu2] = dlgradient(loss, params.net.Learnables, params.mu1, params.mu2);
end
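For reference, here is a small standalone check (just a sketch, not part of the training script) that reuses mu1Actual, mu2Actual, x, and maxT defined above and uses finite differences to see how well the analytic x(t) actually satisfies the ODE:
% Sketch: finite-difference check of the ODE residual of the analytic x(t).
% Reuses mu1Actual, mu2Actual, x and maxT from the script above.
tNum = linspace(0, maxT, 2000);
xNum = real(x(tNum));                       % x(t) can be complex because sqrt(-mu2Actual) is imaginary
dt = tNum(2) - tNum(1);
d2xNum = gradient(gradient(xNum, dt), dt);  % second-derivative estimate
res = d2xNum - (mu1Actual*xNum + mu2Actual*xNum.^2);
fprintf("Mean squared ODE residual of the analytic x(t): %.3g\n", mean(res.^2));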
When I run the script, no errors are reported, but the two learned parameters do not get closer to the true values as the number of iterations increases.
I would like to know the reason for this behavior and how to fix it; if you can help me correct the code, I will be very grateful!