NLP problem with fmincon: Non-differentiable point in objective function
Consider the simplified optimisation problem having as decision variable the vector x ∈ R^n,
where n = N*(6+3) is the number of discretisation points N multiplied by the number of variables per point (e.g. 6 state variables and 3 control variables).
Let the objective function be defined such that the control effort is minimised:
J = sum_{i=1}^{N} ||u_i||,
where N is the number of discretisation points and u_i is the 3×1 control vector at the i-th time instant.
In code notation, this becomes:
%% Example:
% Problem data:
timeInstants = 10;
stateVariables = 6;
controlVariables = 3;
allVariables = stateVariables + controlVariables;
decisionVariables = allVariables*timeInstants;
% Indices (the 3 controls occupy the first slots of each 9-variable block):
idx = reshape(1:decisionVariables,allVariables,[])';
controlIdx = idx(:,1:controlVariables)';
% NLP vector:
x = rand(1,decisionVariables);
% Objective function: sum of the 2-norms of the control at each time instant
f = sum(vecnorm(x(controlIdx)));
From the literature, I know that the optimal solution of my problem is bang-bang, i.e. at each time instant the control is either at its maximum magnitude or identically zero.
The analytical Jacobian and Hessian of each term are:
d||u_i||/du_i = u_i / ||u_i||,
d²||u_i||/du_i² = I/||u_i|| - (u_i u_i^T)/||u_i||^3.
Considering the control profile I am looking for, convergence is affected by the singularity at ||u_i|| = 0, where both the Jacobian and the Hessian become indeterminate (0/0).
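For concreteness, here is a minimal sketch (using the standard derivatives of the Euclidean norm written above, not my actual solver code) of how those expressions evaluate at a zero control vector:
% Minimal sketch: analytical gradient and Hessian of ||u|| at u = 0
u  = [0; 0; 0];                        % control on a coast (off) arc
nu = norm(u);
gradU = u./nu;                         % 0/0 -> [NaN; NaN; NaN]
hessU = eye(3)./nu - (u*u.')./nu^3;    % every entry ends up NaN as well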
I tried to overcome the problem by substituting alternative values when the NaN occurs (such as ones or zeros), but since the zero control is exactly the optimal value on those arcs, this causes convergence failures.
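The guard I tried looks roughly like this (illustrative only; the replacement value is the part I experimented with):
% Illustrative NaN guard on the per-instant gradient (not my actual code)
nu = norm(u);
if nu > 0
    gradU = u./nu;
else
    gradU = zeros(3,1);    % also tried ones(3,1); both hurt convergence near u = 0
end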
The same problem occurs with numerical (finite-difference) derivatives.
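For example, at u = 0 the finite-difference estimate depends entirely on the stencil (forward differences are fmincon's default), because of the kink in the norm:
% Forward vs central differences of ||u|| at u = 0 (illustration)
h  = 1e-7;
f  = @(u) norm(u);
u0 = zeros(3,1);
gFwd = zeros(3,1);  gCtr = zeros(3,1);
for k = 1:3
    e = zeros(3,1); e(k) = 1;
    gFwd(k) = (f(u0 + h*e) - f(u0))/h;            % -> 1 for every component
    gCtr(k) = (f(u0 + h*e) - f(u0 - h*e))/(2*h);  % -> 0 for every component
end
[gFwd, gCtr]    % the two estimates disagree because ||u|| is not differentiable at u = 0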
How can I overcome this problem?
Do you have any references about other people who have faced this issue?
Thanks in advance.