Steady-state error in DDPG control
I am trying to make some modifications to the Control Water Level in a Tank Using a DDPG Agent example. I want to reduce the sample time from 1.0 to 0.5, so I set Ts = 0.5. Consequently, I had to adjust StopTrainingValue, changing it from 2000 to 4000. The training process completed successfully, as can be seen below.
But something unexpected happened: these modifications introduced a steady-state error (or something similar to one) that wasn't there in the original example.
How can I overcome this steady-state error? Do I need to make additional adjustments, e.g., change the structure of the observations, the reward function, the actor/critic networks, the StopTrainingCriteria, etc.?
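For reference, these are the only changes I made relative to the original example script (the variable names Ts, Tf, and trainOpts follow the example; the exact option-object name may differ in your release):

```matlab
% Changes relative to the original watertank DDPG example.
Ts = 0.5;   % sample time, halved from the original 1.0
Tf = 200;   % simulation/episode time, unchanged from the example

% With Ts halved, each episode contains twice as many steps, so the
% cumulative episode reward roughly doubles. I scaled the stop
% criterion accordingly when creating the training options:
trainOpts.StopTrainingCriteria = "AverageReward";
trainOpts.StopTrainingValue    = 4000;   % was 2000 in the example
```

My reasoning for the 2000 to 4000 scaling is just that the per-step reward is unchanged while the step count per episode doubles.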
Update:
This is the error I get using the pre-trained agent (doTraining = false, no change to the original example):
This is the error I get using the re-trained agent (doTraining = true, no change to the original example):