Different Sample time for RL environment
Hello Everyone,
I am currently trying to build an RL agent using DQN. My environment model is composed of nonlinear equations with fast dynamics, around 1 ms. I was wondering if it is possible to have my RL agent act at 10 ms while the environment runs at 1 ms. This would obviously mean that, for 10 timesteps, the environment is fed a constant control action from the RL agent.
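To make the intended setup concrete, here is a minimal language-agnostic sketch (in Python, not the MATLAB toolbox API; all class and variable names are illustrative) of a zero-order-hold wrapper: the agent decides every 10 ms, while the plant integrates at 1 ms, holding the last action constant in between and accumulating reward over the substeps.

```python
ENV_DT = 0.001    # environment/model step: 1 ms
AGENT_DT = 0.010  # agent decision period: 10 ms
SUBSTEPS = round(AGENT_DT / ENV_DT)  # 10 inner steps per agent step

class FastEnv:
    """Toy 1 ms plant: first-order nonlinear system x' = -x^3 + u."""
    def __init__(self):
        self.x = 1.0

    def step(self, u):
        self.x += ENV_DT * (-self.x**3 + u)  # explicit Euler at 1 ms
        reward = -self.x**2                  # per-substep cost
        return self.x, reward

class HeldActionEnv:
    """Presents a 10 ms interface to the agent on top of the 1 ms model."""
    def __init__(self, env):
        self.env = env

    def step(self, action):
        total_reward = 0.0
        obs = None
        for _ in range(SUBSTEPS):            # hold the action for 10 substeps
            obs, r = self.env.step(action)
            total_reward += r
        return obs, total_reward             # agent only sees 10 ms boundaries

env = HeldActionEnv(FastEnv())
obs, reward = env.step(0.0)                  # one agent step = 10 model steps
```

The agent then trains against `HeldActionEnv` exactly as it would against any 10 ms environment; the 1 ms dynamics are hidden inside the wrapper.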
I know I can code this up if I create the agent and environment myself, but I am currently taking advantage of MATLAB's Reinforcement Learning Toolbox, and it would save a lot of time if this is possible within the toolbox itself.
Thanks.