Unable to generate RLAgent
Hi,
I am new to the Reinforcement Learning Toolbox and its utilities. I have created a continuous observation and action environment based on my requirements.
%% Code for Setting up Environment
% Author : Srivatsank P
% Created : 05/12/2024
% Edited : 05/13/2024
clear;
clc;
%% Setup Dynamics
% learned dynamics
model_path = 'doubleInt2D_net_longtime.mat';
% Load neural dynamics
load(model_path);
nnet = FCNReLUDyns([6, 5, 5, 4], 'state_idx', 1:4, 'control_idx', 5:6);
nnet = nnet.load_model_from_SeriesNetwork(net);
% Load black box agent
A = doubleInt2D_NN(nnet);
%% Create RL Environment
% States
ObservationInfo = rlNumericSpec([8 1]);
ObservationInfo.Name = 'DoubleInt2D_States';
ObservationInfo.Description = 'x,y,xdot,ydot,x_goal,y_goal,xdot_goal,ydot_goal';
ObservationInfo.LowerLimit = [A.min_x;A.min_x];
ObservationInfo.UpperLimit = [A.max_x;A.max_x];
% Control Variables
ActionInfo = rlNumericSpec([2 1]);
ActionInfo.Name = 'DoubleInt2D_Control';
ActionInfo.Description = 'u1,u2';
ActionInfo.LowerLimit = A.min_u;
ActionInfo.UpperLimit = A.max_u;
% Functions that define Initialization and Reward Calculation
reset_handle = @() reset_dynamics(A);
reward_handle = @(Action,LoggedSignals) dynamics_and_reward(Action,LoggedSignals,A);
%Environment
doubleInt2D_env = rlFunctionEnv(ObservationInfo,ActionInfo,reward_handle,reset_handle);
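For reference, the environment itself can be exercised outside the app with something like the following sketch (the zero action is just a placeholder inside the control limits; `validateEnvironment` is the toolbox's built-in consistency check):

```matlab
% Sanity-check the custom environment outside the Designer app
validateEnvironment(doubleInt2D_env);   % checks specs against reset/step outputs
obs0 = reset(doubleInt2D_env);          % initial observation from reset_dynamics
% One manual step with a placeholder zero control input
[obs1, reward1, isDone1, logged] = step(doubleInt2D_env, zeros(2,1));
```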
doubleInt2D_NN is an equivalent neural-network model I am using for the dynamics; unfortunately, I cannot share its details.
After this, I tried using the "Reinforcement Learning Designer" app to generate an agent. The agent details are shown below:
I run into the following error on my console:
Warning: Error occurred while executing the listener callback for event ButtonPushed defined for class matlab.ui.control.Button:
Error using dlnetwork/connectLayers (line 250)
Dot indexing is not supported for variables of this type.
Error in rl.util.default.createSingleChannelOutNet (line 12)
Net = connectLayers(Net,BridgeOutputName,OutputLayerName);
Error in rl.function.rlContinuousDeterministicActor.createDefault (line 200)
[actorNet,actionLayerName] = rl.util.default.createSingleChannelOutNet(inputGraph,bridgeOutputName,numOutput);
Error in rlTD3Agent (line 99)
Actor = rl.function.rlContinuousDeterministicActor.createDefault(ObservationInfo, ActionInfo, InitOptions);
Error in rl.util.createAgentFactory (line 16)
Agent = rlTD3Agent(Oinfo,Ainfo,AgentInitOpts);
Error in rl.util.createAgentFromEnvFactory (line 11)
Agent = rl.util.createAgentFactory(Type,Oinfo,Ainfo,AgentInitOpts);
Error in rl.internal.app.dialog.NewAgentFromEnvironmentDialog/createAgent (line 297)
Agent = rl.util.createAgentFromEnvFactory(AgentType,Env,AgentInitOpts);
Error in rl.internal.app.dialog.NewAgentFromEnvironmentDialog/callbackOK (line 274)
Agent = createAgent(obj);
Error in rl.internal.app.dialog.NewAgentFromEnvironmentDialog>@(es,ed)callbackOK(obj) (line 183)
addlistener(obj.OKButton,'ButtonPushed',@(es,ed) callbackOK(obj));
Error in appdesservices.internal.interfaces.model.AbstractModel/executeUserCallback (line 282)
notify(obj, matlabEventName, matlabEventData);
Error in matlab.ui.control.internal.controller.ComponentController/handleUserInteraction (line 442)
obj.Model.executeUserCallback(callbackInfo{:});
Error in matlab.ui.control.internal.controller.PushButtonController/handleEvent (line 95)
obj.handleUserInteraction('ButtonPushed', event.Data, {'ButtonPushed', eventData});
Error in appdesservices.internal.interfaces.controller.AbstractController>@(varargin)obj.handleEvent(varargin{:}) (line 214)
obj.Listeners = addlistener(obj.ViewModel, 'peerEvent', @obj.handleEvent);
Error in viewmodel.internal.factory.ManagerFactoryProducer>@(src,event)callback(src,viewmodel.internal.factory.ManagerFactoryProducer.convertStructToEventData(event)) (line 79)
proxyCallback = @(src, event)callback(src, …
> In appdesservices.internal.interfaces.model/AbstractModel/executeUserCallback (line 282)
In matlab.ui.control.internal.controller/ComponentController/handleUserInteraction (line 442)
In matlab.ui.control.internal.controller/PushButtonController/handleEvent (line 95)
In appdesservices.internal.interfaces.controller.AbstractController>@(varargin)obj.handleEvent(varargin{:}) (line 214)
In viewmodel.internal.factory.ManagerFactoryProducer>@(src,event)callback(src,viewmodel.internal.factory.ManagerFactoryProducer.convertStructToEventData(event)) (line 79)
I run into the same error even with the default provided environments. I am currently on R2024a and have tried the same on both Linux (Ubuntu) and Windows 11. I do not know how to proceed and am unable to create agents. TIA for any help!
reinforcement learning MATLAB Answers — New Questions