Category Archives: Matlab
I’m an employee and use MATLAB on my work computer. Is it possible to obtain some kind of free license to install MATLAB on my home computer for personal use?
I’m an employee and use MATLAB on my work computer. Is it possible to "take credit of this" and then obtain some kind of free license to install MATLAB on my home computer for personal use?
home license
How to gather digital audio stream via STM32?
Is it possible to gather a data stream from the serial audio interface for processing in MATLAB/Simulink?
I want to use an STM32H743 for this purpose.
My goal is to receive an audio stream from the codec via the SAI interface.
Marek
stm32h7
FOC with BLDC motor
Hello,
I want to control a BLDC motor using FOC, but after doing some research I found claims that FOC can only be used with PMSMs, not with BLDC motors. Can anyone say whether I can use FOC?
Thank you
foc, bldc, pmsm
How to label multiple objects in object detection with different names?
Hi there!
I’ve a problem with labelling my objects in an image.
Let’s have a look at the image:
This programme detects the front/rear of cars and a stop sign. I want the labels to say what they are looking at, for example "Stop Sign Confidence: 1.0000", "CarRear Confidence: 0.6446", etc. As you can see, my programme adds the probability values correctly, but there are still no strings/label names attached.
You can have a look at my code:
%%
% Read test image
testImage = imread('StopSignTest2.jpg');
% Detect stop signs
[bboxes,score,label] = detect(rcnn,testImage,'MiniBatchSize',128)
% Display detection results
label_str = cell(3,1);
conf_val = [score];
conf_lab = [label];
for ii=1:3
label_str{ii} = [' Confidence: ' num2str(conf_val(ii), '%0.4f')];
end
position = [bboxes];
outputImage = insertObjectAnnotation(testImage,'rectangle',position,label_str,...
'TextBoxOpacity',0.9,'FontSize',10);
figure
imshow(outputImage)
%%
I have no clue how to add the label strings to label_str{ii} the way I did with the scores (num2str(conf_val(ii))).
Thanking you in advance!
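A minimal sketch of one way to build the label strings, assuming label is the categorical array returned by detect, so the class name of each detection can be taken from it with char:
% Prepend each detection's class name to its confidence string
numDet = numel(score);
label_str = cell(numDet,1);
for ii = 1:numDet
    label_str{ii} = [char(label(ii)) ' Confidence: ' num2str(score(ii), '%0.4f')];
end
outputImage = insertObjectAnnotation(testImage, 'rectangle', bboxes, label_str, ...
    'TextBoxOpacity', 0.9, 'FontSize', 10);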
multiple objects, object detection, strings, objects detection, neural network, cnn
I am getting this error when running my code: "Dot indexing is not supported for variables of this type. Error in rl.util.expstruct2timeserstruct (line 7) observation"
The below code is the one I am running
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector [∫e dt, e, h]ᵀ, where h is the height of the water in the tank, e = r − h is the error, and r is the reference height.
Set up the reward signal, computed from the error e.
Configure the termination signal such that the simulation stops if the tank level leaves its allowed range.
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],...
LowerLimit=[-inf -inf 0 ]',...
UpperLimit=[ inf inf inf]');
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",...
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward for taking the action from the state corresponding to the current observation, and following the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, ...
"obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, ...
"actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, ...
obsInfo,actInfo, ...
ObservationInputNames="obsInLyr", ...
ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, ...
{rand(obsInfo.Dimension)}, ...
{rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name of the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating actors, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions(...
MaxEpisodes=5000, ...
MaxStepsPerEpisode=ceil(Tf/Ts), ...
ScoreAveragingWindowLength=20, ...
Verbose=false, ...
Plots="training-progress",...
StopTrainingCriteria="AverageReward",...
StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
Local Reset Function
function in = localResetFcn(in)
% Randomize reference signal
blk = sprintf("RLFinal_PhD_Model_DroopPQ1/Droop/Voutref");
h = 3*randn + 0.5;
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize initial height
h1 = 3*randn + 200;
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
I am getting the following results, with no rewards at all:
Zero rewards
When I stop the training I see this error:
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc.
Can someone help?
How can I plot a hyperbola?
Hi everyone,
I’m a beginner at MATLAB, so I don’t have much experience. Right now I’m trying to plot a hyperbola that I’m using for Time Difference of Arrival (TDoA), but I’ve been lost for hours and I still can’t figure out how to plot it. Any suggestions on how to solve this problem?
Here is my code:
function hyperbola()
syms x y ;
f = @(x)0.4829 == sqrt((95-x)^2-(0-y)^2)-sqrt((0-x)^2-(0-y)^2);
fplot(f);
end
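A minimal sketch of one way to draw this curve with fimplicit, assuming the two reference points are (0,0) and (95,0) and the range difference is 0.4829; note that the distance formula needs a plus sign between the squared terms, not a minus:
% Plot the TDoA hyperbola as the implicit curve F(x,y) = 0
F = @(x,y) sqrt((95 - x).^2 + (0 - y).^2) - sqrt((0 - x).^2 + (0 - y).^2) - 0.4829;
fimplicit(F, [-50 150 -100 100])
xlabel('x'); ylabel('y'); grid on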
hyperbola, tdoa, nonlinear
Unrecognized function or variable ‘doPlot’.
if doPlot == 1
plot(density)
title("Sample Densities")
xticklabels(element)
ylabel("Density (g/cm^3)")
end
It shows an error while submitting.
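The error means doPlot does not exist in the workspace when the if-statement runs. A minimal sketch, assuming the flag simply needs to be defined earlier in the script (the value 1 here is only an example):
doPlot = 1;   % define the flag before testing it; set to 0 to skip plotting
if doPlot == 1
    plot(density)
    title("Sample Densities")
    xticklabels(element)
    ylabel("Density (g/cm^3)")
end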
Numerical methods in Simulink
I have an assignment to solve a differential equation analytically by Euler's method (I did it with MATLAB) together with its plot, and then to build a block diagram in Simulink to see the plot again on a Scope. I am completely new to Simulink and the professor did not explain it, so I wanted to see if anyone can help me.
The MATLAB code is in the file "Euler_analitico01.mlx"
It gives me this plot:
and the Simulink Scope shows:
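A minimal sketch of a forward-Euler loop in MATLAB for a generic first-order ODE dy/dt = f(t,y); the equation, step size, and initial condition below are placeholders, not the ones from Euler_analitico01.mlx:
f = @(t,y) -2*y + 1;       % example right-hand side dy/dt = f(t,y) (assumed)
h = 0.01;                  % step size
t = 0:h:5;
y = zeros(size(t));
y(1) = 0;                  % initial condition (assumed)
for k = 1:numel(t)-1
    y(k+1) = y(k) + h*f(t(k), y(k));   % forward Euler update
end
plot(t, y), grid on
xlabel('t'), ylabel('y(t)')
In Simulink, the same response can be reproduced with a feedback loop built from a Sum, a Gain, and an Integrator (or a Unit Delay for the discrete Euler form) feeding a Scope.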
matlab, simulink, euler, metodos numericos
error loading shared libraries: libicuuc.so.69
I get this message when trying to run MATLAB after an install without any "errors":
/data/MATLAB/R2022a/bin/glnxa64/MATLAB: error while loading shared libraries: libicuuc.so.69: cannot open shared object file: No such file or directory
I tried this with R2022a and R2022b.
I do have R2021a and R2023a running…
shared libraries, install, ubuntu
Maximizing Spectral efficiency instead of maximizing SINR in RI selection in 5G NR toolbox
Hi all,
I’ve noticed that the new version of the 5G Toolbox includes two different algorithms for calculating Rank Indication (RI): ‘MaxSINR’ and ‘MaxSE’. The ‘MaxSINR’ algorithm selects the RI based on maximizing the SINR, while ‘MaxSE’ selects it based on maximizing spectral efficiency.
I was under the impression that the standard approach was to select the rank that maximizes SINR. Could anyone clarify the rationale behind including both algorithms and when one might be preferred over the other?
Thanks a lot
5g, ri, sinr, spectral efficiency
Why does the value of “PRBSetType — PRB allocation type” change to (VRB) through code even if I set it as (PRB) in the configuration?
Why does the value of "PRBSetType — PRB allocation type" change to (VRB) through code even if I set it as (PRB) in the configuration?
Specifically, the "getPXSCHobject" function does not read the input value of "PRBSetType — PRB allocation type" and always uses the default value (VRB).
As a result, the code always goes through only one of the two possible cases (PRBSetType = VRB), so we cannot get results when we want to set PRBSetType to (PRB).
The two possible cases are inserted below:
Case 1: PRBsetType = PRB
Case 2: PRBsetType = VRB
5g, 5g toolbox, vrbinterleaving
Average code length and entropy
Hello,
I have a uint16 vector and I got the Huffman code from the built-in functions in MATLAB. The thing is that the entropy of this file is different from the average code length that I get from the Huffman code. Isn’t the average code length supposed to be equal to the entropy of the file?
Thanks.
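In general the Huffman average code length only approaches the entropy from above: H ≤ average length < H + 1 bit/symbol, with equality only when every symbol probability is a negative power of two. A minimal sketch comparing the two, assuming the Communications Toolbox huffmandict function and an example probability vector p:
symbols = 0:3;
p = [0.5 0.25 0.15 0.1];                 % example probabilities (assumed)
dict = huffmandict(symbols, p);          % N-by-2 cell array: {symbol, codeword}
codeLen = cellfun(@numel, dict(:,2));    % length of each codeword in bits
avgLen  = sum(p(:) .* codeLen(:));       % expected bits per symbol
H       = -sum(p .* log2(p));            % entropy in bits per symbol
fprintf('Entropy = %.4f, average code length = %.4f\n', H, avgLen)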
huffman, average code length, entropy
Function to capitalize first letter in each word in string but forces all other letters to be lowercase
Does anyone know how to create a function which accepts a string and capitalizes the first letter of each word, while forcing all the other letters to be lowercase?
Any advice would be greatly appreciated!!
This is my attempt so far:
str=['this is a TEST'];
for i=1:length(str);
if str(1,i(1));
str= upper(str);
else str(1,i);
str= lower(str);
end
end
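A minimal sketch using regexprep with a dynamic replacement expression to upper-case the first letter of each word after lower-casing everything else:
str = 'this is a TEST';
out = regexprep(lower(str), '(^|\s)([a-z])', '$1${upper($2)}');
% out is 'This Is A Test'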
matlab, function, uppercase, live script
error in task 6 of robotic vacuum stateflow
It says that the transition is not correct, but it is the same as the solution.
stateflow, task 6, robotic vacuum
Extract only diagonal elements from matrix
I have a matrix in one variable and a list of coordinates in another variable. Is there a way to extract only the matching pairs of coordinates from the matrix, i.e. X(1),Y(1); X(2),Y(2); …?
I can extract all of the permutations (X1,Y1; X1,Y2; … X2,Y1; … etc.) and then take the diagonal, but I was wondering if there is a simpler solution I’m missing that extracts only the matched pairs.
Thanks,
Will
%Data
mat = rand(100);
%Coordinates
x_coord = round(rand(10,1)*100);
y_coord = round(rand(10,1)*100);
%Extract coordinates
extracted_coord = diag(mat(x_coord,y_coord));
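A minimal sketch that picks only the matched (x,y) pairs directly with sub2ind, avoiding the intermediate permutation matrix; randi is used here instead of round(rand*100) so the indices can never be 0:
%Data
mat = rand(100);
%Coordinates
x_coord = randi(100, 10, 1);
y_coord = randi(100, 10, 1);
%Extract only the matched pairs
extracted_coord = mat(sub2ind(size(mat), x_coord, y_coord));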
matrix manipulation, indexing
How to set the initial output signal of a Relay block in Simulink
Hello all, Paul @Stephen23
I have a Simulink model with a Relay block.
This Relay block switches on when the value reaches 100 (output signal 1) and switches off when it reaches 20 (output signal 0).
My problem is that the relay's initial output is always off; this means that when the initial value is, for example, 44, the output is 0.
But in some cases I want my initial output to be on (1).
How can I change the initial output signal?
Best regards, Ahmad
relay block
boxchart – different box width according to number of data points
Hi,
I am looking for a way to set the width of each box in a boxchart according to the number of data points in that box.
See attached an example of how it looks in R. When I try this in MATLAB, I get an error, because MATLAB accepts only scalars and not vectors. I would prefer to do all my statistics with MATLAB, so this function would be very helpful.
Thank you
Markus
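BoxWidth is indeed a scalar per BoxChart object, so one workaround is to draw each group with its own boxchart call and scale that object's width by the group size. A minimal sketch, assuming y is the data vector and g a categorical grouping variable:
groups = categories(g);
counts = groupcounts(g);                 % number of points per group
hold on
for k = 1:numel(groups)
    idx = (g == groups{k});
    b = boxchart(k*ones(nnz(idx),1), y(idx));
    b.BoxWidth = 0.8 * counts(k) / max(counts);   % width proportional to group size
end
hold off
xticks(1:numel(groups)); xticklabels(groups)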
boxchart, width of box
Why doesn’t the figure show the text and fitting line?
The text in the left corner and the fitting line are missing from my figures; please help me correct my code:
% Define heights, FOVs, and SNR values to test
heights = [1000, 2000, 3000, 4000];
fovs = [0.2, 0.5, 1, 2, 5, 10];
snr_values = [0, 25, 50, 75, 100];
% Function to calculate performance metrics
calculate_r_squared = @(x, y) 1 - sum((y - x).^2) / sum((y - mean(y)).^2);
calculate_rmse = @(x, y) sqrt(mean((y - x).^2));
calculate_mape = @(x, y) mean(abs((y - x) ./ y)) * 100;
calculate_mae = @(x, y) mean(abs(y - x));
calculate_made = @(x, y) mean(abs(y - mean(x)));
% Initialize arrays to store performance metrics
performance_metrics = struct();
% Loop over height values
for h = heights
% Filter the data for the current height
idx = (lookup_table(:, 1) == h);
data_filtered = lookup_table(idx, :);
% Initialize arrays to store performance metrics for each FOV and SNR value
performance_metrics(h).r_squared_r = zeros(length(fovs), length(snr_values));
performance_metrics(h).rmse_r = zeros(length(fovs), length(snr_values));
performance_metrics(h).mape_r = zeros(length(fovs), length(snr_values));
performance_metrics(h).mae_r = zeros(length(fovs), length(snr_values));
performance_metrics(h).made_r = zeros(length(fovs), length(snr_values));
performance_metrics(h).r_squared_a = zeros(length(fovs), length(snr_values));
performance_metrics(h).rmse_a = zeros(length(fovs), length(snr_values));
performance_metrics(h).mape_a = zeros(length(fovs), length(snr_values));
performance_metrics(h).mae_a = zeros(length(fovs), length(snr_values));
performance_metrics(h).made_a = zeros(length(fovs), length(snr_values));
% Plot optimal_r_input vs. optimal_r_interp
figure;
hold on;
colors = jet(length(fovs) * length(snr_values));
c_idx = 1;
for fov_idx = 1:length(fovs)
for snr_idx = 1:length(snr_values)
fov = fovs(fov_idx);
snr = snr_values(snr_idx);
% Filter data for the current FOV and SNR
idx_fov_snr = (data_filtered(:, 2) == fov) & (data_filtered(:, 3) == snr);
optimal_r_input = data_filtered(idx_fov_snr, 4);
optimal_r_interp = data_filtered(idx_fov_snr, 5);
% Scatter plot
if ~isempty(optimal_r_input)
scatter(optimal_r_input, optimal_r_interp, 50, colors(c_idx, :), 'filled');
% Fit and plot linear regression line if there is sufficient data
if length(optimal_r_input) > 1
model_r = fitlm(optimal_r_input, optimal_r_interp);
plot(model_r.Variables.x1, model_r.Fitted, 'Color', colors(c_idx, :), 'LineWidth', 2);
% Calculate additional performance metrics
r_squared_r = model_r.Rsquared.Ordinary;
rmse_r = calculate_rmse(optimal_r_input, optimal_r_interp);
mape_r = calculate_mape(optimal_r_input, optimal_r_interp);
mae_r = calculate_mae(optimal_r_input, optimal_r_interp);
made_r = calculate_made(optimal_r_input, optimal_r_interp);
% Store the performance metrics for this FOV and SNR value
performance_metrics(h).r_squared_r(fov_idx, snr_idx) = r_squared_r;
performance_metrics(h).rmse_r(fov_idx, snr_idx) = rmse_r;
performance_metrics(h).mape_r(fov_idx, snr_idx) = mape_r;
performance_metrics(h).mae_r(fov_idx, snr_idx) = mae_r;
performance_metrics(h).made_r(fov_idx, snr_idx) = made_r;
% Display text with performance metrics
text(mean(optimal_r_input), mean(optimal_r_interp), ...
{['SNR = ', num2str(snr), ' dB'], ...
['R^2 = ', num2str(r_squared_r)], ...
['RMSE = ', num2str(rmse_r)], ...
['MAPE = ', num2str(mape_r), '%'], ...
['MAE = ', num2str(mae_r)], ...
['MADE = ', num2str(made_r)]}, ...
'FontSize', 10, 'Color', colors(c_idx, :));
end
end
c_idx = c_idx + 1;
end
end
xlabel('Optimal R_{e} (\mum)');
ylabel('Optimal R_{e} interp (\mum)');
title(['Plot of optimal R_{e} and optimal R_{e} interp for Height = ', num2str(h)]);
grid on;
hold off;
% Plot optimal_a_input vs. optimal_a_interp
figure;
hold on;
c_idx = 1;
for fov_idx = 1:length(fovs)
for snr_idx = 1:length(snr_values)
fov = fovs(fov_idx);
snr = snr_values(snr_idx);
% Filter data for the current FOV and SNR
idx_fov_snr = (data_filtered(:, 2) == fov) & (data_filtered(:, 3) == snr);
optimal_a_input = data_filtered(idx_fov_snr, 6);
optimal_a_interp = data_filtered(idx_fov_snr, 7);
% Scatter plot
if ~isempty(optimal_a_input)
scatter(optimal_a_input, optimal_a_interp, 50, colors(c_idx, :), 'filled');
% Fit and plot linear regression line if there is sufficient data
if length(optimal_a_input) > 1
model_a = fitlm(optimal_a_input, optimal_a_interp);
plot(model_a.Variables.x1, model_a.Fitted, 'Color', colors(c_idx, :), 'LineWidth', 2);
% Calculate additional performance metrics
r_squared_a = model_a.Rsquared.Ordinary;
rmse_a = calculate_rmse(optimal_a_input, optimal_a_interp);
mape_a = calculate_mape(optimal_a_input, optimal_a_interp);
mae_a = calculate_mae(optimal_a_input, optimal_a_interp);
made_a = calculate_made(optimal_a_input, optimal_a_interp);
% Store the performance metrics for this FOV and SNR value
performance_metrics(h).r_squared_a(fov_idx, snr_idx) = r_squared_a;
performance_metrics(h).rmse_a(fov_idx, snr_idx) = rmse_a;
performance_metrics(h).mape_a(fov_idx, snr_idx) = mape_a;
performance_metrics(h).mae_a(fov_idx, snr_idx) = mae_a;
performance_metrics(h).made_a(fov_idx, snr_idx) = made_a;
% Display text with performance metrics
text(mean(optimal_a_input), mean(optimal_a_interp), ...
{['SNR = ', num2str(snr), ' dB'], ...
['R^2 = ', num2str(r_squared_a)], ...
['RMSE = ', num2str(rmse_a)], ...
['MAPE = ', num2str(mape_a), '%'], ...
['MAE = ', num2str(mae_a)], ...
['MADE = ', num2str(made_a)]}, ...
'FontSize', 10, 'Color', colors(c_idx, :));
end
end
c_idx = c_idx + 1;
end
end
xlabel('Optimal \alpha_{e} (m^{-1})');
ylabel('Optimal \alpha_{e} interp (m^{-1})');
title(['Plot of optimal \alpha_{e} vs optimal \alpha_{e} interp for Height = ', num2str(h)]);
grid on;
hold off;
endText in the left corner and fitting line is missing from my fiures, please make correction to my code:
figure, text MATLAB Answers — New Questions
Implementing Transfer Function In Simulink.
Hi, I have a transfer function and have been struggling to implement it properly in Simulink. Its coefficients all change with respect to time, and I want to be able to get the output. Furthermore, some of the coefficients depend on another signal, which I pass into a MATLAB Function block that calculates them.
I tried using the Varying Transfer Fcn block, but I’d like to implement it using signals and blocks.
Thanks in advance, I’d really appreciate the help!
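Since the symbols in the question did not survive formatting, the following is only a generic, hypothetical sketch: any first-order transfer function b(t)/(s + a(t)) with time-varying coefficients can be rewritten as the ODE y'(t) = -a(t)*y(t) + b(t)*u(t), which is exactly the structure that can be built from an Integrator, two Product blocks, and a Sum block (with a MATLAB Function block computing a and b). The snippet below, using assumed coefficient profiles, simply checks that signal-level realisation numerically:
% Hypothetical sketch (not the original model): b(t)/(s + a(t)) as an ODE
a = @(t) 1 + 0.5*sin(t);           % assumed time-varying denominator coefficient
b = @(t) 2 + cos(t);               % assumed time-varying numerator coefficient
u = @(t) double(t >= 0);           % unit-step test input
odefun = @(t, y) -a(t).*y + b(t).*u(t);
[t, y] = ode45(odefun, [0 10], 0); % quick numerical check of the block diagram
plot(t, y); xlabel('t (s)'); ylabel('y(t)');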
transfer function, simulink MATLAB Answers — New Questions
Signal matching between a random-modulation transmitter and a Humminbird sonar
Hello, I would like to ask how to match the random signals of a transmitter to the signals of a Humminbird imaging sonar, so that the sonar is able to obtain imaging information from the transmitter’s random signals. Is there a way of determining all the modulations of the sonar and the transmitter by scanning them? I will be using an SDR to send the random signals from the transmitter to the sonar. Thanks very much. signals, match, transmitter, sonars MATLAB Answers — New Questions