Tag Archives: matlab
How to create dynamic options in system object block mask parameters
I want to make the dropdown content of one System object parameter depend on the value of another parameter. In other words, Timer 1 may support options A, B, and C, while Timer 2 would only support options A and B. I can do this in a standard subsystem block mask by modifying the option parameter's dropdown content in the callback for the timer parameter. MATLAB System objects only seem to support defining dropdown content for their parameters statically. Is this possible?
matlab system objects MATLAB Answers — New Questions
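For context, a minimal sketch of the static mechanism the question refers to (the class and property names are hypothetical): System object dropdowns are normally declared with matlab.system.StringSet constants, whose contents are fixed when the class is parsed:

```matlab
classdef TimerBlock < matlab.System
    % Hypothetical System object illustrating static dropdown definitions
    properties (Nontunable)
        Timer = 'Timer 1'   % dropdown driven by TimerSet below
        Option = 'A'        % dropdown driven by OptionSet below
    end
    properties (Hidden, Constant)
        % StringSet contents are fixed at class-definition time, which is
        % why the Option list cannot react to the current value of Timer.
        TimerSet  = matlab.system.StringSet({'Timer 1', 'Timer 2'});
        OptionSet = matlab.system.StringSet({'A', 'B', 'C'});
    end
end
```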
Impact of Gripper’s Roll Angle on Reachable Poses for UR5e Robot
When I change the roll angle of the gripper, as demonstrated in my example code, the number of reachable poses varies for each roll angle. I've tested this with the same number of reference bodies (bodyName). The results were 411, 540, 513, and 547 reachable poses for different roll angles. I understand that this variation arises because each roll angle results in a different final configuration for the robot, affecting the GIK (Generalized Inverse Kinematics) solution. However, for a UR5e robot, this variation should not occur on the real hardware, right? In practical use, can the UR5e achieve all 547 reachable poses (assuming that is the maximum it is capable of reaching in this case) for each roll angle?
for orientationIdx = 1:size(orientationsToTest,1)
    for rollIdx = 1:numRollAngles
        orientationsToTest(:,3) = rollAngles(rollIdx); % overwrites the roll column for every row
        currentOrientation = orientationsToTest(orientationIdx,:);
        targetPose = constraintPoseTarget(gripper);
        targetPose.ReferenceBody = bodyName; % reference body
        targetPose.TargetTransform = trvec2tform([0 0 0]) * eul2tform(currentOrientation,"XYZ");
        [qWaypoints(2,:),solutionInfo] = gik_Pick(q0,targetPose);
    end
end
matlab
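As a bookkeeping aside, a hedged sketch of how reachable poses are typically counted with a GIK solver (this assumes gik_Pick is a generalizedInverseKinematics solver object, so solutionInfo carries a Status field):

```matlab
numReachable = 0;
for orientationIdx = 1:size(orientationsToTest,1)
    currentOrientation = orientationsToTest(orientationIdx,:);
    targetPose.TargetTransform = eul2tform(currentOrientation,"XYZ");
    [q, solutionInfo] = gik_Pick(q0, targetPose);
    % The GIK solver reports "success" only when all constraints are met;
    % otherwise Status is "best available", i.e. the pose was not reached.
    if strcmp(solutionInfo.Status, 'success')
        numReachable = numReachable + 1;
    end
end
```

Counting "best available" solutions as reachable would inflate the totals, so the Status check matters when comparing counts across roll angles.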
Conv2d, fully connected layers, and regression – number of predictions and number of channels mismatch
Hello! I’m trying to get a CNN up and running and I think I’m almost there, but I’m still running into a few errors. What I would like is to have a series of 1D convolutions with a featureInputLayer, but those throw the following error:
Caused by:
Layer ‘conv1d1’: Input data must have one spatial dimension only, one temporal dimension only, or one of each. Instead, it
has 0 spatial dimensions and 0 temporal dimensions.
According to https://www.mathworks.com/matlabcentral/answers/1747170-error-on-convolutional-layer-s-input-data-has-0-spatial-dimensions-and-0-temporal-dimensions the workaround is to reformat the CNN to a conv2d using N x 1 "images." So, I’ve tried that and now I have a new and interesting problem:
Error using trainnet (line 46)
Number of channels in predictions (3) must match the number of channels in the targets (1).
Error in convNet_1_edits (line 97)
[trainedNet, trainInfo] = trainnet(masterTrain, net, 'mse', options);
This problem has been approached several times before (https://www.mathworks.com/matlabcentral/answers/2123216-error-in-deep-learning-classification-code/?s_tid=ans_lp_feed_leaf and others) but none of them that I’ve found have used fully connected layers. For reference, my CNN is the following:
layers = [
    imageInputLayer(nFeatures, "Name", "input")
    convolution2dLayer(f1Size, numFilters1, "Padding", "same", ...
        "Name", "conv1")
    batchNormalizationLayer()
    reluLayer()
    convolution2dLayer(f1Size, numFilters2, "Padding", "same", ...
        "NumChannels", numFilters1, "Name", "conv2")
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer([1, 3])
    convolution2dLayer(f1Size, numFilters3, "Padding", "same", ...
        "NumChannels", numFilters2, "Name", "conv3")
    batchNormalizationLayer()
    reluLayer()
    maxPooling2dLayer([1, 5])
    fullyConnectedLayer(60, "Name", "fc1")
    reluLayer()
    fullyConnectedLayer(30, "Name", "fc2")
    reluLayer()
    fullyConnectedLayer(15, "Name", "fc3")
    reluLayer()
    fullyConnectedLayer(3, "Name", "fc4")
    % regressionLayer()
];
net = dlnetwork;
net = addLayers(net, layers);
And I am using trainnet and a datastore. The output of read(ds) is the following:
read(masterTest)
ans =
1×4 cell array
{1×1341 double} {[0.6500]} {[6.8000e-07]} {[0.0250]}
Where I have a 1 x 1341 set of features being used to predict three outputs. I thought the three neurons in my final fully connected layer would be the regression outputs, but there seems to be a mismatch between the number of predictions and the number of targets. How can I align the number of predictions and targets when using regression in FC layers?
convolution, cnn, regression, trainnet
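One way this kind of mismatch is often resolved (a hedged sketch; masterTrain is the questioner's datastore, and the 1-by-4 cell layout is assumed from the read output above): transform the datastore so the three scalar targets come back as a single 1-by-3 response, matching the three channels that fc4 predicts, instead of three separate one-channel targets:

```matlab
% Merge the three scalar target columns into one 1-by-3 response vector,
% so trainnet sees one target with three channels rather than three targets.
mergeTargets = @(data) {data{1}, [data{2}, data{3}, data{4}]};
masterTrainMerged = transform(masterTrain, mergeTargets);
```

transform wraps the datastore without copying the data; the same wrapper would be applied to the test datastore before prediction.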
What happened to the figure toolbar? Why is it an axes toolbar? How can I put the buttons back?
From R2018b onwards, tools such as zoom, pan, datatips, etc. are no longer in the toolbar at the top of the figure window. These buttons are now in an "axes" toolbar and only appear when you hover your mouse over the plot. How do I put the buttons back at the top of the figure window?
figure, toolbar, axes, missing
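For reference, MATLAB ships a function specifically for restoring the exploration buttons to the figure toolbar, which directly addresses this situation:

```matlab
fig = figure;   % new figure using the R2018b+ axes-toolbar behavior
plot(1:10);
% Put the zoom/pan/datatip buttons back at the top of the figure window
addToolbarExplorationButtons(fig)
```

This has to be called per figure; it can be wrapped in a helper or a startup default if the old behavior is wanted everywhere.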
How to extract numbers from image with reflections and artifacts?
Hello.
I have a series of photos of a seven-digit display (below please find an example of such a photo). I want to apply OCR to extract the information from each consecutive frame. Generally, the method works quite well provided that the image is distinct. However, at the preprocessing stage there is a need to binarize the image. The problem lies in the fact that there are some reflections in the image. I spent a significant amount of time and tried a lot of combinations of various functions (e.g. adaptive thresholding, histograms) to obtain the best possible performance. Is there any reasonable method for obtaining a nice set of digits without artifacts? Unfortunately, there is no way to repeat the experiments in better conditions and remove the reflections at the acquisition stage.
Thank you kindly in advance for any useful suggestions.
binarize, thresholding, image processing
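As a hedged starting point (the function choices and the filename are suggestions, not the questioner's pipeline): flattening the illumination before a locally adaptive threshold, then removing small components, often copes better with reflections than a single global threshold:

```matlab
I = im2gray(imread("display.png"));     % "display.png" is a placeholder filename
% Estimate and subtract the uneven background/reflection field
background = imopen(I, strel("disk", 25));
Iflat = imsubtract(I, background);
% Locally adaptive threshold is more robust to residual reflections
T  = adaptthresh(Iflat, 0.5, "ForegroundPolarity", "bright");
BW = imbinarize(Iflat, T);
BW = bwareaopen(BW, 30);                % drop small speckle artifacts
% Restricting the character set helps OCR on seven-segment-style digits
results = ocr(BW, "CharacterSet", "0123456789");
```

The structuring-element radius and the area threshold depend on digit size in pixels, so both would need tuning on the actual frames.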
Type Simulink.metamodel.foundation.ValueType issue with Adaptive Autosar System Composer architecture.
I'm using Simulink R2022b System Composer for an AUTOSAR architecture.
When trying a model update I get this error:
Simulink.metamodel.arplatform.common.ModeDeclarationGroup of value [noname](__). Please report this to MathWorks.
I saw you answered "I would like to let you know that this is a known bug and I apologize for this experience. The fix for the same is released in R2022b Update 6, R2023a Update 3 as well as R2023b. Please update the MATLAB to any of the above version to resolve the issue.".
My SW version is Update 9, but I still have the problem.
What could be the source of it, and how could I possibly fix it? I can't find anything on Simulink.metamodel.
adaptive autosar
Why are fixed values not working in the randomstart function of the trainDDPGrobot program?
I am new to reinforcement learning and have run the programs given in the online Onramp course on reinforcement learning. In the randomstart function I made only a single change, as given in the code below, but the program produces the attached error. I have seen the documentation, where all examples are given with random numbers. But I want all input variables, i.e. x0, y0, theta0, v0, and w0, to be fixed and picked from already-stored vectors. When I tried to fix only the single variable x0, the program generated an error. How can I fix it?
function in = randomstart(in)
mdl = "whrobot";
a = 0.5;
% in = setVariable(in,"x0",((-1)^randi([0 1]))*(2.5 + 3.5*rand),"Workspace",mdl);
in = setVariable(in,"x0",a,"Workspace",mdl);
in = setVariable(in,"y0",2.6 + 3.4*rand,"Workspace",mdl);
in = setVariable(in,"theta0",pi*(2*rand-1),"Workspace",mdl);
in = setVariable(in,"v0",randn/3,"Workspace",mdl);
in = setVariable(in,"w0",randn/3,"Workspace",mdl);
disp(x0) % note: x0 is not a variable in this function's workspace, so this line errors
end
reinforcement learning, input, env
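A hedged sketch of one way to draw fixed initial values from pre-stored vectors (the value vectors and the persistent episode counter are hypothetical; setVariable on the SimulationInput is the same documented interface the original function uses):

```matlab
function in = fixedstart(in)
% Pick deterministic initial conditions from pre-stored vectors,
% advancing through them one episode at a time.
persistent k
if isempty(k), k = 0; end

x0list     = [0.5 -1.0  2.0];    % hypothetical stored values
y0list     = [3.0  4.0  5.0];
theta0list = [0    pi/4 -pi/2];

k = mod(k, numel(x0list)) + 1;   % cycle through the stored entries

mdl = "whrobot";
in = setVariable(in, "x0",     x0list(k),     "Workspace", mdl);
in = setVariable(in, "y0",     y0list(k),     "Workspace", mdl);
in = setVariable(in, "theta0", theta0list(k), "Workspace", mdl);
in = setVariable(in, "v0",     0,             "Workspace", mdl);
in = setVariable(in, "w0",     0,             "Workspace", mdl);
end
```

To display a value, print the local variable that was just set (e.g. disp(x0list(k))); x0 itself only exists inside the model workspace, not inside this function.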
How to create a bucket in a bucket using MATLAB R2013b
I want to create a bucket in a bucket using MATLAB R2013b.
bucketing
How to define a path of vehicle
I would like to find an approach to define the path of a vehicle in path-length coordinates.
As input I have an array of Cartesian coordinates (X, Y). I need to convert it to some function (object) that allows me to get the curvature of the path, and X and Y, as functions of path length.
Also, there is an issue: I have a closed path like the following:
<</matlabcentral/answers/uploaded_files/54861/closed_lap.png>>
What can I use for such a task?
spline, path
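A hedged sketch of one common approach (variable names assumed): parameterize by cumulative arc length, then differentiate X(s) and Y(s) numerically to obtain curvature, and interpolate to evaluate the path at any length:

```matlab
% X, Y: column vectors of Cartesian waypoints along the lap
s = [0; cumsum(hypot(diff(X), diff(Y)))];   % cumulative path length

% Derivatives with respect to path length (non-uniform spacing supported)
dx  = gradient(X, s);   dy  = gradient(Y, s);
ddx = gradient(dx, s);  ddy = gradient(dy, s);

% Signed curvature: kappa = (x'*y'' - y'*x'') / (x'^2 + y'^2)^(3/2)
kappa = (dx.*ddy - dy.*ddx) ./ (dx.^2 + dy.^2).^1.5;

% X, Y at arbitrary path lengths via spline interpolation
sq = linspace(0, s(end), 500);
Xq = interp1(s, X, sq, 'spline');
Yq = interp1(s, Y, sq, 'spline');
```

For the closed lap, duplicating the first point at the end (and, if the Curve Fitting Toolbox is available, using csape with 'periodic' end conditions) avoids a curvature discontinuity at the start/finish line.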
Is it possible to access the blocks inside the model under test from the test harness in a custom criteria script of Simulink Test?
I would like to verify the data type of the inports and outports of a model in Simulink Test. So I created a test harness for it, and when I try to access the inports inside the model under test in the test harness from the custom criteria script of Simulink Test, it doesn't work.
function ioAnalysisFunc(test)
res = get_param(strcat(test.sltest_bdroot, '/Model1/Inport1'), 'OutDataTypeStr'); % test.sltest_bdroot is the test harness 'Model1_Harness'
assignin('base', 'ress_out', res{1});
end
The error thrown is,
——————————————————————–
Error occurred in custom criteria and custom criteria assessment did not run to completion.
——— Error ID: ———
Simulink:Commands:InvSimulinkObjectName
————– Error Details: ————–
Invalid Simulink object name: ‘Model1_Harness/Model1/Inport1’.
——————————————————————–
simulink test, custom criteria script, get_param
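A hedged sketch for locating the Inport blocks from the harness side (FollowLinks and LookUnderMasks are standard find_system options; whether a given path resolves depends on how the model is placed inside the harness):

```matlab
% List every Inport reachable from the harness, following library links
% and looking under masks, then read each block's output data type.
inports = find_system(test.sltest_bdroot, ...
    'FollowLinks', 'on', 'LookUnderMasks', 'all', 'BlockType', 'Inport');
types = get_param(inports, 'OutDataTypeStr');
```

Note that if Model1 is a referenced model (a Model block), its contents live in a separate model file, so the blocks would have to be queried on that model after load_system('Model1') rather than through the harness path.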
Using the Galerkin method to solve a linear two-point BVP
I am a newcomer to MATLAB. I want to use the Galerkin method, with hat functions as the set of basis functions, to compute the solution to a linear two-point BVP.
The hat knots are evenly distributed, with intervals h = 1/20 and 1/40.
Compare the results to those of the exact solution to evaluate the order of accuracy, using the absolute errors for the two knot intervals.
Hope someone can teach or guide me how to do it.
galerkin, hat function
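Since the BVP and exact solution did not survive in the post, here is a generic, hedged sketch for the model problem -u'' = f on (0,1) with u(0) = u(1) = 0, showing the standard hat-function Galerkin assembly; the actual equation's weak form would be swapped in as needed:

```matlab
f = @(x) pi^2 * sin(pi*x);      % example right-hand side, exact u = sin(pi*x)
h = 1/20;                        % knot spacing; repeat with h = 1/40
x = (0:h:1)';                    % evenly distributed knots
n = numel(x) - 2;                % number of interior hat functions

% Stiffness matrix for -u'' with hat basis: tridiagonal(-1, 2, -1)/h
A = gallery('tridiag', n, -1, 2, -1) / h;

% Lumped load vector: b_i approximates the integral of f * phi_i
b = h * f(x(2:end-1));

u = A \ b;                       % Galerkin coefficients = nodal values for hats

uexact = sin(pi * x(2:end-1));
err = max(abs(u - uexact));      % halving h should divide err by ~4 (2nd order)
```

Comparing the maximum absolute error at h = 1/20 and h = 1/40 then gives the observed order of accuracy directly.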
How to write a program combining name and age?
In my HB I got an assignment on writing a program that asks for the name and age of a person, then prints the sentence "Dear __, your age is __", with the blank spaces containing the input.
I thought about using a switch, with the cases being the name inputs. Is that something I can do, or is there an easier way?
homework, help
Combining two matrices into one
Hi all,
I have a question regarding matrix manipulation in MATLAB.
My scenario is as follows:
I have a set of GPS coordinates, which I have converted into relative meters using an algorithm. These coordinates correspond to locations on a farm which I have gathered data for using a video camera, and roughly correspond to a standard "go up one row, go down the next" path.
For each coordinate, I also have an image at that coordinate, which I am using to identify the location of weeds.
From the GPS coordinates, which are simple X-Y points, I can generate a matrix. The matrix would likely be a roughly 2000×2000 size matrix, where a cell would have a value of 1 if there was a GPS point identifying that the tractor had been on that spot.
The images are 800x600x3 color images.
What I want to be able to do is take the images and, using the matrix I made from the GPS coordinates, somehow combine all the images into one large image.
If the images were distinct, then this would not be as big of an issue, as I could generate an 800x600x3 matrix at each cell of the 2000×2000 matrix. This would be a rather large matrix, however, it would accomplish the task.
However, the GPS coordinates are such that an image might overlap to a certain extent with the images adjacent to it.
Can anyone suggest any ways I can accomplish what I am trying to do? The end result simply needs to be a large image I can look at, which will show me the entirety of my farm land, using the images which I have taken.Hi all,
I have a question regarding matrix manipulation in MATLAB.
My scenario is as follows:
I have a set of GPS coordinates, which I have converted into relative meters using an algorithm. These coordinates correspond to locations on a farm which I have gathered data for using a video camera, and roughly correspond to a standard "go up one row, go down the next" path.
For each coordinate, I also have an image at that coordinate, which I am using to identify the location of weeds.
From the GPS coordinates, which are simple X-Y points, I can generate a matrix. The matrix would likely be a roughly 2000×2000 size matrix, where a cell would have a value of 1 if there was a GPS point identifying that the tractor had been on that spot.
The images are 800x600x3 color images.
What I want to be able to do, is take the images, and, using the matrix I made from the GPS coordinates, somehow combine all images together into one large image.
If the images were distinct, then this would not be as big of an issue, as I could generate an 800x600x3 matrix at each cell of the 2000×2000 matrix. This would be a rather large matrix, however, it would accomplish the task.
However, the GPS coordinates are such that an image might overlap to a certain extent with the images adjacent to it.
Can anyone suggest any ways I can accomplish what I am trying to do? The end result simply needs to be a large image I can look at, which will show me the entirety of my farm land, using the images which I have taken. Hi all,
matrix manipulation, image processing MATLAB Answers — New Questions
How to overwrite my data everytime I run my code?
I’m currently writing a program, and every time I run it, it just adds the data to the Excel file instead of deleting the existing contents and replacing them with the new data. I’m using the writetable function, which I thought would automatically clear the file before filling it with new data. Any ideas? importing excel data MATLAB Answers — New Questions
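A minimal sketch of two workarounds, using an assumed file name: delete the stale file before writing, or ask writetable to replace the file itself in releases that support the WriteMode option:

```matlab
% Assumed table and file name for illustration.
T = table((1:3)', rand(3,1), 'VariableNames', {'Run', 'Value'});
filename = 'results.xlsx';

% Option 1: remove any stale file so old rows cannot survive the write.
if isfile(filename)
    delete(filename);
end
writetable(T, filename);

% Option 2 (newer releases): let writetable discard the old file itself.
% writetable(T, filename, 'WriteMode', 'replacefile')
```

Note that writetable only overwrites the cell range it actually writes, so a table that shrinks between runs leaves leftover rows behind; that is usually what makes the file look like it is appending.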
May I ask how to use MATLAB code to build an ECA module?
How can I build an ECA module in MATLAB? The ECA module is described in this paper: ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks.
Paper address: https://arxiv.org/abs/1910.03151 eca-net, attention mechanism MATLAB Answers — New Questions
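As a starting point, here is a minimal numeric sketch of the ECA forward pass with no toolbox dependencies; the function name and kernel size are illustrative, and for training you would wrap this logic in a custom deep learning layer:

```matlab
function Y = ecaBlock(X, w)
% ECA attention sketch. X: H x W x C feature map; w: 1 x k 1-D kernel (k odd).
s = reshape(mean(X, [1 2]), 1, []);   % squeeze: 1 x C channel descriptor
a = conv(s, w, 'same');               % 1-D conv across channels (local cross-channel interaction)
a = 1 ./ (1 + exp(-a));               % sigmoid gate in [0, 1]
Y = X .* reshape(a, 1, 1, []);        % channel-wise re-weighting of X
end
```

For example, Y = ecaBlock(rand(8, 8, 16), [0.25 0.5 0.25]) re-weights 16 channels with a k = 3 kernel, mirroring the paper's adaptive kernel-size idea with a fixed k.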
Convert pulse to digital
Dear Sir/Madam,
If I represent a pulse with time on one axis and frequency on the other, is it possible to convert that pulse into a digital representation whose period changes as the frequency increases? The figure shows a pulse that starts at 10 kHz, rises to 10.1 kHz, then falls back to 10 kHz. The sample time for this discussion has been set to 1 ms.
<</matlabcentral/answers/uploaded_files/117334/pulse.PNG>>
Regards, Joe pulse, frequency, digital, time MATLAB Answers — New Questions
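One way to realise this, sketched with the values from the question (10 kHz baseline, a 1 ms excursion to 10.1 kHz) and an assumed simulation rate, is to integrate the frequency profile into a phase and threshold it, so the digital waveform's period tracks the instantaneous frequency:

```matlab
fs = 1e6;                          % simulation sample rate (assumed)
t  = 0:1/fs:5e-3;                  % 5 ms window
f  = 10e3 * ones(size(t));         % 10 kHz baseline
f(t >= 2e-3 & t < 3e-3) = 10.1e3;  % 1 ms pulse up to 10.1 kHz
phase = 2*pi*cumtrapz(t, f);       % integrate frequency -> instantaneous phase
y = double(sin(phase) >= 0);       % 0/1 square wave whose period follows 1/f
```

During the excursion the waveform period shortens from 100 us to roughly 99 us, which is the "period that changes with increase in frequency" the question asks for.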
Problem with finding the global minimum with fmincon
I am currently trying to find the global minimum for a strain-energy-function ("Holzapfel-Model") and I am running into multiple problems:
The SEF has the form
With
We can calculate , where
We want to determine the minimum of the least-square function
My solution was to put all these equations into one long one:
fun = @(x) sum((sigma_11 - (lambda_1.^2 - lambda_2.^2 .* lambda_1.^2).*x(1) + 2 .*lambda_1.^2 .*cos(x(4))^2 .* (2.*x(2).*((lambda_1.^2 .* cos(x(4))^2 + lambda_2.^2 .* sin(x(4))^2) - 1) .* exp(x(2).*((lambda_1.^2 .* cos(x(4))^2 + lambda_2.^2 .* sin(x(4))^2)-1).^2))).^2 + (sigma_22 - (lambda_2.^2 - lambda_2.^2 .* lambda_1.^2).*x(1) + 2 .*lambda_2.^2 .*sin(x(4))^2 .* (2.*x(2).*((lambda_1.^2 .* cos(x(4))^2 + lambda_2.^2 .* sin(x(4))^2) - 1) .* exp(x(2).*((lambda_1.^2 .* cos(x(4))^2 + lambda_2.^2 .* sin(x(4))^2)-1).^2))).^2)
and then use the following parameters
x0 = [15,500,12,0.75*pi];
A = [];
b = [];
Aeq = [];
beq = [];
lb = [0,0,0,0];
ub = [inf, inf, inf, pi];
chi = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, @(x) nonlcon(x, lambda_1, lambda_2))
function [c,ceq] = nonlcon(x,lambda_1,lambda_2)
c = 1 - (lambda_1.^2 .* cos(x(4)).^2 + lambda_2.^2 .* sin(x(4)).^2);
ceq = [];
end
With these parameters, I can somewhat get close to my data points.
Now my questions:
I don’t think I understood c,ceq correctly. I used c to account for the constraint on I4, but I’m not sure if this was the right way to do it.
With the initial guess x0, I can get close, but the fit never seems to approach my curve closely enough. How do I know whether I have a good starting guess, and is fmincon even the right approach for this problem?
I have multiple data sets for different stretch ratios (lambda_1:lambda_2 of 1:1, 1:0.75, 0.75:1, 1:0.5, 0.5:1), and since they come from the same sample, I would like to use all of them to obtain one set of parameters. I tried putting all my data into a single vector (1:30 would be the first data set, 31:60 the second, …), but this does not seem to work well. Should I find the solution for just one curve and then average over the parameters? As you can see, I am doing this kind of parameter evaluation for the first time, and I would greatly appreciate help.
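For the global-minimum and multi-dataset questions, one common pattern (a sketch assuming the Global Optimization Toolbox, reusing fun, nonlcon, and the bounds from above) is to keep a single objective that sums the squared residuals over all data sets, then run fmincon from many start points and keep the best result:

```matlab
% Build the optimisation problem once; 'fun' should already sum squared
% residuals over every (sigma_11, sigma_22, lambda_1, lambda_2) data set.
problem = createOptimProblem('fmincon', ...
    'objective', fun, ...
    'x0',  [15, 500, 12, 0.75*pi], ...
    'lb',  [0, 0, 0, 0], ...
    'ub',  [Inf, Inf, Inf, pi], ...
    'nonlcon', @(x) nonlcon(x, lambda_1, lambda_2));

% Try many random start points within the bounds and keep the best fit;
% averaging parameters fitted per curve is generally not equivalent.
ms = MultiStart;
[xBest, fBest] = run(ms, problem, 50);
```

If runs from many different starts keep returning the same xBest, that is reasonable evidence (though not proof) that you have found the global minimum.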
fmincon, nonlinear, curve fitting, optimization MATLAB Answers — New Questions
Problem with direct calculation on table with std and “omitnan”
Since R2023a, it is possible to perform calculations directly on tables (and timetables) without extracting their data by indexing.
https://fr.mathworks.com/help/matlab/matlab_prog/direct-calculations-on-tables-and-timetables.html?searchHighlight=table&s_tid=srchtitle_table_9
I want to use std directly on a numeric table that may contain NaN values.
For example :
load patients
T = table(Age,Height,Weight,Systolic,Diastolic)
mean(T,"omitnan")
It’s fine.
But why is there a problem with std(T,"omitnan")?
% Applying the function 'std' to the variable 'Age' generated an error.
I can use std(T{:,:},"omitnan") or std(T.Variables,"omitnan"), but then I lose the ability to work directly with my table.
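A sketch of a workaround that stays in the table world: varfun applies a function to each variable and returns a table, so std's weight/flag argument parsing (a plausible cause of the error, though that is an assumption) never sees the table directly:

```matlab
load patients
T = table(Age, Height, Weight, Systolic, Diastolic);

% Apply std per variable; the result S is itself a table.
S = varfun(@(v) std(v, 'omitnan'), T)
```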
Did I miss something ?
Do you have any suggestion ?
Thank you in advance.
SAINTHILLIER Jean Marie
SAINTHILLIER Jean Marie table, std MATLAB Answers — New Questions
Create and plot an oriented graph of a circuit from a netlist
Hello,
it should be a ridiculously trivial task, but I have to admit I’ve been stuck on it for a few months. Sadly, I’m not very good at Python either, so I’m coming here.
Assume that I have some circuit like the one below:
I want to read and parse a netlist so that I create a digraph object, which can later be used for testing whether subgraphs are spanning trees and similar graph-theoretic properties. Parsing the netlist poses no difficulty, but it looks like the digraph function does not preserve the order of my input cells, and when I plot the graph, it is labeled wrongly.
I have spent weeks on it with no result. Can you see an easy way to turn this into a graph object and plot it accordingly?
The code below obviously produces a wrong plot (for instance the resistors), while the topology seems to be identified correctly; edges/nodes are mislabeled.
clear
close all
clc
netlist = {
'R1 N001 0 R';
'R2 N002 N001 R';
'R3 0 N002 R';
'C1 N002 N001 C';
'C2 N001 0 C';
'C3 N002 0 C';
'L1 N002 N001 L';
'L2 0 N001 L';
'L3 0 N002 L'
};
elements = {};
sourceNodes = {};
targetNodes = {};
labels = {};
for i = 1:length(netlist)
    parts = strsplit(netlist{i});
    elements{end+1} = parts{1};
    sourceNodes{end+1} = parts{2};
    targetNodes{end+1} = parts{3};
    labels{end+1} = [parts{4} ' - ' parts{1}];
end
edgeTable = table(sourceNodes', targetNodes', labels', 'VariableNames', {'EndNodes', 'EndNodes2', 'Label'});
G = digraph(edgeTable.EndNodes, edgeTable.EndNodes2);
G.Edges.Label = edgeTable.Label;
h = plot(G, 'EdgeLabel', G.Edges.Label, 'NodeLabel', G.Nodes.Name, 'Layout', 'force');
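A likely cause, sketched below with a hypothetical two-edge netlist: digraph may reorder edges when it builds the graph, so assigning G.Edges.Label afterwards pairs labels with the wrong edges by position. Attaching the labels through an edge table (with EndNodes as a single two-column variable) keeps each label with its edge:

```matlab
sourceNodes = {'N002', 'N001'};
targetNodes = {'N001', '0'};
labels = {'R - R2'; 'R - R1'};

% EndNodes must be one two-column variable; extra edge-table columns
% (here Label) then travel with their edges through any internal reordering.
edgeTable = table([sourceNodes' targetNodes'], labels, ...
    'VariableNames', {'EndNodes', 'Label'});
G = digraph(edgeTable);
plot(G, 'EdgeLabel', G.Edges.Label, 'NodeLabel', G.Nodes.Name);
```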
digraph, circuit, netlist, spanning tree, graph plotting, spice MATLAB Answers — New Questions
Is it possible to realize self-supervised RL by adding auxiliary loss to the loss of Critic of PPO agent?
I am trying to realize self-supervised (SS) RL in MATLAB by using PPO agent. The SS RL can improve exploration and thereby enhance the convergence. In particular, it can be explained as follows:
At step , in addition to the original head of the Critic that outputs the value via fullyConnectedLayer(1), there is an additional layer, parallel to the original head and connected to the main body of the Critic, which outputs the prediction of the future state, denoted by , via fullyConnectedLayer(N), with N being the dimension of .
Then, such a prediction of future state will be used to calculate the SS loss by comparing it with the real future state, i.e., , where is the real future state.
Later, such an SS loss will be sampled and then added to the original Critic loss , i.e., 5-b in https://ww2.mathworks.cn/help/reinforcement-learning/ug/proximal-policy-optimization-agents.html, as follows
,
which requires additionally adding an auxiliary loss to the original loss of the Critic.
So, is it possible to realize the above SS RL while avoiding significant modification of the RL Toolbox source code? Thank you!
self-supervised rl, auxiliary loss, loss of critic, rlppoagent MATLAB Answers — New Questions