Tag Archives: matlab
How to modify a variable's data with a scale function, like data = scale(data)? I have to modify the scale function for the variable data
Actually, I am writing MATLAB code for the MATLAB Onramp training and can't work out how to solve the above.
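A minimal sketch of what such a function file might look like (the doubling rule used here is only a placeholder, not the transformation the Onramp exercise actually asks for):
function data = scale(data)
% scale  Placeholder example of a function that takes a variable in,
% modifies it, and returns the modified result: data = scale(data);
data = 2*data;   % assumed scaling rule, for illustration only
end
Saving this as scale.m on the path and calling data = scale(data) would then overwrite data with the scaled version.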
create a function, modify a function MATLAB Answers — New Questions
How can I calculate the forces and moments on servo arms and couplers in a Stewart platform simulation using MATLAB?
I am currently working on developing a Stewart platform and have calculated the inverse kinematics. I've also created a Simscape Multibody simulation, utilizing revolute joints (driven by angle inputs), brick solids, and spherical joints. From this simulation, I extracted the joint torques necessary to drive the top platform based on a given input trajectory. This data has helped me select appropriate motors and spherical joints.
Now, I need to determine the forces and moments applied to the servo arms and couplers. My goal is to ensure that the servo arms do not twist and the couplers do not buckle under load. Could you advise on how I can calculate these forces and moments using MATLAB, or would it be more appropriate to use another tool for this analysis?
The mechanical members in question are highlighted in the attached pictures.
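As a first-pass check outside the simulation, the coupler can be treated as a two-force member, so the joint torque already extracted from the Simscape log can be converted into an axial coupler load and the load acting along the servo arm with basic statics. The sketch below uses purely hypothetical values for the torque, the servo-arm length, and the arm-coupler angle; for full time histories, the constraint-force sensing offered on Simscape Multibody joint blocks is likely the more direct route.
% Static estimate for one leg, treating the coupler as a two-force member.
% All numbers below are placeholders -- replace with values from your model.
tau      = 2.5;     % servo joint torque from the Simscape log [N*m]
L_arm    = 0.05;    % servo arm length [m]
alphaDeg = 70;      % angle between servo arm and coupler [deg]
F_coupler  = tau/(L_arm*sind(alphaDeg));   % axial (buckling) load in the coupler [N]
F_alongArm = F_coupler*cosd(alphaDeg);     % component acting along the servo arm [N]
fprintf('Coupler axial load: %.1f N, load along arm: %.1f N\n', F_coupler, F_alongArm)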
simscape, matlab, force analysis, simulink, inverse kinematics, stewart platform MATLAB Answers — New Questions
STM32F4xx SPI Block Error
I'm working with a customized embedded system with an STM32F410C8 processor in Simulink R2024a. I'm using the latest version of the Embedded Coder Support Package for STM32 Processors (version 24.1.3). The goal is communicating with a sensor over SPI. In a simple model I have added just an SPI Transmit block. When I set the chip select calling method to "Provided by the SPI peripheral" there is no error in the model building process, but when I set the chip select calling method to "Explicit GPIO calls" I get this strange error without any hints to debug:
### Starting build procedure for: H4CAModel
### Build procedure for H4CAModel aborted due to an error.
Build Summary
Top model targets built:
Model Action Rebuild Reason
=====================================================================
H4CAModel Failed Code generation information file does not exist.
0 of 1 models built (0 models already up to date)
Build duration: 0h 0m 0.63179s
Unrecognized field name "Signal".
I have done many projects with this support package but with other processors like the STM32H7 series, and I didn't encounter this error. The point to be noted in this case is that the SPI blocks for STM32F4xx processors were added to the support package only recently (R2024a).
Every little tip can go a long way.
Thanks.
stm32f4xx, spi transmit MATLAB Answers — New Questions
Problem after running the "addpath" command
Hi:
I am using MATLAB 2021b on a Windows 10 PC.
Since I do not have administrator rights, every time I started a program I ran the "addpath" command with the directory I wanted to include:
addpath(genpath('c:Userspr8331C-ccar0_NuBES1_Nextcloud1____K_I_T54_m'),'-begin')
Some time ago I started to notice that MATLAB throws an error that prevents my code (which previously worked) from running.
For example, I am trying to run my function that looks up the path of a file:
function [rutArchivo, Dir, Archivo, Extension] = f_PathFile()
% previously called f_VenetanaBuscaArchivo => renaming it so it is easier to remember
%
% Function that returns the path of a file selected in a dialog window.
% To filter the search by extension, a window first asks for the extension to look for.
%
% InPUT:
%==========
% None required
%
% OutPUT:
%==========
% rutArchivo: string, full path of the file
% Dir: string, directory of the file
% Archivo: string, name of the file
% Extension: string, extension of the file
%
% addpath(genpath('\sccfs-home.scc.kit.eduhomeMATLAB'),'-begin') % this line adds all subfolders of the MATLAB folder so the functions can be read without problems
% addpath(genpath('c:Userspr8331C-ccar0_NuBES1_Nextcloud1____K_I_T54_m'),'-begin') % this line adds all subfolders of the MATLAB folder so the functions can be read without problems
%%
defaultValue = {'txt'};
disp(['°fff starting f_PathFile'])
titleBar = 'InPut'; % Click on the file to select
userPrompt = {'Extension to use for the search: ' };
Entrada = inputdlg(userPrompt,titleBar, [1 35], defaultValue);
Extension = ['*.' char(Entrada(1,1))];
%%
TiituloBarra = ['Search filtered for files of type: ' Extension];
disp(['ffff ' TiituloBarra]);
[File,Dir] = uigetfile(Extension, TiituloBarra);
Archivo = File;            % name of the selected file (third output)
rutArchivo = [Dir File];
disp(['#fff Selected PathFile: ' rutArchivo])
end
When it tries to execute the line:
Entrada = inputdlg(userPrompt,titleBar, [1 35], defaultValue);
I get the following error, and the program stops working until I restart MATLAB:
Error: File: system_dependent.m Line: 1 Column: 24
Invalid text character. Check for unsupported symbol, invisible character, or pasting of non-ASCII characters.
Error in usejava (line 44)
isok = system_dependent('useJava',feature);
Error in matlab.ui.internal.utils.checkJVMError (line 14)
if ~isdecaf && ~usejava('jvm')
Error in warnfiguredialog (line 8)
matlab.ui.internal.utils.checkJVMError;
Error in dialog (line 42)
warnfiguredialog('dialog');
Error in inputdlg (line 147)
InputFig=dialog( ...
Error in f_PathFile (line 26)
Entrada = inputdlg(userPrompt,titleBar, [1 35], defaultValue);
Any suggestions?
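One way to narrow this down (a diagnostic sketch, not a confirmed fix) is to check whether one of the added folders contains a file that shadows a built-in, since the stack trace points at a readable system_dependent.m:
% Run immediately after the addpath call that seems to trigger the problem.
which('system_dependent','-all')   % should report a built-in; a user .m file here is shadowing it
which('usejava','-all')            % same check for usejava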
administrator, addpath, java error MATLAB Answers — New Questions
System stability in control system engineering
The mathematical model of a system is given by:
x'' + (x^2 − η) x' + ω^2 x = 0
For ω = 1:
Show that a stable equilibrium point becomes unstable as the parameter η is varied from −1 to +1, using phase plane analysis.
At what value of η does the instability occur?
What happens to the system after the equilibrium point becomes unstable?
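A minimal phase-plane sketch for exploring this numerically (the η values, time span, and initial condition below are arbitrary illustration choices):
% Phase portraits of x'' + (x^2 - eta)*x' + w^2*x = 0 for w = 1
w = 1;
etaValues = [-0.5 0 0.5];              % sample values across the sweep from -1 to +1
figure; hold on
for eta = etaValues
    f = @(t,y) [y(2); -(y(1)^2 - eta)*y(2) - w^2*y(1)];
    [~,y] = ode45(f, [0 50], [0.1; 0]); % small initial displacement, zero velocity
    plot(y(:,1), y(:,2), 'DisplayName', sprintf('\\eta = %.1f', eta))
end
xlabel('x'); ylabel('dx/dt'); legend show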
control system MATLAB Answers — New Questions
SSO for Linux and Mac users, setup guidance for multiple users.
Hello MATLAB Community,
I’m reaching out to understand the implications of MATLAB’s transition from traditional license servers to Single Sign-On (SSO) for Linux and Mac users.
How will this transition impact users on Linux and Mac systems? Are there specific considerations or additional steps required for these platforms to ensure continued access and functionality?
Could someone provide guidance on how to set up MATLAB SSO for multiple users on Linux and Mac systems within a business or educational institution? Any best practices or detailed instructions for managing SSO for a larger group across these operating systems would be very helpful.
Any detailed information or guidance on these topics would be greatly appreciated.
sso, matlab, linux, mac, license server MATLAB Answers — New Questions
HDL Coder and Bitstream Programming Insight Needed
I am trying to program the DAC PL-DDR Transmit example ( https://www.mathworks.com/help/hdlcoder/ug/hdl-dac-PL-DDR4-transmit.html ) to my ZCU216 board. I have already asked a question about this a few days ago but will make this one more broad to give it a better chance of being answered.
When generating and programming the bitstream, I ensure that the AXI4-Stream interface is 128 bits wide. However, when I run the addAXI4StreamInterface() function, a prerequisite to writing to the port from MATLAB for testing purposes, I get data mismatch errors that can be resolved by changing the interface width to 64 bits. So, clearly, the programmed FPGA is expecting the function to request 64 bits and not 128.
My question is: what kind of troubleshooting steps are available for an issue like this? Trying to explore these functions, you run into .p files quickly, so it's been impossible so far to see what's going on under the hood.
hdl coder, zcu216, dac, ddr MATLAB Answers — New Questions
Performing operations on row elements in a matrix
So I’m creating some simple examples on trying to perform operations between rows in a matrix such as division. I have a matrix where:
my_mat = [2 4 6 8; 4 8 12 16]
which looks like so when printed.
my_mat =
2 4 6 8
4 8 12 16
and what I'm trying to do now is to divide the elements of the first row by the corresponding elements of the second row (the only other row, since it's a 2×4 matrix). This means 2/4, 4/8, 6/12 and 8/16.
Then perhaps printing out the result as the output
0.5
0.5
0.5
0.5
How do I perform row operations in a single matrix?
I've looked into bsxfun, but apparently I can't figure out the way to perform row operations with it.
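For reference, dividing one row by another element-wise can be written directly with the ./ operator (a minimal sketch using the matrix from the question); bsxfun(@rdivide, my_mat(1,:), my_mat(2,:)) gives the same result:
my_mat = [2 4 6 8; 4 8 12 16];
ratios = my_mat(1,:) ./ my_mat(2,:)    % 0.5 0.5 0.5 0.5
ratios = ratios(:)                     % as a column, matching the desired output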
image processing, matlab, rows, matrix MATLAB Answers — New Questions
How to Use Future System States to Make Real-Time Decisions in Simulink?
I would like to implement the following function in my Simulink model:
At time instant ti, I need to calculate the value of an ODE function using an integrator block over the next 2 seconds, i.e., the time span is [ti, ti+2]. Then, I want to retrieve the system states at time ti+2 and use this information to make a decision on the execution commands at the current time ti.
Does anyone have suggestions on how to approach this problem? Thanks in advance.
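One possible pattern (only a sketch, assuming the ODE right-hand side is available as a MATLAB function and a fixed-step forward-Euler prediction is acceptable) is to run a short internal prediction loop at each decision instant, for example inside a MATLAB Function block:
function xPred = predictAhead(xNow, dt, horizon)
% Predict the state 'horizon' seconds ahead of xNow by integrating an
% assumed ODE dx/dt = odeRHS(x) with forward Euler at step dt.
x = xNow;
for k = 1:round(horizon/dt)
    x = x + dt*odeRHS(x);
end
xPred = x;
end
function dx = odeRHS(x)
% Hypothetical right-hand side, used only for illustration.
dx = -0.5*x;
end
Calling xPred = predictAhead(xNow, 0.01, 2) at time ti would then return an estimate of the state at ti+2 for the decision logic at ti.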
simulink MATLAB Answers — New Questions
Hello everyone, I am solving a multi-degree-of-freedom system using Newmark's beta method, in which the loading function is F(x,v). When I am running the code, the result i
% dynamic analysis using direct integration method
% the problem is: M*x'' + C*x' + K*x = (C1+K1)*eps*cos(Omeg*t) + K2*eps^2*cos(Omeg*t)^2
clc;
clear all
format short e
%close all
m=[18438.6 0 0;0 13761 0;0 0 9174];
disp(' mass matrix')
m;
[ns,ms]=size(m);
% if forces are acting at degrees of freedom
m=[40000 0 0;0 20000 0;0 0 20000];
c0=[0 0 0;0 -8000 8000;0 -8000 8000];
c1=[0 0 0;0 -4000 8000;0 -8000 4000];
k0=[30000 -10000 0;-10000 20000 -10000;0 -10000 10000];
k1=[30000 -10000 0;-10000 50000 -10000;0 -10000 50000];
k2=[30000 -10000 0;-10000 20000 -40000;0 -40000 10000];
% disp(' force at various degrees of freedom');
% f;
% if base ground acceleration is given
% dis='disp.dat'
% di=load(dis)
% % convert to equivalent nodal loads
% for i=1:ns
%     f(:,i)=-di*m(i,i)
% end
disp(' damping matrix')
c0;
disp(' stiffness matrix')
k0;
format long;
kim=inv(k0)*m;
[evec,ev]=eig(kim);
for i=1:ns
    omega(i)=1/sqrt(ev(i,i));
end
disp(' natural frequencies')
omega;
% give gamma=0.5 and beta=0.25 for Newmark average acceleration method
%gama=0.5;
%beta=0.25;
% give gamma=0.5 and beta=0.1667 for Newmark linear acceleration method
gama=0.5;
beta=0.167;
% give initial conditions for displacements
u0=[0 0.01 0.05];
disp(' initial displacements')
u0;
% give initial conditions for velocities
v0=[0. 0. 0.];
%y0=[0;0.01;0.05;0;0;0];
disp(' initial velocities')
v0;
om=5; eps=0.01;
IM=inv(m);
X=u0'; Z=v0';
tt=0;
% S1=-IM*k0+IM*k1*eps.*cos(om*tt)+IM*k2*eps^2.*cos(om*tt)^2;
% S2=-IM*c0+IM*c1*eps.*cos(om*tt);
dt=0.02;
S=k2*dt^2*beta*eps^2*cos(om*tt)^2+k1*dt^2*beta*eps*cos(om*tt)+...
    c1*gama*dt*eps*cos(om*tt)+k0*dt^2*beta+c0*gama*dt+m;
S1=k1*eps.*cos(om*tt)+k2*eps^2.*cos(om*tt)^2;
S2=c1*eps.*cos(om*tt);
f(1,:)=(S1*X+S2*Z);
%for i=1:ns
a0=-inv(m)*(f(1,:)'+c0*v0'+k0*u0');
%end
kba=k0+(gama/(beta*dt))*c0+(1/(beta*dt*dt))*m;
kin=inv(kba);
aa=(1/(beta*dt))*m+(gama/beta)*c0;
bb=(1/(2.0*beta))*m+dt*(gama/(2.0*beta)-1)*c0;
u(1,:)=u0;
v(1,:)=v0;
a(1,:)=a0;
t=linspace(0,5,251);
% note: tt is never advanced inside the loop, so the state-dependent load
% f = (S1*X + S2*Z) is always evaluated with the excitation frozen at t = 0
for i=2:10 %251
    X=u(i-1,:)'; Z=v(i-1,:)';
    S1=k1*eps.*cos(om*tt)+k2*eps^2.*cos(om*tt)^2;
    S2=c1*eps.*cos(om*tt);
    f(i,:)=(S1*X+S2*Z); %%%%% ??????
    df=f(i,:)-f(i-1,:);
    dfb(i,:)=df+v(i-1,:)*aa'+a(i-1,:)*bb';
    du(i,:)=dfb(i,:)*kin;
    dv(i,:)=(gama/(beta*dt))*du(i,:)-(gama/beta)*v(i-1,:)+dt*...
        (1-gama/(2.0*beta))*a(i-1,:);
    da(i,:)=(1/(beta*dt^2))*du(i,:)-(1/(beta*dt))*v(i-1,:)-(1/(2.0*beta))*a(i-1,:);
    u(i,:)=u(i-1,:)+du(i,:);
    v(i,:)=v(i-1,:)+dv(i,:);
    a(i,:)=a(i-1,:)+da(i,:);
end
% plot against the computed time steps (tt is a scalar, so it cannot be used as x-data)
tp=t(1:size(u,1));
%figure(1);
hold on
plot(tp,u(:,1),'k');
xlabel(' time in secs');
ylabel(' roof displacement');
title(' displacement response of the roof');
%figure(2);
hold on
plot(tp,u(:,2),'k');
xlabel(' time in secs');
ylabel(' roof velocity');
title('velocity response of the roof');
%figure(3);
hold on
plot(tp,u(:,3),'k');
xlabel(' time in secs');
ylabel(' roof acceleration');
title(' acceleration response of the roof')
thank you MATLAB Answers — New Questions
I'm an employee and use MATLAB on my work computer. Is it possible to obtain some kind of free license to install MATLAB on my home computer for personal use?
I'm an employee and use MATLAB on my work computer. Is it possible to "take credit of this" and then obtain some kind of free license to install MATLAB on my home computer for personal use?
home license MATLAB Answers — New Questions
How to gather digital audio stream via STM32?
Is it possible to gather a data stream from a serial audio interface for processing by MATLAB/Simulink?
I want to use an STM32H743 for this purpose.
My goal is to receive the audio stream from the codec via the SAI interface.
Marek
stm32h7 MATLAB Answers — New Questions
FOC with BLDC motor
Hello,
I want to control a BLDC motor using FOC, but after doing some research I find that I can't use FOC with a BLDC motor, only with a PMSM. Can anyone say whether I can use FOC?
Thank you
foc, bldc, pmsm MATLAB Answers — New Questions
How to label multiple objects in object detection with different names?
Hi there!
I’ve a problem with labelling my objects in an image.
Let’s have a look at the image:
This programme is detecting the front/rear of the cars and a stop sign. I want the labels to say what they're looking at, for example: "Stop Sign Confidence: 1.0000", "CarRear Confidence: 0.6446", etc. As you may see, my programme adds the probability values correctly, but there are still no strings/label names attached.
You can have a look at my code:
%%
% Read test image
testImage = imread('StopSignTest2.jpg');
% Detect stop signs
[bboxes,score,label] = detect(rcnn,testImage,'MiniBatchSize',128)
% Display detection results
label_str = cell(3,1);
conf_val = [score];
conf_lab = [label];
for ii=1:3
    label_str{ii} = [' Confidence: ' num2str(conf_val(ii), '%0.4f')];
end
position = [bboxes];
outputImage = insertObjectAnnotation(testImage,'rectangle',position,label_str,...
    'TextBoxOpacity',0.9,'FontSize',10);
figure
imshow(outputImage)
%%
I have NO clue how to add the strings to label_str{ii} the way I did with the scores (num2str(conf_val(ii))).
Thanking you in advance!
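For what it's worth, since label comes back from detect as a categorical array, one sketch of prepending the class name to each annotation (assuming label and score are as returned above) is:
for ii = 1:numel(score)
    label_str{ii} = [char(label(ii)) ' Confidence: ' num2str(score(ii), '%0.4f')];
end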
multiple objects, object detection, strings, objects detection, neural network, cnn MATLAB Answers — New Questions
I am getting this error when running my code: "Dot indexing is not supported for variables of this type. Error in rl.util.expstruct2timeserstruct (line 7) observation"
The code below is the one I am running:
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector [∫e dt, e, h], where h is the height of the water in the tank, e = r − h, and r is the reference height.
Set up the reward.
Configure the termination signal such that the simulation stops if or .
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],...
    LowerLimit=[-inf -inf 0  ]',...
    UpperLimit=[ inf  inf inf]');
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",...
    obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward for which receives the action from the state corresponding to the current observation, and following the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, ...
    "obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, ...
    "actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, ...
    obsInfo,actInfo, ...
    ObservationInputNames="obsInLyr", ...
    ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, ...
    {rand(obsInfo.Dimension)}, ...
    {rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name of the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions(...
    MaxEpisodes=5000, ...
    MaxStepsPerEpisode=ceil(Tf/Ts), ...
    ScoreAveragingWindowLength=20, ...
    Verbose=false, ...
    Plots="training-progress",...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
Local Reset Function
function in = localResetFcn(in)
% Randomize reference signal
blk = sprintf("RLFinal_PhD_Model_DroopPQ1/Droop/Voutref");
h = 3*randn + 0.5;
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize initial height
h1 = 3*randn + 200;
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
I am getting the following results, with no rewards at all (zero rewards).
When I stop the training I see this error:
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc.
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector , where is the height of the water in the tank, , and is the reference height.
Set up the reward .
Configure the termination signal such that the simulation stops if or .
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],…
LowerLimit=[-inf -inf 0 ]’,…
UpperLimit=[ inf inf inf]’);
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",…
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward for which receives the action from the state corresponding to the current observation, and following the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
The code below is the one I am running:
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector [∫e dt, e, h], where h is the height of the water in the tank, e = r - h is the error, and r is the reference height.
Set up the reward signal reward = 10 (|e| < 0.1) - 1 (|e| >= 0.1) - 100 (h <= 0 || h >= 20).
Configure the termination signal such that the simulation stops if h <= 0 or h >= 20.
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1], ...
LowerLimit=[-inf -inf 0 ]', ...
UpperLimit=[ inf inf inf]');
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1", ...
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward obtained when the agent takes the given action from the state corresponding to the current observation and follows the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, ...
"obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, ...
"actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, ...
obsInfo,actInfo, ...
ObservationInputNames="obsInLyr", ...
ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, ...
{rand(obsInfo.Dimension)}, ...
{rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name of the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating actors, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
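For reference, a rough equivalent using an options object might look like the following sketch. It simply mirrors the dot-notation settings above and assumes the same noise and optimizer property names.
agentOpts = rlDDPGAgentOptions( ...
SampleTime=Ts, ...
TargetSmoothFactor=1e-3, ...
DiscountFactor=1.0, ...
MiniBatchSize=64, ...
ExperienceBufferLength=1e6);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;
agentOpts.CriticOptimizerOptions = rlOptimizerOptions(LearnRate=1e-3,GradientThreshold=1);
agentOpts.ActorOptimizerOptions = rlOptimizerOptions(LearnRate=1e-4,GradientThreshold=1);
agent = rlDDPGAgent(actor,critic,agentOpts);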
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions( ...
MaxEpisodes=5000, ...
MaxStepsPerEpisode=ceil(Tf/Ts), ...
ScoreAveragingWindowLength=20, ...
Verbose=false, ...
Plots="training-progress", ...
StopTrainingCriteria="AverageReward", ...
StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
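If a training run reaches the stopping criterion, the trained agent can be saved under the same file name that the load branch above expects. This save call is a sketch and is not part of the original example code.
% Save the trained agent so a later run can set doTraining = false
save("WaterTankDDPG.mat","agent")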
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
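One way to check whether the environment is returning any reward at all (relevant to the zero-reward symptom described below) is to inspect the logged reward signal in the returned experience structure. This sketch assumes the default experience-struct layout returned by sim, in which Reward is a timeseries.
% Total and per-step reward collected during the validation episode
totalReward = sum(experiences.Reward.Data)
plot(experiences.Reward)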
Local Reset Function
function in = localResetFcn(in)
% Randomize the reference signal (Voutref), drawn around 200 and kept inside (0, 400)
blk = "RLFinal_PhD_Model_DroopPQ1/Droop/Voutref";
h = 3*randn + 200;
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize the initial height (Gain block), drawn around 0.5 and kept inside (0, 1)
h1 = 3*randn + 0.5;
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
I am getting the following results, with no rewards at all:
Zero rewards
When I stop the training I see this error:
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc. Can someone help? MATLAB Answers — New Questions
How can I plot a hyperbola?
Hi everyone,
I’m a beginner at MATLAB, so I don’t have much experience. Right now I’m trying to plot a hyperbola that I’m using for Time Difference of Arrival (TDoA), but I’ve been lost for hours now, and I still can’t figure out how to plot it. Any suggestions on how to solve this problem?
Here is my code:
function hyperbola()
syms x y ;
f = @(x)0.4829 == sqrt((95-x)^2-(0-y)^2)-sqrt((0-x)^2-(0-y)^2);
fplot(f);
end
hyperbola, tdoa, nonlinear MATLAB Answers — New Questions
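For the hyperbola question above, one possible approach (a sketch, not a verified solution) is to treat the TDoA condition as an implicit equation g(x,y) = 0 and plot it with fimplicit. The 0.4829 range difference and the anchor at x = 95 are taken from the question; the y-terms are assumed to enter the distances with a plus sign, as in the usual Euclidean distance.
d = 0.4829; % range difference between the two anchors
g = @(x,y) sqrt((95-x).^2 + y.^2) - sqrt(x.^2 + y.^2) - d;
fimplicit(g,[-50 150 -100 100]) % plot the zero level set g(x,y) = 0
xlabel("x"), ylabel("y"), grid on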
Unrecognized function or variable ‘doPlot’.
if doPlot == 1
plot(density)
title("Sample Densities")
xticklabels(element)
ylabel("Density (g/cm^3)")
end
showing error while submitting MATLAB Answers — New Questions
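For the question above, the error simply means doPlot has never been assigned before the if statement runs. A minimal sketch of a fix is to define it first; the value 1 below is an assumption, since the exercise may instead expect doPlot to be provided by the grading environment along with density and element.
doPlot = 1; % set to 0 to skip plotting
if doPlot == 1
plot(density)
title("Sample Densities")
xticklabels(element)
ylabel("Density (g/cm^3)")
end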
Numerical methods in Simulink
I have an assignment to solve a differential equation analytically using Euler's method (I did it in MATLAB) and plot it, and then build a block diagram in Simulink to view the plot again on a Scope. I am completely new to Simulink and the professor did not explain it, so I wanted to see if someone can help me.
The MATLAB code is in the file "Euler_analitico01.mlx".
It gives me this plot:
and the Simulink scope shows me:
matlab, simulink, euler, metodos numericos MATLAB Answers — New Questions
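For the question above, a minimal sketch of forward (explicit) Euler integration in MATLAB is shown below for a generic first-order ODE dy/dt = f(t,y). The actual equation solved in "Euler_analitico01.mlx" is not given in the question, so the right-hand side, initial condition, step size, and time span are placeholders.
f = @(t,y) -2*y + 1; % placeholder right-hand side f(t,y)
h = 0.01; % step size
t = 0:h:5;
y = zeros(size(t));
y(1) = 0; % placeholder initial condition
for k = 1:numel(t)-1
y(k+1) = y(k) + h*f(t(k),y(k)); % forward Euler update
end
plot(t,y), xlabel("t"), ylabel("y(t)"), title("Forward Euler solution")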
error loading shared libraries: libicuuc.so.69
I get this message when trying to run MATLAB after an install without any "errors":
/data/MATLAB/R2022a/bin/glnxa64/MATLAB: error while loading shared libraries: libicuuc.so.69: cannot open shared object file: No such file or directory
I tried this with R2022a and R2022b.
I do have R2021a and R2023a running…
shared libraries, install, ubuntu MATLAB Answers — New Questions
Maximizing Spectral efficiency instead of maximizing SINR in RI selection in 5G NR toolbox
Hi all,
I’ve noticed that the new version of the 5G Toolbox includes two different algorithms for calculating Rank Indication (RI): ‘MaxSINR’ and ‘MaxSE’. The ‘MaxSINR’ algorithm selects the RI based on maximizing the SINR, while ‘MaxSE’ selects it based on maximizing spectral efficiency.
I was under the impression that the standard approach was to select the rank that maximizes SINR. Could anyone clarify the rationale behind including both algorithms and when one might be preferred over the other?
Thanks a lot
5g, ri, sinr, spectral efficiency MATLAB Answers — New Questions