Month: August 2024
Performing operations on row elements in a matrix
So I’m creating some simple examples of performing operations between rows of a matrix, such as division. I have a matrix:
my_mat = [2 4 6 8; 4 8 12 16]
which looks like so when printed.
my_mat =
2 4 6 8
4 8 12 16
What I’m trying to do now is divide the elements of the first row by the corresponding elements of the neighbouring row, which is the second row in this case since it’s only a 2×4 matrix. This means 2/4, 4/8, 6/12 and 8/16.
Then perhaps print the result as the output:
0.5
0.5
0.5
0.5
How do I perform row operations within a single matrix?
I’ve looked into bsxfun, but I can’t figure out how to perform row operations with it.
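A minimal sketch of one way to do this, using the my_mat from the question (element-wise division with ./, or bsxfun on older releases):
% Divide row 1 by row 2, element by element
my_mat = [2 4 6 8; 4 8 12 16];
row_ratio = my_mat(1,:) ./ my_mat(2,:)            % 0.5  0.5  0.5  0.5
% The same result via bsxfun, if you prefer that route
row_ratio_bsxfun = bsxfun(@rdivide, my_mat(1,:), my_mat(2,:));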
image processing, matlab, rows, matrix MATLAB Answers — New Questions
List all NEW resources created within a week
Hello:
I wonder if there is a way to find all resources in our tenant that were created since a specific date (for example, the last 7 days).
Ideally I’d like to run it via PowerShell…
How can I do it?
We are not enforcing “create date” tags… I thought about PowerShell, but sometimes the property is CreateDate and sometimes CreateAt…
Thank you!
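A hedged sketch of one common route, assuming the Az.Monitor module and a signed-in Az session; it reads resource "write" events from the Activity Log for the last 7 days. Note that write events include updates as well as creations, the Activity Log only reaches back about 90 days, and property shapes can differ between module versions, so treat this as a starting point rather than a finished report.
# List resources with successful write operations in the last 7 days.
$since = (Get-Date).AddDays(-7)
Get-AzActivityLog -StartTime $since |
    Where-Object { $_.OperationName -like '*/write' -and $_.Status -eq 'Succeeded' } |
    Select-Object EventTimestamp, ResourceId, OperationName |
    Sort-Object EventTimestamp -Descending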
Find all new resources and ideally their cost
Hello:
I wonder if there is a way to find all resources in our tenant that were created since a specific date (for example, the last 7 days), and their cost for the same period.
Ideally I’d like to run it via PowerShell…
How can I do it?
We are not enforcing “create date” tags…
Thank you!
How to Use Future System States to Make Real-Time Decisions in Simulink?
I would like to implement the following function in my Simulink model:
At time instant ti, I need to calculate the value of an ODE function using an integrator block over the next 2 seconds, i.e., the time span is [ti, ti+2]. Then, I want to retrieve the system states at time ti+2 and use this information to make a decision on the execution commands at the current time ti.
Does anyone have suggestions on how to approach this problem? Thanks in advance.
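A hedged sketch of the lookahead idea in plain MATLAB (all names below are illustrative, not from the model): at each decision instant, integrate the ODE over [ti, ti+2] from the current state, then base the command at ti on the predicted state at ti+2. Inside Simulink, this kind of prediction could live in a MATLAB Function or MATLAB System block that receives the current state.
% Hypothetical dynamics f and current state x_ti; replace with your own.
f    = @(t, x) [x(2); -0.5*x(2) - x(1)];   % placeholder ODE right-hand side
ti   = 0;                                  % current time instant
x_ti = [1; 0];                             % current state at ti
[~, X]   = ode45(f, [ti, ti + 2], x_ti);   % integrate over [ti, ti + 2]
x_future = X(end, :).';                    % predicted state at ti + 2
% Illustrative decision rule based on the predicted state
cmd = double(x_future(1) > 0);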
simulink MATLAB Answers — New Questions
How to Manually Sync the Compliance Policies
Hi All,
We have come across an incident where we need to exclude a device from a device compliance policy after the device became non-compliant according to that policy. We have excluded the specific user from the compliance policy to meet the requirement (we have assigned the policy to user groups).
However, the issue is that the device has not returned to the compliant state even after 4 days have passed. I would appreciate it if anyone could help me manually get the device back into the compliant state.
Please note that when we go to the specific device > Device compliance policies, we are no longer able to see the compliance policy, and the other applied policies are in a compliant state (refer to image 01). However, the device is still showing as non-compliant under Devices in Intune (refer to image 02). The last check-in time is continuously updating as well.
Further, we have tried the troubleshooting steps below, but no luck yet. We have not yet taken a remote session to the device, as we have some challenges getting a remote session from the end user.
1. Sync the device from the Intune portal.
2. Remotely log in to PowerShell on the device and run the command below:
Start-Process -FilePath "C:\Program Files (x86)\Microsoft Intune Management Extension\Microsoft.Management.Services.IntuneWindowsAgent.exe" -ArgumentList "intunemanagementextension://synccompliance"
Image01
We are no longer able to see the excluded compliance policy under the policy names, and all the applied policies are in a compliant state, as shown below.
Image02
Thanks in advance
Dilan
Hello everyone, I am solving a multi-degree-of-freedom system using Newmark’s beta method, in which the loading function is F(x,v). When I run the code, the result i
% dynamic analysis using direct integration method
% % the problem is: Mx'' + Cx' + Kx = (C1+K1)*eps*cos(Omeg*t) + K2*eps^2*cos(Omeg*t)^2
clc;
clear all
format short e
%close all
m=[18438.6 0 0;0 13761 0;0 0 9174];
disp(' mass matrix')
m;
[ns,ms]=size(m);
% if forces are acting at degrees of freedom
m=[40000 0 0;0 20000 0;0 0 20000];
c0=[0 0 0;0 -8000 8000;0 -8000 8000];
c1=[0 0 0;0 -4000 8000;0 -8000 4000];
k0=[30000 -10000 0;-10000 20000 -10000;0 -10000 10000];
k1=[30000 -10000 0;-10000 50000 -10000;0 -10000 50000];
k2=[30000 -10000 0;-10000 20000 -40000;0 -40000 10000];
% disp(' force at various degrees of freedom');
% f;
% if base ground acceleration is given
% dis='disp.dat'
% di=load(dis)
% % convert to equivalent nodal loads
% for i=1:ns
% f(:,i)=-di*m(i,i)
% end
disp(' damping matrix')
c0;
disp(' stiffness matrix')
k0;
format long;
kim=inv(k0)*m;
[evec,ev]=eig(kim);
for i=1:ns
omega(i)=1/sqrt(ev(i,i));
end
disp(' natural frequencies')
omega;
% give gamma=0.5 and beta=0.25 for Newmark average accln method
%gama=0.5;
%beta=0.25;
% give gamma=0.5 and beta=0.1667 for Newmark linear accln method
gama=0.5;
beta=0.167;
% give initial conditions for displacements
u0=[0 0.01 0.05];
disp(' initial displacements')
u0;
% give initial condition for velocities
v0=[0. 0. 0.];
%y0=[0;0.01;0.05;0;0;0];
disp(' initial velocities')
v0;
om=5; eps=0.01;
IM=inv(m);
X=u0'; Z=v0';
tt=0;
% S1=-IM*k0+IM*k1*eps.*cos(om*tt)+IM*k2*eps^2.*cos(om*tt)^2;
% S2=-IM*c0+IM*c1*eps.*cos(om*tt);
dt=0.02;
S=k2*dt^2*beta*eps^2*cos(om*tt)^2+k1*dt^2*beta*eps*cos(om*tt)+...
c1*gama*dt*eps*cos(om*tt)+k0*dt^2*beta+c0*gama*dt+m;
S1=k1*eps.*cos(om*tt)+k2*eps^2.*cos(om*tt)^2;
S2=c1*eps.*cos(om*tt);
f(1,:)=(S1*X+S2*Z);
%for i=1:ns
a0=-inv(m)*(f(1,:)'+c0*v0'+k0*u0');
%end
kba=k0+(gama/(beta*dt))*c0+(1/(beta*dt*dt))*m;
kin=inv(kba);
aa=(1/(beta*dt))*m+(gama/beta)*c0;
bb=(1/(2.0*beta))*m+dt*(gama/(2.0*beta)-1)*c0;
u(1,:)=u0;
v(1,:)=v0;
a(1,:)=a0;
t=linspace(0,5,251);
for i=2:10 %251
X=u(i-1,:)'; Z=v(i-1,:)';
S1=k1*eps.*cos(om*tt)+k2*eps^2.*cos(om*tt)^2;
S2=c1*eps.*cos(om*tt);
f(i,:)=(S1*X+S2*Z); %%%%% ??????
df=f(i,:)-f(i-1,:);
dfb(i,:)=df+v(i-1,:)*aa'+a(i-1,:)*bb';
du(i,:)=dfb(i,:)*kin;
dv(i,:)=(gama/(beta*dt))*du(i,:)-(gama/beta)*v(i-1,:)+dt*...
(1-gama/(2.0*beta))*a(i-1,:);
da(i,:)=(1/(beta*dt^2))*du(i,:)-(1/(beta*dt))*v(i-1,:)-(1/(2.0*beta))*a(i-1,:);
u(i,:)=u(i-1,:)+du(i,:);
v(i,:)=v(i-1,:)+dv(i,:);
a(i,:)=a(i-1,:)+da(i,:);
end
%figure(1);
hold on
plot(tt,u(:,1),'k');
xlabel(' time in secs');
ylabel(' roof displacement');
title(' displacement response of the roof');
%figure(2);
hold on
plot(tt,u(:,2),'k');
xlabel(' time in secs');
ylabel(' roof velocity');
title('velocity response of the roof');
%figure(3);
hold on
plot(tt,u(:,3),'k');
xlabel(' time in secs');
ylabel(' roof acceleration');
title(' acceleration response of the roof')
thank you MATLAB Answers — New Questions
I’m an employee and use MATLAB on my work computer. Is it possible to obtain some kind of free license to install MATLAB on my home computer for personal use?
I’m an employee and use MATLAB on my work computer. Is it possible to "take credit of this" and then obtain some kind of free license to install MATLAB on my home computer for personal use? home license MATLAB Answers — New Questions
How to gather digital audio stream via STM32?
Is it possible to gather a data stream from the serial audio interface for processing by MATLAB/Simulink?
I want to use an STM32H743 for this purpose.
My goal is to receive an audio stream from the codec via the SAI interface.
Marek
stm32h7 MATLAB Answers — New Questions
Why is SharePoint Online not saving my calculated column formula?
I have created a calculated column of type Date/Time with this formula:
=IF([DueDateTime]="12:00 AM", [DueDate],
IF([DueDateTime]="1:00 AM", [DueDate] + 60/(24*60),
IF([DueDateTime]="2:00 AM", [DueDate] + 120/(24*60),
IF([DueDateTime]="3:00 AM", [DueDate] + 180/(24*60),
IF([DueDateTime]="4:00 AM", [DueDate] + 240/(24*60),
IF([DueDateTime]="5:00 AM", [DueDate] + 300/(24*60),
IF([DueDateTime]="6:00 AM", [DueDate] + 360/(24*60),
IF([DueDateTime]="7:00 AM", [DueDate] + 420/(24*60),
IF([DueDateTime]="8:00 AM", [DueDate] + 480/(24*60),
IF([DueDateTime]="9:00 AM", [DueDate] + 540/(24*60),
IF([DueDateTime]="10:00 AM", [DueDate] + 600/(24*60),
IF([DueDateTime]="11:00 AM", [DueDate] + 660/(24*60),
[DueDate]))))))))))))
But when I try to apply this formula to the site column, I keep getting this message forever:
and the formula does not get applied to the list column, and hence to the list. Any advice? I read that in SharePoint Online we can only have 19 nested IF statements inside calculated columns, but in my case I only have 12, so why is this not working?
FOC with BLDC motor
Hello,
I want to control a BLDC motor using FOC, but after doing some research I find that I can’t use FOC with a BLDC, only with a PMSM. Can anyone say whether I can use FOC?
Thank you
foc, bldc, pmsm MATLAB Answers — New Questions
Create a refinable managed property which contains a Date field + choice field
I have a SharePoint Online date field named DueDate, of type Date/Time, that allows entering a date only (without time).
And a choice field named DueDateTime with these values:
12:00 AM
12:30 AM
1:00 AM
1:30 AM
2:00 AM
2:30 AM
3:00 AM
3:30 AM
4:00 AM
4:30 AM
5:00 AM
5:30 AM
6:00 AM
6:30 AM
7:00 AM
7:30 AM
8:00 AM
8:30 AM
9:00 AM
9:30 AM
10:00 AM
10:30 AM
11:00 AM
11:30 AM
12:00 PM
12:30 PM
1:00 PM
1:30 PM
2:00 PM
2:30 PM
3:00 PM
3:30 PM
4:00 PM
4:30 PM
5:00 PM
5:30 PM
6:00 PM
6:30 PM
7:00 PM
7:30 PM
8:00 PM
8:30 PM
9:00 PM
9:30 PM
10:00 PM
10:30 PM
11:00 PM
11:30 PM
Now, can I create a refinable managed property that combines the values of both fields to generate a date & time value, so I can search, filter and sort items based on this refinable value?
So if I have a combination of the 2 columns as follows:
22 August 2024 9:00 pm
22 August 2023 11:00 am
and I sort them descending, then
22 August 2024 9:00 pm
should be shown before
22 August 2023 11:00 am
Is this possible to manage inside SharePoint search?
Calculated column to add hours to a Date field
I have a SharePoint Online date field named DueDate, of type Date/Time, that allows entering a date only (without time).
And a choice field named DueDateTime with these values:
12:00 AM
12:30 AM
1:00 AM
1:30 AM
2:00 AM
2:30 AM
3:00 AM
3:30 AM
4:00 AM
4:30 AM
5:00 AM
5:30 AM
6:00 AM
6:30 AM
7:00 AM
7:30 AM
8:00 AM
8:30 AM
9:00 AM
9:30 AM
10:00 AM
10:30 AM
11:00 AM
11:30 AM
12:00 PM
12:30 PM
1:00 PM
1:30 PM
2:00 PM
2:30 PM
3:00 PM
3:30 PM
4:00 PM
4:30 PM
5:00 PM
5:30 PM
6:00 PM
6:30 PM
7:00 PM
7:30 PM
8:00 PM
8:30 PM
9:00 PM
9:30 PM
10:00 PM
10:30 PM
11:00 PM
11:30 PM
Now I want to create a calculated column which will add the number of hours based on the choice selection to the due date. Is this possible? I do not want to modify the DueDate field to allow date and time, since I am using Search web parts and I need to control the time value; it seems that by default the time in search is shown in UTC and not based on the site collection time zone… So can I create such a calculated column?
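A minimal sketch of one possible formula, assuming SharePoint Online calculated columns accept TIMEVALUE for these choice strings in your regional settings (worth verifying on a test list): set the calculated column’s return type to Date and Time and use
=[DueDate] + TIMEVALUE([DueDateTime])
TIMEVALUE converts text such as "9:30 PM" into a fraction of a day, so adding it to the date-only DueDate yields a combined date/time value without any nested IF statements.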
How to label multiple objects in object detection with different names?
Hi there!
I’ve a problem with labelling my objects in an image.
Let’s have a look at the image:
This programme is detecting the front/rear of the cars and a stop sign. I want the labels to say what they’re looking at, for example: "Stop Sign Confidence: 1.0000", "CarRear Confidence: 0.6446", etc. As you may see, my programme adds the probability values correctly, but there are still no strings/label names attached.
You can have a look at my code:
%%
% Read test image
testImage = imread('StopSignTest2.jpg');
% Detect stop signs
[bboxes,score,label] = detect(rcnn,testImage,'MiniBatchSize',128)
% Display detection results
label_str = cell(3,1);
conf_val = [score];
conf_lab = [label];
for ii=1:3
label_str{ii} = [' Confidence: ' num2str(conf_val(ii), '%0.4f')];
end
position = [bboxes];
outputImage = insertObjectAnnotation(testImage,'rectangle',position,label_str,...
'TextBoxOpacity',0.9,'FontSize',10);
figure
imshow(outputImage)
%%
I have no clue how to add strings to label_str{ii} the way I did with the scores (num2str(conf_val(ii))).
Thanking you in advance!
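A minimal sketch of one way to include the class name in each annotation, assuming label is the categorical vector returned by detect as shown above:
% Prepend the detected class name to each annotation string.
% char(label(ii)) converts the categorical class name to text,
% producing e.g. 'stopSign Confidence: 0.9876'.
for ii = 1:3
    label_str{ii} = [char(label(ii)) ' Confidence: ' num2str(conf_val(ii), '%0.4f')];
end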
multiple objects, object detection, strings, objects detection, neural network, cnn MATLAB Answers — New Questions
Search large lists using calculated columns, even though we cannot add calculated columns as indexes
Inside SharePoint Online we cannot index calculated columns at the list level, so we cannot filter large lists based on calculated columns inside list views. But what about Search? Can we search large lists using calculated columns and sort the search results based on them? In other words, is there a relation between indexing at the list level and the ability to search calculated columns?
I have large lists which might each contain 100,000 items in the future, and I am using the PnP Modern Search web part to show data from those lists. I want to be able to filter the PnP Search results using the calculated column, so is this achievable? Or, since we cannot add calculated columns to list indexes, does this mean that we will not be able to filter and sort search results using this calculated column? Or is this not the case?
Thanks
Copying a formula in a worksheet to pull data from other worksheets
Hello! I need some help!
I have a file that has a worksheet for each day of the month. I also have a sheet into which I want to pull key data from each worksheet to create a financial snapshot all in one place. I created a column for each day of the month. I have entered the formulas in Day 1 to pull data from the Day 1 worksheet. I want to pull the same data into the other columns (one column for each day’s worksheet).
How can I copy the formula over without having to manually change it 30 times for each line of data??
For instance, I have “Cash on Hand”, and in column 1 I have the formula =-'1'!$J9 to pull data into that cell from worksheet 1, cell J9. However, if I copy that formula into the columns for day 2, 3, etc., I find that I have to manually change the “1” in that formula to a “2”, then a “3”, and it is awfully time consuming.
Surely there is an easier way to copy the formula so it pulls data from that cell on different worksheets??
I’m attaching a sample file so you can see more easily. My formulas are correct in Column D for Day 1… but I need to pull data for days 2 – 31 as well. Ugh!
Would really appreciate your help! I know it’s got to be easier than manually changing the formulas 30 times for each line of my summary spreadsheet! At least I hope so!!
TIA!!
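A minimal sketch of one common approach, assuming your day sheets are literally named 1, 2, 3, … and (hypothetically) that row 2 of the summary holds the day number for each column: build the sheet reference as text with INDIRECT so the formula can be filled right without editing the sheet name.
=-INDIRECT("'" & D$2 & "'!$J9")
Filling this to the right changes D$2 to E$2 and so on, so each column pulls from the matching day sheet while the source cell ($J9) stays the same for that row.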
Updating ODBC and OLE
SQL Gurus,
When updating the ODBC and OLE DB drivers, they get installed as a new instance instead of updating the existing instance. Is there any command-line switch I can use to just upgrade?
Thanks
RK
How do you disable Windows TimeLine
I was appalled to discover MS keeping screenshots of work from my desktop, including items that would be considered insecure. It turns out they have turned on a feature called Timeline.
I went through the instructions to stop it storing locally and sending to the cloud, but I note the wording doesn’t say “stop Timeline working”.
I found the service to disable was the Connected User Experiences and Telemetry service; this was already disabled prior to me discovering the Timeline items. So what actually runs Timeline, and how do we disable this security flaw?
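A hedged sketch of the usual way to switch the feature off by policy, assuming the standard Activity History policy values apply to your Windows build (verify the value names against your version’s Group Policy documentation before relying on this):
# Disable the Activity Feed (Timeline) and the publishing/upload of user
# activities via the corresponding policy registry values; run elevated.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\System'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'EnableActivityFeed'    -Value 0 -Type DWord
Set-ItemProperty -Path $key -Name 'PublishUserActivities' -Value 0 -Type DWord
Set-ItemProperty -Path $key -Name 'UploadUserActivities'  -Value 0 -Type DWord
# Sign out/in or restart for the change to take effect.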
I am getting this error when running my code >>>>Dot indexing is not supported for variables of this type. Error in rl.util.expstruct2timeserstruct (line 7) observation
The below code is the one I am running
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector [∫e dt, e, h], where h is the height of the water in the tank, e = r − h, and r is the reference height.
Set up the reward signal.
Configure the termination signal such that the simulation stops if the tank height falls outside its allowed range.
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],...
LowerLimit=[-inf -inf 0 ]',...
UpperLimit=[ inf inf inf]');
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",...
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward when the agent starts from the state corresponding to the current observation, takes the given action, and follows the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, ...
"obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, ...
"actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, ...
obsInfo,actInfo, ...
ObservationInputNames="obsInLyr", ...
ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, ...
{rand(obsInfo.Dimension)}, ...
{rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name of the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions(...
MaxEpisodes=5000, ...
MaxStepsPerEpisode=ceil(Tf/Ts), ...
ScoreAveragingWindowLength=20, ...
Verbose=false, ...
Plots="training-progress",...
StopTrainingCriteria="AverageReward",...
StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
Local Reset Function
function in = localResetFcn(in)
% Randomize reference signal
blk = sprintf("RLFinal_PhD_Model_DroopPQ1/Droop/Voutref");
h = 3*randn + 0.5;
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize initial height
h1 = 3*randn + 200;
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
I am getting the following results, with no rewards at all (zero rewards).
When I stop the training, I see this error:
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc.
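One hedged first check, not from the question itself: before calling train or sim, run the toolbox’s environment validation, which exercises the Simulink interface once and tends to surface wiring or reset-function problems earlier than the dot-indexing failure inside sim.
% Run a brief validation simulation against obsInfo/actInfo; this errors
% early if the RL Agent block wiring or the reset function is inconsistent,
% which is easier to diagnose than an empty experience struct after sim().
validateEnvironment(env)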
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector , where is the height of the water in the tank, , and is the reference height.
Set up the reward .
Configure the termination signal such that the simulation stops if or .
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],…
LowerLimit=[-inf -inf 0 ]’,…
UpperLimit=[ inf inf inf]’);
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",…
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward for which receives the action from the state corresponding to the current observation, and following the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, …
"obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, …
"actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names if the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, …
obsInfo,actInfo, …
ObservationInputNames="obsInLyr", …
ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, …
{rand(obsInfo.Dimension)}, …
{rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name if the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run each training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions(…
MaxEpisodes=5000, …
MaxStepsPerEpisode=ceil(Tf/Ts), …
ScoreAveragingWindowLength=20, …
Verbose=false, …
Plots="training-progress",…
StopTrainingCriteria="AverageReward",…
StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
Local Reset Function
function in = localResetFcn(in)
% Randomize reference signal
blk = sprintf("RLFinal_PhD_Model_DroopPQ1/Droop/Voutref");
h = 3*randn + 0.5;
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize initial height
h1 = 3*randn + 200;
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
I am getting the following results without rewards at all
Zero rewards
When I stop the trainging I see this error::::
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc. The below code is the one I am running
Create Simulink Environment and Train Agent
This example shows how to convert the PI controller in the watertank Simulink® model to a reinforcement learning deep deterministic policy gradient (DDPG) agent. For an example that trains a DDPG agent in MATLAB®, see Train DDPG Agent to Balance Double Integrator Environment.
Water Tank Model
The original model for this example is the water tank model. The goal is to control the level of the water in the tank. For more information about the water tank model, see watertank Simulink Model.
Modify the original model by making the following changes:
Delete the PID Controller.
Insert the RL Agent block.
Connect the observation vector , where is the height of the water in the tank, , and is the reference height.
Set up the reward .
Configure the termination signal such that the simulation stops if or .
The resulting model is rlwatertank.slx. For more information on this model and the changes, see Create Simulink Environment for Reinforcement Learning.
open_system("RLFinal_PhD_Model_DroopPQ1")
Create the Environment
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment. For more information, see rlNumericSpec and rlFiniteSetSpec.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Define the observation specification obsInfo and action specification actInfo.
% Observation info
obsInfo = rlNumericSpec([3 1],…
LowerLimit=[-inf -inf 0 ]’,…
UpperLimit=[ inf inf inf]’);
% Name and description are optional and not used by the software
obsInfo.Name = "observations";
obsInfo.Description = "integrated error, error, and measured height";
% Action info
actInfo = rlNumericSpec([1 1]);
actInfo.Name = "flow";
Create the environment object.
env = rlSimulinkEnv("RLFinal_PhD_Model_DroopPQ1","RLFinal_PhD_Model_DroopPQ1/RL Agent1",…
obsInfo,actInfo);
Set a custom reset function that randomizes the reference values for the model.
env.ResetFcn = @(in)localResetFcn(in);
Specify the simulation time Tf and the agent sample time Ts in seconds.
Ts = 1.0;
Tf = 200;
Fix the random generator seed for reproducibility.
rng(0)
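Not part of the original example, but a quick check at this point can catch wiring problems between the model and the RL Agent block before any training time is spent; validateEnvironment runs a short simulation and errors if the signals do not match obsInfo and actInfo:
% Optional sanity check (not in the original example): simulate briefly and
% verify that the observation and action signals match the specifications.
validateEnvironment(env)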
Create the Critic
DDPG agents use a parametrized Q-value function approximator to estimate the value of the policy. A Q-value function critic takes the current observation and an action as inputs and returns a single scalar as output (the estimated discounted cumulative long-term reward obtained when the agent starts from the state corresponding to the current observation, takes the given action, and follows the policy thereafter).
To model the parametrized Q-value function within the critic, use a neural network with two input layers (one for the observation channel, as specified by obsInfo, and the other for the action channel, as specified by actInfo) and one output layer (which returns the scalar value).
Define each network path as an array of layer objects. Assign names to the input and output layers of each path. These names allow you to connect the paths and then later explicitly associate the network input and output layers with the appropriate environment channel. Obtain the dimension of the observation and action spaces from the obsInfo and actInfo specifications.
% Observation path
obsPath = [
featureInputLayer(obsInfo.Dimension(1),Name="obsInLyr")
fullyConnectedLayer(50)
reluLayer
fullyConnectedLayer(25,Name="obsPathOutLyr")
];
% Action path
actPath = [
featureInputLayer(actInfo.Dimension(1),Name="actInLyr")
fullyConnectedLayer(25,Name="actPathOutLyr")
];
% Common path
commonPath = [
additionLayer(2,Name="add")
reluLayer
fullyConnectedLayer(1,Name="QValue")
];
% Create the network object and add the layers
criticNet = dlnetwork();
criticNet = addLayers(criticNet,obsPath);
criticNet = addLayers(criticNet,actPath);
criticNet = addLayers(criticNet,commonPath);
% Connect the layers
criticNet = connectLayers(criticNet, ...
"obsPathOutLyr","add/in1");
criticNet = connectLayers(criticNet, ...
"actPathOutLyr","add/in2");
View the critic network configuration.
figure
plot(criticNet)
Initialize the dlnetwork object and summarize its properties.
criticNet = initialize(criticNet);
summary(criticNet)
Create the critic approximator object using the specified deep neural network, the environment specification objects, and the names of the network inputs to be associated with the observation and action channels.
critic = rlQValueFunction(criticNet, ...
obsInfo,actInfo, ...
ObservationInputNames="obsInLyr", ...
ActionInputNames="actInLyr");
For more information on Q-value function objects, see rlQValueFunction.
Check the critic with a random input observation and action.
getValue(critic, ...
{rand(obsInfo.Dimension)}, ...
{rand(actInfo.Dimension)})
For more information on creating critics, see Create Policies and Value Functions.
Create the Actor
DDPG agents use a parametrized deterministic policy over continuous action spaces, which is learned by a continuous deterministic actor.
A continuous deterministic actor implements a parametrized deterministic policy for a continuous action space. This actor takes the current observation as input and returns as output an action that is a deterministic function of the observation.
To model the parametrized policy within the actor, use a neural network with one input layer (which receives the content of the environment observation channel, as specified by obsInfo) and one output layer (which returns the action to the environment action channel, as specified by actInfo).
Define the network as an array of layer objects.
actorNet = [
featureInputLayer(obsInfo.Dimension(1))
fullyConnectedLayer(3)
tanhLayer
fullyConnectedLayer(actInfo.Dimension(1))
];
Convert the network to a dlnetwork object and summarize its properties.
actorNet = dlnetwork(actorNet);
summary(actorNet)
Create the actor approximator object using the specified deep neural network, the environment specification objects, and the name of the network input to be associated with the observation channel.
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
For more information, see rlContinuousDeterministicActor.
Check the actor with a random input observation.
getAction(actor,{rand(obsInfo.Dimension)})
For more information on creating actors, see Create Policies and Value Functions.
Create the DDPG Agent
Create the DDPG agent using the specified actor and critic approximator objects.
agent = rlDDPGAgent(actor,critic);
For more information, see rlDDPGAgent.
Specify options for the agent, the actor, and the critic using dot notation.
agent.SampleTime = Ts;
agent.AgentOptions.TargetSmoothFactor = 1e-3;
agent.AgentOptions.DiscountFactor = 1.0;
agent.AgentOptions.MiniBatchSize = 64;
agent.AgentOptions.ExperienceBufferLength = 1e6;
agent.AgentOptions.NoiseOptions.Variance = 0.3;
agent.AgentOptions.NoiseOptions.VarianceDecayRate = 1e-5;
agent.AgentOptions.CriticOptimizerOptions.LearnRate = 1e-03;
agent.AgentOptions.CriticOptimizerOptions.GradientThreshold = 1;
agent.AgentOptions.ActorOptimizerOptions.LearnRate = 1e-04;
agent.AgentOptions.ActorOptimizerOptions.GradientThreshold = 1;
Alternatively, you can specify the agent options using an rlDDPGAgentOptions object.
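A minimal sketch of that alternative, mirroring the values set above (the optimizer settings could likewise be passed in as rlOptimizerOptions objects):
% Equivalent configuration through an options object instead of dot notation.
agentOpts = rlDDPGAgentOptions( ...
SampleTime=Ts, ...
TargetSmoothFactor=1e-3, ...
DiscountFactor=1.0, ...
MiniBatchSize=64, ...
ExperienceBufferLength=1e6);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;
agent = rlDDPGAgent(actor,critic,agentOpts);   % pass the options at creation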
Check the agent with a random input observation.
getAction(agent,{rand(obsInfo.Dimension)})
Train Agent
To train the agent, first specify the training options. For this example, use the following options:
Run the training for at most 5000 episodes. Specify that each episode lasts for at most ceil(Tf/Ts) (that is, 200) time steps.
Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option to false).
Stop training when the agent receives an average cumulative reward greater than 800 over 20 consecutive episodes. At this point, the agent can control the level of water in the tank.
For more information, see rlTrainingOptions.
trainOpts = rlTrainingOptions( ...
MaxEpisodes=5000, ...
MaxStepsPerEpisode=ceil(Tf/Ts), ...
ScoreAveragingWindowLength=20, ...
Verbose=false, ...
Plots="training-progress", ...
StopTrainingCriteria="AverageReward", ...
StopTrainingValue=800);
Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.
doTraining = true;
if doTraining
% Train the agent.
trainingStats = train(agent,env,trainOpts);
else
% Load the pretrained agent for the example.
load("WaterTankDDPG.mat","agent")
end
Validate Trained Agent
Validate the learned agent against the model by simulation. Since the reset function randomizes the reference values, fix the random generator seed to ensure simulation reproducibility.
rng(1)
Simulate the agent within the environment, and return the experiences as output.
simOpts = rlSimulationOptions(MaxSteps=ceil(Tf/Ts),StopOnError="on");
experiences = sim(env,agent,simOpts);
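When the simulation completes successfully, experiences holds the logged signals; the Observation field is a structure with one timeseries per observation channel, named after obsInfo.Name. A minimal sketch of inspecting the measured height (treat the exact field layout as an assumption if your release differs):
% Sketch: pull the logged observation out of the experience structure and
% plot its third element (the measured height defined in obsInfo above).
obsTS = experiences.Observation.observations;   % timeseries, data is 3x1xT
plot(obsTS.Time, squeeze(obsTS.Data(3,1,:)))
xlabel("Time (s)"), ylabel("Measured height")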
Local Reset Function
function in = localResetFcn(in)
% Randomize reference signal
blk = "RLFinal_PhD_Model_DroopPQ1/Droop/Voutref";
h = 3*randn + 200; % draw around 200 so the loop below keeps 0 < h < 400
while h <= 0 || h >= 400
h = 3*randn + 200;
end
in = setBlockParameter(in,blk,Value=num2str(h));
% Randomize the gain
h1 = 3*randn + 0.5; % draw around 0.5 so the loop below keeps 0 < h1 < 1
while h1 <= 0 || h1 >= 1
h1 = 3*randn + 0.5;
end
blk = "RLFinal_PhD_Model_DroopPQ1/Gain";
in = setBlockParameter(in,blk,Gain=num2str(h1));
end
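One way to exercise this reset logic outside of training (a sketch, not part of the example) is to apply it to a fresh Simulink.SimulationInput and list the block parameters it overrides, so you can see exactly what the reset sets before training uses it:
% Sketch: run the reset function once by hand and inspect what it sets.
in = Simulink.SimulationInput("RLFinal_PhD_Model_DroopPQ1");
in = localResetFcn(in);
disp(in.BlockParameters)   % should list the Voutref Value and the Gain overrides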
I am getting the following results, with no rewards at all:
[Training progress plot: episode rewards stay at zero]
When I stop the training, I see this error:
Dot indexing is not supported for variables of this type.
Error in rl.util.expstruct2timeserstruct (line 7)
observation = {experiences.Observation};
Error in rl.env.AbstractEnv/sim (line 138)
s = rl.util.expstruct2timeserstruct(exp,time,oinfo,ainfo);
Copyright 2019 – 2023 The MathWorks, Inc.
Can someone help? MATLAB Answers — New Questions
How can I plot a hyperbola?
Hi everyone,
I’m a beginner at MATLAB, so I don’t have much experience. Right now I’m trying to plot a hyperbola that I’m using for Time Difference of Arrival (TDoA), but I’ve been lost for hours and still can’t figure out how to plot it. Any suggestions on how to solve this problem?
Here is my code:
function hyperbola()
syms x y ;
f = @(x)0.4829 == sqrt((95-x)^2-(0-y)^2)-sqrt((0-x)^2-(0-y)^2);
fplot(f);
end
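For what it is worth, one way to draw an implicit curve like this TDoA hyperbola is fimplicit, which plots the zero level set of a function of two variables. A minimal sketch, keeping the sensor positions (0,0) and (95,0) and the range difference 0.4829 from the posted code, and assuming the square roots are meant to be Euclidean distances (so + y^2 rather than - y^2 inside each root):
% Sketch: plot the set of points whose distances to the two sensors
% differ by 0.4829 (assumed Euclidean distances, hence the + y.^2 terms).
d1 = @(x,y) sqrt((95 - x).^2 + y.^2);    % distance to the sensor at (95,0)
d2 = @(x,y) sqrt(x.^2 + y.^2);           % distance to the sensor at (0,0)
f  = @(x,y) d1(x,y) - d2(x,y) - 0.4829;  % zero on the hyperbola branch
fimplicit(f,[-50 150 -100 100])          % plotting window chosen arbitrarily
xlabel("x"), ylabel("y"), title("TDoA hyperbola")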
hyperbola, tdoa, nonlinear MATLAB Answers — New Questions
Unrecognized function or variable ‘doPlot’.
if doPlot == 1
plot(density)
title("Sample Densities")
xticklabels(element)
ylabel("Density (g/cm^3)")
end
Showing an error while submitting. MATLAB Answers — New Questions
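Regarding the doPlot question above: the error means doPlot has no value in the workspace when the if statement runs. If this comes from a graded exercise, the variable may be supplied by the test harness; to run the snippet on its own, define it (and the data the plot needs) first. A minimal sketch, with made-up placeholder values for element and density:
% Sketch: define the flag and placeholder data before the if block runs.
% The element names and densities below are made-up examples.
doPlot = 1;
element = ["C" "Al" "Fe" "Cu" "Pb"];
density = [2.27 2.70 7.87 8.96 11.34];
if doPlot == 1
    plot(density)
    title("Sample Densities")
    xticks(1:numel(element))     % one tick per sample so the labels line up
    xticklabels(element)
    ylabel("Density (g/cm^3)")
end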