Month: August 2024
Saving class objects in array suddenly tanking performance
So, I have this tool I’ve developed to do some radar simulations. One small part of it is this class I use just for bundling and passing around detection data instead of a struct:
classdef Detection
% Data holder for a radar detection
properties
time (1,1) double
snr (1,1) double
raer (1,4) double
sigmas (1,4) double
end
methods
function obj = Detection(time,snr,raer,sigmas)
if nargin > 0 % Allow no args constructor to be called to make empty detections
obj.time = time;
obj.snr = snr;
obj.raer = raer;
obj.sigmas = sigmas;
end
end
end
end
These get created in one handle class then passed off to another that saves data on a given target. In that object, I save them into a (pre-allocated* array):
this.detections(i) = det;
After making some code changes yesterday, this one line suddenly went from computationally trivial to 70% of the run-time of a basic sim run when profiled, more than tripling the run time of said sim from ~1 s to ~3 s for 761 detections saved. I can’t figure out how to pull out any more detail about what’s suddenly making such a simple operation so slow, or how to work around it. I’ve run into weird behavior like this in the past, where saving simple data-holder objects caused bizarre performance issues, but usually I could tweak something or find a workaround; here I’m just stumped as to what caused it, because I didn’t change anything about the object or how it’s saved.
*Specifically, they start with a base size, and if the sim runs long enough to outstrip that, I start doubling the data arrays; I just thought I’d add that in case there’s some weird potential behavior there.
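For reference, the preallocate-and-double pattern I’m describing looks roughly like this (a sketch only; DEFAULT_SIZE and the growth trigger are illustrative, not the real code):

```matlab
% Sketch of the preallocation/doubling scheme described above
% (DEFAULT_SIZE is illustrative, not from the actual tool).
DEFAULT_SIZE = 512;
this.detections = repmat(Detection(), 1, DEFAULT_SIZE); % no-args ctor fills array

% ... inside the save path:
if i > numel(this.detections)
    % double the array; assigning past the end grows it with default objects
    this.detections(2*numel(this.detections)) = Detection();
end
this.detections(i) = det;
```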
Edit: I tried making this class and the Dwell class (the other one mentioned in comments) handles, and that seems to have alleviated the problem (although it still seems noticeably slower than saving structs), but I’m curious as to why saving value classes is seemingly so slow.
class, objects, performance, arrays MATLAB Answers — New Questions
Power Automate Post Loop Component to Teams
I have a daily post to a teams channel that includes a link to a Microsoft Loop. I would like the daily post to include the loop component. I haven’t found a way to include the component. Suggestions?
Running MS Teams in Kiosk Mode on a Windows 11 desktop
I have a Kiosk (Windows 11) where MS Teams needs to be autolaunched, I have set it up in Intune for the Kiosk Mode as follows:
I am not sure which one should I autolaunch here?
Microsoft Teams or Microsoft Teams Autostart
Next, should I also have the Win32 app of MS Teams defined? If so, there is an issue with the MS Teams installation location, as it is not the location I thought it was: “%localappdata%\Microsoft\Teams\Current”
So, what is the path where MS Teams should be defined?
Also, I am not sure where to find this in the article on the Microsoft forum that shows the location of MS Teams. I do not see anything like this on my computer: “C:\Program Files\WindowsApps\MSTeams_23272.2707.2453.769_x64__8wekyb3d8bbwe\ms-teams.exe”. Will this path keep changing with newer versions of Teams? If so, how do updates happen, and how does it work in Kiosk mode when a new version of Teams is installed? I hope I am clear in what I am trying to say here.
Launching Teams from Stream Deck / location of New Teams exe – Microsoft Community
The next issue I see is that the configuration I modified and applied is showing as pending. How long does this take to succeed? I did a manual sync multiple times through Intune on the device, but it has still been in this state for more than 3 to 4 hours.
Not able to run simulations using Rapid accelerator
Running the example provided by MathWorks to simulate in rapid accelerator mode (sldemo_bounce), I got the following error message:
Top Model Build
1
Elapsed: 3 sec
### Building the rapid accelerator target for model: sldemo_bounce "INGWROOTbin/gcc" -c -fwrapv -m64 -O0 -DCLASSIC_INTERFACE=1 -DALLOCATIONFCN=0 -DONESTEPFCN=0 -DTERMFCN=1 -DMULTI_INSTANCE_CODE=0 -DINTEGER_CODE=0 -DEXT_MODE -DIS_RAPID_ACCEL -DTGTCONN -DIS_SIM_TARGET -DNRT -DRSIM_PARAMETER_LOADING -DRSIM_WITH_SL_SOLVER -DENABLE_SLEXEC_SSBRIDGE=1 -DMODEL_HAS_DYNAMICALLY_LOADED_SFCNS=0 -DON_TARGET_WAIT_FOR_START=0 -DTID01EQ=0 -DMODEL=sldemo_bounce -DNUMST=2 -DNCSTATES=2 -DHAVESTDIO @sldemo_bounce_comp.rsp -o "rt_logging_simtarget.obj" "C:/PROGRA~1/MATLAB/R2021a/rtw/c/src/rt_logging_simtarget.c" The system cannot find the path specified. gmake: *** [rt_logging_simtarget.obj] Error 1 The make command returned an error of 2 ### Build procedure for sldemo_bounce aborted due to an error.
Build Summary
1
Elapsed: 0.2 sec
Top model rapid accelerator targets built: Model Action Rebuild Reason ========================================================================= sldemo_bounce Failed Code generation information file does not exist. 0 of 1 models built (0 models already up to date) Build duration: 0h 0m 3.238s
Unable to build a standalone executable to simulate the model ‘sldemo_bounce’ in rapid accelerator mode.
Caused by:
Error(s) encountered while building "sldemo_bounce"
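The “The system cannot find the path specified” line during the gcc step usually points at the compiler setup rather than at Simulink itself; one way to check which compiler MATLAB will invoke (a hedged suggestion, not a confirmed fix):

```matlab
% Show the C compiler MATLAB/Simulink will use for the rapid
% accelerator build; a broken MinGW path is one common cause of
% "The system cannot find the path specified" during gmake.
cc = mex.getCompilerConfigurations('C', 'Selected');
disp(cc.Name)
disp(cc.Location)
% If the location looks wrong, re-run compiler selection:
% mex -setup C
```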
To run the simulation in rapid accelerator mode, do I need the Real-Time Workshop tools license?
Regards
Eduardo
simulink, rapid accelerator MATLAB Answers — New Questions
Can the output of plsregress be used to calculate Q residuals and T2 for new X data
Assume we have spectral data xcal, ycal, xval, yval where
xcal is mxn : m spectra, or observations, of a sample, n wavelengths per spectrum
ycal is mx1 : m concentrations of the sample corresponding to the m observations in xcal
xval is 1xn : 1 new spectrum, or new observation, of the sample (i.e., not a member of xcal)
yval is 1×1 : 1 new concentration of the sample corresponding to the observation in xval
assuming m>n and ncomp<n and xcal0 is xcal with its mean subtracted,
xcal0 = xcal - ones(m,1)*mean(xcal)
[XL,YL,XS,YS,BETA,PCTVAR,MSE,STATS] = PLSREGRESS(xcal,ycal,ncomp);
Can be used to compute Q residuals, or the rowwise sum of squares of the STATS.XResiduals matrix
and
STATS.T2, is the Hotelling T^2 value
for each of the m observations in xcal
Q residuals and T2 values can be used to determine if the observations in xcal are outliers
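For what it’s worth, one common way to project a new observation into an existing PLS model looks like this (a sketch; it assumes the SIMPLS convention that XS = xcal0*STATS.W, and this T^2 normalization may differ slightly from how STATS.T2 is scaled):

```matlab
% Sketch: Q residual and Hotelling T^2 for a new row xval, using the
% calibration model from plsregress (assumptions noted above).
mu = mean(xcal);
[XL,YL,XS,YS,BETA,PCTVAR,MSE,STATS] = plsregress(xcal, ycal, ncomp);
t_new = (xval - mu) * STATS.W;      % scores of the new observation
x_hat = t_new * XL';                % reconstruction in X-space
e     = (xval - mu) - x_hat;        % X-residual row
Q_new = sum(e.^2);                  % Q residual for xval
T2_new = t_new / cov(XS) * t_new';  % T^2 vs calibration score covariance
```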
Can the outputs of plsregress as described above be used to compute a Q residual and a T^2 value for the single observation in xval, to determine whether it seems to be an outlier with respect to xcal?
plsregress q residuals, plsregress t2, plsregress outlier new data MATLAB Answers — New Questions
Convert string of nested field names to variable without eval?
Sorry if this is answered elsewhere; it wasn’t obvious to me from searching the archives that this exact question has been posed before.
I am parsing XML files using the function in File Exchange xml2struct (https://www.mathworks.com/matlabcentral/fileexchange/28518-xml2struct) which is a great tool, BTW.
As you might expect, the output is a nested structure, and depending on the input file, the field names and complexity of the structures are highly variable.
I am searching through these structures to find a particular field of interest:
S.Data.Configuration.FieldofInterest
but in another file, that field might be on a completely different branch or level:
S2.Data.Nested.Field.FieldofInterest
I can generate a list of strings with all the field names, which I can parse to find the field of interest. But then I have some strings like this:
fnlist = ‘S.Data.Configuration.FieldofInterest’;
fnlist2 = ‘S2.Data.Nested.Field.FieldofInterest’;
If I want to extract the data from the fields of interest, the only way I know how to do this is to use eval:
output = eval(fnlist);
I’m not a fan of the eval functions because they make it very hard to debug code and diagnose problems if the string gets malformed.
I’d like to use dynamic field names like S.(name1).(name2).(name3), etc., but unless you know a priori the data structure and how many levels deep your target field is (which I will not), this isn’t possible.
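For context, the eval-free pattern I’m hoping exists would look something like this (a sketch: split the dotted path and index one level at a time with dynamic field names):

```matlab
% Sketch: resolve a dotted field path without eval. Assumes the root
% variable (here S) is already in scope; parts{1} is just its name.
fnlist = 'S.Data.Configuration.FieldofInterest';
parts  = strsplit(fnlist, '.');
val = S;
for k = 2:numel(parts)
    val = val.(parts{k});     % dynamic field name, one level per step
end
% Equivalently, getfield accepts the chain as a comma-separated list:
% val = getfield(S, parts{2:end});
```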
Is there another alternative besides eval? Thanks in advance.
struct, eval, field names MATLAB Answers — New Questions
OpenSSL Defender Vulnerabilities yet no OpenSSL Installed – Only OpenSSL Libs
Defender is alerting that there are vulns with OpenSSL needing updates on devices, yet none actually have it installed. The paths are C:\Windows\System32\DriverStore\FileRepository\iclsclient.inf_amd64_fc84dfa25a6a7727\lib\libcrypto-3-x64.dll and C:\Windows\System32\DriverStore\FileRepository\iclsclient.inf_amd64_fc84dfa25a6a7727\lib\libssl-3-x64.dll
I don’t want to have to push OpenSSL out to all machines, but Microsoft isn’t including updates for these. Has anyone run into this and found a fix?
IIS 10 serving wrong ssl cert
All sites on our server, when browsed to, are showing one wildcard SSL cert, even though the other sites have a different SSL cert set in their bindings.
I checked every binding and made sure to select the specific IP address instead of “All Unassigned”, like some forums said. All the bindings with SSL have a host name and “Require Server Name Indication” checked.
I don’t know what else to do! Is there any help you can give me on this? I hadn’t changed any SSL or binding settings recently and they just stopped working today.
It just serves the *.hoaguru.com SSL certificate to all sites, even if they point to a different SSL cert in the bindings.
The sites that do use the *.hoaguru.com ssl cert are very important. I cannot remove them.
Help with concatenate with IF function for blank date
I am using Concatenate (CONCAT) to pull 3 date values from a worksheet within a workbook.
=CONCAT(TEXT('Liability Schedule'!F219,"mm/dd/yyyy"),"
",
TEXT('Liability Schedule'!F220,"mm/dd/yyyy"),"
",
TEXT('Liability Schedule'!F221,"mm/dd/yyyy"))
However, if the Liability Schedule doesn’t have a value (blank), Excel will default to 01/00/1900. I’m trying to expand the rule so that IF the referring cell is blank in the Liability Schedule, it is replaced with “N/A”.
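One way to get that behavior (a sketch, wrapping each TEXT call in an IF that tests the source cell for blank, keeping the same line-break separators as above):

```
=CONCAT(
  IF('Liability Schedule'!F219="","N/A",TEXT('Liability Schedule'!F219,"mm/dd/yyyy")),"
",
  IF('Liability Schedule'!F220="","N/A",TEXT('Liability Schedule'!F220,"mm/dd/yyyy")),"
",
  IF('Liability Schedule'!F221="","N/A",TEXT('Liability Schedule'!F221,"mm/dd/yyyy")))
```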
I am using Concatenate (CONCAT) to pull 3 date values from a worksheet within a workbook.=CONCAT(TEXT((‘Liability Schedule’!F219),”mm/dd/yyyy”),”
“,
TEXT((‘Liability Schedule’!F220),”mm/dd/yyyy”),”
“,
TEXT((‘Liability Schedule’!F221),”mm/dd/yyyy”))However, if the Liability Schedule doesn’t have a value (blank), excel will default to 01/00/1900. I’m trying to expand the rule that IF the referring cell is blank in the Liability Schedule, I want to replace it with “N/A” Read More
Can anyone please help ?
I’m trying to compute the steady solutions for the stream function and scalar vorticity for a 2-D flow around an infinite cylinder at Re = 10.
here is the question:
Here is the code:
function [psi, omega] = flow_around_cylinder_steady
Re=10;
%%%%% define the grid %%%%%
n=101; m=101; % number of grid points
N=n-1; M=m-1; % number of grid intervals
h=pi/M; % grid spacing based on theta variable
xi=(0:N)*h; theta=(0:M)*h; % xi and theta variables on the grid
%%%%% Initialize the flow fields %%%%%
psi=zeros(n,m);
omega=zeros(n,m);
psi(n,:)=exp(xi(n)) * sin(theta(:));
%%%%% Set relax params, tol, extra variables %%%%%
r_psi=1.8
r_omega=0.9
delta=1.e-08; % error tolerance
error=2*delta; % initialize error variable
%%%%% Add any additional variable definitions here %%%%%
…
…
%%%%% Main SOR Loop %%%%%
while (error > delta)
psi_old = psi; omega_old = omega;
for i=2:n-1
for j=2:m-1
psi(i,j)=exp(xi(i)) * sin(theta(j));
end
end
error_psi=max(abs(psi(:)-psi_old(:)));
omega(1,:)= (psi(3,j) – 8*psi(2,j)) * 1/(2*h^2);
for i=2:n-1
for j=2:m-1
omega_old(1,j)= (psi(3,j) – 8*psi(2,j)) * 1/(2*h^2);
end
end
error_omega=max(abs(omega(:)-omega_old(:)));
error=max(error_psi, error_omega);
end
plot_Re10(psi);
The code to call the function is:
[psi, omega] = flow_around_cylinder_steady;
But i get this:
The server timed out while running your solution. Potential reasons include inefficient code, an infinite loop, and excessive output. Try to improve your solution.
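One thing to check: the inner loops update omega_old rather than omega, so error_omega never shrinks and the while loop can spin forever, which would explain the timeout. For comparison, a generic SOR sweep on a 2-D Laplace problem looks like this (an illustrative sketch only, not the assignment’s coupled psi/omega equations; note how the relaxed correction is blended into the current value rather than assigning a fixed expression each sweep):

```matlab
% Illustrative SOR sweep on a 2-D Laplace problem (not the cylinder
% equations). Pattern: compute the Gauss-Seidel value from the current
% neighbors, then blend it in with the relaxation factor.
n = 50; u = zeros(n); u(1,:) = 1;     % simple Dirichlet boundary
r = 1.8; tol = 1e-8; err = 2*tol;
while err > tol
    uold = u;
    for i = 2:n-1
        for j = 2:n-1
            gs = 0.25*(u(i+1,j) + u(i-1,j) + u(i,j+1) + u(i,j-1));
            u(i,j) = u(i,j) + r*(gs - u(i,j));   % SOR update
        end
    end
    err = max(abs(u(:) - uold(:)));
end
```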
Is there any way to improve it?
steady flow at re = 10, homework, assignment, exam question, cylinder MATLAB Answers — New Questions
How to generate a time signal from spectrum
Hello,
I’m working on a spectrum that comes from sea wave data.
I have a JONSWAP spectrum and I need to generate a signal in the time domain to feed my numerical model of a floating body; can anybody suggest how to do it?
signal, spectrum MATLAB Answers — New Questions
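A common recipe for the spectrum-to-time-series step above is to superpose cosines with random phases and amplitudes a_i = sqrt(2*S(f_i)*df); a sketch (the spectrum below is a placeholder shape, NOT an actual JONSWAP):

```matlab
% Sketch: synthesize a surface-elevation time series from a one-sided
% wave spectrum S(f). Uses implicit expansion (R2016b+).
f   = linspace(0.05, 0.5, 200);        % frequency grid [Hz] (example)
S   = f.^-5 .* exp(-1./(4*f.^4));      % placeholder shape, NOT JONSWAP
df  = f(2) - f(1);
a   = sqrt(2*S*df);                    % component amplitudes
phi = 2*pi*rand(size(f));              % random phases
t   = 0:0.1:600;                       % time vector [s]
eta = a * cos(2*pi*f(:)*t + phi(:));   % 1-by-numel(t) elevation signal
```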
Highlighting tasks that need someone’s attention without assigning the tasks to that person
I have a request from a User who uses Planner to manage projects for a multi-functional team. They want a way to easily see what tasks and projects require their attention. I thought creating a custom priority or a Flag to mark these would be a good idea, but I can’t find how to do that. Has anyone else had this issue or has a solution worked for them?
Hi, I have a query: I removed the harmonics to preserve the fundamental, but the fundamental at 1 MHz is also filtered; can anyone tell me why?
% Load the frequency domain data
folder = 'C:\Users\haneu\OneDrive\바탕 화면\date\New folder (2)';
filename = ‘270mvp.csv’;
data = readtable(fullfile(folder, filename));
% Extract frequency and FFT amplitude (assumed to be in dB)
f = table2array(data(3:end, 3)); % Frequency data (in Hz)
x_dB = table2array(data(3:end, 4)); % FFT amplitude data in dB
% Number of points in the FFT
N = length(x_dB);
% Compute the sampling frequency (fs)
% Assuming that your frequency axis (f) is spaced evenly
df = f(2) - f(1); % Frequency resolution
fs = N * df; % Sampling frequency
% Convert the amplitude from dB to linear scale
x_linear = 10.^(x_dB / 20);
% Compute the Power Spectral Density (PSD)
pxx = (x_linear.^2) / (fs * N);
% Convert PSD to dB/Hz
pxx_dBHz = 10 * log10(pxx);
% Fundamental frequency
f0 = 1e6; % 1 MHz
% Identify harmonic frequencies
harmonics = f0 * (1:floor(max(f)/f0));
% Remove harmonics from PSD
pxx_dBHz_filtered = pxx_dBHz; % Copy original PSD
for k = 1:length(harmonics)
harmonic_freq = harmonics(k);
[~, idx] = min(abs(f – harmonic_freq)); % Find the closest frequency index
pxx_dBHz_filtered(idx) = -Inf; % Set PSD of harmonics to -Inf (effectively removing)
end
% Plot the original Power Spectral Density (PSD)
figure;
subplot(2, 1, 1);
plot(f/1e6, pxx_dBHz);
xlabel(‘Frequency (MHz)’);
ylabel(‘PSD (dB/Hz)’);
title(‘Original Power Spectral Density (PSD)’);
% Plot the filtered Power Spectral Density (PSD)
subplot(2, 1, 2);
plot(f/1e6, pxx_dBHz_filtered);
xlabel(‘Frequency (MHz)’);
ylabel(‘PSD (dB/Hz)’);
title(‘Filtered Power Spectral Density (PSD)’);
% Highlight fundamental frequency in the plot
hold on;
plot(f0/1e6, pxx_dBHz_filtered(abs(f – f0) < df), ‘ro’, ‘MarkerSize’, 8, ‘LineWidth’, 2);
legend('Filtered PSD', 'Fundamental Frequency');
signal processing, filter MATLAB Answers — New Questions
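The likely root cause is in the posted code itself: harmonics = f0 * (1:floor(max(f)/f0)) starts the multiplier range at 1, so the list of "harmonics" includes the 1 MHz fundamental, which then gets zeroed out along with everything else. A tiny illustration of the off-by-one (a Python stand-in for the MATLAB colon expression; f_max is an assumed value):

```python
f0 = 1e6          # fundamental, Hz
f_max = 5e6       # assumed top of the measured band, for illustration

# MATLAB's 1:floor(max(f)/f0) includes the fundamental itself ...
with_fundamental = [f0 * k for k in range(1, int(f_max // f0) + 1)]

# ... whereas starting at 2 keeps the fundamental and removes only true harmonics.
harmonics_only = [f0 * k for k in range(2, int(f_max // f0) + 1)]

print(with_fundamental[0], harmonics_only[0])
```

In the MATLAB code, changing the range to 2:floor(max(f)/f0) should leave the 1 MHz component untouched.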
Mex compiler MinGW64 only, although msvcpp xmls are in the mexopts folder
Hi,
I am having issues – for whatever reason my previous GPU compile setup for Visual Studio 2019 and MATLAB 2022a got corrupted, and Visual Studio no longer offers a VS 2019 download.
My exact issue is that mex -setup C++ only finds MinGW64 and not Visual Studio (msvc2022), even though its XML is in the mexopts folder and Visual Studio 2022 is installed (including the manual C++ library installation).
MathWorks ships the compiler XML files, but it seems mex is not robust enough to find them, and it does not allow manual addition of compilers even though the XMLs are right there in the win64 folder. Does anyone have any ideas on how to compile with Visual Studio 2022 under whatever MATLAB version works with it? mex, mex compiler MATLAB Answers — New Questions
Dev Channel update to 129.0.2752.4 is live.
Hello Insiders! We released 129.0.2752.4 to the Dev channel! This includes numerous fixes. For more details on the changes, check out the highlights below.
Added Features:
Added an observer to track extension uninstalls in Browser Essentials.
Improved Reliability:
Fixed an issue where the browser crashes when toggling off Gamer Mode.
Changed Behavior:
Resolved an issue where browser specific attributes were not visible in the tooltip under autofill.
Resolved an issue where the ‘X’ icon was not clearly visible on the ‘Leave’ dialog in Dark mode under personalization.
Resolved an issue where clicking the ‘back’ button in the header would close the pane instead of navigating back to the customization page under personalization.
Resolved an issue where tabs failed to close in the tab center, causing UI display abnormalities.
Fixed an issue where the captured selection could extend beyond the screen range in screenshots.
Fixed an issue where open tab groups were not hidden in the tab group pane.
Fixed an issue where the bubble notification was displayed even when the sidebar was hidden.
Fixed an issue where, in dark mode, both the font and page background were white, rendering the content unreadable on the workspaces-internal page.
Android:
Fixed an issue where the Omnibox action icon was incorrect on Android.
Resolved an issue where the title of top sites was not fully displayed in the ‘Frequently Visited’ section when added to the home page on Android.
iOS:
Resolved an issue where the menu on bing.com could not be opened on iOS.
Fixed an issue where the ‘Default browser prompt’ could fail to appear on iOS when Edge was not the default browser.
Resolved an issue where the Tab center background color appeared black in light mode on iOS.
See an issue that you think might be a bug? Remember to send that directly through the in-app feedback by heading to the … menu > Help and feedback > Send feedback and include diagnostics so the team can investigate.
Thanks again for sending us feedback and helping us improve our Insider builds.
~Gouri
selecting every 46th row in excel then copy/ paste into new page
Hi,
I have approx. 160,000 rows of data
Ideally I want to choose every 46th row; and then use them as my sample.
OR
How can I choose 3466 random rows from an excel document of 160,000 and put in new document
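No solution is posted in the thread, but the arithmetic of the two sampling schemes is easy to sketch (a Python stand-in for the spreadsheet; row numbers are illustrative):

```python
import random

# Stand-in for the ~160,000 spreadsheet row numbers (1-based, like Excel).
rows = list(range(1, 160001))

# Option 1: systematic sample -- every 46th row (rows 46, 92, 138, ...).
every_46th = rows[45::46]

# Option 2: simple random sample of 3466 rows, reproducible via a fixed seed.
random.seed(42)
random_sample = random.sample(rows, 3466)

print(len(every_46th), len(random_sample))
```

Inside Excel itself, a helper column with =MOD(ROW(),46)=0 plus a filter reproduces the systematic sample, and sorting on a =RAND() helper column and keeping the top 3,466 rows gives the random one.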
The excel function proper is not working
I have a cell in column B that has all CAPS. I would like to set this to Camel Case.
First letter Cap and remaining letter lower case. Second Word should have the first letter cap and remaining letters lower case. I am on Office 365.
This is what I use and notice the value for the column B does not get generated.
I did format the column B cells to General. The formula exists in column C.
When I type the formula, it appears as the formula text rather than the result:
XICENTE CAYANS =PROPER(B10)
Major Version Upgrades for Azure Database for MySQL Flexible Server Burstable SKU on Azure Portal
We’re excited to announce a significant improvement for Azure Database for MySQL users, the ability to perform major version upgrades directly on Burstable SKU compute tiers through the Azure portal. This enhancement makes it easier than ever to upgrade to the latest MySQL versions with just a few clicks.
Why this matters
Major version upgrades are critical for accessing the latest features, performance improvements, and security enhancements in MySQL. However, these upgrades can be resource-intensive, demanding substantial CPU and memory resources. Burstable SKU instances, which are optimized for cost efficiency with variable performance, are credit based and often face challenges in handling these upgrades due to their limited resources.
Due to the challenges mentioned above, major version upgrades were not supported directly on Burstable SKU instances previously. Users had to manually upgrade to a General Purpose (GP) or Business Critical (BC) SKU before initiating the upgrade. After the upgrade, users needed to either downgrade back to the original Burstable SKU or decide to stay on the GP or BC SKU, followed by necessary clean-up tasks. This manual process was cumbersome and time-consuming.
To overcome this, we’ve streamlined the upgrade process. When you initiate a major version upgrade on a Burstable SKU instance, the system automatically upgrades the compute tier to a General Purpose SKU. This ensures that the upgrade process has the necessary resources to complete successfully.
Key benefits
The key benefits of this functionality are detailed in the following sections.
Seamless upgrade process
The new upgrade process is designed to be seamless and user-friendly. Here’s how it works:
1. Initiate the upgrade: In the Azure portal, select your existing Azure Database for MySQL Burstable SKU server, and then select Upgrade.
2. Validate schema compatibility: To help identify any potential issues that could disrupt the upgrade, before proceeding, use Oracle’s official tool to validate that your current database schema is compatible with MySQL 8.0.
When you use Oracle’s official tool to check schema compatibility, you will encounter some warnings indicating unexpected tokens in stored procedures, such as:
mysql.az_replication_change_master – at line 3,4255: unexpected token ‘REPLICATION’
mysql.az_add_action_history – PROCEDURE uses obsolete NO_AUTO_CREATE_USER sql_mode
You can safely ignore these warnings. They refer to built-in stored procedures prefixed with mysql., which are used to support Azure MySQL features. These warnings do not affect the functionality of your database.
3. Automatic upgrade to the Compute tier: To ensure sufficient resources are available for the upgrade, the system will automatically upgrade your Burstable service tier instance to use the General Purpose service tier.
4. Select which service tier to use after the upgrade: During the initial upgrade steps, you’ll be prompted to select whether to remain on the General Purpose service tier or revert to the Burstable service tier after the upgrade completes.
5. Perform the upgrade: The major version upgrade to MySQL 8.0 is performed seamlessly.
6. Post-Upgrade Option: After the upgrade, the system will either retain the General Purpose SKU or revert to Burstable SKU based on the selection you made during the initial upgrade steps (the default option is to use B2S).
Enhanced reliability
By ensuring that your compute tier has adequate resources, this new process significantly enhances the reliability of major version upgrades. You can be confident that your upgrade will proceed smoothly, reducing the risk of interruptions or failures.
Cost management
We understand that cost management is a key concern for our users. While upgrading to a General Purpose SKU will incur additional costs during the upgrade, this approach helps ensure the upgrade succeeds, so you can avoid the potential costs and downtime associated with failed upgrade attempts.
Conclusion
Upgrading your Azure Database for MySQL instance based on the Burstable service tier to a major new version is now simpler and more efficient. With just a few clicks in the Azure portal, you can ensure that your database is up-to-date and take advantage of the latest MySQL features and improvements. For more detailed information and step-by-step instructions, please visit our documentation page.
We’re committed to continuously improving your experience with Azure Database for MySQL. We hope this new feature helps you manage your databases more effectively so that you can take full advantage of the powerful capabilities of MySQL.
If you have any questions about the information provided in this post, please leave a comment below or contact us directly at AskAzureDBforMySQL@service.microsoft.com. Thank you!
Microsoft Tech Community – Latest Blogs
Expanding GenAI Gateway Capabilities in Azure API Management
In May 2024, we introduced GenAI Gateway capabilities – a set of features designed specifically for GenAI use cases. Today, we are happy to announce that we are adding new policies to support a wider range of large language models through Azure AI Model Inference API. These new policies work in a similar way to the previously announced capabilities, but now can be used with a wider range of LLMs.
Azure AI Model Inference API enables you to consume the capabilities of models, available in Azure AI model catalog, in a uniform and consistent way. It allows you to talk with different models in Azure AI Studio without changing the underlying code.
Working with large language models presents unique challenges, particularly around managing token resources. Token consumption impacts cost and performance of intelligent apps calling the same model, making it crucial to have robust mechanisms for monitoring and controlling token usage. The new policies aim to address challenges by providing detailed insights and control over token resources, ensuring efficient and cost-effective use of models deployed in Azure AI Studio.
LLM Token Limit Policy
LLM Token Limit policy (preview) provides the flexibility to define and enforce token limits when interacting with large language models available through the Azure AI Model Inference API.
Key Features
Configurable Token Limits: Set token limits for requests to control costs and manage resource usage effectively
Prevents Overuse: Automatically blocks requests that exceed the token limit, ensuring fair use and eliminating the noisy neighbour problem
Seamless Integration: Works seamlessly with existing applications, requiring no changes to your application configuration
Learn more about this policy here.
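As a rough sketch of how such a policy might sit in the inbound section of an API's policy definition (attribute names assumed to mirror the documented Azure OpenAI token-limit policy, and the limit value invented for illustration; check the linked reference before use):

```xml
<policies>
  <inbound>
    <base />
    <!-- Hypothetical example: cap each subscription at 5000 tokens per minute -->
    <llm-token-limit counter-key="@(context.Subscription.Id)"
                     tokens-per-minute="5000"
                     estimate-prompt-tokens="true"
                     remaining-tokens-variable-name="remainingTokens" />
  </inbound>
</policies>
```

Requests that would exceed the limit are rejected with a 429 response until the counter window resets.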
LLM Emit Token Metric Policy
LLM Emit Token Metric policy (preview) provides detailed metrics on token usage, enabling better cost management and insights into model usage across your application portfolio.
Key Features
Real-Time Monitoring: Emit metrics in real-time to monitor token consumption.
Detailed Insights: Gain insights into token usage patterns to identify and mitigate high-usage scenarios
Cost Management: Split token usage by any custom dimension to attribute cost to different teams, departments, or applications
Learn more about this policy here.
LLM Semantic Caching Policy
LLM Semantic Caching policy (preview) is designed to reduce latency and token consumption by caching responses based on the semantic content of prompts.
Key Features
Reduced Latency: Cache responses to frequently requested queries to decrease response times.
Improved Efficiency: Optimize resource utilization by reducing redundant model inferences.
Content-Based Caching: Leverages semantic similarity to determine which response to retrieve from cache
Learn more about this policy here.
Get Started with Azure AI Model Inference API and Azure API Management
We are committed to continuously improving our platform and providing the tools you need to leverage the full potential of large language models. Stay tuned as we roll out these new policies across all regions and watch for further updates and enhancements as we continue to expand our capabilities. Get started today and bring your intelligent application development to the next level with Azure API Management.
Microsoft Tech Community – Latest Blogs
Transform Development with .NET Aspire: Integration with JavaScript and Node.js
In the ever-evolving landscape of cloud application development, managing configuration, ensuring resilience, and keeping the various components seamlessly integrated can be quite challenging.
This is exactly where .NET Aspire comes in! It is a robust application development stack designed to simplify these complexities, letting developers focus on building features instead of wrestling with extensive configuration.
In this article we will explore the core aspects of .NET Aspire, looking at its benefits, the setup process, and the integration with JavaScript, as presented in the phenomenal session by Chris Noring, Senior Developer Advocate at Microsoft, at the latest .NET Aspire Developers Day event.
.NET Aspire Developers Day
The .NET Aspire Developers Day, held on July 23, 2024, was an event packed with technical, hands-on sessions covering different programming languages and frameworks. That was the whole point of the online event: to show how adaptable, flexible, and easy it is to build modern applications with the power of .NET Aspire!
If you missed the event, don't worry! Here is the link to the recording so you can watch it and learn more about .NET Aspire and its capabilities across different software development scenarios.
.NET Aspire Developer Days – Online Event
But what exactly is .NET Aspire? Let's find out right now!
Understanding .NET Aspire
.NET Aspire is a cloud-ready stack that helps you build distributed, production-ready applications. It comes with NuGet packages that make it easier to develop apps that, instead of being monolithic, are composed of small interconnected services: the famous microservices.
The Goal of .NET Aspire
The goal of .NET Aspire is to improve the development experience, especially when you are building cloud apps. It provides tools and patterns that make everything easier, from configuration to running distributed applications, connecting projects and their dependencies automatically so you don't have to worry about the technical details.
Simplified Orchestration
Orchestration in .NET Aspire focuses on simplifying the local development environment by automating the configuration and interconnection of multiple projects and their dependencies. While it does not replace the robust systems used in production, such as Kubernetes, .NET Aspire provides abstractions that make setting up service discovery, environment variables, and container configuration more accessible and consistent.
Ready-to-Use Components
.NET Aspire also comes with ready-to-use components, such as Redis or PostgreSQL, that you can add to your project with just a few lines of code. It also includes project templates and tooling for Visual Studio, Visual Studio Code, and the .NET CLI, making it even easier to create and manage your projects.
A Usage Example
For example, with a few lines of code you can add a Redis container and have the connection string configured automatically in the frontend project:
var builder = DistributedApplication.CreateBuilder(args);
var cache = builder.AddRedis("cache");
builder.AddProject<Projects.MyFrontend>("frontend")
    .WithReference(cache);
If you want to learn more about .NET Aspire, I recommend the official documentation, which is full of detailed information and practical examples to get you started building applications with .NET Aspire.
Check it out right now: Official .NET Aspire Documentation
Getting Started with .NET Aspire
During the .NET Aspire Developers Day session, Chris Noring presented an impressive integration between .NET Aspire and JavaScript, showing how to build modern, distributed applications with the power of .NET Aspire and the flexibility of JavaScript.
If you want to watch Chris Noring's full session, use the link below:
He began by explaining how easy the initial setup is; to start using .NET Aspire you need to install:
.NET 8
.NET Aspire Workload
An OCI-compliant container runtime such as Docker or Podman
Visual Studio Code or Visual Studio
Extension: C# Dev Kit
Scaffolding a .NET Aspire project is simple and can be done using Visual Studio, Visual Studio Code, or simply the terminal.
For example, you can create a new project from the terminal with the following command:
dotnet new aspire-starter
This command generates a project structure that includes essential components such as the AppHost (the brains of the operation), ServiceDefaults, and a starter application.
After scaffolding the project, the next step is to run it. First, though, you need to make sure HTTPS is enabled, since .NET Aspire requires HTTPS to work.
To enable HTTPS, you can use the following command:
dotnet dev-certs https --trust
And finally, to run the project, just use the command:
dotnet run
When you run the AppHost project, a dashboard opens showing all the resources in your project, such as APIs and front-end services. This dashboard provides valuable insights into your application's metrics, logs, and active requests, making it easier to monitor and debug your cloud application.
Chris Noring showed all of this during the .NET Aspire Developers Day session, demonstrating how easy and practical it is to start building modern applications with .NET Aspire.
If you like, I recommend the tutorial "Quickstart: Build your first .NET Aspire project", available in the official .NET Aspire documentation.
A Bit More on Orchestration with .NET Aspire
Let's dig a little deeper into what Chris Noring showed in this part of the session.
Orchestrating distributed applications with .NET Aspire involves configuring and connecting the various components that make up the application. The aspire-manifest.json file is a central piece of this process, documenting how services connect and are configured within the application.
This automation makes developers' lives easier, removing the need to configure each connection and dependency manually.
The Role of aspire-manifest.json
aspire-manifest.json is a JSON file generated automatically by .NET Aspire that contains all the necessary information about the application's resources and components.
It includes details such as connection strings, environment variables, ports, and communication protocols. This manifest ensures that all the application's services connect correctly and work in harmony.
Let's look at the example Chris Noring walked through during the session, showing how to configure a Redis cache and a Products API written in Node.js via the Program.cs file:
var cache = builder.AddRedis("cache");
var productApi = builder.AddNpmApp("productapi", "../NodeApi", "watch")
    .WithReference(cache)
    .WithHttpEndpoint(env: "PORT")
    .WithExternalHttpEndpoints()
    .PublishAsDockerFile();
In this example, Redis is configured as a cache service, and the products API, written in Node.js, is configured to use that cache. The WithReference(cache) call ensures the products API can connect to Redis, and the PublishAsDockerFile() call creates a Dockerfile for the application, allowing it to run in a container.
How Does the Manifest Reflect These Settings?
Well, once the code runs, .NET Aspire generates an aspire-manifest.json file that reflects every setting made in code. Here Chris explains how the manifest documents the Redis and Products API configuration:
{
  "productapi": {
    "type": "dockerfile.v0",
    "path": "../NodeApi/Dockerfile",
    "context": "../NodeApi",
    "env": {
      "NODE_ENV": "development",
      "ConnectionStrings__cache": "{cache.connectionString}",
      "PORT": "{productapi.bindings.http.port}"
    },
    "bindings": {
      "http": {
        "scheme": "http",
        "protocol": "tcp",
        "transport": "http",
        "targetPort": 8000,
        "external": true
      }
    }
  }
}
In this excerpt of the manifest, we can see that the products API (productapi) is configured to use the Redis connection string (ConnectionStrings__cache), which is automatically generated and injected into the application's environment. The manifest also specifies that the products API will be exposed over HTTP on port 8000.
How Do You Generate or Update the Manifest?
To generate or update the aspire-manifest.json file, you can use the following command:
dotnet run --publisher manifest --output-path aspire-manifest.json
This command runs the application and generates the manifest, which matters a great deal when deploying to production environments or testing during development.
Integrating JavaScript with .NET Aspire
.NET Aspire's flexibility extends to JavaScript integration, supporting both front-end and back-end development. This lets developers use popular JavaScript frameworks and libraries alongside .NET components, creating a unified development environment.
A Front-End Example with Angular
In his talk, Chris Noring demonstrated how .NET Aspire can be integrated with a front-end project built in Angular. Backend configuration and API connectivity are simplified through environment variables that are automatically generated and injected into the project.
Backend Configuration in Angular
The proxy.conf.js file is used to redirect API calls in the development environment to the correct backend. The backend URLs, which can vary between environments, are managed through environment variables. Here is an example configuration:
module.exports = {
  "/api": {
    target: process.env["services__weatherapi__https__0"] || process.env["services__weatherapi__http__0"],
    secure: process.env["NODE_ENV"] !== "development",
    pathRewrite: { "^/api": "" },
  },
};
In this example, target is set from the environment variables services__weatherapi__https__0 or services__weatherapi__http__0, which are injected automatically by .NET Aspire. This configuration ensures the Angular frontend can connect to the backend service correctly, regardless of the environment (development, test, production).
Using HttpClient in Angular
In the Angular code, interaction with the backend can be done through the HttpClient service, as shown in the following example:
constructor(private http: HttpClient) {
  this.http.get<WeatherForecast[]>('api/weatherforecast').subscribe({
    next: result => this.forecasts = result,
    error: console.error
  });
}
In this snippet, the call to api/weatherforecast is automatically redirected to the correct backend thanks to the configuration in proxy.conf.js. This simplifies communication between the Angular front end and the backend, ensuring that the environment variables configured in the .NET Aspire manifest are used correctly.
Integrating Node.js with .NET Aspire
.NET Aspire not only orchestrates .NET applications but also integrates seamlessly with other technologies such as Node.js. This flexibility lets you build distributed applications that combine different technology stacks efficiently.
Orchestration in the AppHost
In the orchestration performed in the AppHost, .NET Aspire lets you connect the different components of your application, such as a Node.js front end and a backend API, in a simple and clear way.
var cache = builder.AddRedis("cache");
var weatherapi = builder.AddProject<Projects.AspireWithNode_AspNetCoreApi>("weatherapi");
var frontend = builder.AddNpmApp("frontend", "../NodeFrontend", "watch")
    .WithReference(weatherapi)
    .WithReference(cache)
    .WithHttpEndpoint(env: "PORT")
    .WithExternalHttpEndpoints()
    .PublishAsDockerFile();
In this example, cache is the Redis instance, weatherapi is the weather forecast API, and frontend is the Node.js application. The WithReference() calls wire these components together, ensuring the front end has access to both Redis and the API.
Using PublishAsDockerFile() packages the front end as a Docker container, making it easy to deploy in any environment.
In the Node.js Application…
In the example shown, the Node.js application is configured to retrieve the cache address and the API URL directly from the .NET Aspire project.
This is done through environment variables that are generated automatically based on the resources defined in the Aspire manifest.
const cacheAddress = env['ConnectionStrings__cache'];
const apiServer = env['services__weatherapi__https__0'] ?? env['services__weatherapi__http__0'];
Here, ConnectionStrings__cache and services__weatherapi are environment variables that Aspire injects automatically into the Node.js application's runtime environment. They contain the values the application needs to connect to Redis and to the weather forecast API.
With this information in hand, the application can reach the cache and the API without hard-coding URLs or connection strings. This makes the code easier to maintain and ensures the application works correctly across environments (development, test, production).
Example Usage in an Express Route
Here is an example of how this configuration is used in an Express route in the Node.js application:
app.get('/', async (req, res) => {
  let cachedForecasts = await cache.get('forecasts');
  if (cachedForecasts) {
    res.render('index', { forecasts: JSON.parse(cachedForecasts) });
    return;
  }
  let response = await fetch(`${apiServer}/weatherforecast`);
  let forecasts = await response.json();
  await cache.set('forecasts', JSON.stringify(forecasts));
  res.render('index', { forecasts });
});
Here, the application first tries to retrieve the weather forecasts from the Redis cache. If the data is in the cache, it is rendered directly. Otherwise, the application makes a request to the weather forecast API (apiServer), stores the results in the cache, and then displays them.
This cache-aside logic significantly improves the application's performance and efficiency, ensuring data is served quickly from the cache whenever possible.
Conclusion
.NET Aspire represents a significant step forward in simplifying the development of distributed, cloud-ready applications. With its ability to integrate different technologies, such as JavaScript and Node.js, it offers a robust and flexible platform for building modern, efficient solutions. If you want to take your development skills to the next level, make the most of the power of .NET Aspire.
To deepen your knowledge further, I strongly recommend watching Chris Noring's talk, where he explores the capabilities and versatility of .NET Aspire in detail. It is a great opportunity to learn directly from one of the experts at the forefront of software development.
Watch Chris Noring's talk now: Chris Noring's talk at the .NET Aspire Developers Day
Additional Resources
To continue your .NET Aspire journey, explore the following additional resources:
Documentação Oficial – .NET Aspire
Orchestrate Node.js apps in .NET Aspire
Code Sample: .NET Aspire with Angular, React, and Vue
Code Sample: .NET Aspire + Node.js
Free Course: Build distributed apps with .NET Aspire
Video series: Welcome to .NET Aspire
I hope this article has been useful and inspiring. If you have any questions or suggestions, feel free to share them in the comments below. I am here to help and support you on your learning and professional growth journey.
Until next time, and keep learning, creating, and sharing!