Category: News
How to check if Azure SQL Managed Instances are enrolled (or not) in the November 2022 Feature Wave?
Recently, we have received a few questions from customers about how they can check whether all the Azure SQL Managed Instances in a subscription have been enrolled in the November 2022 Feature Wave.
Resource graph query to the rescue!
Running this query from the Azure portal is quite easy. Follow the instructions in this article to learn how to run Kusto queries against Resource Graph.
Use the query below.
resources
| where type =~ "microsoft.sql/virtualclusters"
| extend parsed_properties = parse_json(properties)
| extend version = tostring(parsed_properties.version)
| extend NovemberFeatureWave2022Enabled = iif(['version'] == '2.0', 'Yes', 'No')
| extend childResources = tostring(parsed_properties.childResources)
| mv-expand childResource = parse_json(childResources)
| extend subscriptionId = tostring(split(childResource, "/")[2])
| extend resourceGroup = tostring(split(childResource, "/")[4])
| extend managedInstance = tostring(split(childResource, "/")[-1])
| project subscriptionId, resourceGroup, managedInstance, VirtualCluster = name, NovemberFeatureWave2022Enabled
| order by ['subscriptionId'], resourceGroup, VirtualCluster, managedInstance
The output shows, at the subscription level, whether or not each instance has been enrolled.
Finding multiple matrices in a txt file
I want to do radiobiological calculations for structures in a patient with two different plans. I have a txt file with all the data in it. Among other things, the txt file contains the biological structure (which exists in both plans), the biological data in two columns, and two rows that specify which plan and which structure it is.
I know how to code the math for the calculations, but what is the easiest way to read the text file and store each plan's column in something like a matrix?
For example (in my own dumb coding brain) I would like MATLAB to "search a txt file whose name the user can specify; for a specific organ like the bladder, put the data in variables called something like "Bladder Plan 1" and "Bladder Plan 2"; do the math; do the same for all other organs; put the result in a report; save as a PDF or file". I hope that the wonderful MATLAB community can help me, since I am no matrix wizard with coding.
For all patients the structures follow the same naming convention, so I guess I could specify somewhere/somehow which organs should be searched for in the text file and what I would like the variables to be called, so that the search through the text file could go quickly.
If anyone has any tips or suggestions, that would be great. I have attached an anonymized txt file showing how it could look.
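A minimal sketch of one way to approach this, assuming a hypothetical block layout in which each section of the file starts with "Plan:" and "Structure:" header lines followed by two numeric columns (adapt the markers and pattern to the actual file):
% Hypothetical layout assumed: each block starts with "Plan: ..." and
% "Structure: ..." lines, followed by two whitespace-separated numeric columns.
txt    = fileread("patient1.txt");   % file name supplied by the user
blocks = regexp(txt, ['Plan:\s*(?<plan>[^\r\n]+)\s*' ...
                      'Structure:\s*(?<organ>[^\r\n]+)\s*' ...
                      '(?<nums>[-+.\d\sEe]+)'], 'names');
data = struct();
for b = blocks
    m   = reshape(sscanf(b.nums, '%f'), 2, []).';          % n-by-2 matrix
    key = matlab.lang.makeValidName([strtrim(b.organ) '_' strtrim(b.plan)]);
    data.(key) = m;                                        % e.g. data.Bladder_Plan1
end
From the data struct you can then loop over the organ/plan fields, do the math, and write the report.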
2D data fitting – Surface
I have some numbers as a function of 2 variables: (x, y) ↦ z.
I would like to know which function z = z(x, y) best fits my data.
Unfortunately, I don't have any hint; there is no theoretical background for these numbers. They are the result (z) of some FEM simulations of a system, the simulation being a parametric sweep over two parameters (x and y) of the system.
Here’s my data:
x = [1 2 4 6 8 10 13 17 21 25];
y = [0.2 0.5 1 2 4 7 10 14 18 22];
z = [1 0.6844 0.3048 0.2124 0.1689 0.1432 0.1192 0.1015 0.0908 0.0841; ...
1.000 0.7096 0.3595 0.2731 0.2322 0.2081 0.1857 0.1690 0.1590 0.1529; ...
1.000 0.7451 0.4362 0.3585 0.3217 0.2999 0.2797 0.2648 0.2561 0.2504; ...
1.000 0.7979 0.5519 0.4877 0.4574 0.4394 0.4228 0.4107 0.4037 0.3994; ...
1.000 0.8628 0.6945 0.6490 0.6271 0.6145 0.6027 0.5945 0.5896 0.5870; ...
1.000 0.9131 0.8057 0.7758 0.7614 0.7531 0.7457 0.7410 0.7383 0.7368; ...
1.000 0.9397 0.8647 0.8436 0.8333 0.8278 0.8228 0.8195 0.8181 0.8171; ...
1.000 0.9594 0.9087 0.8942 0.8877 0.8839 0.8808 0.8791 0.8783 0.8777; ...
1.000 0.9705 0.9342 0.9238 0.9190 0.9165 0.9145 0.9133 0.9131 0.9127; ...
1.000 0.9776 0.9502 0.9425 0.9390 0.9372 0.9358 0.9352 0.9349 0.9348];
I tried the Curve Fitting app in MATLAB, but I didn't succeed. The 'polynomial' fitting doesn't work well. I would like to use the 'custom equation' fitting, but I don't know what equation to start from. I don't have much practice in data analysis.
Any hint?
<<http://i.stack.imgur.com/tlqDu.png>>
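Not an answer from the original thread, just a hedged starting point: the surface equals 1 at x = 1 and decays toward a y-dependent plateau, so one plausible custom-equation family for the Curve Fitting Toolbox is a rational form like the sketch below (the coefficient names a-d are arbitrary):
% Requires Curve Fitting Toolbox. One plausible model family, not "the" answer:
[X, Y] = meshgrid(x, y);            % rows of z correspond to y, columns to x
ft = fittype('1./(1 + a*(x-1).^b./(1 + c*y.^d))', ...
             'independent', {'x','y'}, 'coefficients', {'a','b','c','d'});
sf = fit([X(:) Y(:)], z(:), ft, 'StartPoint', [1 1 1 1], 'Lower', [0 0 0 0]);
plot(sf, [X(:) Y(:)], z(:))         % visual check of fit vs data
By construction the model returns 1 at x = 1 and decays as x grows, with y damping the decay, which matches the qualitative shape of the table above.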
Which SLRT Functions Replace xPC Functions?
I need to update legacy R2010b MATLAB code that uses xPC functions to R2024a code that uses Simulink Real-Time functions. For example, I have replaced instances of xpctargetping with slrtpingtarget (https://www.mathworks.com/help/releases/R2020a/xpc/api/slrtpingtarget.html).
Are there modern SLRT replacement functions for the legacy setxpcenv and xpctarget.fs/xpctarget.ftp functions? Maybe setslrtenv and slrt('target'), or slrealtime.fs/ftp? (https://www.mathworks.com/help/slrealtime/api/slrealtime.target.html)
Are these functions only available once connected to target hardware?
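Not definitive, but a sketch of how the replacements line up in the slrealtime API linked above (the target name "TargetPC1" and the exact method set are assumptions to verify against your release):
% Hedged sketch; verify method names against your installed release.
tg = slrealtime("TargetPC1");   % Target object, rough analog of xpctarget.xpc
connect(tg);                    % host-target connection (no model required)
tf = isConnected(tg);           % quick reachability check, like slrtpingtarget
% Environment settings (IP address, etc.) now live with the Target object
% (see the slrealtime.Target doc linked above) rather than setxpcenv, and
% file transfer is handled through Target object methods instead of a
% separate xpctarget.ftp object.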
Error: Failed to initialize the interactive session
I am trying to validate a cluster profile. All tests passed except the parallel pool test. I have attached my validation report below. Any help is appreciated.
VALIDATION REPORT
Profile: beoshock
Scheduler Type: Generic
Stage: Cluster connection test (parcluster)
Status: Passed
Start Time: Thu May 13 12:21:31 CDT 2021
Finish Time: Thu May 13 12:21:31 CDT 2021
Running Duration: 0 min 0 sec
Description:
Error Report:
Command Line Output:
Debug Log:
Stage: Job test (createJob)
Status: Passed
Start Time: Thu May 13 12:21:31 CDT 2021
Finish Time: Thu May 13 12:21:57 CDT 2021
Running Duration: 0 min 26 sec
Description:
Error Report:
Command Line Output:
Debug Log:
Stage: SPMD job test (createCommunicatingJob)
Status: Passed
Start Time: Thu May 13 12:21:59 CDT 2021
Finish Time: Thu May 13 12:22:37 CDT 2021
Running Duration: 0 min 38 sec
Description: Job ran with 2 workers.
Error Report:
Command Line Output:
Debug Log:
Stage: Pool job test (createCommunicatingJob)
Status: Passed
Start Time: Thu May 13 12:22:39 CDT 2021
Finish Time: Thu May 13 12:23:06 CDT 2021
Running Duration: 0 min 27 sec
Description: Job ran with 2 workers.
Error Report:
Command Line Output:
Debug Log:
Stage: Parallel pool test (parpool)
Status: Failed
Start Time: Thu May 13 12:23:08 CDT 2021
Finish Time: Thu May 13 12:24:41 CDT 2021
Running Duration: 1 min 33 sec
Description: Failed to initialize the interactive session.
Error Report: Failed to initialize the interactive session.
Caused by:
Error using parallel.internal.pool.AbstractInteractiveClient>iThrowIfBadParallelJobStatus (line 433)
The interactive communicating job errored with the following message: MatlabPoolPeerInstance{fLabIndex=1, fNumberOfLabs=2, fUuid=b10ec9e0-6fbc-43e5-8566-67ed5d06514d} was unable to find the host for MacBook-Pro:27370 due to a JVM UnknownHostException: null
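A hedged suggestion rather than a confirmed fix: the JVM UnknownHostException indicates the workers cannot resolve the client hostname ("MacBook-Pro"). Parallel Computing Toolbox lets you override the hostname the client advertises; the IP below is a placeholder:
% Run before opening the pool; replace the placeholder with the client's
% IP address as reachable from the cluster nodes.
pctconfig('hostname', '192.0.2.10');
pool = parpool('beoshock', 2);      % profile name taken from the report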
Microsoft List changing from am to pm
Hello all,
I have an issue with a SharePoint list I created.
I have a list view that users normally edit in grid view. When they add a date like 08/12/2024 12:00 a.m., it changes to 08/12/2024 12:00 p.m., and this only happens in grid view.
It also happens from p.m. to a.m.
The problem started with the new look that Lists rolled out about a month ago. Has anyone encountered this issue before?
Announcing General Availability of Attach & Detach of Virtual Machines on Virtual Machine Scale Sets
Today, we’re thrilled to announce that the ability to attach or detach Virtual Machines (VMs) to and from a Virtual Machine Scale Set (VMSS) with no downtime is Generally Available. This functionality is available for scale sets with Flexible Orchestration Mode with a Fault Domain Count of 1.
Benefits
Let Azure do the work: Easily move from a single VM to VMSS Flex and make use of all the benefits that come from scale sets, like Autoscale, Auto OS Upgrades, Spot Priority Mix, Instance Repairs, and Upgrade Policies.
Easily scale: By attaching an existing VM to an existing VMSS Flex, you can grow your Compute without having to rebuild it from scratch.
No downtime: You can attach running VMs to a scale set with no downtime, thereby creating a frictionless experience to make use of scale sets.
Isolated troubleshooting: Should you need more detailed troubleshooting of a VM, you can now detach the VM to isolate it from the scale set.
Easily move VMs: Using the feature, you can easily move VMs between scale sets to ensure your VMs are grouped the way you want them to be.
When the VM and VMSS meet all the qualifications, you can quickly attach the VM to the scale set by updating the VM to use the VMSS ID. You can attach VMs through the REST API, Azure Portal, Azure CLI, or Azure PowerShell. For example, using PowerShell:
#Get VM information
$vm = Get-AzVM -ResourceGroupName $resourceGroupName -Name $vmName
#Get scale set information
$vmss = Get-AzVmss -ResourceGroupName $resourceGroupName -Name $vmssName
#Update the VM with the scale set ID
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachineScaleSetId $vmss.Id
Conversely, to detach the VM from the scale set, you simply need to update the VM to no longer use a VMSS ID:
#Get VM information
$vm = Get-AzVM -ResourceGroupName $resourceGroupName -Name $vmName
#Update the VM with the new scale set reference of $null
Update-AzVM -ResourceGroupName $resourceGroupName -VM $vm -VirtualMachineScaleSetId $null
Attach and detach of VMs to/from VMSS Flex with a Fault Domain Count of 1 is Generally Available in Azure.
Learn More
To learn more about how to attach or detach VMs to or from a VMSS Flex, please visit the documentation.
Facing license issues while running a function from Communications Toolbox
I am encountering a License Manager Error -4 when trying to use a function from the Communications Toolbox, even though I have an active license for it. The error message says that the maximum number of users for 'Signal_Blocks' has been reached. Is 'Signal_Blocks' related to the DSP System Toolbox?
License checkout failed.
License Manager Error -4
Maximum number of users for Signal_Blocks
reached.
Try again later.
I'm trying to understand why I'm getting this error when accessing a function from the Communications Toolbox. Does using a function from one toolbox require licenses for other toolboxes as well? Additionally, how can I find out which toolboxes are included in my current license?
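Some standard licensing calls that can help diagnose this (the dependency note reflects the usual product requirements; confirm against your own license):
ver                                    % lists installed products and versions
license('inuse')                       % license features checked out right now
ok = license('test', 'Signal_Blocks')  % 1 if your license includes this feature
% 'Signal_Blocks' is the license feature name for DSP System Toolbox.
% Communications Toolbox requires DSP System Toolbox (and Signal Processing
% Toolbox), so a Communications Toolbox function can trigger this checkout.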
How does forward kinematics work in Robotics System Toolbox?
How does the Robotics System Toolbox work internally?
I noticed a small offset in the results of the forward kinematics when I compare the result of getTransform with the plain geometric forward kinematics calculated from the parameters in the datasheet (p. 61).
The offset also seems to be dynamic, so I wonder what the reason could be.
Is there some kind of dynamic simulation of the joints' stiffness?
Attached is some code to reproduce this:
% Compare DH forward kinematics with the MATLAB model of the Kinova Gen3
%% get robot model
close all
gen3 = loadrobot("kinovaGen3");
gen3.DataFormat = 'column';
eeName = 'EndEffector_Link';
% define q:
q=[ 1.18 -68.68 18.47 -69.09 94.36 112.93 46.00]';
%q=[ 1.18+180 -68.68 18.47 -69.09 94.36 112.93 46.00]';
%q=[0 0 0 0 0 0 0]';
%% calculate FK with DH and with the MATLAB model:
% DH
H_DH = getSingleH(getHcomplete(q),0,8)
pos_DH = H_DH(1:3,4);
% Model
H_mod = getTransform(gen3, q/180*pi, eeName)
pos_mod=H_mod(1:3,4);
% calculate difference:
pos_dif=(pos_DH-pos_mod)*1000
%% Function declarations for forward kinematics
function H = getHcomplete(q)
% calculates the H cell array according to the input angles,
% where H{1}=H01, H{2}=H12, H{3}=H23, etc.
i=1;
R{i}=rotx(180)*rotz(q(i)); % Rotation
D{i}=[0 0 +156.4]'/1000; % Displacement
H{i}=RnD2H(R{i},D{i});
i=2;
R{i}=rotx(+90)*rotz(q(i));
D{i}=[0 5.4 -128.4]'/1000;
H{i}=RnD2H(R{i},D{i});
i=3;
R{i}=rotx(-90)*rotz(q(i));
D{i}=[0 -210.4 -6.4]'/1000;
H{i}=RnD2H(R{i},D{i});
i=4;
R{i}=rotx(+90)*rotz(q(i));
D{i}=[0 6.4 -210.4]'/1000;
H{i}=RnD2H(R{i},D{i});
i=5;
R{i}=rotx(-90)*rotz(q(i));
D{i}=[0 -208.4 -6.4]'/1000;
H{i}=RnD2H(R{i},D{i});
i=6;
R{i}=rotx(+90)*rotz(q(i));
D{i}=[0 0 -105.9]'/1000;
H{i}=RnD2H(R{i},D{i});
i=7;
R{i}=rotx(-90)*rotz(q(i));
D{i}=[0 -105.9 0]'/1000;
H{i}=RnD2H(R{i},D{i});
i=8;
R{i}=rotx(180)*rotz(0);
D{i}=[0 0 -61.5]'/1000;
H{i}=RnD2H(R{i},D{i});
end
function H = RnD2H(R,D)
% combines rotation and displacement into a homogeneous transform
H = horzcat(R,D);
H = vertcat(H,[0 0 0 1]);
end
function H = getSingleH(h,von,bis)
% returns the homogeneous transformation between the two given frames
H=eye(4);
for i=von+(bis-von>0):sign(bis-von):bis+(von-bis>0)
H=H*h{i}^sign(bis-von);
end
end
Formatting of strings of letters/numbers
Hi, I'm hoping someone can help me. I have reports that come through daily with identification numbers containing a mix of letters and numbers, and I need to format them with dashes so I can import them into another program.
Raw data looks like this:
HFMRBT0N480020HFMRBT0J0C0005HFMRBT0MXJ0010HFMRBT0NL80006AHFMRBT0NL80006B
And I need output to be:
HFM-RBT-0N48-0020
HFM-RBT-0J0C-0005
HFM-RBT-0MXJ-0010
HFM-RBT-0NL8-0006A
HFM-RBT-0NL8-0006B
Would anyone have a simple-ish solution?
Thanks a lot!
Amy
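A hedged sketch, assuming each ID lands in its own cell (here A2) and always follows the 3-3-4-remainder pattern shown; adjust the references to your sheet:
=TEXTJOIN("-",TRUE,LEFT(A2,3),MID(A2,4,3),MID(A2,7,4),MID(A2,11,LEN(A2)-10))
If the IDs arrive as one run-on string, they would need to be split first (for example on the recurring "HFM" prefix with TEXTSPLIT in Excel 365, or in Power Query) before applying the formula.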
Microsoft Bookings – Blocking off time
We have a team in our company that is planning on setting aside 2-3 weeks for premium support for particular clients. Every weekday for those 2-3 weeks, they want to block off timeslots of 10:00 AM to 3:00 PM for those particular clients to be able to book time with team members to discuss product innovations and customer care issues.
Is there a best practice within Microsoft Bookings (or any best practices in general) to allow for this?
Introducing an AI Governance Framework for Nonprofits
How to navigate nonprofit AI adoption with a clear framework
The AI Governance Framework for Nonprofits was developed with insights from nearly two dozen nonprofit leaders to help organizations navigate AI adoption and management. The framework was sponsored by Microsoft and created by noted AI advisor, Afua Bruce, author of The Tech that Comes Next, founder and principal of ANB Advisory Group, and faculty member at Carnegie Mellon University.
The framework aims to provide nonprofits with a clear and actionable roadmap for governance and AI adoption. The training includes video overviews and downloadable documents tailored to organizations’ current AI implementation status, with a focus on real-world use cases and productivity-driven scenarios. Download the kit to receive:
Internal AI policy templates
Examples of AI use
Materials for board discussions
Six modules for comprehensive guidance on AI implementation.
Download the full kit with videos, templates, and frameworks here.
Meeting nonprofit challenges with AI
We know based on research that over half of nonprofit workers are unsure of how to use AI, highlighting the need for training and a framework to guide responsible AI use within organizations. Yet nonprofits, facing a crisis of funding and staffing constraints, are most poised to benefit from the productivity, time saving, and creativity that AI can bring to their important work.
Balanced, responsible AI
Afua Bruce’s framework emphasizes the importance of implementing AI policies now, to help nonprofits balance creativity and responsibility in AI tool usage, and to ease administrative burdens while being mindful of AI’s limitations.
AI should complement, not compromise, the core values of empathy, human-driven service, and critical service provision that define nonprofit organizations. The framework brings a human-centered approach that keeps nonprofit missions at the heart of its adoption strategy in an accessible step-by-step format.
Download the full kit with videos, templates, and frameworks here.
MATLAB not saving variables to workspace
I don't know what's wrong with my MATLAB. Every time I run dummy.m using F5 in the Editor, all the variables are displayed in the workspace. But when I run NitrogenDef.m, again using F5 in the Editor, none of the variables used in NitrogenDef.m are displayed in the workspace. Any help with this? Thanks!
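The usual cause, illustrated with hypothetical NitrogenDef.m contents: if the file begins with a function line, its variables live in the function's own workspace and are discarded when it returns; only a script's variables land in the base workspace.
% --- NitrogenDef.m (hypothetical contents) ---
function N = NitrogenDef()
    depth = 0:10;        % local to the function workspace; never shown
    N     = 1.2 * depth; % only the output is returned to the caller
end
% Call "N = NitrogenDef()" from the Command Window to keep the result, or
% delete the function line to turn the file back into a script.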
Simulation with an RL Agent does not save simulation data
I have a reinforcement learning Simulink model environment that I am training in MATLAB R2023a, which I started porting to MATLAB R2024a. The model runs well in R2023a and saves the simulations done using the sim function. The environment has some signals that I want to save.
In R2023a they were saved in the SimulationInfo object, but that doesn't happen in R2024a. Do I have to activate something additional in R2024a, or is it a bug?
The images below detail the difference between the two versions. The env is a Simulink environment.
simEpisodes = 1;
simOpts = rlSimulationOptions("MaxSteps",1250, ...
"NumSimulations", simEpisodes);
experience = sim(env,agent,simOpts);
save(strcat(results_dir,'/Experience.mat'),"experience")
How to delete a registry key
I would like to know if there is a way to remediate a malicious registry key with Defender XDR.
JSON Header Formatting SharePoint list
I am attempting to add a custom header to my list form. When particular departments are selected from the "Department" choices (Critical Care, Anesthesia, Radiology, Surgery), I would like an alert to display in the header that says "This Department Requires Approval – A Request Will Be Sent to Department Representation." Can anyone assist with the JSON?
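A hedged sketch of header formatting JSON (applied via Configure layout > Header on the form); the internal field name Department and the styling values are assumptions to adapt:
{
  "elmType": "div",
  "style": { "padding": "10px", "font-weight": "bold", "color": "#a80000" },
  "txtContent": "=if([$Department] == 'Critical Care' || [$Department] == 'Anesthesia' || [$Department] == 'Radiology' || [$Department] == 'Surgery', 'This Department Requires Approval - A Request Will Be Sent to Department Representation.', '')"
}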
Macro Error
Hi All,
I created a macro that first filters the values of a column and removes the blanks, and then sorts them from highest to lowest, but it generates the following error. What could it be due to?
The Macro is:
Benchmark Testing puts you on the path to peak API Performance
Benchmark performance testing involves measuring the performance characteristics of an application or system under normal or expected conditions. It’s a recommended practice in any case, but it’s a critical consideration for your APIs since your consumers will depend on consistent performance for their client applications.
Incorporating benchmark testing of your Microsoft Azure API Management services into your software delivery process provides several important benefits:
It establishes performance baseline as a known, quantifiable starting point against which future results can be compared.
It identifies performance regressions so that you can pinpoint changes or integrations that may be causing performance degradation or hindering scalability — in effect helping you to identify which components might need to be scaled or configured to maintain performance. This allows developers and operational staff to make targeted improvements to enhance the performance of your APIs and avoid accumulating performance-hindering technical debt.
It validates performance requirements so you can be assured that the architecture meets the desired operating performance targets. This can also help you determine a strategy for implementing throttling or a circuit breaker pattern.
It improves user experience by identifying and resolving performance issues early in the development life cycle, before your changes make it into production.
And perhaps most importantly, it gives you the data you need to create the capacity model you’ll need to operate your APIs efficiently across the entire range of design loads. This is a topic for a future post, but the methods described here are a great starting point.
Benchmark vs Load Testing. What’s the difference?
While the approaches and tools involved are nominally very similar, the reasons for doing them differ. Benchmark testing establishes a performance baseline within the normal operational range of conditions, while load testing establishes the upper boundary or point of failure. Benchmark testing establishes a reference point for future iterations, while load testing validates scalability and stress handling. Both are important for ensuring API performance, and you can combine the approaches to suit your needs as long as the goals of each are met.
Below, we’ll describe the principles of designing a repeatable benchmark test and conclude with a full walkthrough and the resources you’ll need to do it yourself.
A model approach
Before we get into a specific example, let’s look at the conceptual steps involved.
Broadly, there are two stages:
Design and Planning: Decide what to measure, and how to measure it. (Steps 1-4 below)
Execution: Run test, collect results, and use the results to inform future actions or decisions. (Steps 5-7)
The execution stage is repetitive. The first execution result becomes the baseline. From there, the benchmark test can be repeated after any important change to your API workload (resource configuration, backend application code, etc.). Comparing the results of the current and previous test will indicate whether the most recent change moved you closer to your goal or caused a regression. Once the goal is met, you’ll continue the practice with future changes to ensure that the required performance is being maintained.
1. Identify your benchmark metric
Determine the key performance metric that will define your benchmark. Think of it as the key performance indicator (KPI) of your API workload. Some examples include: operation or request duration, failure rate, resource utilization (eg, memory usage), data transfer speed, and database transaction time. The metric should align with your requirements and objectives, and be a good indicator for the quality of the consumer experience. For API Management, and APIs in general, the easiest and most useful metric is usually response time. For that reason, start with response time as the default choice if your circumstances don’t guide you to choose something else.
The key here is to choose a single metric that you can capture easily and consistently, is an indicator of the kind of performance you are after, and that will allow you to make linear comparisons over time. It’s possible to devise your own composite metric based on an aggregation formula using multiple primitives, if required, in order to derive a single benchmark measurement that works best for you.
Tip: Requests per second (RPS) might be the first metric you think of when you are trying to decide what you should measure. Similar unit-per-second metrics have been used historically for benchmark everything from web servers to GPUs. But in reality, RPS by itself isn’t very useful as a benchmark for APIs. It’s not uncommon to observe a system achieve a “high” RPS while individual consumers are simultaneously experiencing “slow” response times. For this reason, we recommend that you only use RPS as a scenario parameter and choose something else as your benchmark metric.
2. Define the benchmark scenario
The scenario describes input parameters and the simulation. In other words, it describes what is happening in the system while the benchmark metric is being measured. For example, “1000 simulated users, calling the Product Search API, at a rate of 10 searches per minute per user”. The scenario should be as simple as possible while also providing a realistic representation of typical usage and conditions. It should accurately reflect the behavior of the system in terms of user interactions, data payloads, etc. For example, if your API relies on caching to boost performance, don’t use a scenario that results in an unrealistically high cache hit rate.
Tip: For an existing application, choose an API operation that represents an important use case and is frequently used by your API consumers. Also, make sure that the performance of the scenario is relatively deterministic, meaning that you expect the test results to be relatively consistent across repeated runs using the same code and configuration, and the results aren’t likely to be skewed by external or transient conditions. For example, if your API relies on a shared resource (like a database), make sure the external load on that resource isn’t interfering with your benchmark. When in doubt, use multiple test runs and compare the results.
3. Define the test environment
The test environment includes the tool that will run the simulation (JMeter, for example), along with all the resources your API requires. Generally speaking, you should use a dedicated environment that models your production environment as closely as possible, including compute, storage, networking, and downstream dependencies. If you have to use mocks for any of your dependencies, make sure that they are accurately simulating the real dependency (network latency, long running processes, data transfer, etc).
Tip: You want your testing environment to satisfy two conditions:
It makes it easy to set up and execute the test. You don’t want to deter yourself from running tests because the process is tedious or time-consuming.
It is consistent and repeatable across test runs to ensure the observed results can be compared reliably.
Automation helps you achieve both of these things.
4. Determine how you will record your chosen metric
You may need to instrument your code or API Management service with performance monitoring tools or profiling agents (for example, Azure Application Insights). You may also need to consider how you will retrieve and store the results for future analysis.
Tip: Be aware that adding observability and instrumentation can, by itself, adversely impact your performance metric, so the ideal case (if the observability tooling isn’t already part of your production-ready design) would be a data collection method that captures the data at the client (or agent, in the case of Azure Load Testing).
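For example, if response times flow into Application Insights, a query along these lines (using the standard requests table schema) can retrieve the benchmark metric after each run:
requests
| where timestamp > ago(1h)
| summarize avg(duration), percentile(duration, 95) by bin(timestamp, 1m)
| order by timestamp asc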
5. Execute the test scenario
Run the defined test scenario against the API while measuring the performance metric.
6. Analyze the results
Analyze the collected performance data to assess how your API performs. If this isn’t your first time running the test, compare the observed performance against previous executions to determine if the API continues to meet the desired performance objectives and what the impact (if any) of your code or configuration changes may be. There are statistical methods that can be applied to aid this analysis, which are extremely useful in automated tests or pull request reviews. These methods are beyond the scope of this post, but it’s a good idea to familiarize yourself with some of the approaches.
For Example: You just added a policy change that decrypts part of the request payload and transforms it into a different format for your backend to consume. You noticed that the time for the operation to complete has increased from 70ms to 110ms. Your benchmark objective is 80ms. Do you revert the change? Do you scale your API management service to compensate? Do you try to optimize your recent changes to see if you can get the results to improve? The bottom line here is that you can use the data to make an informed decision.
7. Report and document
Document the test results, including performance metrics, observations, and any identified issues or recommended actions. This information serves as a reference for future performance testing iterations and as a new benchmark for future comparison.
8. Iterate and refine
Finally, find ways to automate or optimize the process or modify your strategy as necessary to improve its usefulness to your business operations and decision making. In a future article, we’ll talk more about how to operationalize benchmark testing and how to use it as a powerful capacity management tool.
Walkthrough
Let’s make this more realistic with a basic example. For the purposes of this walkthrough, we’ve developed an automated environment setup using Terraform. Find more information about the environment and the source code on GitHub. The environment includes an API Management service, a basic backend (httpbin, hosted in an Azure App Service plan), and an Azure Load Testing resource.
Tip: Use the Terraform templates provided in the repo to deploy all the resources you’ll need to follow along. For operational use, we recommend that you create your own repository using our repo as a template, and then follow the instructions in the README to configure the GitHub workflows for deployment to your Azure subscription. Once configured, the workflow will deploy the infrastructure and then run the load tests for you automatically.
You are free to choose any testing tools that fit your needs, but we recommend Azure Load Testing. It doesn’t require you to install JMeter locally or author your own test scripts. It allows you to define parameters, automatically generates the JMeter script for your test, and manages all the underlying resources required for the test agents. Most importantly, it avoids many of the problems we’d be likely to encounter with client-based tools and gives us the repeatability we need.
Let’s look at how we’ll apply our model approach in the example:
Performance metric: Average response time
Benchmark scenario: Performance will be measured under a consistent request rate of 500 requests per second.
Environment: The sample environment, consisting of an App Service Web App that hosts the backend API and an API Management service configured with one scale unit. Both are located in the same region, along with the Azure Load Testing resource. The deployment assets for all resources are included.
Deploy the Azure resources
1. Open Azure Cloud Shell and run the following commands.
2. Clone the Repository
git clone https://github.com/ibersanoMS/api-management-benchmarking-sample.git
cd api-management-benchmarking-sample/src/infra
3. Initialize Terraform
terraform init
4. Plan the Deployment
terraform plan -out=tfplan
5. Apply the Terraform Templates
terraform apply tfplan
Creating and running the tests
Note: The Terraform templates will configure the load tests for you, but if you want to create tests on your own the steps below will walk you through it.
Identify the host url of your App Service backend and your API Management service. If you’re using the sample environment created from the Terraform template, these will be the “backendUrl” and “apiUrl” respectively.
Search for Azure Load Testing in the Azure Portal.
Click Create on the resource provider menu bar.
Once the Load Testing resource is created, navigate to Tests.
Click Create on grid menu bar and then choose Create a URL-based test.
Configure the test with the following parameters for your first case (500B payload). Enter the App Service backend as the host portion of the Test Url, which should be in the form of: https://{your App Service hostname}/bytes/500.
Click Run Test. Once the test completes, you should see results like below:
Now that we have a baseline result for the backend, create and run another identical test, but this time use the API Management API URL as the Test Url (https://{your API Management service hostname}/bytes/500).
Finally, we’ll simulate an updated version of the API by increasing the response payload size. Our API now returns more data than the previous version, so we’ll be able to measure the impact of that change.
Configure and run a new test. We’re still using the API Management host url, with a url path that returns 1500 bytes instead of 500 bytes: (https://{your API Management service hostname}/bytes/1500).
Once the test completes, you should see results like below:
Looking at the Results
In our first benchmark, we were establishing a performance baseline of the backend application which returns a 500-byte payload. We tested the backend in isolation (meaning the load test client was sending requests directly to the backend API end point, without API Management) so that we could measure how it performs on its own. This isn’t always necessary, or even practical, but it can provide really useful insights. Below are the results from three different runs of that first test:
First result set:

Throughput (RPS)    Average Response Time (ms)
444                 21
431                 14
447                 15
Next, we ran the same benchmark test using the API Management endpoint so requests were being proxied through API Management to the backend application. This scenario is an “end-to-end” or “system” test that is representative of how our API would be deployed in production. The results help us measure any latency or change in performance added by API Management and the Azure network. As we can see, the results are similar. This indicates that the net effect of API Management on the system performance at this design load is zero or very close to zero.
Second result set:

Throughput (RPS)    Average Response Time (ms)
443                 15
441                 14
436                 10
Finally, we ran a benchmark on a new “release” of our backend application. The new version of the API now returns a larger 1,500 byte payload, and we can see from the results that response times have increased significantly.
Third result set:

Throughput (RPS)    Average Response Time (ms)
361                 600
370                 518
367                 585
Assuming these results don’t meet our performance objectives, we now know that remediation steps will need to be taken before the new release of our API can be deployed to production. For example, we might consider adding output caching, or scaling the App Service or API Management service, or look for ways to optimize the payload returned from the application code. In any case, we now have the tools to test any remediation approach (using the same structured, quantitative approach above) so that we can be sure that the new API version meets its performance objective before it’s released.
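As a sketch of one of those remediations, response caching can be added with API Management’s built-in policies; the 60-second duration is an arbitrary example and the vary-by settings need tuning for real payloads:
<!-- inbound section -->
<cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
<!-- outbound section -->
<cache-store duration="60" />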
Related resources to explore
Performance tuning a distributed application
Autoscaling
Automate an existing load test with CI/CD
Add caching to improve performance in Azure API Management
Troubleshooting client response timeouts and errors with API Management
App designer TabGroup colours
I have been creating an app in app designer, and I cannot see any way to change the grey border of a tabgroup where there are no tabs. It is very ugly and I would rather this be transparent – does anyone know a fix for this? Or any clever way of using HTML/CSS to make this possible? Thanks in advance.
See picture below:
Error in boxchart (invalid parameter/value pair arguments)
I am trying to use boxchart but am getting an error, even when using the carbig dataset and functions as listed in the help files. Eventually I need to get boxcharts for ANOVA results. This is what I have. (The help file for anovan calls "Model_Year" "mfg date", but I think this is the equivalent set of data.)
aov = anovan(MPG,{org when},'model',2,'varnames',{'Origin','Model_Year'})
boxchart(aov,["Origin"])
legend
The ANOVA seems to run just fine, but when it gets to the Boxchart I get this error. Any ideas? I’m using version R2023b.
Error using matlab.graphics.chart.primitive.BoxChart
Invalid parameter/value pair arguments.
Error in boxchart (line 186)
H(idx) = matlab.graphics.chart.primitive.BoxChart(‘Parent’, cax,…
Error in ANOVA_trial_file (line 2)
boxchart(p,["Origin"])
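For what it's worth, a hedged sketch: boxchart with a factor name is a method of the anova object returned by the newer anova function (R2022b+), whereas anovan returns p-values, which would explain the error. Something along these lines, with the exact signature verified against the R2023b docs:
% Hedged sketch using the newer anova object API (Statistics and Machine
% Learning Toolbox, R2022b+); anovan output does not support boxchart.
load carbig
aov = anova({Origin, Model_Year}, MPG, FactorNames=["Origin","Model_Year"]);
boxchart(aov, "Origin")
legend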