Category: News
Our latest work to improve Azure Functions cold starts
We continually work to improve performance and mitigate Azure Functions cold starts – the extra time it takes for a function that hasn’t been used recently to respond to an event. We understand that no matter when your functions were last called, you want fast executions and little lag time.
In this article:
How we measure cold start and the work done to improve it in the Azure Functions platform.
What you can do to optimize your functions to improve your app’s cold start performance.
How to provide feedback on Azure Functions cold start.
How we measure Azure Functions cold start
In measuring Azure Functions performance, we prioritize the cold start of synchronous HTTP triggers in the Consumption and Flex Consumption hosting plans. That means looking at what our platform and Azure Functions host need to do to execute the first HTTP trigger function on a new instance. Then we improve it. We are also working to improve cold start for asynchronous scenarios.
To assess our progress, we run sample HTTP trigger function apps that measure cold start latencies for all supported versions of Azure Functions, in all languages, for both Windows and Linux Consumption. These sample apps are deployed in all Azure regions and subregions where Azure Functions runs. Our test function calls these sample apps every few hours to trigger a true cold start and currently generates nearly 85,000 daily cold start samples. Through this testing infrastructure, we have observed over the past 18 months a reduction in cold start latency of approximately 53 percent across all regions and for all supported languages and platforms.
If any of the tracked metrics start to regress, we’re immediately notified and start investigating. Daily emails, alerts, and historical dashboards tell us the end-to-end cold start latencies across various percentiles. We also perform specific analyses and trigger alerts if our fiftieth percentile, ninety-ninth percentile, or maximum latency numbers regress.
In addition, we collect detailed PerfView profiles of the sample apps deployed in select regions. The breakdown includes full call stacks (user mode and kernel mode) for every millisecond spent during cold start. The profiles reveal CPU usage and call stacks, context switches, disk reads, HTTP calls, memory hard faults, common language runtime (CLR) just-in-time (JIT) compiler, garbage collector (GC), type loads, and many more details about .NET internals. We report all these details in our logging pipelines and receive alerts if metrics regress. And we’re always looking for ways to make improvements based on these profiles.
Performance improvements in the platform
Since launching Azure Functions, we have made improvements across the Azure platform it runs on to achieve the observed reduction in cold starts. These enhancements extend to the shared platform with Azure App Service and the new Legion platform, the operating system, storage, .NET Core, and communication channels.
We aim to optimize for the ninety-ninth–percentile latency. We delve into cold start scenarios at the millisecond level and continually fine-tune the algorithms that allocate capacity. In short, we’re always working to improve Azure Functions cold start. The following areas are our current focus:
Function app pools. In the internal architecture, we must ensure that the right number of Function app pools are warmed up and ready to handle a cold start for all supported platforms and languages. These pools serve as placeholders in effect. Exactly how many depends on the usage per region—plus enough extra capacity to meet unexpected bursts. We’re always refining our algorithms to balance the pools without increasing costs. Placeholder processes and dependencies stay hot in memory to prevent paging out.
Ninety-ninth–percentile latencies. Although it’s relatively straightforward to optimize cold start scenarios for the fiftieth percentile, we are digging deeper to address ninety-ninth–percentile latencies, particularly when multiple VMs are involved. Each runs different processes and components and is configured with unique disk, network, and memory characteristics. It’s even harder to trace the root causes of potential ninety-ninth–percentile regressions.
Profilers. We use a multitude of specialized profiling tools capable of dissecting cold start scenarios at the millisecond level. We examine detailed call stacks and track activities at both the application and operating system levels. The PerfView and Event Tracing for Windows (ETW) providers are great at addressing issues with Windows and .NET-based apps, but we also investigate issues across platforms and languages. We also use Profile Guided Optimization (PGO) to ensure that the Functions host and dependent libraries are fully JIT compiled and ready, minimizing the impact of platform code JIT compilation during actual cold start requests.
Histograms. If our platform detects cold starts occurring at regular intervals, we fully prewarm the instance where the function app will run to avoid cold start delays during actual execution.
6 things you can do now to improve cold start in Azure Functions
Here are a few strategies you can follow to further improve cold starts for your apps:
Deploy your function as a .zip (compressed) package. Minimize its size by removing unneeded files and dependencies, such as debug symbols (.pdb files) and unnecessary image files.
For Windows deployment, run your functions from a package file. To do this, set the WEBSITE_RUN_FROM_PACKAGE=1 app setting. If your app uses storage for storing content, deploy Azure Storage in the same region as your Azure Functions app and consider using premium storage for a faster cold start.
When deploying .NET apps, publish with ReadyToRun to avoid additional costs from the JIT compiler.
In the Azure portal, navigate to your function app. Go to Diagnose and solve problems, and review any messages that appear under Risk alerts. Look for issues that may impact cold starts.
If your app uses a Premium or App Service plan, invoke warmup triggers to preload dependencies or to add any custom logic required to connect to external endpoints. This option isn’t supported for apps on Consumption plans.
To help mitigate cold starts, try the always ready instances feature of our newest hosting option for event-driven serverless functions, Flex Consumption.
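For the ReadyToRun tip above, a minimal sketch of the publish setting as it would appear in a .NET function app's project file (the property goes in your existing .csproj; the fragment here is illustrative):

```xml
<!-- Enable ReadyToRun (ahead-of-time) compilation at publish time,
     reducing JIT work during a cold start. -->
<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```

The same setting can also be passed on the command line, for example `dotnet publish -p:PublishReadyToRun=true`.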
Final Thoughts
If your Azure Functions app still doesn’t perform as well as you’d like, consider the following:
Share your feedback on Azure Functions cold start to get in touch with the team.
Try the always ready instances feature of our newest hosting option for event-driven serverless functions, Flex Consumption.
Note: This article is a modified version of the article originally published on Newsstack.
Microsoft Tech Community – Latest Blogs – Read More
average between cell arrays of doubles
Hello, I’m working with a nested cell array in which each original cell contains a nested cell array of doubles.
For simplicity, suppose the outer cell array contains 10 cell arrays, and each of those contains 100 double arrays of size 30×50. I would like to take the mean of those doubles so that each cell goes from 100 instances of a 30×50 double to a single 30×50 double.
The result would hold, in each element, the average of that element across the 100 instances. The original cell array would then still have 10 cells, each containing one 30×50 double of averages.
Please let me know if you need more information to help me with calculating the average. Thank you!
nested cell array, average MATLAB Answers — New Questions
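A minimal sketch of one approach, assuming the outer variable is a cell array C in which each C{k} is itself a cell array of one hundred 30×50 doubles (the names here are illustrative):

```matlab
% For each outer cell, stack its matrices along the third dimension
% and take the element-wise mean across that dimension.
avgC = cellfun(@(inner) mean(cat(3, inner{:}), 3), C, 'UniformOutput', false);
% avgC has the same size as C; each avgC{k} is a single 30x50 double
% whose elements are averaged across the 100 instances.
```

If some matrices contain NaNs, `mean(cat(3, inner{:}), 3, 'omitnan')` ignores them.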
Finding accurate inverse of binary circulant matrix
I would like to find the inverse of a binary circulant matrix using MATLAB. I have the 24×24 binary circulant matrix stored in BCM and use the function inv(BCM) and I get a lot of garbage values. I have the inverse matrix that I am expecting to get, which is the multiplicative inverse of BCM. When I multiply them in MATLAB, I get the identity matrix which is correct. However, I’m going to need to calculate new inverses of new matrices and would like to do so with MATLAB instead of guessing over and over. How can I do this?
Both BCM and its inverse are stored in the attached Excel spreadsheet. Thank you!
Garbage Values:
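One likely explanation: inv computes the inverse over the real numbers, while the expected result here is the inverse modulo 2, so the floating-point output will look like garbage even when BCM is invertible over GF(2). A sketch of Gauss-Jordan elimination over GF(2), assuming the input is square and invertible mod 2:

```matlab
function Binv = gf2inv(B)
% Invert a binary matrix over GF(2) via Gauss-Jordan elimination.
n = size(B, 1);
A = [mod(B, 2), eye(n)];                    % augmented matrix [B | I]
for col = 1:n
    piv = find(A(col:n, col), 1) + col - 1; % row with a 1 in this column
    if isempty(piv)
        error('Matrix is singular over GF(2).');
    end
    A([col piv], :) = A([piv col], :);      % swap pivot row into place
    rows = find(A(:, col));                 % clear this column in all other rows
    rows(rows == col) = [];
    A(rows, :) = mod(A(rows, :) + A(col, :), 2);
end
Binv = A(:, n+1:end);                       % right half is the inverse mod 2
end
```

With this, mod(BCM * gf2inv(BCM), 2) should equal the identity. If the Communications Toolbox is available, inv(gf(BCM, 1)) may serve the same purpose.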
I want to calculate distances in 3D space. How do I apply my code to all tables in all cells?
Hi,
I want to use the formula d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2) to calculate two distances between points in 3D space: one from point A to point B, and one from point A to point C. I have a data set stored as a cell array where each cell contains a table.
The tables in the cells are built up so that the x, y, and z positional coordinates of point A are in columns 1, 2, and 3; the coordinates of point B are in columns 4, 5, and 6; and the coordinates of point C are in columns 7, 8, and 9.
I have the code below:
% Initialize a cell array to store the distances
results_distances = cell(size(results_nooutliers));
% Loop through each cell in the results_nooutliers array
for i = 1:numel(results_nooutliers)
    % Get the current table for the participant
    this_cell = results_nooutliers{i};
    % Skip empty cells
    if isempty(this_cell)
        continue;
    end
    % Initialize arrays to store distances for right hand and left hand
    distances_A_B = zeros(size(this_cell, 1), 1);
    distances_A_C = zeros(size(this_cell, 1), 1);
    % Calculate distances for each row in the table
    for row = 1:size(this_cell, 1)
        % Calculate distance for left hand
        distances_A_B(row) = sqrt((this_cell{row, 4} - this_cell{row, 1})^2 + ...
            (this_cell{row, 5} - this_cell{row, 2})^2 + ...
            (this_cell{row, 6} - this_cell{row, 3})^2);
        % Calculate distance for right hand
        distances_A_C(row) = sqrt((this_cell{row, 7} - this_cell{row, 1})^2 + ...
            (this_cell{row, 8} - this_cell{row, 2})^2 + ...
            (this_cell{row, 9} - this_cell{row, 3})^2);
    end
    % Store distances for the current participant in a table
    results_distances{i} = table(distances_A_B, distances_A_C, 'VariableNames', {'A_B_Distance', 'A_C_Distance'});
end
When running I get the error:
Undefined function ‘minus’ for input arguments of type ‘table’.
Can anybody tell me what I am doing incorrectly?
I have attached a small sample of my data set (it is much longer in actuality).
Thanks for the help!
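The 'minus' error indicates that the subtraction operands are tables rather than numbers, which can happen when the brace indexing still yields a table (for example, if the cells hold nested tables, or the table's variables are themselves tables). One hedged, vectorized alternative that converts the first nine columns to a numeric matrix up front (assuming those columns are numeric and laid out as described):

```matlab
T = results_nooutliers{i};             % one participant's table
M = table2array(T(:, 1:9));            % numeric matrix: A = cols 1:3, B = 4:6, C = 7:9
dAB = sqrt(sum((M(:, 4:6) - M(:, 1:3)).^2, 2));   % distance A-to-B, per row
dAC = sqrt(sum((M(:, 7:9) - M(:, 1:3)).^2, 2));   % distance A-to-C, per row
results_distances{i} = table(dAB, dAC, ...
    'VariableNames', {'A_B_Distance', 'A_C_Distance'});
```

If table2array errors here, the table's variables are not plain numeric columns, which would confirm the diagnosis above.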
Delay Balancing Error (RTL Code/ IP Core generation)
Hello everybody
I am using MATLAB/Simulink and HDL Coder to generate an IP core for the ZedBoard dev kit.
I am encountering an issue with the "Delay Balancing" option. Simulink gets stuck during HDL code generation with the following error message:
“Error Delay balancing unsuccessful because Delay introduced in feedback loop cannot be path balanced. Offending Block: ……/Trigonometric Function”
1. Could someone please explain why this is happening?
2. Could someone please explain what I should do to avoid this situation?
I found a workaround for this issue by making the following changes:
– I put the "Trigonometric Function" blocks into a subsystem (called "Trigonometric Fcn")
– I disabled the "BalanceDelays" option in the HDL Coder properties
– I set the "BalanceDelays" option to "OFF" for the "TOP" subsystem of the model
– I left the "BalanceDelays" option set to "Inherit" for the other subsystems of the model
– but I set the "BalanceDelays" option to "ON" for the "Trigonometric Fcn" subsystem of the model
– I generated the "Validation Model" and verified that the results match the original ones
This allows me to generate the HDL code and continue with creating the IP core and the Vivado project.
But I would like to keep the BalanceDelays option "ON" otherwise the HDL code won’t be optimised in terms of area and timing performance.
3. Could someone please give me the correct solution to this error?
I am sorry, but I cannot share the code; otherwise I would attach the model and other useful information.
Thank you in advance,
Andrea Foradori
Accessing data from same variables within different tables in a structural array
Hi, I am fairly new at this, so please bear with me…
I have created a struct 1×20 containing twenty 87×6 tables, each containing data from different patients in a study (n=1:20).
Each table is made up of the same 6 variables: TimeAge, TimeCooling, TempCore, etc.. (others really don’t matter).
p(n).data.TimeCooling
p(n).data.TempCore
TimeAge and TimeCooling are both in hours and each range from approximately 0 to 85.
How can I access the data from the tables for specific time points?
I would like to find the mean of the variable TempCore of all patients at the same timepoint. (i.e. I want to know the mean of all patients at TimeCooling = 1 hour, then at 2 hours, etc.)
There is some missing data in TempCore that may need to be accounted for.
Any help would be greatly appreciated!
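A minimal sketch of one approach, assuming each p(n).data is a table with numeric TimeCooling and TempCore variables (binning to the nearest hour is an assumption; adjust to your sampling grid):

```matlab
allT = vertcat(p.data);                 % stack all 20 patients' tables into one
hr = round(allT.TimeCooling);           % bin times to the nearest hour
[grp, hours] = findgroups(hr);          % one group per distinct hour
meanTemp = splitapply(@(x) mean(x, 'omitnan'), allT.TempCore, grp);  % NaN-safe mean
byHour = table(hours, meanTemp);        % mean core temp at each timepoint
```

The 'omitnan' flag handles the missing TempCore data by ignoring NaNs within each hour's group.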
The computer has rebooted from a bugcheck.
Hi,
We have been having a few devices BSOD due to the issue:
The computer has rebooted from a bugcheck. The bugcheck was: 0x0000013a (0x0000000000000011, 0xffffc80f49e02140, 0xffffc80f515979a0, 0x0000000000000000). A dump was saved in: C:\WINDOWS\MEMORY.DMP. Report Id: 4c4b7e1f-98ac-495e-95d1-45a91e90ea38.
These are Dell Precision devices, windows 11 Version 22H2 (OS Build 22621.3593).
I was wondering if you would be able to assist troubleshooting the cause of these reboots.
Kind regards,
Dre
Counting days
Hi team
Hoping you can assist with a tricky one.
I am counting the number of days between when a job is received, to when it is completed. I have been using TODAY() as the end date which gives me the age of outstanding jobs, but how can I get it to stop counting when completed? Can I get it to reference when another cell is filled in, indicating the job is complete? eg job is complete when column G is filled in.
Cheers
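One common pattern, assuming the received date is in A2 and the completion date is entered in G2 when the job is done (the references are illustrative; adjust to your layout):

```
=IF(G2="", TODAY()-A2, G2-A2)
```

While G2 is blank, the age keeps counting from today; once G2 is filled in, the count freezes at the completion date. Format the result cell as a number, not a date.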
If true then return value in another cell near it
Hi there, I am wanting a formula that can search through the row and when it finds the value ‘TRUE’ return the value 5 columns over. Example – true is in column AG and the cell I want it to return AB.
Is there a formula for this?
Thank you.
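One possible formula, assuming the cells contain logical TRUE values and the lookup runs across a single row (row 2 here; references are illustrative):

```
=INDEX(2:2, MATCH(TRUE, 2:2, 0) - 5)
```

MATCH finds the column of the first TRUE in the row, and INDEX returns the cell five columns to its left (AG back to AB). If the cells contain the text "TRUE" rather than logical values, match "TRUE" instead.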
FAQ: Change plans mixed flat rate and metered billing
Q: I have a question about a customer changing plans where the pricing uses metered billing.
For example, the plan purchased by the customer is plan1, which includes 100 emails. Assume up-to-date email consumption is 500 emails; then the customer changes mid-month to a higher plan that includes 1000 emails.
-Plan1 includes 100 emails.
-Consumption is 500 emails by the middle of the month.
-Mid-month: change plan to Plan2, which includes 1000 emails.
The documentation says: “Changes you make to a plan you are subscribed to will take effect immediately. The billing is prorated according to the billing term of the current plan.”
https://learn.microsoft.com/en-us/marketplace/saas-subscription-lifecycle-management#change-plans
Based on this scenario, how is the billing calculated? How are the exceeded emails calculated?
A: The number of included dimension units in a plan (emails in this case) is partner managed, so it’s something the ISV needs to track, measure, and decide how to handle.
In your case, if a customer changes plan mid-month (say plan1 is $10 and includes 100 emails, and plan2 is $50 and includes 1000 emails), the flat-rate cost will be prorated by Microsoft ($5 for half a month on plan1 plus $25 for half a month on plan2), but the allotted emails are at the ISV’s discretion (50 emails for half a month on plan1 and 500 emails for half a month on plan2).
Based on that calculation, the partner only sends metered overages when the customer goes over the 550 emails limit during that period.
There are other edge cases as well:
– if the customer used all 100 free emails in plan1 then wants to switch to a free plan, the ISV can block that plan change
– if the customer used 200 emails in plan1 and the ISV already charged them for the 100 extra, then when the customer changes to plan2 they will have already paid for something that would be included in plan2. The ISV cannot refund previously sent meters, but they can choose not to charge the next time the customer goes over their allotted email count, or they can simply specify in the terms that switching mid-month incurs this extra cost.
Add ToDo Delegate Ability in the New Outlook
Please add delegate ability for To Do in the new Outlook (browser and desktop).
Our ID-Shutdown process includes checking for To Do tasks. The new Outlook (neither browser nor desktop) provides this ability; the classic Windows Outlook does.
Finding the Latest date in a range of dates
With thanks to: Peter Bartholomew, djclements, and Sergei Baklan.
I am nearly finished on this project that must find the earliest, latest and peak dates in a range of dates.
I now need to adapt the formula to be used on the Latest dates, which are always three cells on from each of the earliest dates in the range.
Will I now be able to find the latest date (by day and month only) if I can adapt the formula that was successfully deployed for Earliest dates?
This is the formula being used for identifying Earliest dates:
=AGGREGATE(15,6, $D3:$AQ3 /( (MONTH($D3:$AQ3)*100+DAY($D3:$AQ3)) = AGGREGATE(15,6,( MONTH($D3:$AQ3)*100+DAY($D3:$AQ3) )/NOT( MOD(COLUMN($D3:$AQ3),4) ),1) ),1)
Here is the table so far:
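One note that may help with the adaptation: in AGGREGATE, function number 15 is SMALL and 14 is LARGE, so the latest-date variant swaps 15 for 14 (and adjusts the MOD test to the new column offset, three cells on). The ranking key the formula uses, MONTH*100 + DAY with the year ignored, can be sketched in Python; the sample dates are illustrative:

```python
# Mirror the AGGREGATE formula's logic: compare dates by MONTH*100 + DAY,
# ignoring the year. SMALL (15) with k=1 gives the earliest; LARGE (14)
# with k=1 gives the latest by the same key.
from datetime import date

dates = [date(2024, 3, 15), date(2023, 11, 2), date(2024, 7, 9)]

key = lambda d: d.month * 100 + d.day  # e.g. 15 Mar -> 315
earliest = min(dates, key=key)         # analogue of AGGREGATE(15, ..., 1)
latest = max(dates, key=key)           # analogue of AGGREGATE(14, ..., 1)
```

Here 15 March (key 315) ranks earliest and 2 November (key 1102) ranks latest, even though the November date falls in an earlier year.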
FAQ: VM Offer for confidential computing
Q: We are in the process of technical configuration for our Azure Marketplace offerings and we’re encountering a problem. In the ‘Recommended VM Sizes’ section, we cannot select the confidential VMs DCasv5 and DCadsv5 that we use for our products. These do not appear, and therefore we cannot limit users to these types of VMs. This will be problematic if a client decides to install our product on a non-compatible VM. My guess is that Recommended VM size is not the best section to work with. How can we restrict the VM family?
A: The feature to restrict VM deployments to a particular size is not currently available natively in VM offers; it is available in solution templates. The VM offer can be hidden, and customers can deploy it using an Azure Application.
Announcement: Monday, June 10, 2024: New Microsoft Cybersecurity Program for Rural Hospitals
Following an announcement from the White House regarding new cybersecurity standards for hospitals, and with the support of the American Hospital Association and the National Rural Health Association, Microsoft announced via press release the new Microsoft Cybersecurity Program for Rural Hospitals.
The program is designed to support the unique cybersecurity needs of rural hospitals across the US and will deliver free and low-cost technology services, along with free training and support.
Live links:
Press release: Microsoft to help rural hospitals defend against rising cybersecurity attacks – Stories
Program information and registration: https://aka.ms/Microsoft_Security_Rural_Hospitals
Social links:
Microsoft On the Issues on LinkedIn, X, and Instagram
Our latest work to improve Azure Functions cold starts and what you can do
We continually work to improve performance and mitigate Azure Functions cold starts – the extra time it takes for a function that hasn’t been used recently to respond to an event. We understand that no matter when your functions were last called, you want fast executions and little lag time.
In this article:
How we measure cold start and the work done to improve it in the Azure Functions platform.
What you can do to optimize your functions to improve your app’s cold start performance.
Provide your feedback on Azure Functions cold start.
How we measure Azure Functions cold start
In measuring Azure Functions performance, we prioritize the cold start of synchronous HTTP triggers in the Consumption and Flex Consumption hosting plans. That means looking at what our platform and Azure Functions host need to do to execute the first HTTP trigger function on a new instance. Then we improve it. We are also working to improve cold start for asynchronous scenarios.
To assess our progress, we run sample HTTP trigger function apps that measure cold start latencies for all supported versions of Azure Functions, in all languages, for both Windows and Linux Consumption. These sample apps are deployed in all Azure regions and subregions where Azure Functions runs. Our test function calls these sample apps every few hours to trigger a true cold start and currently generates nearly 85,000 daily cold start samples. Through this testing infrastructure, we have observed over the past 18 months a reduction in cold start latency of approximately 53 percent across all regions and for all supported languages and platforms.
If any of the tracked metrics start to regress, we’re immediately notified and start investigating. Daily emails, alerts, and historical dashboards tell us the end-to-end cold start latencies across various percentiles. We also perform specific analyses and trigger alerts if our fiftieth percentile, ninety-ninth percentile, or maximum latency numbers regress.
In addition, we collect detailed PerfView profiles of the sample apps deployed in select regions. The breakdown includes full call stacks (user mode and kernel mode) for every millisecond spent during cold start. The profiles reveal CPU usage and call stacks, context switches, disk reads, HTTP calls, memory hard faults, common language runtime (CLR) just-in-time (JIT) compiler, garbage collector (GC), type loads, and many more details about .NET internals. We report all these details in our logging pipelines and receive alerts if metrics regress. And we’re always looking for ways to make improvements based on these profiles.
Performance improvements in the platform
Since launching Azure Functions, we’ve improved performance across the Azure platform it runs on to achieve the observed reduction in cold starts. These enhancements extend to the platform shared with Azure App Service and the new Legion platform, as well as the operating system, storage, .NET Core, and communication channels.
We aim to optimize for the ninety-ninth–percentile latency. We delve into cold start scenarios at the millisecond level and continually fine-tune the algorithms that allocate capacity. In short, we’re always working to improve Azure Functions cold start. The following areas are our current focus:
Function app pools. In the internal architecture, we must ensure that the right number of Function app pools are warmed up and ready to handle a cold start for all supported platforms and languages. These pools serve as placeholders in effect. Exactly how many depends on the usage per region—plus enough extra capacity to meet unexpected bursts. We’re always refining our algorithms to balance the pools without increasing costs. Placeholder processes and dependencies stay hot in memory to prevent paging out.
Ninety-ninth–percentile latencies. Although it’s relatively straightforward to optimize cold start scenarios for the fiftieth percentile, we are digging deeper to address ninety-ninth–percentile latencies, particularly when multiple VMs are involved. Each runs different processes and components and is configured with unique disk, network, and memory characteristics. It’s even harder to trace the root causes of potential ninety-ninth–percentile regressions.
Profilers. We use a multitude of specialized profiling tools capable of dissecting cold start scenarios at the millisecond level. We examine detailed call stacks and tracking activities at both the application and operating system levels. The PerfView and Event Tracing for Windows (ETW) providers are great at addressing issues with Windows and .NET-based apps, but we also investigate issues across platforms and languages. We also use Profile Guided Optimization (PGO) to ensure that Functions Host and dependent libraries are fully JIT compiled and ready to minimize the impact of platform code JIT compilation during actual cold start requests.
Histograms. If our platform detects cold starts occurring at regular intervals, we fully prewarm the instance where the function app will run to avoid cold start delays during actual execution.
6 things you can do now to improve cold start in Azure Functions
Here are a few strategies you can follow to further improve cold starts for your apps:
Deploy your function as a .zip (compressed) package. Minimize its size by removing unneeded files and dependencies, such as debug symbols (.pdb files) and unnecessary image files.
For Windows deployment, run your functions from a package file. To do this, set the WEBSITE_RUN_FROM_PACKAGE=1 app setting. If your app uses storage for storing content, deploy Azure Storage in the same region as your Azure Functions app and consider using premium storage for a faster cold start.
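Setting that app setting can be done in the portal or, as one sketch, with the Azure CLI; the app and resource group names below are placeholders:

```shell
# Placeholders: substitute your function app and resource group names.
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```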
When deploying .NET apps, publish with ReadyToRun to avoid additional costs from the JIT compiler.
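For reference, a minimal .csproj fragment that enables ReadyToRun; the RuntimeIdentifier value is an assumption and must match the OS your Functions app runs on:

```xml
<!-- Publish-time AOT compilation to reduce JIT work during cold start. -->
<PropertyGroup>
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- Assumed value: use win-x64 or linux-x64 to match your hosting OS. -->
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
</PropertyGroup>
```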
In the Azure portal, navigate to your function app. Go to Diagnose and solve problems, and review any messages that appear under Risk alerts. Look for issues that may impact cold starts.
If your app uses a Premium or App Service plan, invoke warmup triggers to preload dependencies or to add any custom logic required to connect to external endpoints. This option isn’t supported for apps on Consumption plans.
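As a sketch of that option, a warmup trigger is declared like any other binding. For an app using the function.json model, the binding type is warmupTrigger (the function folder is conventionally named warmup); the handler it points to would preload dependencies or open connections:

```json
{
  "bindings": [
    {
      "type": "warmupTrigger",
      "direction": "in",
      "name": "warmupContext"
    }
  ]
}
```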
To help mitigate cold starts, try the always ready instances feature of our newest hosting option for event-driven serverless functions, Flex Consumption.
Final Thoughts
If your Azure Functions app still doesn’t perform as well as you’d like, consider the following:
Share your feedback on Azure Functions cold start to get in touch with the team.
Try the always ready instances feature of our newest hosting option for event-driven serverless functions, Flex Consumption.
Note: This article is a modified version of the article originally published on Newsstack.
Microsoft Tech Community – Latest Blogs – Read More
Create zero-thickness surface in a 3D partial differential equation problem
I am struggling to create the geometry that I want to use in the Matlab PDE modeling interface. I want my model to consist of a zero-thickness triangulated sheet embedded in a tetrahedral mesh of a sphere. I need to address the faces or nodes that lie on the sheets in order to prescribe boundary conditions there.
It’s easy to create the outer sphere in the PDE modeling environment:
g1 = multisphere(R)
However I am really struggling to define the zero thickness triangulated sheet geometry inside the sphere.
g2 = geometryFromMesh(mesh,nodes,elements) throws an error if the triangulation described by the input node and element lists does not form a closed boundary. This seems like a limitation of the modeling interface. Any ideas on how to create the geometry within the PDE modeling environment?
Alternatively…
Using a workaround, I created the FE mesh outside the PDE modeling environment. I am able to import this entire mesh into the interface just fine, albeit without any Faces, Edges, or Vertices definitions.
However, it’s apparently not possible to prescribe boundary conditions directly at mesh nodes in the PDE modeling interface – boundary conditions can only be prescribed onto geometry vertices. Is there a way to map mesh nodes to geometry vertices?
partial differential equations, mesh nodes, importgeometry, geometryfrommesh MATLAB Answers — New Questions
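Not a PDE Toolbox answer, but the nearest-neighbour matching idea behind most workarounds (in MATLAB, findNodes(mesh,"nearest",...) plays a similar role) can be sketched in Python with NumPy. The function name and tolerance are illustrative:

```python
# Sketch: match geometry vertices to their closest mesh nodes by
# brute-force pairwise distance, then apply node-level constraints
# to the matched vertices. Fine for small vertex counts.
import numpy as np

def match_nodes_to_vertices(nodes, vertices, tol=1e-9):
    """For each vertex, return the index of the closest mesh node.

    nodes:    (N, 3) array of mesh node coordinates
    vertices: (M, 3) array of geometry vertex coordinates
    Returns (indices, matched) where matched flags vertices that have
    a node within tol of their position.
    """
    nodes = np.asarray(nodes, dtype=float)
    vertices = np.asarray(vertices, dtype=float)
    # Pairwise distance matrix, shape (M, N).
    d = np.linalg.norm(vertices[:, None, :] - nodes[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    matched = d[np.arange(len(vertices)), idx] <= tol
    return idx, matched

# Illustrative data: two vertices that coincide with mesh nodes 1 and 2.
nodes = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
vertices = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
idx, matched = match_nodes_to_vertices(nodes, vertices)
```

For larger meshes a k-d tree (scipy.spatial.cKDTree, or knnsearch in MATLAB) replaces the brute-force distance matrix without changing the idea.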
Wrong sum after calculation
Hello!
So I would like to use a calculated field inside my Pivot Table. But somehow the formula is not respected.
I have the following columns:
Stock; Withdraw Rate; Usage
1000; 0,3; x
Somehow, instead of displaying 1000*0,3 = 300, the table shows 700, which means the result of the formula is somehow being deducted from the source value of 1000.
Does anyone know how to fix that?
Thank you
integral of the besselj function
How do I compute the integral of the besselj function from 0 to 4*pi?
besselj, integral, matlab MATLAB Answers — New Questions
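In MATLAB this is a one-liner for, say, the zeroth-order function: integral(@(x) besselj(0,x), 0, 4*pi). As a cross-check, here is a dependency-free Python sketch that builds J0 from its integral representation and integrates it with the trapezoidal rule; the order 0 and the step counts are assumptions:

```python
# J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt  (integral representation),
# then integrate J0 over [0, 4*pi] with the trapezoidal rule.
import numpy as np

def j0(x, n=2000):
    """Bessel function of the first kind, order 0, via quadrature."""
    t = np.linspace(0.0, np.pi, n)
    y = np.cos(x * np.sin(t))
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0) / np.pi

xs = np.linspace(0.0, 4.0 * np.pi, 2001)
ys = np.array([j0(v) for v in xs])
integral = float(np.sum((ys[1:] + ys[:-1]) * np.diff(xs)) / 2.0)
```

Since the full integral of J0 over [0, infinity) equals 1 and the partial integral oscillates around that value, the result lands a little below 1 at the upper limit 4*pi.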
I have purchased the Signal Processing Toolbox but still get an error when trying to use a function
I have purchased the Signal Processing Toolbox, but MATLAB still throws a "you need the Signal Processing Toolbox" error when I try to use the square function per the screenshot. I have tried reinstalling MATLAB and restarting the program.
signal processing MATLAB Answers — New Questions
One function is greater than other
I would like to determine the range of values for ( z ) where the following inequality holds true:
This is my attempt:
syms z real
assume(z > exp(1))
% Define the function
f = z - 8.02 * log(z) - (3.359 / 21.233) * log(z) * z;
sol = solve(f > 0, z, 'ReturnConditions', true);
vpa(sol.conditions)
MATLAB Answers — New Questions
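When solve struggles with a transcendental inequality like this, a numeric scan is a useful cross-check. The sketch below uses Python rather than the Symbolic Math Toolbox, and the grid bounds are assumptions; it evaluates f on a log-spaced grid above e:

```python
# Numeric cross-check: scan f(z) = z - 8.02*log(z) - (3.359/21.233)*log(z)*z
# on z > e to see where, if anywhere, it is positive.
import math

def f(z):
    return z - 8.02 * math.log(z) - (3.359 / 21.233) * math.log(z) * z

# Log-spaced grid from e up to e*10^4.
zs = [math.e * (10 ** (k / 500.0)) for k in range(0, 2001)]
fmax = max(f(z) for z in zs)
# f stays negative on this whole grid (and the -c*z*log(z) term dominates
# as z grows), which suggests the inequality has no solution for z > e.
```

That is consistent with solve returning no satisfiable conditions under the assumption z > exp(1): the function peaks just above z = e at roughly -5.7 and only decreases from there.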