Category: News
Euler’s Method using a for loop
I am trying to produce a graph of displacement vs. velocity of a falling parachuter and produce tabulated values. I have been given the function, which I have attached as a screenshot. My code is currently producing the error: "Array indices must be positive integers or logical values."
My Code:
clear all
%initial conditions
g0 = 9.81; %m/s^2
R = 6.37e6; %m
h = 10000; %Step Size in m
x = 0 : h : 100000; % Range of X values
v = zeros(size(x));
vi = 1400; %m/s Initial velocity
n = numel(v); % Number of values for velocity
for i=1:n-1
v(x(i+1)) = v(x(i)) + ((g0/v(x(i))) * (R^2/ ((R + x(i))^2))) * (x(i+1) - x(i));
end
plot(x(i), v(i))
euler, for loop MATLAB Answers — New Questions
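A likely fix, sketched with hedging: the error comes from using the x values themselves (0, 10000, …) as array indices. Indexing v and x by the loop counter i, and seeding v(1) with the initial velocity (the vi variable is assigned but never used in the posted code), would give:

```matlab
% Sketch of a corrected Euler loop: index by the loop counter i,
% not by the x values (which start at 0 and are not valid indices).
g0 = 9.81;                 % m/s^2
R  = 6.37e6;               % m
h  = 10000;                % step size in m
x  = 0:h:100000;           % range of x values
v  = zeros(size(x));
v(1) = 1400;               % initial velocity in m/s (the unused vi)
for i = 1:numel(x)-1
    v(i+1) = v(i) + (g0/v(i)) * (R^2/(R + x(i))^2) * (x(i+1) - x(i));
end
plot(x, v)                 % plot the whole vectors, not single points
xlabel('Displacement (m)'), ylabel('Velocity (m/s)')
```

Note also that the posted line uses an en dash (–) instead of a minus sign, which MATLAB rejects; retyping the minus fixes that separate error.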
How to combine cell arrays to form one nested cell array entry
Hello, I have a variable (X) that is a cell array (size 64x634). In each location of cell array X, there is a nested 1x2 cell array.
How can I combine the nested 1x2 cell arrays across the 634 columns in X such that the variable X is now the desired size of 64x1, where each row entry of the cell array X contains the new 634x2 nested cell array?
In other words, I want to combine each of the 1x2 cell arrays found in the columns of the original variable X so that each row of X has only one column (now a nested cell array containing all the original 1x2 nested cell arrays). Thanks!
nested cell arrays, concatenate cell arrays MATLAB Answers — New Questions
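One hedged approach, assuming each X{r,c} really is a 1x2 cell: concatenate each row's nested cells vertically.

```matlab
% Sketch: collapse a 64x634 cell of 1x2 cells into a 64x1 cell of 634x2 cells.
Xnew = cell(size(X,1), 1);
for r = 1:size(X,1)
    % X{r,:} produces a comma-separated list of the 634 1x2 cells;
    % vertcat stacks them into a single 634x2 cell array.
    Xnew{r} = vertcat(X{r,:});
end
```

Each Xnew{r} is then a 634x2 cell whose row c holds the contents of the original X{r,c}.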
Group creation needed to start a campaign
Hello all – We recently were granted a trial of Amplify. When I went to create a campaign, I received an error message saying that I do not have access to create groups. I did a little research and found this:
Ensure Microsoft 365 group creation has been enabled for you. You can connect with your admin to check if you have the necessary permissions. Learn more about Microsoft 365 group creation permissions.
Question for this group: How did your organizations get around allowing group creation for those with campaign access? Any guidance is greatly appreciated.
Azure File Share – NTFS Permission Extremely Slow
Recently moved file server data to Azure File Share. No issues with mapping or opening files. The issue is managing permissions: updating or adding NTFS permissions per folder is EXTREMELY slow. Any advice or workarounds you could share, please?
Thank you.
App Designer: prevent string truncation (uilabel, etc..)
Hi,
I have an app designed in App Designer R2018b. The exact same code, run in MATLAB R2024a, truncates strings with an ellipsis (…) while R2018b does not.
How can I prevent this from happening? I do NOT want the string truncated, as the built-in function that does so seems unoptimized, as can be seen in the screenshot below.
As you can see in the R2018b screenshot, the text has plenty of space to be fully displayed, yet R2024a believes it needs to be truncated. Uilabel does not have WordWrap; how can I let the user choose whether or not to truncate?
uilabel, truncate MATLAB Answers — New Questions
Change date from cell to number inside array
I have data inside an array including strings and dates. The times appear to be in a cell inside the array.
The times range from 0 to 1, which I understand. This data comes from Excel, which I must use.
The problem is I want to do an if statement inside this array to see which entries are larger than 10pm (approximately 0.9).
Whenever I try to use an if statement, I get an error because the times are stored in a cell format inside the array. How do I change it from a cell to a numeric format I can use?
time, datetime, cell, cell array MATLAB Answers — New Questions
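A hedged sketch, assuming the imported times sit in a cell array C of Excel day fractions (numbers between 0 and 1):

```matlab
% Convert the cell of numeric day fractions to a plain numeric vector.
t = cell2mat(C);        % fails if any cell holds text; convert those first
late = t > 22/24;       % logical index: entries after 10 pm (22/24 ~ 0.917)
lateTimes = t(late);    % the qualifying values
```

If some cells hold text timestamps instead of numbers, converting them with datetime (or str2double) first would be needed before the comparison.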
Derivative of table between NaN values
I have data in a table with NaN values in between.
I want to calculate the derivatives (using the 1st column as x's and the 2nd column as y's) between each pair of NaN values and put these values into a new vector.
Thanks!
derivative, tables, cells MATLAB Answers — New Questions
AppDesigner UIFigure WindowScrollWheelFcn disables datatips
I am building an app in AppDesigner r2021a. I have several plots with custom datatips on different tabs. Everything works great.
However, when I add a WindowScrollWheelFcn callback to the parent uifigure, I am no longer able to see datatips when hovering over points.
Is there any way around this?
appdesigner, windowscrollwheelfcn, callback, datatip MATLAB Answers — New Questions
How do I change the font size of text in a figure?
I want to change the font size for the title, axis labels, and other text in my figure. How do I do this?
MATLAB Answers — New Questions
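A common sketch: set the axes font size first (it controls tick labels and the defaults for labels), then override individual text objects where needed.

```matlab
% Sketch: font sizes for axes text, title, labels, and free-standing text.
figure
plot(1:10, (1:10).^2)
ax = gca;
ax.FontSize = 12;                        % tick labels and label defaults
title('Sample Title', 'FontSize', 16)    % per-object overrides
xlabel('x', 'FontSize', 14)
ylabel('x^2', 'FontSize', 14)
text(2, 60, 'annotation', 'FontSize', 10)
```

Setting the axes FontSize after the title/label calls would rescale them again, so the order above matters.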
Problem using loadlibrary in R2023B Update 7 and VS 2022 v17.9
I just updated my VS 2022 Enterprise from version 17.8 to 17.9.0 and I am having a problem using loadlibrary. It cannot find C header files and runtime libraries, possibly due to 17.9 shipping with a new MSVC version 14.39.33519 instead of the 14.38 version that comes with 17.8. How do I retarget loadlibrary to use the new version?
loadlibrary MATLAB Answers — New Questions
Checkbox in header row for uitable
Is it possible to put a checkbox next to a string in the same column?
I have a table, and one of the columns contains checkboxes. I want another checkbox that, when checked, will check all the other checkboxes. I know how to do this, but what I would really like is to put this checkbox in the header row of the uitable. Maybe even add a string before the checkbox?
Is this even possible?
appdesigner, checkbox, uitable MATLAB Answers — New Questions
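As far as I know, uitable headers can't embed controls, so a hedged workaround is a separate "select all" uicheckbox placed above the column (names and positions below are illustrative):

```matlab
% Sketch: a checkbox above the table toggles every row's logical column.
fig = uifigure('Position', [100 100 360 320]);
data = table(["a"; "b"; "c"], false(3,1), ...
             'VariableNames', {'Name','Pick'});
tbl = uitable(fig, 'Data', data, 'Position', [20 20 320 230], ...
              'ColumnEditable', [false true]);   % logical column -> checkboxes
cb = uicheckbox(fig, 'Text', 'Select all', 'Position', [220 260 100 22]);
cb.ValueChangedFcn = @(src, ~) set(tbl, 'Data', setPick(tbl.Data, src.Value));

function t = setPick(t, tf)
    t.Pick(:) = tf;                              % check or clear every row
end
```

Placing the checkbox with its Text to the right gives the "string before the checkbox" effect; a true in-header checkbox would need an HTML or custom-component approach.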
How can I join Microsoft
I would be honored to join the Microsoft team and contribute to the Copilot team’s mission of providing innovative solutions to enterprises. I recognize the immense potential of this technology to benefit businesses, and I am eager to be a part of the team that is driving its development. I kindly request your guidance on the process of joining the team. Thank you for considering my interest.
GDAP and not allowing global admin to auto renew
Hi all,
The relationships we created two years ago are quickly approaching their expiration date, and I’m interested in how other people are handling the creation of new relationships.
With the introduction of relationships that auto renew, have you found this to be a viable path? We are a Managed Service Provider and our customers expect us to turn ALL the knobs for them in the Microsoft portals.
I want the flexibility of techs only enabling the roles they need, but there are a LOT of roles. Creating a relationship with 34 roles is a bit extreme. Plus, it looks like we need 43 built-in roles to have the same level of access as Global Admin, and some of those roles are not available via GDAP today.
The role that stands out the most is “Organizational Branding Administrator.” Am I missing something, or is the only way to change sign-in branding to use the Global Administrator role (which prevents auto-renewal) or use a local tenant admin account?
What would partners think if Microsoft allowed the Global Admin role to auto-renew until Microsoft adds to GDAP all the built-in roles needed to replace Global Admin? Maybe put some sort of extra warning on the role-acceptance side advising the client this is not recommended, and let the client make that informed choice themselves?
What do you think customers' opinion of this move would be?
From my conversations with different people, I am under the impression that customers didn't want Microsoft to allow partners the option of letting the Global Admin role auto-renew. Since I have never met a customer who shared this view, I can't comment on the accuracy of that statement, but that's what I've heard.
Azure Functions at Build 2024 – Solving customer problems with deep engineering
Azure Functions is Azure’s primary serverless service, used in production by hundreds of thousands of customers who run trillions of executions on it monthly. It was first released in early 2016, and since then we have learnt a lot from our customers about what works and where they would like to see more.
Taking all this feedback into consideration, the Azure Functions team has worked hard to improve the experience across the stack, from the initial getting-started experience all the way to running at very high scale. Please see this link for a list of all the capabilities we have released at this year’s Build conference. Taking everything into account, this is one of the most significant sets of releases in Functions history.
In this blog post, I will share a brief glimpse behind the scenes of some of the technical work that the Functions and partner teams did to meet the expectations of our customers. We will write more technical blogs to explain these areas in depth; this is a brief overview.
Flex Consumption: Burst scale your apps with networking support
We are releasing a new SKU of Functions, Flex Consumption. This SKU addresses all the feedback that we have received over the years on the Functions Consumption plans. We have looked at each part of the stack and made improvements at all levels. There are many new capabilities including:
Scales much faster than before with user controlled per-instance concurrency
Scale to many more instances than before (up to 100)
Serverless “scale to zero” SKU that also supports VNET integrated event sources
Supports always allocated workers
Supports multiple memory sizes
Purpose built backend “Legion”
To enable Flex Consumption, we have created a brand-new purpose-built backend internally called Legion.
To host customer code, Legion relies on nested virtualization on Azure VMSS. This gives us the Hyper-V isolation that is a prerequisite for hostile multi-tenant workloads. Legion was built from the outset to support scaling to thousands of instances with VNET injection. Efficient use of subnet IP addresses through kernel-level routing was also a unique achievement in Legion.
Functions has a strict cold-start goal for all languages. To achieve this cold-start metric for all languages and versions, and to support Functions image updates for all these variants, we had to create a construct called Pool Groups that allows Functions to specify all the parameters of the pool, as well as networking and upgrade policies.
All this work gave us a solid, scalable, and fast infrastructure on which to build Flex Consumption.
“Trigger Monitor” – scale to 0 and scale out with network restrictions
Flex Consumption also introduces networking features to limit access to the Function app and to trigger on event sources that are network-restricted. Since these event sources are network-restricted, the multi-tenant scaling component (the scale controller) that monitors the rate of events to decide whether to scale out or in cannot access them. In the Elastic Premium plan, where we scale down to one instance, we solved this by having that instance access the network-restricted event source and communicate scale decisions to the scale controller. In the Flex Consumption plan, however, we wanted to scale down to zero instances.
To solve this, we implemented a small scaling component we call “Trigger Monitor” that is injected into the customer’s VNET. This component is able to access the network-restricted event source, and the scale controller now communicates with it to get scaling decisions.
Scaling HTTP-based apps based on concurrency
When scaling HTTP-based workloads on Function apps, our previous implementation used an internal heuristic to decide when to scale out. This heuristic was based on Front End servers pinging the workers currently running customer workloads and deciding to scale based on the latency of the responses. This implementation used SQL Azure to track workers and their assignments.
In Flex Consumption we have rewritten this logic so that scaling is based on user-configured concurrency. User-configured concurrency gives customers flexibility to decide, based on the language and workload, what concurrency they want per instance. For example, Python customers don’t have to think about multithreading and can set concurrency = 1 (which is also the default for Python apps). This approach makes the scaling behavior predictable, and it gives customers the ability to control the cost-versus-performance tradeoff: if they are willing to tolerate potentially higher latency, they might unlock cost savings by running each worker at higher levels of concurrency.
In our implementation, we use “request slots” that are managed by the Data Role. We split instances into “request slots” and assign them to different Front End servers. For example: If the per-instance concurrency is set to 16, then once the Data Role chooses an instance to allocate a Function app to, there are 16 request slots that it can hand out to Front Ends. It might give all 16 to a single Front End, or share them across multiple. This removes the need for any coordination between Front Ends – they can use the request slots they receive as much as they like, with the restriction of only one concurrent request per request slot. Also, this implementation uses Cosmos DB to track assignments and workers.
Along with Legion as the compute provider, a significantly larger compute allocation per app and rapid scale-in and capacity reclamation allow us to give customers a much better experience than before.
Scaling non-HTTP-based apps based on concurrency
Similar to HTTP apps, we have also enabled non-HTTP apps to scale based on concurrency. We refer to this as Target Based Scaling. From an implementation perspective, the scaling logic now lives within the various extensions, and the scale controller hosts these extensions. This unifies the scaling logic in one place and unifies all scaling based on concurrency.
Moving configuration to the Control Plane
One more change we are making, directionally based on customer feedback, is to move various configuration properties from AppSettings to the Control Plane. For Public Preview we are doing this for the areas of deployment, scaling, and language. This is an example configuration which shows the new Control Plane properties. By GA we will move other properties as well.
Functions on Azure Container Apps: Cloud-native microservices deployments
At Build we are also announcing the GA of Functions running on Azure Container Apps. This new SKU allows customers to run apps using the Azure Functions programming model and event-driven triggers alongside other microservices or web applications co-located in the same environment. It allows customers to leverage common networking resources and observability across all their applications. Furthermore, it helps Functions customers who want to leverage frameworks (like Dapr) and compute options like GPUs that are only available in Container Apps environments.
We had to keep this SKU consistent with other Function SKUs/plans, even though it ran and scaled on a different platform (Container Apps).
In particular,
We created a new database for this SKU that can handle different schema needs (because of the differences in the underlying infra compared to regular Functions) and improved the query performance. We also redesigned some parts of the control plane for Functions on ACA.
We used ARM extension routing to securely route traffic to the host and to enable Function Host APIs via ARM for apps running inside an internal VNET.
We built a sync trigger service inside the Azure Container Apps environment that detects the Function App, reads trigger information from the customer’s functions code, and automatically creates corresponding KEDA scaler rules for the Function App. This enables automatic scaling of Function Apps on Azure Container Apps (ACA) without customers having to know about the underlying KEDA scaling platform.
We developed a custom KEDA external scaler to support scale-to-zero scenario for Timer trigger functions.
VSCode.Web support: Develop your functions in the browser
The Azure Functions team values developer productivity; our VSCode integration and Core Tools are top-notch and one of the main experience advantages over similar products in this category. However, we are always striving to enhance this experience.
It is often challenging for developers to configure their local dev machine with the right pre-requisites before they can begin. This setup also needs to be updated with the new versions of local tools and language versions. On the other hand, GitHub codespaces and similar developer environments have demonstrated that we can have effective development environments hosted in the cloud.
We are launching a new getting-started experience using VS Code for the Web for Azure Functions. This experience allows developers to write, debug, test, and deploy their function code directly from the browser using VS Code for the Web connected to container-based compute. This is the exact same experience a developer would have locally. The container comes ready with all the required dependencies and supports the rich features offered by VS Code, including extensions. This experience can also be used for function apps that already have code deployed to them.
To build this functionality we created an extension that launches VS Code for the Web, a lightweight VS Code that runs in the user’s browser. This VS Code client communicates with the Azure Functions backend infrastructure to establish a connection to a VS Code server using a Dev Tunnel. With the client and server connected via a Dev Tunnel, the user can edit their function as desired.
Open AI extension to build AI apps effortlessly
Azure Functions aims to simplify the development of different types of apps, such as web apps, data pipelines, and related workloads; AI apps are a clear new domain. Azure Functions has a rich extensibility model that helps developers abstract away many of the mundane integration tasks while making the capability available in all the languages that Functions supports.
We are releasing an extension on top of OpenAI which enables the following scenarios in just a few lines of code:
Retrieval Augmented Generation (Bring your own data)
Text completion and Chat Completion
Assistants capability
Key here is that developers can build AI apps in any language of their choice that is supported by Functions and are hosted in a service that can be used within minutes.
Have a look at the following C# code snippet, where in a few lines of code an HTTP-triggered function takes a query prompt as input, pulls semantically similar document chunks into a prompt, and then sends the combined prompt to OpenAI. The results are made available to the function, which simply returns the chat response to the caller.
public class SemanticSearchRequest
{
    [JsonPropertyName("Prompt")]
    public string? Prompt { get; set; }
}

[Function("PromptFile")]
public static IActionResult PromptFile(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearchInput("AISearchEndpoint", "openai-index", Query = "{Prompt}", ChatModel = "%CHAT_MODEL_DEPLOYMENT_NAME%", EmbeddingsModel = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
The challenge of building an extension is making sure that it hides enough of the “glue code” while still giving the developer enough flexibility for their business use case.
Furthermore, these were some additional challenges we faced:
To save state across invocations in the chat completion scenarios, we experimented with various implementations, including Durable Functions, and finally moved to using Table storage to preserve state during conversations.
We had to figure out which embeddings stores to support; we currently support Azure AI Search, Cosmos DB, and Azure Data Explorer.
Like any fast-moving technology, we had to figure out the right strategy for using the underlying OpenAI models and SDKs.
Streaming support in Node and Python
Another long-requested capability added at Build is streaming support in Node.js (GA) and Python (preview).
With this feature, customers can stream HTTP requests to and responses from their Function Apps, using function-exposed request and response APIs. Previously, the amount of data that could be transmitted in an HTTP request was limited by the SKU instance memory size. With HTTP streaming, large amounts of data can be processed with chunking. Especially relevant today, this feature enables new scenarios when creating AI apps, including processing large data, streaming OpenAI responses, and delivering dynamic content.
The journey to enable streaming support is interesting. It started with us aiming for parity between the in-proc and isolated models for .NET. To achieve this, we implemented a new HTTP pipeline wherein the HTTP request is proxied from the Functions Host onto the isolated worker. We were then able to piggyback on the same technology to build streaming support in the other out-of-proc languages.
OpenTelemetry support
At Build we are releasing support for OpenTelemetry in Functions. This allows customers to export telemetry data from both the Functions Host and the language workers using OpenTelemetry semantics. These are some of the interesting design directions we took for this work:
The customer’s code ignores the Functions host and re-creates the context in each language worker for a smooth experience.
Telemetry is the same for Application Insights and other vendors; customers get the same telemetry data no matter which they use. LiveLogs works with Application Insights, but the overall experience doesn’t change.
To make things easier for our customers, each language worker has a package/module that removes extra code.
Thank you and going forward
Thank you to all the customers and developers who have used Azure Functions through the years. We would love for you to try out these new features and capabilities and provide feedback and suggestions.
Going forward we will be working on:
Getting Flex Consumption to GA and keep making improvements in the meanwhile.
Continue to keep enhancing the Open AI extension with more scenarios and models to make Azure Functions the easiest and fastest way to create an AI service.
Continue to enhance our getting started experience and take VSCode.Web integration to more languages and to GA.
Adding support for Streaming to other languages including Java.
Microsoft Tech Community – Latest Blogs –Read More
MATLAB 2021a produces a compilation error on older MATLAB code.
I have a MATLAB application written in MATLAB 2019a which I can compile into a .exe. I wish to port the code to 2021a. The application can be opened and closed without issues. But when I try to compile the code using mcc for 2021a, I get the following error message, which I don't get when using 2019a:
Compiler version: 8.2 (R2021a)
Analyzing file dependencies.
Error while determining required deployable files. Compilation terminated. Details:
Unable to resolve the name dependencies.internal.graph.Node.createFileNode.
How do I debug this? mcc, matlab compiler, 2021a MATLAB Answers — New Questions
Stereo Camera Calibration does not find checkerboard pattern
I am trying to use synchronized images from two cameras in order to obtain their relative rotation and translation using the Stereo Camera Calibrator App. My first step is to obtain the intrinsic matrices for each of the cameras by using the monocular Camera Calibrator App, and then I use the same images as an input to the Stereo Camera Calibrator.
Despite the fact that I am able to calibrate each camera separately, whenever I try to perform the stereo camera calibration, the process of detecting the images takes very long, and after all the images are uploaded, I get the error message that no checkerboard pattern was found. I find it very strange since I am using the same images as in the monocular camera calibration! stereo camera calibration, image processing MATLAB Answers — New Questions
How to combine cells into a single cell?
How to pass from "a" to "b", here following?
a = [{'[1,2)'}, {'[2,6)'}, {'[6,11)'}]; % input
b = {'[1,2)', '[2,6)', '[6,11)'}; % desired output
I tried cat, but it does not work:
b = cat(1, a{:}) cat, array of cells, cell, cells MATLAB Answers — New Questions
Azure Functions at Build 2024 – Technical underpinnings and challenges
Azure Functions is Azure’s primary serverless service used in production by hundreds of thousands of customers who run trillions of executions on it monthly. It was first released in early 2016, and since then we have learned a lot from our customers about what works and where they would like to see more.
Taking all this feedback into consideration, the Azure Functions team has worked hard to improve the experience across the stack, from the initial getting-started experience all the way to running at very high scale. Please see this link for a list of all the capabilities we have released at this year’s Build conference. Taken together, this is one of the most significant sets of releases in Functions history.
In this blog post, I will share a brief glimpse behind the scenes of some of the technical work that the Functions team and partner teams did to meet the expectations of our customers. We will write more technical blogs to explain these areas in depth; this is a brief overview.
Flex Consumption: Burst scale your apps with networking support
We are releasing a new SKU of Functions, Flex Consumption. This SKU addresses all the feedback that we have received over the years on the Functions Consumption plans. We have looked at each part of the stack and made improvements at all levels. There are many new capabilities including:
Scales much faster than before, with user-controlled per-instance concurrency
Serverless “scale to zero” SKU that also supports VNET integrated event sources
Supports always allocated workers
Supports multiple memory sizes
Purpose built backend “Legion”
To enable Flex Consumption, we have created a brand-new purpose-built backend internally called Legion.
To host customer code, Legion relies on nested virtualization on Azure VMSS. This gives us the Hyper-V isolation that is a prerequisite for hostile multi-tenant workloads. Legion was built right from the outset to support scaling to thousands of instances with VNET injection. Efficient use of subnet IP addresses through kernel-level routing was also a unique achievement in Legion.
Functions has a strict cold-start goal for all languages. To meet this goal for every language and version, and to support Functions image updates across all these variants, we created a construct called Pool Groups that lets Functions specify all the parameters of a pool, as well as its networking and upgrade policies.
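The effect of a pre-warmed pool on cold start can be pictured with a small sketch. This is illustrative only – the `PoolGroup` class and its fields are stand-ins, not the actual Legion API:

```python
class PoolGroup:
    """Illustrative stand-in for a pool of pre-warmed workers.

    The real Legion Pool Group also carries networking and upgrade
    policies; here we model only the warm pool itself.
    """

    def __init__(self, language, version, warm_count):
        self.language = language
        self.version = version
        # Workers provisioned ahead of time for this language/version variant.
        self.warm = [f"{language}-{version}-worker-{i}" for i in range(warm_count)]

    def allocate(self):
        """Hand out a pre-warmed worker when one exists, else cold-start one."""
        if self.warm:
            return self.warm.pop(), "warm"  # near-instant assignment
        return f"{self.language}-{self.version}-cold", "cold"  # full provisioning path


pool = PoolGroup("python", "3.11", warm_count=2)
results = [pool.allocate()[1] for _ in range(3)]
print(results)  # first two allocations come from the warm pool, the third is cold
```

The pool is replenished in the background in the real system, so under steady load allocations keep hitting the warm path.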
All this work gave us a solid, scalable, and fast infrastructure on which to build Flex Consumption.
“Trigger Monitor” – scale to 0 and scale out with network restrictions
Flex Consumption also introduces networking features to limit access to the Function app and to trigger on event sources that are network restricted. Because these event sources are network restricted, the multi-tenant scaling component – the scale controller, which monitors the rate of events to decide whether to scale out or in – cannot access them. In the Elastic Premium plan, where we scale down to 1 instance, we solved this by having that instance (which has access to the network-restricted event source) communicate scale decisions to the scale controller. In the Flex Consumption plan, however, we wanted to scale down to 0 instances.
To solve this, we implemented a small scaling component we call the “Trigger Monitor” that is injected into the customer’s VNET. This component can access the network-restricted event source, and the scale controller communicates with it to get scaling decisions.
Scaling Http based apps based on concurrency
When scaling Http-based workloads on Function apps, our previous implementation used an internal heuristic to decide when to scale out. This heuristic was based on Front End servers pinging the workers currently running customers’ workloads and deciding to scale based on the latency of the responses. This implementation used SQL Azure to track workers and their assignments.
In Flex Consumption we have rewritten this logic so that scaling is now based on user-configured concurrency. User-configured concurrency gives customers the flexibility to decide, based on their language and workload, what concurrency to set per instance. So, for example, Python customers don’t have to think about multithreading and can set concurrency = 1 (which is also the default for Python apps). This approach makes the scaling behavior predictable, and it gives customers the ability to control the cost vs. performance tradeoff – if they are willing to tolerate the potential for higher latency, they might unlock cost savings by running each worker at higher levels of concurrency.
In our implementation, we use “request slots” that are managed by the Data Role. We split instances into “request slots” and assign them to different Front End servers. For example: If the per-instance concurrency is set to 16, then once the Data Role chooses an instance to allocate a Function app to, there are 16 request slots that it can hand out to Front Ends. It might give all 16 to a single Front End, or share them across multiple. This removes the need for any coordination between Front Ends – they can use the request slots they receive as much as they like, with the restriction of only one concurrent request per request slot. Also, this implementation uses Cosmos DB to track assignments and workers.
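The request-slot idea above can be sketched in a few lines. This is a simplified model, not the actual Data Role implementation: an instance with per-instance concurrency 16 yields 16 slots, the Data Role hands them to Front Ends in any split, and each Front End runs at most one concurrent request per slot it holds – so no coordination between Front Ends is needed.

```python
def split_into_request_slots(instance_id, per_instance_concurrency):
    """Model an instance as N independent request slots."""
    return [(instance_id, slot) for slot in range(per_instance_concurrency)]


def hand_out(slots, front_ends):
    """Distribute slots across Front Ends round-robin; any split is valid."""
    assignment = {fe: [] for fe in front_ends}
    for i, slot in enumerate(slots):
        assignment[front_ends[i % len(front_ends)]].append(slot)
    return assignment


slots = split_into_request_slots("instance-0", 16)
assignment = hand_out(slots, ["fe-a", "fe-b"])

# Each Front End may use its slots freely – one concurrent request per slot –
# so total concurrency on instance-0 can never exceed 16, with no cross-Front-End
# coordination required.
print(len(assignment["fe-a"]), len(assignment["fe-b"]))  # 8 8
```

Because slots are disjoint, the per-instance concurrency bound holds by construction no matter how the slots are split.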
Along with Legion as the compute provider, significantly larger compute allocation per app and rapid scale-in and capacity reclamation allow us to give customers a much better experience than before.
Scaling Non-Http based apps based on concurrency
Similar to Http apps, we have also enabled non-Http apps to scale based on concurrency. We refer to this as Target Based Scaling. From an implementation perspective, each extension now implements its scaling logic within the extension itself, and the scale controller hosts these extensions. This unifies the scaling logic in one place and bases all scaling on concurrency.
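The heart of Target Based Scaling is a simple formula – the desired instance count is derived from the event source length and a per-instance concurrency target. A minimal sketch (parameter names are illustrative):

```python
import math


def target_instances(event_source_length, target_executions_per_instance, max_instances):
    """Desired worker count under target-based scaling.

    event_source_length: e.g. queue depth or unprocessed event count.
    target_executions_per_instance: the concurrency target for one instance.
    """
    desired = math.ceil(event_source_length / target_executions_per_instance)
    return min(desired, max_instances)  # never exceed the plan's ceiling


print(target_instances(1000, 16, 100))  # a 1000-message backlog asks for 63 instances
print(target_instances(0, 16, 100))     # empty source: scale to zero
```

Each trigger extension supplies its own notion of “event source length” (queue depth, lag, etc.), which is what lets the scale controller treat all event sources uniformly.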
Moving configuration to the Control Plane
One more directional change we are making is to move various configuration properties from AppSettings to the Control Plane. For the Public Preview we are doing this for the areas of Deployment, Scaling, and Language. This is an example configuration which shows the new Control Plane properties. By GA we will move other properties as well.
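For illustration, a Control Plane configuration of this shape might look as follows. The property names here follow the preview `functionAppConfig` schema as I understand it and may change by GA – treat this as a sketch, not a reference:

```json
{
  "properties": {
    "functionAppConfig": {
      "runtime": { "name": "python", "version": "3.11" },
      "scaleAndConcurrency": {
        "maximumInstanceCount": 100,
        "instanceMemoryMB": 2048
      },
      "deployment": {
        "storage": {
          "type": "blobContainer",
          "value": "https://<storage-account>.blob.core.windows.net/app-package",
          "authentication": { "type": "SystemAssignedIdentity" }
        }
      }
    }
  }
}
```

The point of the move is that these become first-class, validated Control Plane properties rather than loosely typed AppSettings strings.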
Functions on Azure Container Apps: Cloud-native microservices deployments
At Build we are also announcing GA of Functions running on Azure Container Apps. This new SKU allows customers to run their apps using the Azure Functions programming model and event-driven triggers alongside other microservices or web applications co-located in the same environment. It allows customers to leverage common networking resources and observability for all their applications. Furthermore, it helps Functions customers who want to leverage frameworks (like Dapr) and compute options (like GPUs) that are only available in Container Apps environments.
We had to keep this SKU consistent with other Function SKUs/plans, even though it ran and scaled on a different platform (Container Apps).
In particular,
We created a new database for this SKU that can handle different schema needs (because of the differences in the underlying infra compared to regular Functions) and improved the query performance. We also redesigned some parts of the control plane for Functions on ACA.
We used ARM extensions routing to securely route traffic to the host and to enable Function Host APIs via ARM for apps running inside an internal VNET.
We built a sync trigger service inside the Azure Container Apps environment that detects the Function App, reads trigger information from the customer’s function code, and automatically creates corresponding KEDA scaler rules for the Function App. This enables automatic scaling of Function Apps on Azure Container Apps (ACA) without customers having to know about the underlying KEDA scaling platform.
We developed a custom KEDA external scaler to support scale-to-zero scenario for Timer trigger functions.
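Conceptually, the sync trigger service performs a translation like the one below. This is a simplified sketch: the metadata fields mirror KEDA’s azure-queue scaler, but the rules the real service generates are an internal detail and vary by trigger type:

```python
def keda_rule_for_trigger(trigger):
    """Translate a Functions trigger definition into a KEDA-style scaler rule.

    Sketch only: the input dict and output field names are illustrative of
    the mapping, not the sync trigger service's actual contract.
    """
    if trigger["type"] == "queueTrigger":
        return {
            "name": f"queue-{trigger['queueName']}",
            "type": "azure-queue",
            "metadata": {
                "queueName": trigger["queueName"],
                "queueLength": "16",  # messages per replica before scaling out
            },
        }
    raise NotImplementedError(f"no scaler mapping for {trigger['type']}")


# A queue trigger read from the customer's function code becomes a scaler rule.
rule = keda_rule_for_trigger({"type": "queueTrigger", "queueName": "orders"})
print(rule["type"], rule["metadata"]["queueName"])  # azure-queue orders
```

The customer only writes the trigger; the scaler rule is derived for them, which is what keeps KEDA invisible in the experience.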
VSCode.Web support: Develop your functions in the browser
The Azure Functions team values developer productivity; our VSCode integration and Core Tools are top-notch and one of our main experience advantages over similar products in this category. However, we are always striving to enhance this experience.
It is often challenging for developers to configure their local dev machine with the right pre-requisites before they can begin. This setup also needs to be updated with the new versions of local tools and language versions. On the other hand, GitHub codespaces and similar developer environments have demonstrated that we can have effective development environments hosted in the cloud.
We are launching a new getting-started experience using VSCode for the Web for Azure Functions. This experience allows developers to write, debug, test and deploy their function code directly from their browser using VS Code for the Web, connected to container-based compute. This is the same exact experience that a developer would have locally. The container comes ready with all the required dependencies and supports the rich features offered by VS Code, including extensions. This experience can also be used for function apps that already have code deployed to them.
To build this functionality we built an extension that launches VS Code for the Web, a lightweight VS Code that runs in the user’s browser. This VS Code client communicates with Azure Functions backend infrastructure to establish a connection to a VS Code server using a Dev Tunnel. With the VS Code client and server connected via a Dev Tunnel, the user can edit their function as desired.
Open AI extension to build AI apps effortlessly
Azure Functions aims to simplify the development of different types of apps, such as web apps, data pipelines, and other related workloads; AI apps are a clear new domain. Azure Functions has a rich extensibility model that helps developers abstract away many of the mundane tasks required for integration, while making the capability available in all the languages that Functions supports.
We are releasing an extension on top of OpenAI which enables the following scenarios in just a few lines of code:
Retrieval Augmented Generation (Bring your own data)
Text completion and Chat Completion
Assistants capability
The key here is that developers can build AI apps in any language of their choice supported by Functions, hosted in a service that can be up and running within minutes.
Have a look at the following code snippet in C#, where in a few lines of code this HTTP trigger function takes a query prompt as input, pulls semantically similar document chunks into a prompt, and then sends the combined prompt to OpenAI. The results are made available to the function, which simply returns that chat response to the caller.
public class SemanticSearchRequest
{
    [JsonPropertyName("Prompt")]
    public string? Prompt { get; set; }
}

[Function("PromptFile")]
public static IActionResult PromptFile(
    [HttpTrigger(AuthorizationLevel.Function, "post")] SemanticSearchRequest unused,
    [SemanticSearchInput("AISearchEndpoint", "openai-index", Query = "{Prompt}", ChatModel = "%CHAT_MODEL_DEPLOYMENT_NAME%", EmbeddingsModel = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] SemanticSearchContext result)
{
    return new ContentResult { Content = result.Response, ContentType = "text/plain" };
}
The challenge of building an extension is making sure that it hides enough of the “glue code” while at the same time giving the developer enough flexibility for their business use case.
Furthermore, these were some additional challenges we faced:
To save state across invocations in the chat completion scenarios, we experimented with various implementations, including Durable Functions, and finally moved to using Table storage to preserve state during conversations.
We had to figure out which embeddings stores to support – we currently support Azure AI Search, Cosmos DB, and Azure Data Explorer.
Like any technology that is moving fast, we had to figure out the right strategy for using the underlying OpenAI models and SDKs.
Streaming support in Node and Python
Another long-requested capability added at Build is streaming support in Node.js (GA) and Python (preview).
With this feature, customers can stream HTTP requests to and responses from their Function Apps, using function-exposed request and response APIs. Previously, the amount of data that could be transmitted in an HTTP request was limited to the SKU instance memory size. With HTTP streaming, large amounts of data can be processed with chunking. Especially relevant today is that this feature enables new scenarios when creating AI apps, including processing large data, streaming OpenAI responses, and delivering dynamic content.
The journey to enable streaming support is interesting. It started with us first aiming for parity between the in-proc and isolated models for .NET. To achieve this, we implemented a new Http pipeline wherein the Http request is proxied from the Functions Host onto the isolated worker. We were able to piggyback on the same technology to build streaming support in other out-of-proc languages.
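The memory benefit of chunking is easy to see in a sketch: instead of materializing the whole payload, the handler consumes and produces fixed-size chunks, so peak memory is bounded by the chunk size rather than the payload size. The plain Python below only illustrates the idea – the actual Functions streaming APIs are language-specific:

```python
def stream_transform(chunks, transform):
    """Process a request body chunk by chunk instead of buffering it whole.

    chunks: an iterable of byte chunks, as a streaming HTTP body would yield.
    Only one chunk is resident at a time, so a multi-GB body no longer has
    to fit in the instance's memory.
    """
    for chunk in chunks:
        yield transform(chunk)


# Simulate a large body arriving as three chunks.
incoming = (f"chunk-{i}".encode() for i in range(3))
outgoing = list(stream_transform(incoming, lambda c: c.upper()))
print(outgoing[0])  # b'CHUNK-0'
```

The same shape works in reverse for responses: yielding chunks to the client as they are produced is what makes streaming OpenAI completions to a browser possible.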
OpenTelemetry support
At Build we are releasing support for OpenTelemetry in Functions. This allows customers to export telemetry data from both the Functions Host and the language workers using OpenTelemetry semantics. These are some of the interesting design directions we took for this work:
The Functions host is transparent to the customer’s code: the telemetry context is re-created in each language worker, so customers get a seamless experience.
Telemetry parity between Application Insights and other vendors: customers get the same telemetry data no matter which vendor they use. Live Logs works with Application Insights, but the overall experience doesn’t change.
To make things easier for our customers, each language worker has a package/module that removes the boilerplate code they would otherwise need to write.
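The vendor-neutral design direction above can be pictured as follows. This is a conceptual sketch, not the actual OpenTelemetry SDK wiring: the host and workers emit the same telemetry records, and the configured exporter is just an interchangeable sink.

```python
class InMemoryExporter:
    """Stand-in for any telemetry backend; exporters are interchangeable sinks."""

    def __init__(self):
        self.received = []

    def export(self, record):
        self.received.append(record)


def emit_invocation_span(exporter, function_name, duration_ms):
    """Both the Functions host and the language worker emit the same shape."""
    exporter.export({
        "name": f"Invoke {function_name}",
        "duration_ms": duration_ms,
        "attributes": {"faas.name": function_name},
    })


app_insights = InMemoryExporter()  # e.g. Application Insights
other_vendor = InMemoryExporter()  # e.g. any OTLP-compatible endpoint
for sink in (app_insights, other_vendor):
    emit_invocation_span(sink, "PromptFile", 12.5)

print(app_insights.received == other_vendor.received)  # True – same data either way
```

Swapping the exporter changes where the telemetry lands, never what it contains.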
Thank you and going forward
Thank you to all the customers and developers who have used Azure Functions through the years. We would love for you to try out these new features and capabilities and provide feedback and suggestions.
Going forward we will be working on:
Getting Flex Consumption to GA, and continuing to make improvements in the meantime.
Continuing to enhance the OpenAI extension with more scenarios and models, to make Azure Functions the easiest and fastest way to create an AI service.
Continuing to enhance our getting-started experience, and taking VSCode.Web integration to more languages and to GA.
Adding streaming support to other languages, including Java.
Microsoft Tech Community – Latest Blogs – Read More
Train the neural network using a two-input XOR gate knowing the initial values: w1 = 0.9; w2 = 1.8; b = -0.9; Requirements achieved: Analyze the steps to train a perceptron
Train the neural network using a two-input XOR gate knowing the initial values:
w1 = 0.9;
w2 = 1.8;
b = -0.9;
Requirements achieved:
Analyze the steps to train a perceptron neural network.
Training programming using MATLAB software.
Use nntool for survey and analysis. 50 MATLAB Answers — New Questions
Plotting Lines and Points in 3D
I need to learn how to plot lines and points in 3D. Can someone please provide an example in MATLAB? Thank you. plot, 3d, matlab MATLAB Answers — New Questions