Category: News
What’s New in Azure App Service at Build 2024
Welcome to Build 2024!
The team will be covering the latest AI enhancements for migrating web applications, how AI helps developers monitor and troubleshoot applications, examples of integrating generative AI into both classic ASP.NET and .NET Core apps, and platform enhancements for scaling, load testing, observability, WebJobs, and sidecar extensibility.
Drop by the breakout session “Using AI with App Service to deploy differentiated web apps and APIs” on Thursday May 23rd (12:30PM to 1:15PM Pacific time – BRK125 – In-Person and Online) to see live demonstrations of all of these topics!
Azure App Service team members will also be in attendance at the Expert Meetup area on the fifth floor – drop by and chat if you are attending Build in-person!
There are additional demos and presentations from partner teams that will cover (in part) App Service specific scenarios, so if you have time consider the additional sessions as well!
Using AI with App Service to deploy differentiated web apps and APIs
BRK125
Thursday, May 23rd
12:30 PM – 1:15 PM Pacific Daylight Time
Breakout Session – In-Person and Online
App innovation in the AI era: cost, benefits, and challenges
BRK120
Tuesday, May 21st
4:45 PM – 5:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
Conversational app and code assessment in Azure Migrate
DEM713
Wednesday, May 22nd
10:30 AM – 10:45 AM Pacific Daylight Time
Demo Session – In-Person Only
Leverage Azure Testing Services to build high quality applications
BRK183
Thursday, May 23rd
1:45 PM – 2:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
Vision to value – SAS accelerates modernization at scale with Azure
BRK170
Thursday, May 23rd
1:45 PM – 2:30 PM Pacific Daylight Time
Breakout Session – In-Person and Online
GitHub Copilot Skills for Azure Migrate
In a recent IDC study of 900 IT decision makers worldwide, 74% of the respondents cited faster innovation, faster time to market, and/or improved business agility as one of the top benefits driving the business case for migrating and modernizing apps with a managed cloud service. Microsoft has been continuously investing in first-party tools to make it easier and faster to migrate using the tools you already use and love. We are excited to announce that Azure Migrate application and code assessment, which was released at Microsoft Ignite 2023, now adds a GitHub Copilot Chat enhancement to the Visual Studio migration extension!
Once you have installed the updated migration extension in Visual Studio and enabled the Visual Studio GitHub Copilot Chat extension, GitHub Copilot Chat will guide you through the individual items found in the application migration report. You can ask questions like “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” and get answers and recommendations from Azure Migrate. (Note: GitHub Copilot licenses are sold separately.)
You can get started by clicking on the “Open Chat” button in the compatibility report as shown below.
This will open an interactive chat session where you can chat with Copilot to iterate through the various assessment suggestions. In this example the migration report recommends moving secrets like database connection strings out of web.config or code, and into a secure location such as Azure Key Vault.
You can interactively step through recommended remediations for each issue:
In this example, after selecting “No, I don’t have an Azure Key Vault…”, Copilot will show the commands necessary to set up Key Vault in Azure:
You can continue to walk through all of the migration suggestions and issues found in the assessment report in this manner, leveraging Copilot to provide specific steps, CLI commands, and code remediations to prepare your application for migration into Azure!
Sidecar Scenarios in Azure App Service on Linux
Sidecar patterns are a way to add extra features to an application, such as logging, monitoring, and caching, without changing the application’s core code. Sidecar support for container-based applications on Azure App Service on Linux is now in public preview! Public preview for using sidecars with source-code based applications is expected to be available this summer.
Common scenarios include attaching monitoring solutions to your application, including popular third-party application performance monitoring (APM) offerings. This example shows a container-based application configured with an OpenTelemetry (OTel) collector sidecar which exports metrics to OTel compatible targets. There are also additional examples showing how to integrate with commonly used ISV solutions such as Datadog with your web applications.
Other common scenarios include attaching a sidecar for in-memory caching using Azure Cache for Redis, and attaching a vector cache sidecar to reduce traffic to back-end LLM resources when adding generative AI to your application.
At Microsoft Build 2024, breakout session BRK125 includes demonstrations of sidecar scenarios for both container-based and source-code based applications!
WebJobs for Azure App Service on Linux
WebJobs are background tasks that run on the same server as the web app and can perform various functions, such as sending emails, executing bash scripts, and running scheduled jobs. WebJobs are now integrated with Azure App Service on Linux, which means they share the same compute resources as the web app to help save costs and ensure consistent performance. WebJobs support for both Azure App Service on Linux and Windows Containers on Azure App Service is broadly available in public preview.
WebJobs enable developers to easily run arbitrary code and scripts, in the language of their choice, on a variety of schedules including continuously, manually on-demand, or on a periodic schedule defined via a crontab expression. For example, Linux developers can continuously run shell scripts that perform background “infra-glue” tasks like scanning through a back-end database and sending email reports.
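As an illustrative sketch, a triggered WebJob typically carries a settings.job file deployed next to the job’s script, holding a six-field NCRONTAB expression (the schedule value below is an invented example, not taken from this post):

```json
{
  "schedule": "0 */10 * * * *"
}
```

This would run the job at the start of every tenth minute; continuous WebJobs need no schedule at all.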
The full list of supported scripting options, as well as information on how to run jobs in a specific language, is available in the updated WebJobs documentation.
Automatic Scaling on Azure App Service
We’re happy to announce that the automatic scaling feature in Azure App Service is now generally available! Automatic scaling provides significant performance improvements for any web app without writing new code or making code changes. With this feature, Azure App Service automatically adjusts the number of application instances and worker instances based on dynamically assessing the incoming HTTP request rate and observed load on the underlying app service plan.
We improved the Automatic Scaling feature based on your feedback during the preview phase with expanded SKU availability and a new scaling metric:
Automatic scaling support has expanded to encompass the P0v3 and P*mv3 SKUs.
A new metric called “AutomaticScalingInstanceCount” was added which shows the number of worker instances your application is consuming.
Let Azure App Service adjust the worker count of your App Service plan to match your web application load, without worrying about auto-scale profiles or manual control. It is like an “automatic cruise control” for your web apps! Also check out our community standup to see this feature in action!
Four Nines’ Resiliency is Kind of a Big Deal!
As of May 1st Azure App Service officially supports 99.99% resiliency when your app service plan is running in an Availability Zone based configuration! Availability Zones are isolated locations within an Azure region that provide high availability and fault tolerance. Please refer to the Service Level Agreement (SLA) documentation dated May 01, 2024 to learn more about the higher SLA.
Azure App Service Environment version 3: New and Notable
For customers using the Isolatedv2 SKU on App Service Environment v3 (ASEv3) with Windows, the new memory-optimized pricing tiers, denoted with an ‘m’ such as in Imv2, are now available and can be configured using the Azure CLI as well as ARM/Bicep! The memory optimized tiers provide a higher memory-to-core ratio than their regular counterparts. For instance, in one of the larger Isolated v2 tiers, both I5v2 and I5mv2 provide the same number of cores at 32 vCPU, but the memory-optimized tier has double the RAM at 256GB. Support for Linux and Windows Containers is expected to be available later this year. Portal support for Windows source-code based apps running on ASEv3 will also be available shortly after Build! Please refer to the product documentation to learn more about the new tiers and availability.
Friendly Reminder: While on the subject of Azure App Service Environment, allow me to rerun our public service announcement about the upcoming retirement of Azure App Service Environment v1 and v2 on August 31, 2024. We recommend starting the migration process as soon as possible (time is quickly running out!). Many customers have already completed this migration with little to no downtime. Please visit the product documentation for detailed steps, tools, and useful resources to help you. Our next community standup, scheduled for June 5th, will also cover this in detail.
TLS 1.3 and More!
We are pleased to announce that TLS 1.3 has been rolled out worldwide and is now generally available across App Service on Public Cloud and Azure for US Government! Customers can configure an application to require TLS 1.3 via the minimum TLS setting available in the Azure portal, as well as via ARM.
With the availability of TLS 1.3, App Service has also updated the TLS cipher suite order to account for recommended TLS 1.3 cipher suites. You will see the following two TLS cipher suites listed on the minimum TLS cipher suite feature:
TLS_AES_256_GCM_SHA384
TLS_AES_128_GCM_SHA256
As part of the TLS updates, App Service on both Windows and Linux supports End to End (E2E) TLS Encryption (in public preview). Incoming HTTPS requests are usually terminated at the App Service front-ends, with the requests proxied to individual workers over HTTP. With the updated E2E TLS Encryption feature, both Windows and Linux applications can choose to encrypt the requests between the App Service front-ends and the workers running applications. E2E TLS Encryption is available for Standard App Service Plans and above, and can be enabled in the Azure portal as well as via ARM and the Azure CLI.
If you have an Azure Key Vault that uses Azure role-based access control (RBAC), you can now import that Key Vault certificate to your web app. Because newly created Key Vaults are configured to use RBAC by default, instead of the legacy access policies, this new support in Azure App Service will make it easier for you to integrate your Key Vault certificates with App Service. Support for importing certificates into App Service from Key Vault using RBAC permissions is available via ARM and the Azure CLI, with Azure portal support planned for the future. Developers can read more about this new support in the documentation.
For more information regarding TLS 1.3 on App Service, the new minimum cipher suites, and updates to E2E TLS Encryption refer to the all-inclusive article on the Microsoft Community Hub!
Better Together with Recommended Azure Service
You can now find recommendations in the Azure Portal for services commonly deployed with Azure App Service! The initial list is curated and primarily focuses on connecting newly created Azure resources to your existing App Service applications. An example of recommended services is shown below.
In addition to the curated list, the new Recommended Services capability in Copilot for Azure offers quick recommendations tailored to your specific application. For instance, it can suggest a popular database suitable for your application type or ensure that you are “on the right track” with commonly deployed services, drawing insights from similar applications.
To use the new Copilot integrated capability, navigate to the Azure Portal and open Copilot for Azure. Examples of the types of questions that you can ask include: “What are commonly deployed services for my app?” or “What is the recommended database for my app?” Read more about these capabilities and try out the new Recommend Services Copilot capability today!
Azure Load Testing Integration
How many times has new code been released to production only to encounter unexpected performance related problems? With the recent release of Azure Load Testing integration with Azure App Service, there has never been a better time to easily run load tests on your web applications. Discover performance problems before they make it into production and uncover race conditions and other load related bugs ahead of time!
You can start setting up load tests directly from the Overview page of your web applications.
As part of this you configure one or more URLs to include in the test run.
You also configure the size of the load test, along with other parameters governing startup behavior and load test duration. After the load test is completed, you will see summarized results for the specific load test where you can also drill down to more detailed metrics.
For more advanced, high-scale production scenarios, Azure Load Testing integration also makes it easy for developers to experiment with different scaling strategies and compare the results to achieve desired workload performance.
Language and Deployment Updates
App Service regularly updates major and minor language versions across both the Windows and Linux variants of the platform. As part of that continuing cadence, App Service on Linux just released PHP 8.3 last week! And just last month WordPress on Linux App Service released its Free Tier option to general availability, which includes a twelve-month no-cost back-end database running on Azure Database for MySQL!
An interesting technical tidbit for the curious: there is also a great write-up here on how to use WordPress on App Service as a headless CMS back-end in conjunction with Azure Static Web Apps.
gRPC has been generally available for App Service on Linux since last November. We’re happy to announce that gRPC support is now available in public preview for App Service on Windows! The team demonstrated using gRPC on Windows and Linux at the recently concluded .NET Day 2024.
Azure App Service on Linux has also added a new deployment status tracking API that surfaces detailed deployment log information when deploying source-code based applications. The deployment status tracking API surfaces detailed step-by-step progress information including specific failure information, a link to follow for more detailed deployment failure logs, and post-deployment app startup information. The platform is continuing to expand this capability with additional integration planned for the Azure Portal. For more details on the new deploy status tracking API and guidance on how to use it see this article!
Next Steps
Developers can learn more about Azure App Service at Getting Started with Azure App Service. Stay up to date on new features and innovations on Azure App Service via Azure Updates as well as the Azure App Service (@AzAppService) X feed. There is always a steady stream of great deep-dive technical articles about App Service as well as the breadth of developer focused Azure services over on the Apps on Azure blog.
Take a look at innovation with .NET, and .NET on Azure App Service, with the recently completed .NET Day 2024 event where the new code assessment migration tools were demonstrated as well as gRPC functionality running on both Windows and Linux App Service.
And lastly take a look at Azure App Service Community Standups hosted on the Microsoft Azure Developers YouTube channel. The Azure App Service Community Standup series regularly features walkthroughs of new and upcoming features from folks that work directly on the product!
Microsoft Tech Community – Latest Blogs –Read More
Azure SQL DB availability portal metric
Azure SQL Database is a modern, cloud-based relational database service that powers a wide variety of applications, including mission-critical, resource-intensive, and the latest generative AI workloads. Azure SQL Database provides an industry-leading availability SLA of 99.99%. We know customers want to monitor the availability of critical Azure services like Azure SQL Database in a granular, consistent way and in near real time, with high-quality data.
We are excited to announce the public preview of the Availability portal metric, enabling you to monitor SLA-compliant availability. This Azure Monitor metric is emitted at 1-minute frequency and has up to 93 days of history. Typically, the latency to display availability is less than three minutes. You can visualize the metric in Azure Monitor and set up alerts too.
Availability is determined based on the database being operational for connections. A minute is considered downtime, or unavailable, if all continuous attempts by users to establish a connection to the database within that minute fail due to a service issue. If there is intermittent unavailability, the duration of continuous unavailability must cross the minute boundary to be counted as downtime.
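As a rough sketch of that per-minute rule (an illustration of the logic as described, not the exact service-side algorithm), availability over a window could be computed like so:

```python
def availability_percent(minute_results):
    """Availability over a window of minutes.

    minute_results: one list of booleans per minute, where each boolean
    records whether a connection attempt in that minute succeeded.
    A minute counts as downtime only if every attempt in it failed.
    """
    total = len(minute_results)
    if total == 0:
        return 100.0
    down = sum(1 for attempts in minute_results if attempts and not any(attempts))
    return 100.0 * (total - down) / total

# Minute 2 has one failed and one successful attempt, so it is still "up";
# only minute 3, where all attempts failed, counts as downtime.
window = [[True, True], [False, True], [False, False]]
print(availability_percent(window))  # 2 of 3 minutes available -> ~66.67
```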
Availability metric data is applicable for a database in the DTU or vCore purchasing model and in all the service tiers (Basic, Standard, Premium, General Purpose, Business Critical & Hyperscale). Both singleton and elastic pool deployments are supported. You can monitor the metric by adding the Availability metric in the portal as shown below:
For comprehensive details on the Availability metric, such as the logic used for computing availability, please refer to the documentation. To learn more about Azure SQL Database Service Level Agreements (SLA), refer to the SLA page.
Leveraging Azure AI Services to Build, Deploy, and Monitor AI Applications with .NET
Azure AI services offer robust tools and platforms that enable developers to bring their AI solutions from concept to production seamlessly. Using .NET 8 alongside these services, developers can experiment, build, and scale their AI applications effectively. This post explores how you can harness the power of Azure AI and .NET to transform your ideas into production-ready AI solutions.
From Prototyping to Production with Azure AI
Start your AI journey by experimenting with local prototypes using Azure AI’s extensive suite of tools. Azure Machine Learning and Azure Cognitive Services provide the necessary components to plug in different AI models and build comprehensive solutions. When you’re ready to scale, Azure OpenAI Service and .NET Aspire enable you to run and monitor your applications efficiently, ensuring high performance and reliability.
Why Build AI Apps with Azure AI Services?
Integrating AI into your applications with Azure AI offers numerous benefits:
Enhanced User Engagement: Deliver more relevant and satisfying user interactions.
Increased Productivity: Automate tasks to save time and reduce errors.
New Business Opportunities: Create innovative, value-added services.
Competitive Advantage: Stay ahead of market trends with cutting-edge AI capabilities.
Getting Started with Azure AI and .NET
Explore the new Azure AI and .NET documentation to learn core AI development concepts. These resources include quickstart guides to help you get hands-on experience with code and start building your AI applications.
Utilizing Semantic Kernel
Semantic Kernel, an open-source SDK, simplifies building AI solutions by enabling easy integration with various models like OpenAI, Azure OpenAI, and Hugging Face. It supports connections to popular vector stores such as Weaviate, Pinecone, and Azure AI Search. By providing common abstractions for dependency injection in .NET, Semantic Kernel allows you to experiment and iterate on your apps with minimal code impact.
Testing and Monitoring with .NET Aspire
.NET Aspire offers robust support for debugging and diagnostics, leveraging the .NET OpenTelemetry SDK. It simplifies the configuration of logging, tracing, and metrics, making it easy to monitor your applications. Azure Monitor and Prometheus can be used to keep an eye on your production deployments, ensuring your applications run smoothly.
Real-World Example: H&R Block’s AI Tax Assistant
H&R Block has developed an innovative AI Tax Assistant using .NET and Azure OpenAI, transforming how clients handle tax-related queries. This assistant provides personalized advice and simplifies the tax process, showcasing the capabilities of Azure AI in building scalable, AI-driven solutions. This project serves as an inspiring example for developers looking to integrate AI into their applications.
Join H&R Block at Microsoft Build as they discuss their journey and experience building AI with .NET and Azure in the session, Infusing your .NET Apps with AI: Practical Tools and Techniques.
Learn More
To dive deeper into AI development with Azure AI and .NET:
Explore the latest .NET and Azure AI documentation
Get started with our quickstart guides for Azure AI and Semantic Kernel
Read the Semantic Kernel announcement post
Share your feedback and connect with our team
Announcing new supported formats for Azure Schema Registry
Ever since its general availability in November 2021, Azure Schema Registry has provided a central repository for schema documents, essential for event-driven and messaging-centric applications, greatly simplifying schema management, governance and evolution and streamlining data pipelines for customers.
When we began, we supported Avro format due to its popularity in the open-source community and within the Apache Kafka ecosystem. However, as architectures have evolved, customers have asked us to enable additional formats so that they can onboard more workflows and use Azure Schema Registry for ALL their schema management needs.
On that note, we’re excited to make a few announcements.
General availability of JSON Schema formats for Kafka applications
Today we are excited to announce the General Availability of JSON Schema support in Azure Schema Registry for Kafka applications.
JSON provides a simple, extensible model for development in an increasingly cloud-native world. JSON Schema is the standard for supporting reliable use of the JSON data format in production-grade solutions. Additionally, JSON Schema’s rich ecosystem supercharges development with tools to generate documentation, interfaces, code, and other artifacts, significantly reducing operational overhead.
Real time streaming workloads on Azure Event Hubs and analytics workloads on Microsoft Fabric can leverage JSON Schema with Azure Schema Registry to simplify schema management at scale.
To learn more about how to use JSON Schema with Azure Schema Registry, for Azure Event Hubs and Apache Kafka applications, refer to the documentation.
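To make the benefit concrete, here is a minimal hand-rolled sketch of the kind of checks a JSON Schema validator performs (the "order" event shape is invented for illustration; real applications would use a full validator library rather than this toy):

```python
import json

# A tiny subset of JSON Schema: type checks plus required fields.
schema = {
    "type": "object",
    "required": ["orderId", "amount"],
    "properties": {
        "orderId": {"type": "string"},
        "amount": {"type": "number"},
    },
}

PY_TYPES = {"string": str, "number": (int, float), "object": dict}

def validate(event, schema):
    """Return True if event satisfies the tiny schema subset above."""
    if not isinstance(event, PY_TYPES[schema["type"]]):
        return False
    if any(field not in event for field in schema.get("required", [])):
        return False
    return all(
        isinstance(event[field], PY_TYPES[rule["type"]])
        for field, rule in schema.get("properties", {}).items()
        if field in event
    )

good = json.loads('{"orderId": "A-1001", "amount": 12.5}')
bad = json.loads('{"orderId": 7}')  # wrong type, missing "amount"
print(validate(good, schema), validate(bad, schema))  # True False
```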
Examples
You can find examples of how to use JSON Schema with Azure Schema Registry SDK for different languages in the following links:
Public preview of Protobuf support
We’re also excited to announce preview support for the Protobuf data format.
Protocol Buffers, often abbreviated as protobuf, is a language-neutral, platform-neutral, and extensible mechanism developed for the serialization of structured data. Protobuf’s rich ecosystem allows developers to define data structures once in .proto files, and then use generated source code to supercharge development workflows.
To utilize protobuf in your client applications, you can use the Schema Registry REST API for various management operations.
To create a new schema group, you can call PutSchemaGroup and specify the “schemaType” in the request body as below –
"schemaType": "Protobuf"
Once the schema group has been created, you may PutSchema in that group, and specify “contentType” in the request headers as below –
content-type: text/vnd.ms.protobuf
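Putting the two calls together, the sketch below only constructs the requests and does not send them (the namespace placeholder, api-version value, and exact paths are assumptions based on the Schema Registry REST API; consult the REST API reference before relying on them):

```python
import json

# Placeholder namespace; a real call would target your Event Hubs namespace
# and carry an Azure AD bearer token in an Authorization header.
NAMESPACE = "https://<your-namespace>.servicebus.windows.net"
API_VERSION = "2022-10"  # assumption; check the REST API reference

def put_schema_group_request(group_name):
    """Build (but do not send) the PutSchemaGroup request for a Protobuf group."""
    url = f"{NAMESPACE}/$schemaGroups/{group_name}?api-version={API_VERSION}"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"schemaType": "Protobuf"})
    return "PUT", url, headers, body

def put_schema_request(group_name, schema_name, proto_source):
    """Build (but do not send) the PutSchema request for a .proto definition."""
    url = (f"{NAMESPACE}/$schemaGroups/{group_name}"
           f"/schemas/{schema_name}?api-version={API_VERSION}")
    headers = {"Content-Type": "text/vnd.ms.protobuf"}
    return "PUT", url, headers, proto_source
```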
Learn more
To learn more about JSON Schema, visit the official website at JSON Schema (json-schema.org)
To learn more about Azure Schema Registry, visit the documentation at Use Azure Schema Registry from Apache Kafka and other apps – Azure Event Hubs | Microsoft Learn
To learn more about Azure Schema Registry REST API, see the Schema Registry REST API overview.
To learn more about Azure Event Hubs, visit the documentation at Azure Event Hubs documentation | Microsoft Learn
Announcing general availability for Kafka compression in Azure Event Hubs
Today, we’re excited to announce that compression for Kafka clients is generally available in Azure Event Hubs.
Azure Event Hubs is a cloud-native streaming service enabling you to easily build scalable, durable, and low-latency workflows over massive volumes of event data. As you onboard more workloads to Azure and build them around Azure Event Hubs, your bandwidth and storage requirements may scale exponentially.
Kafka compression can help with this by reducing the data payloads that are stored and transmitted across your architecture, thus reducing network bandwidth and storage requirements and costs, while still keeping the programming model simple.
To utilize Kafka compression, set the following property in the Kafka producer configuration:
compression.type = gzip
No changes are required on the Kafka consumer side: the compression information is carried in the message header, and the consumer automatically decompresses the payload and makes it available for processing.
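To get a feel for the saving, the snippet below gzip-compresses a batch of repetitive JSON events of the kind a Kafka producer would accumulate before sending (the event shape and sizes are invented for illustration; the actual wire-level saving depends on your payloads):

```python
import gzip
import json

# Invented telemetry batch: repetitive JSON events like those a producer
# would batch before sending with compression.type=gzip.
events = [
    {"deviceId": f"sensor-{i % 10}", "temperature": 21.5, "unit": "C"}
    for i in range(1000)
]
raw = json.dumps(events).encode("utf-8")
compressed = gzip.compress(raw)

print(f"{len(raw)} bytes raw -> {len(compressed)} bytes gzip")
# Decompression recovers the original bytes exactly.
assert gzip.decompress(compressed) == raw
```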
Learn more
To learn more about Kafka compression in Azure Event Hubs, please refer to the documentation.
How to use Bluetooth Low Energy in a compiled application?
I am trying to compile a MATLAB application using BLE communication into a standalone app. The application compiles properly, without any errors, and can be launched, but each time I try to scan for BLE devices using the blelist function, the application crashes.
I used the diary function to get the error message, and it says "Bluetooth permission is not enabled for MATLAB. Allow MATLAB or Terminal to use Bluetooth from the Security & Privacy settings". Of course, I allowed both MATLAB and Terminal to use Bluetooth, but the error still occurs and the application continues to crash.
I don’t know if I missed an application to add to the Bluetooth permissions, like an executable of MATLAB Runtime?
I work on macOS, if that helps.
Thanks for your help!
MATLAB Answers — New Questions
Overview Error when creating RTU Object in MODBUS EXPLORER
When I want to create a device via "Modbus Explorer", it works flawlessly when I choose Modbus TCP. When I try to do the same with Modbus RTU, I get the following window: only an explanation window about the config settings, but no config setting available. Seems to me like a bug. Is there a fix for it?
In MATLAB I get these errors as feedback:
Warning: Error executing listener callback for PostSet event on UserAddNonEnumDeviceStartPage dynamic property in object of matlabshared.mediator.internal.Mediator class:
Error using matlab.ui.internal.toolstrip.DropDown/set.SelectedIndex
"SelectedIndex" property accepts an integer between 1 and the number of items.
Error in matlab.hwmgr.internal.toolstrip.ParamTabHandler/populateTab/createDropDown (line 534)
valueWidget.SelectedIndex = 1;
Error in matlab.hwmgr.internal.toolstrip.ParamTabHandler/populateTab/createWidget (line 436)
valueWidget = createDropDown(paramStruct.ParamID);
Error in matlab.hwmgr.internal.toolstrip.ParamTabHandler/populateTab (line 337)
valueWidget = createWidget(paramStruct);
Error in matlab.hwmgr.internal.toolstrip.ParamTabHandler (line 118)
obj.populateTab();
Error in matlab.hwmgr.internal.Toolstrip/showModalTsTab (line 518)
obj.ModalTabHandler = matlab.hwmgr.internal.toolstrip.ParamTabHandler(…
Error in matlab.hwmgr.internal.Toolstrip/showNonEnumDeviceConfigTab (line 363)
obj.showModalTsTab(descriptor, @obj.confirmAddDevice, @()obj.cancelConfiguringDevice(cancelDestination));
Error in matlab.hwmgr.internal.Toolstrip/startNonEnumDeviceConfig (line 394)
obj.showNonEnumDeviceConfigTab(descriptor, cancelDestination);
Error in matlab.hwmgr.internal.Toolstrip/handleUserAddNonEnumDeviceStartPage (line 115)
obj.startNonEnumDeviceConfig(descriptor, "StartPage");
Error in matlab.hwmgr.internal.MessageLogger/logAndInvoke (line 59)
obj.(methodName)(evt.AffectedObject.(propName));
Error in matlab.hwmgr.internal.MessageLogger>@(src,evt)obj.logAndInvoke(src,evt,eventsAndCallbacks(i,2),eventsAndCallbacks(i,1)) (line 83)
@(src,evt)obj.logAndInvoke(src, evt, eventsAndCallbacks(i,2),eventsAndCallbacks(i,1)));
Error in matlabshared.mediator.internal.Publisher/setMediatorProp (line 68)
obj.Mediator.(src.Name) = evt.AffectedObject.(src.Name);
Error in matlabshared.mediator.internal.Publisher>@(varargin)obj.setMediatorProp(varargin{:}) (line 62)
obj.PropEventListenerArray(end + 1) = obj.listener(propList(i).Name, ‘PostSet’, @obj.setMediatorProp);
Error in matlab.hwmgr.internal.MessageLogger/logAndSet (line 40)
obj.(propName) = propVal;
Error in matlab.hwmgr.internal.ClientAppStartPage/clientAddNonEnumDevice (line 225)
obj.logAndSet("UserAddNonEnumDeviceStartPage", descriptor);
Error in matlab.hwmgr.internal.MessageHandler>@(msg)obj.Subject.(action)(msg) (line 71)
callback (1, 1) function_handle = @(msg)obj.Subject.(action)(msg)
Error in matlab.hwmgr.internal.MessageHandler/callbackHandler (line 157)
feval(obj.Subscriptions(msg.action), msg.params);
Error in matlab.hwmgr.internal.MessageHandler>@(msg)obj.callbackHandler(msg) (line 135)
obj.Subscriber = message.subscribe(obj.PubSubChannel, @(msg)obj.callbackHandler(msg));
Error in message.subscribe
Error in message.internal.executeCallback
> In matlabshared.mediator.internal/Publisher/setMediatorProp (line 68)
In matlabshared.mediator.internal.Publisher>@(varargin)obj.setMediatorProp(varargin{:}) (line 62)
In matlab.hwmgr.internal/MessageLogger/logAndSet (line 40)
In matlab.hwmgr.internal/ClientAppStartPage/clientAddNonEnumDevice (line 225)
In matlab.hwmgr.internal.MessageHandler>@(msg)obj.Subject.(action)(msg) (line 71)
In matlab.hwmgr.internal/MessageHandler/callbackHandler (line 157)
In matlab.hwmgr.internal.MessageHandler>@(msg)obj.callbackHandler(msg) (line 135)
In message.subscribe
In message.internal.executeCallback
Tried this on different MATLAB versions (R2024a, R2022b, R2020a), on a laptop running Windows 11.
Is a fix available for this? If not, how do I properly report bugs?
modbus, modbus explorer, rtu, bug, overview, error MATLAB Answers — New Questions
Is it possible to assign workspace variables to other variables using a for loop?
I have variables like a_1_1, a_1_2, a_1_3, a_2_1, a_2_2, etc. in the workspace. I want to assign these variables to another variable inside a for loop, so that a different variable is loaded on each iteration. For instance,
for i=1:100
for j=1:3
e=a_i_j
end
end
The above is just pseudocode, for understanding.
How can this be done inside a for loop?
image processing, image analysis, image segmentation, image acquisition, digital signal processing, digital image processing, signal processing, signal, mathematics, deep learning, machine learning, biomedical signal, ecg, multiplication MATLAB Answers — New Questions
Roadrunner unknown option suppressCrashHandlerFailures
RoadRunner reports "unknown option suppressCrashHandlerFailures" when launching RoadRunner. roadrunner unknown option suppresscrashhandlerfail MATLAB Answers — New Questions
outlook repeatedly asks for 365 credentials
We have a new RDS solution, so user profiles were newly created, etc. It is an AD environment with two session hosts, profile disks, and a broker server. Since we moved to this solution, Outlook repeatedly asks for Microsoft 365 credentials. This is happening for all users at different intervals: some get asked every time they log on, others can go several days without incident. I have run an Office repair on both servers, recreated profiles, and tried several registry entries, but nothing seems to change this behaviour. Sometimes we also get the 1001 "something went wrong" error, and the user has to sign out of RDS, log back in, and re-enter credentials. Any help would be greatly appreciated.
Read More
Format for time
Dear all,
I transferred my CSV file to Excel, but the time format is different (last column).
Could you please help me change the time to a consistent format?
Thank you very much.
Read More
Azure Arc enabled Servers unable to assess Updates
Starting yesterday, several of my Arc-enabled Win 2019 and 2022 Servers are unable to assess Windows Updates anymore.
Error: “Assessment failed due to this reason: Not able to complete assessment within specified time.”
Is there anything I can do to reinstall “WindowsPatchExtension” as it won’t automatically install itself after removing it from the Extensions?
(It’s not available for manual install, at least not via “Install extension” GUI)
Read More
Build 2024 companion guide: Windows developer security resources
Ready to learn more about the topics discussed in our sessions on “Unleash Windows App Security & Reputation with Trusted Signing” and “The Latest in Windows Security for Developers” at Microsoft Build 2024? Here are some resources and tools to help you get started:
Dive deeper into:
Passkeys in Windows – (1 min.) Get a quick overview of passkeys, how they are used in Windows, and how they compare to passwords.
Virtualization-based security (VBS) key protection – (5 min.) Learn how to create, import, and protect your keys using VBS.
NTLM-less – (4 min.) Find the syntax, parameters, return value, and remarks for the AcquireCredentialsHandle (Negotiate) function.
Personal Data Encryption (PDE) – (5 min.) Get information on prerequisites, protection levels, and more for this security feature that provides file-based data encryption capabilities to Windows.
Virtualization-based security (VBS) Enclave – (1 min.) Explore the functions used by System Services and Secure Enclaves.
Trusted Platform Module attestation – (8 min.) Explore key TPM attestation concepts and capabilities supported by Azure Attestation.
Zero Trust DNS – (4 min.) Learn more about Zero Trust DNS (ZTDNS), currently in development for a future version of Windows to help support those trying to lock down devices so that they can access approved network destinations only.
Win32 app isolation repo – Access the documentation and tools you need to help you isolate your applications.
MSIX app packaging – (3 min.) Learn how to use the MSIX Packaging Tool to repackage your existing desktop applications to the MSIX format.
Trusted Signing – Access how-to guides, quickstart tutorials, and other documentation to help you utilize this Microsoft fully managed end-to-end signing solution for third party developers.
Smart App Control – (3 min.) Get to know the requirements and stages for Smart App Control, plus get answers to frequently asked questions.
Coming soon:
Making admins more secure
Granular privacy controls for all Win32 apps
Continue the conversation. Find best practices. Join us on the Windows security discussion board.
Read More
Submittable accelerates growth and AI innovation with Microsoft
Submittable’s social impact platform enables foundations to manage the end-to-end process for grants, corporate giving, and awards. With Submittable, customers collect and review applications, award funds, track change, and report on results. “We enable the social impact sector to work more efficiently and more equitably,” says Sam Caplan, Vice President of Social Impact at Submittable. “We focus on creating a trust-based environment and deepening relationships between funders, nonprofit organizations, and the communities they serve.”
To multiply its impact, Submittable joined the Digital Natives Partner Program, a select group of innovative ISV partners that Microsoft Tech for Social Impact (TSI) has chosen to invest in and grow with. The Digital Natives program helps cloud-first SaaS companies connect with Microsoft’s vast community of nearly 400,000 nonprofits and continuously innovate their Azure-based solutions.
“We’re continually looking for ways to grow and help our customers serve their communities,” Caplan says. “We get a huge jumpstart by working on the Azure platform, taking advantage of the work Microsoft has done around AI, and being closely partnered with TSI.”
“The partnership with Submittable is built on shared values and a shared commitment to bring the very best capabilities our two organizations have to offer,” explains Jeremy Pitman, Director of the Digital Natives Partner Program at Microsoft TSI. He adds, “By working together on AI and modern technology solutions, we are able to accelerate the impact mission-driven organizations are having in their communities.”
About a year ago, Submittable decided to leave their current cloud platform and move workloads to Microsoft Azure with the support of Redapt, an Azure partner. With this strategic move, Submittable can expand their reach to more mission-driven organizations, develop industry leading AI-powered features, and be leaders in the impact technology landscape alongside Microsoft. “Microsoft’s TSI team has some of the world’s deepest knowledge of the nonprofit sector—a level of expertise we couldn’t get anywhere else,” Caplan says. “It made perfect sense for us to be closely connected so we can continue to learn and contribute to this amazing work.”
Submittable always seeks to streamline the grantmaking and giving process for its customers and applicants. They recently worked with a Microsoft Partner Technical Strategist to develop and launch a series of AI-enabled tools that reduce busy work. The tools, which run on Azure, help applicants fill out forms quickly, reviewers understand and synthesize the most vital information from applications, and administrators easily build new application forms using natural language prompts—no coding expertise required.
“Our AI features are becoming some of our most appreciated benefits on the platform,” Caplan says. “I don’t think we could have introduced them as fast or as well as we did without this partnership.”
The AI tools don’t replace but rather complement the people who make decisions and craft creative solutions. By freeing up their time, these features enable Submittable’s customers and grant applicants to focus on work that moves the needle on their most ambitious goals.
“As nonprofits harness the potential of artificial intelligence, Submittable is mindful that technology alone is not the destination—it’s the vehicle,” says Justin Spelhaug, Corporate Vice President of Tech for Social Impact at Microsoft Philanthropies. “In the realm of AI in social impact, Submittable is a leading example of the future where nonprofits can achieve more than ever before.”
Download the full case study to learn how Submittable is boosting social impact with the Digital Natives Partner Program.
Read More
Announcing new pub-sub capabilities in Azure Event Grid
Azure Event Grid is a highly scalable, fully managed publish-subscribe message distribution service that offers flexible message consumption patterns using the MQTT and HTTP protocols. Our recent efforts have been dedicated to enhancing MQTT compliance, simplifying security for IoT and event-driven solutions, and facilitating seamless integrations. Today, we announce the newest features in these critical areas and their potential impact on your solutions.
Event Grid’s MQTT Broker capability
The MQTT broker capability leverages standard MQTT features and secure authentication methods to enable your clients to communicate in a compliant, secure, and flexible manner. This capability is vital for IoT solutions where efficient communication is essential for seamless operations and where security is critical to protect sensitive data and maintain device integrity. We are excited to announce the release of the following features, reinforcing our commitment to these goals.
Last Will and Testament (LWT) is now generally available (GA), enabling MQTT clients to notify other MQTT clients of their abrupt disconnection through a will message. You can use LWT to ensure a predictable and reliable flow of communication among MQTT clients during unexpected disconnections, which is valuable for scenarios where real-time communication, system reliability, and coordinated actions are critical. You can now also use the will delay interval to reduce the noise from fluctuating disconnections.
OAuth 2.0 authentication is now in public preview, allowing clients to authenticate and connect with the MQTT broker using JSON Web Tokens (JWT) issued by any third-party OpenID Connect (OIDC) identity provider, not just Microsoft Entra ID. MQTT clients can obtain a token from their identity provider (IdP) and present it in the MQTTv5 or MQTTv3.1.1 CONNECT packet to authenticate with the MQTT broker. This authentication method provides a lightweight, secure, and flexible option for MQTT clients that are not provisioned in Azure.
Custom domain names support is now in public preview, allowing users to assign their own domain names to an Event Grid namespace's MQTT and HTTP endpoints, enhancing security and simplifying client configuration. This feature helps enterprises meet their security and compliance requirements and eliminates the need to modify clients already linked to the domain. Assigning a custom domain name to multiple namespaces can also help enhance availability, manage capacity, and handle cross-region client mobility.
Event Grid Namespace Topic
The namespace topic offers flexible consumption of messages through HTTP push and HTTP pull delivery, enabling seamless integration of cloud applications in an asynchronous and decoupled manner. Enterprise applications rely on distributed and asynchronous messaging to scale and evolve independently. Using Event Grid, publishers can send messages to the namespace topic, which subscribers can consume using push or pull delivery. You can also configure the MQTT broker to route MQTT messages to the namespace topic to integrate your IoT data with Azure services and your backend applications.
We are thrilled to announce the release of the following features aimed at enhancing integration with Azure services, providing flexibility in consuming messages in any format, and offering a versatile authentication method.
Push delivery to Azure Event Hubs is now GA, allowing you to configure event subscriptions on namespace topics to send messages to Azure Event Hubs at scale. Event Hubs is a cloud-native data-streaming service that can stream millions of events per second, with low latency, from any source to any destination.
Push delivery to webhooks is now in public preview, allowing you to configure event subscriptions on namespace topics to send messages to your application's public endpoint using a simple, scalable, and reliable delivery mechanism. The webhook doesn't need to be hosted in Azure to receive events from the namespace topic. You can also use an Azure Automation runbook or an Azure Logic App as an event handler via webhooks. With the support of these push delivery destinations, we are offering more options for you to build integrated solutions and data pipelines using namespace topics.
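As a sketch of what a receiving endpoint implements: for CloudEvents-format subscriptions, Event Grid validates a webhook using the CloudEvents abuse-protection handshake, an HTTP OPTIONS request carrying a `WebHook-Request-Origin` header that the endpoint must echo back before deliveries begin. The helper names below are hypothetical, and this is a minimal sketch rather than production code:

```python
import json

def handle_validation_options(request_headers):
    """Answer the OPTIONS handshake by echoing the requesting origin."""
    origin = request_headers.get("WebHook-Request-Origin", "")
    return {
        "WebHook-Allowed-Origin": origin,  # allow deliveries from this origin
        "WebHook-Allowed-Rate": "120",     # deliveries per minute we accept
    }

def handle_events(body_bytes):
    """Parse a delivered batch of CloudEvents and return their ids."""
    events = json.loads(body_bytes)
    if isinstance(events, dict):  # a single event rather than a batch
        events = [events]
    return [e["id"] for e in events]

headers = handle_validation_options({"WebHook-Request-Origin": "eventgrid.azure.net"})
print(headers["WebHook-Allowed-Origin"])  # eventgrid.azure.net
ids = handle_events(b'[{"specversion": "1.0", "id": "1", "type": "demo.event", "source": "/demo"}]')
print(ids)  # ['1']
```

In a real service these two handlers would sit behind the OPTIONS and POST routes of your public endpoint.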
CloudEvents 1.0 binary content mode is now GA, offering the ability to produce messages whose payload is encoded in any media type. With this namespace topic feature, you can publish events using the encoding format of your choice, such as Avro, Protobuf, XML, or even your own proprietary encoding.
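To illustrate what binary content mode means on the wire: per the CloudEvents 1.0 HTTP binding, the event attributes travel as `ce-*` HTTP headers while the body stays in its native encoding, described by the ordinary `Content-Type` header. The helper below is a hypothetical sketch, not Event Grid SDK code:

```python
import uuid
from datetime import datetime, timezone

def binary_mode_message(event_type, source, payload, content_type):
    """Map CloudEvents attributes to ce-* headers; the payload keeps its own media type."""
    headers = {
        "ce-specversion": "1.0",
        "ce-type": event_type,
        "ce-source": source,
        "ce-id": str(uuid.uuid4()),
        "ce-time": datetime.now(timezone.utc).isoformat(),
        "Content-Type": content_type,  # e.g. application/xml, application/x-protobuf
    }
    return headers, payload

headers, body = binary_mode_message(
    "com.contoso.order.created", "/orders-service", b"<order id='7'/>", "application/xml")
print(headers["ce-type"], headers["Content-Type"])
```

Contrast this with structured mode, where the whole event, attributes and payload together, is serialized as a single JSON document.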
Shared Access Signature (SAS) token authentication is now in public preview, allowing you to publish or receive (pull delivery) messages using SAS tokens for authentication. SAS token authentication is a simple mechanism to delegate and enforce access control when sending or receiving messages, scoped to a specific namespace, namespace topic, or event subscription. While Microsoft Entra ID offers exceptional authentication and access control features, you may still want to use SAS for scenarios where the publisher or subscriber is not protected by Microsoft Entra ID, for example when your client is hosted on another cloud provider or uses another identity provider.
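A token of this style can be sketched as follows, following the `r=<resource>&e=<expiration>&s=<signature>` pattern that Event Grid documents for SAS tokens, where the signature is an HMAC-SHA256 over the resource/expiry pair using the base64-decoded access key. The function name and key handling here are illustrative; consult the Event Grid documentation for the exact format your namespace expects:

```python
import base64
import datetime
import hashlib
import hmac
import urllib.parse

def make_sas_token(resource_uri, access_key, ttl_minutes=60):
    """Build an Event Grid style SAS token: r=<resource>&e=<expiry>&s=<signature>."""
    expiry = (datetime.datetime.now(datetime.timezone.utc)
              + datetime.timedelta(minutes=ttl_minutes)).strftime("%m/%d/%Y %H:%M:%S")
    to_sign = ("r=" + urllib.parse.quote_plus(resource_uri)
               + "&e=" + urllib.parse.quote_plus(expiry))
    # Sign the r/e pair with HMAC-SHA256 using the base64-decoded access key.
    signature = base64.b64encode(
        hmac.new(base64.b64decode(access_key), to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return to_sign + "&s=" + urllib.parse.quote_plus(signature)

# Demo with a made-up key; a real key comes from the namespace's access keys.
demo_key = base64.b64encode(b"not-a-real-access-key").decode("utf-8")
token = make_sas_token("https://contoso.westus2-1.eventgrid.azure.net/topics/orders", demo_key)
print(token)
```

The resulting string is sent with the request (for example in an authorization header) to authenticate the publish or receive call.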
Event Grid Basic
Event Grid basic tier enables you to build event-driven solutions by sending events to a diverse set of Azure services or webhooks using push event delivery through custom, system, domain, and partner topics. Event sources include your custom applications, Azure services, and partner (SaaS) services that publish events announcing system state changes (also known as “discrete” events). In turn, Event Grid delivers those events to your subscribers, allowing you to filter events and control delivery settings. We are excited to announce the release of the following features to enhance integration among Event Grid resources, Azure services, and partners.
Namespace topic as a destination is now GA, enabling you to create an event subscription on custom, system, domain, and partner topics (Event Grid Basic) that forwards events to namespace topics. This feature enables you to create data integrations using a diverse set of Event Grid resources. Forwarding events to a namespace topic allows you to take advantage of its pull delivery support and flexibility in consumption.
Microsoft Graph API events are now GA, enabling you to react to resource changes in Microsoft Entra ID, Microsoft Teams, Outlook, SharePoint, and more. This feature is key for enterprise scenarios such as auditing, onboarding, and policy enforcement, to name a few. You can now subscribe to Microsoft Entra ID events through a new, simplified Azure portal experience.
Sending Azure Resource Notifications health resources events to Azure Monitor alerts is now in public preview, notifying you when your workload is impacted so you can act quickly. Azure Resource Notifications events in Event Grid provide reliable and thorough information on the status of your virtual machines, including single-instance VMs, Virtual Machine Scale Set VMs, and Virtual Machine Scale Sets. With this feature, you can get a better understanding of any service issues that may be affecting your resources.
The API Center system topic is now in public preview, enabling you to receive real-time updates when an API definition is added or updated. This means you can keep track of your APIs and ensure they are always up to date, making it easier for stakeholders throughout your organization to discover, reuse, and govern APIs. With this new integration, Event Grid is even more powerful and versatile, giving you the tools you need to build modern, event-driven applications.
Summary
Event Grid continues to invest in MQTT compliance to ensure interoperability and to support non-Azure providers for flexible IoT and event-driven solutions. Additionally, Event Grid is adding more integrations among Event Grid resources, Azure services, and partners, and providing flexible consumption of messages in any format. We are excited to have you try these new capabilities. To learn more about Event Grid, go to the Event Grid documentation. If you have questions or feedback, you can contact us at askgrid@microsoft.com or askmqtt@microsoft.com.
Microsoft Tech Community – Latest Blogs – Read More
Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management – a set of features designed specifically for GenAI use cases.
Azure OpenAI Service offers a diverse set of tools, providing access to advanced models ranging from GPT-3.5 Turbo to GPT-4 and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images.
One of the main resources you have in Azure OpenAI is tokens. Azure OpenAI assigns quota for your model deployments expressed in tokens-per-minute (TPM), which is then distributed across your model consumers, such as different applications, developer teams, or departments within the company.
Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI. Your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured directly at the model deployment level. However, as your application portfolio grows, you end up with multiple apps calling one or even multiple Azure OpenAI endpoints deployed as Pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:
How can we track token usage across multiple applications? How can we cross-charge multiple applications/teams that use Azure OpenAI models?
How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
How can we make sure that the API key is securely distributed across multiple applications?
How can we distribute load across multiple Azure OpenAI endpoints? How can we make sure that PTUs are used first before falling back to Pay-as-you-go instances?
To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:
Azure OpenAI Token Limit Policy
Azure OpenAI Emit Token Metric Policy
Load Balancer and Circuit Breaker
Import Azure OpenAI as an API
Azure OpenAI Semantic Caching Policy (in public preview)
Azure OpenAI Token Limit Policy
Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits, expressed in tokens-per-minute (TPM).
This policy provides flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any other arbitrary key defined through a policy expression. The Azure OpenAI Token Limit policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit.
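As a rough sketch, such a policy might look like the following in the inbound policy section; the counter key, limit, and header name below are illustrative values, not prescriptive ones:

```xml
<!-- Inbound policy sketch: cap each subscription at 500 tokens per minute.
     Values are illustrative; consult the policy reference for your setup. -->
<azure-openai-token-limit
    counter-key="@(context.Subscription.Id)"
    tokens-per-minute="500"
    estimate-prompt-tokens="true"
    remaining-tokens-header-name="x-remaining-tokens" />
```

With prompt-token estimation enabled, a prompt that would exceed the limit can be rejected at the gateway without ever reaching the Azure OpenAI backend.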
Learn more about this policy here.
Azure OpenAI Emit Token Metric Policy
The Azure OpenAI Emit Token Metric policy enables you to send token usage metrics to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers.
This policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice.
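A minimal sketch of this policy's configuration, with an illustrative metric namespace and dimension names (adapt these to your own environment):

```xml
<!-- Inbound policy sketch: emit token usage metrics to Application Insights,
     split by subscription ID and client IP. Names are illustrative. -->
<azure-openai-emit-token-metric namespace="genai-usage">
    <dimension name="Subscription ID" />
    <dimension name="Client IP" value="@(context.Request.IpAddress)" />
</azure-openai-emit-token-metric>
```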
Learn more about this policy here.
Load Balancer and Circuit Breaker
Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints.
With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements.
Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules.
Our updated circuit breaker now features dynamic trip duration, leveraging values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends.
Learn more about load balancer and circuit breaker here.
Import Azure OpenAI as an API
The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click way to import your existing Azure OpenAI endpoints as APIs.
We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup.
Learn more about Import Azure OpenAI as an API here.
Azure OpenAI Semantic Caching policy
Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning.
Our semantic caching mechanism leverages Azure Redis Enterprise or any other external cache compatible with RediSearch and onboarded to Azure API Management. By leveraging the Azure OpenAI Embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach enables reuse of completions, resulting in reduced token consumption and improved response performance.
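Conceptually, this works as a policy pair: a cache lookup in the inbound section and a cache store in the outbound section. The sketch below uses illustrative values for the similarity threshold, embeddings backend, and cache duration:

```xml
<!-- Inbound sketch: return a cached completion when a semantically
     similar prompt (within the score threshold) has been seen before. -->
<azure-openai-semantic-cache-lookup
    score-threshold="0.05"
    embeddings-backend-id="embeddings-deployment"
    embeddings-backend-auth="system-assigned" />

<!-- Outbound sketch: store the new completion for future similar prompts.
     Duration is the cache TTL in seconds. -->
<azure-openai-semantic-cache-store duration="120" />
```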
Learn more about semantic caching policy here.
Get Started with GenAI Gateway Capabilities in Azure API Management
We’re excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.
Your Guide to Surface at Microsoft Build
Join the Surface team as we kick off Microsoft Build! Here are all the details to check out the latest developer news and announcements happening this week (May 21-23, 2024) live from Seattle and streaming online worldwide.
Whether you’re tuning in online (for free) or joining us in person in Seattle, prepare to be immersed in the latest innovations in Microsoft developer tools and technologies. Microsoft Build also offers unparalleled opportunities to network and create valuable connections with industry leaders and like-minded professionals.
During the keynote and throughout the show, the Surface team will be showcasing the all-new Surface Laptop and Surface Pro designed to accelerate AI in the workplace. The latest Laptop and Pro devices announced this week are designed to revolutionize PC productivity and will be available alongside the Surface Pro 10 and Surface Laptop 6 announced earlier this year. By adding to the diversity of hardware within the Surface portfolio, we’re giving customers more choice than ever to select the devices that meet the unique needs of their organization.
This year’s lineup promises an array of engaging sessions. While this blog post focuses on the Surface presence and experience, there will be a lot more to discover at the conference!
Register for Build
Wherever you are, we’re coming to you! Get ready to connect with Microsoft experts, technology professionals, and developers from around the world.
When: May 21-23 in Seattle & online
Why: Check out Microsoft’s latest developer news & announcements
Link to register: Microsoft Build registration
Build Keynote live from Seattle
Check out the latest Microsoft news and announcements for all developers. Join Microsoft CEO Satya Nadella, as well as Rajesh Jha, Mustafa Suleyman, and Kevin Scott at the opening keynote to learn how this era of AI will unlock new opportunities, transform how developers work, and drive business productivity across industries. Don’t miss the demos from Surface and Windows!
Surface sessions at Build
Join us during the sessions below for an exclusive look into the latest advancements in Microsoft Surface technology led by subject matter experts. (All times Pacific)
Session Code
Session Date / Time
Session Title
Speakers
STUDIO67
Tuesday, May 21
11-11:10 a.m.
Microsoft Surface Innovation (plays immediately after keynote)
Sam Morton
Malex Guinand (Microsoft)
DEM780
Tuesday, May 21
1:45-2 p.m.
Microsoft Surface Innovation in Action
Frank Buchholz
Jacob Rhoades (Microsoft)
DEM781
Wednesday, May 22
3:45-4 p.m.
Microsoft Surface Innovation in Action
Frank Buchholz
Jacob Rhoades (Microsoft)
Seattle experience
For those joining us in Seattle, this is your opportunity to be the first to get hands-on with the latest devices and meet with Surface experts at the Expert Meet-up, located in the Ballroom on the top floor (Floor 5) of the Seattle Convention Center Summit Building, to the right of the Microsoft Build Stage. Explore how AI experiences in Windows can enhance productivity with the latest Surface PCs built for the new era of AI.
Expert meet up hours
May 21: 11 a.m.-6:30 p.m.
May 22: 10 a.m.-6:45 p.m.
May 23: 8:30 a.m.-5 p.m.
Social Media
Amplify Microsoft Surface content using the hashtags #MicrosoftSurface and #MSBuild, and watch the Surface LinkedIn page for Microsoft Build posts.
Windows at Build
And for even more exciting updates happening at Build for Windows developers, be sure to check out the Windows Developer blog post.
Discover How App Modernization on Azure Enables Intelligent App Innovation
AI is accelerating the need for app modernization: it drives innovation with AI-powered intelligent apps while simultaneously transforming the speed and process of modernization itself. All of this speaks to the value of intelligent apps, which enable businesses to deliver differentiated customer experiences, product innovation, and business process efficiencies.
At Microsoft we’re focused on helping every customer modernize their legacy applications as fast and easily as possible, rearchitecting them to a modern platform that enables rapid innovation and an environment that’s purpose built for the wave of AI innovation that is coming to the enterprise. To help with this, Azure offers a comprehensive set of services to build and modernize intelligent applications across Platform as a Service (PaaS), Serverless offerings, and managed Kubernetes, integrated with cloud scale databases, and a broad selection of foundational and open models for AI. At this year’s Microsoft Build conference—May 21-23 in Seattle and online—you’ll have the chance to learn more about exciting new product releases, capabilities and enhancements to help you seamlessly build and modernize intelligent applications.
Modernize your App estate for AI and continuous innovation
Legacy applications, built on outdated technologies, are increasingly becoming a roadblock for businesses in the fast-paced digital world. They struggle to manage growing data volumes and user traffic, posing scalability challenges that can lead to performance bottlenecks and system failures. Additionally, their reliance on unsupported technologies leaves them vulnerable to security threats and compliance issues, while cumbersome manual updates hinder AI innovation and agility.
Modernizing these applications is crucial for businesses to stay competitive and thrive in this era of AI. This involves transitioning to scalable architecture, embracing modern technologies like cloud application, data and AI services, and streamlining development processes. According to a recent survey by IDC Research, 43% of respondents said modernizing applications to a PaaS service improved IT Operations productivity, 36% said it helped with scalability to meet peak demand while reducing costs at low usage times, and 35% said it improved security. You can learn more about these findings in the whitepaper, Exploring the Benefits of Cloud Migration and Modernization for the Development of Intelligent Applications.
Product enhancements to accelerate your App modernization journey
GitHub Copilot skills for Azure Migrate Code assessment
Last November at Microsoft Ignite 2023, we launched a new capability within Azure Migrate to help you quickly assess applications and identify key code changes required before migrating these applications to Azure. At this year’s Build, we’re excited to launch and demo the integration of GitHub Copilot skills for Azure Migrate application and code assessment. With this integration of AI-assisted development, developers can ask questions like “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” and get tailored answers and recommendations.
New Azure App Service features to simplify App Modernization
Azure App Service plays a crucial role in app modernization by offering a platform that simplifies and accelerates the process of modernizing legacy applications to cloud. By leveraging Azure App Service, you can quickly and efficiently modernize your legacy apps, making them more scalable, reliable, secure, and adaptable.
At this year’s Microsoft Build we’re happy to announce the public preview of some key Azure App Service features:
Sidecar will let customers add new features like logging, monitoring, or caching to their apps without changing the main code.
With WebJobs, customers can run any code and scripts in the language they prefer on different schedules. Because WebJobs is part of Azure App Service, they use the same compute resources as the web app, helping reduce costs and ensure reliable performance. WebJobs for both Azure App Service on Linux and Windows Containers on Azure App Service is now in public preview.
Other features that are now generally available include automatic scaling, which helps users manage growing site traffic without wasting resources. Automatic scaling improves the performance of any web app without requiring new code or code changes.
Another important update is that Azure App Service now offers 99.99% resiliency when your plan runs in an Availability Zone-based configuration. We encourage you to use four-nines resiliency to bring more complex and more critical workloads to Azure App Service.
Check out this blog for details on these and other exciting Azure App Service updates.
Simplify App Modernization to Kubernetes with AKS Automatic
Now available in public preview, AKS Automatic provides the easiest managed Kubernetes experience for developers, DevOps, and platform engineers. It’s ideal for modern and AI applications, automating AKS cluster setup and management, and embedding best practice configurations. This ensures users of any skill level have security, performance, and dependability for their applications. Check out this blog to learn more.
Modernizing Java applications on Azure
We continue to bring product innovations to the market to enable Java customers to modernize enterprise applications on Azure. Red Hat JBoss EAP is a popular Java application framework used by many enterprise customers. We are excited to share that a free tier and flexible pricing options for Red Hat JBoss EAP on Azure App Service are now generally available, providing customers a low-risk environment to evaluate the technology before committing to a paid subscription.
Azure Spring Apps Enterprise is a fully managed service for Java Spring applications, offered in partnership between Microsoft and VMware Tanzu by Broadcom. We are announcing the public preview of Jobs in Azure Spring Apps, which enables you to deploy and scale Spring Batch applications without worrying about job scalability, cost control, lifecycle, infrastructure, security, and monitoring. This makes it easier to handle large-scale data processing efficiently, leveraging the flexibility and scalability of the cloud.
Gain valuable insights into the potential impact of Azure Spring Apps Enterprise on your organization. Download the Azure Spring Apps Economic Validation Report to explore the quantified benefits in development speed, cost reduction, and security enhancement.
Customers see increased cost efficiency and enhanced security
There’s no better showcase for our deep roster of AI and app modernization tools than the success stories told by valued customers.
Del Monte Foods, a global leader in packaged foods, leveraged Azure Migrate to streamline their cloud migration journey. By using Azure Migrate’s discovery and assessment tools, Del Monte gained insights into their on-premises environment, identifying optimal migration paths and dependencies. This streamlined approach enabled them to reduce the complexity and risks associated with moving their workloads to Azure, ensuring a smooth and efficient transition.
“We reduced certain infrastructure costs by 57%, increased system availability to 99.99%, and improved system performance by 40%,” said Hari Ramakrishnan, Del Monte Foods’ VP of Information Technology.
Nexi Group, a major European PayTech company, partnered with us to revolutionize their digital payments platform, eventually building a solution capable of handling billions of transactions annually. Azure App Service and Azure Kubernetes Service provided the scalability and performance needed to meet fluctuating demands, while our robust security features ensured the protection of sensitive financial data. Azure’s cost-effective model also allowed Nexi to optimize their IT spending, freeing up resources for further investment in strategic initiatives.
Jens Barnow, Nexi Group’s Senior VP of Group Technology, said that by using Microsoft technology the company “achieved faster time to market with new customer propositions, empowered our developer teams to do more, reduced time for provisioning in new locations, and improved cost efficiency.”
Scandinavian Airlines wanted to improve its tech infrastructure to better serve the more than 30 million fliers it carries each year. The airline chose to move from an IaaS solution to PaaS and elected to migrate critical databases and applications first, using Microsoft Azure SQL Database, Azure SQL Managed Instance, Azure App Service, and Defender for Cloud. With support from the Microsoft Customer Success Migration Factory, they completed the complex migration quickly, immediately enhancing their security posture and creating an environment for more streamlined DevOps workflows.
“We are now operating in an environment that fosters innovation,” said Daniel Engberg, Head of AI, Data, and Platforms at Scandinavian Airlines. “The capabilities of Azure empower SAS to develop new applications faster and focus on what really matters: simplifying travelers’ lives and enhancing their overall experience.”
Check out our full line-up of modernization sessions at Build 2024
Building a connected vehicle and app experience with BMW and Azure: BMW utilizes Azure Kubernetes Service, GitHub, and other Azure services to power their MyBMW app, which serves over 13 million active users worldwide. In this session, BMW will share their insights on scaling cloud architecture for increased performance and adopting DevOps practices for global deployment. Tuesday, May 21, 11:30 am PDT. In person and online.
App innovation in the AI era: cost, benefits, and challenges: Modernizing existing apps to leverage AI capabilities can be a daunting task due to cost constraints, technical complexities, and compatibility challenges. This session will explore strategies and best practices for overcoming these obstacles, drawing on the real-world experiences of organizations that have successfully navigated app migration projects. Tuesday, May 21, 4:45 pm PDT. In person and online.
Conversational app and code assessment in Azure Migrate: Discover how Azure Migrate’s latest AI-powered assistant, Azure Copilot, can help simplify your cloud migration process. It evaluates your applications for cloud readiness, identifies potential issues, offers optimization recommendations, and helps reduce costs. Wednesday, May 22, 10:30 am PDT. In person only.
Leverage AKS for your enterprise platform: H&M’s journey: This session focuses on strategies and best practices for building scalable, reliable, and developer-friendly platforms on Azure Kubernetes Service. H&M will share their own experience and insights, and the session will also cover the latest AKS features designed to enhance reliability, performance, security, and ease of use. Thursday, May 23, 9:45 am. In person and online.
Using AI with App Service to deploy differentiated web apps and APIs: Explore how to utilize AI-powered Azure App Service capabilities to modernize your web applications, optimize their performance and reliability, and troubleshoot issues more efficiently. You will see real-world examples of integrating generative AI, as well as how Dynatrace and Datadog simplify observability using AI. Thursday, May 23, 12:30 pm PDT. In person and online.
Vision to value—SAS accelerates modernization at scale with Azure: While recovering from COVID-19 travel restrictions, Scandinavian Airlines chose Azure app and database services as the foundation for modernizing their critical operational applications. This session will cover their modernization journey and explore the latest features in Azure App Service and Azure SQL. Thursday, May 23, 1:45 pm PDT. In person and online.
Scaling Spring Batch in the Cloud: This session focuses on Spring Batch, a framework for large-scale data processing, and how it’s used in Azure Spring Apps Enterprise for cloud-based batch jobs. You’ll learn about essential Spring Batch features and how to effectively leverage them in the cloud. Online only.
Spring Unlocks the Power of AI Platform—End-to-End: Discover how AI can elevate your Spring projects, making them more interactive, intelligent, and innovative. Learn how to seamlessly integrate AI into your Spring applications, adding AI-powered features to improve self-service and customer support in existing apps, and discover techniques to create AI-driven user interfaces that provide more natural and intuitive interactions with your users. Online only.
Join us at Build and bring your app development into the future
Are you ready to unlock new opportunities for innovation and empower your business with cutting-edge AI? Join us in person or online at this year’s Microsoft Build to discover how modernizing your applications can make them more scalable, reliable, and efficient, better able to handle increasing user demands while reducing operational costs, and ready for AI.
Finally, don’t forget about the full suite of robust tools Azure offers to enable your app modernization journey, including Azure Migrate and Modernize, Azure Innovate, Azure Solution Assessments, Azure Landing Zone Accelerators, Reliable Web App Patterns and more!
By embracing app modernization on Azure, your organization can stay competitive, agile, and prepared for the future of Intelligent Apps.
Introducing the Azure AI Model Inference API
We launched the model catalog in early 2023, featuring a curated selection of open-source models that customers can trust and consume in their organizations. The Azure AI model catalog offers around 1,700 models, including the latest open-source innovations like Llama 3 from Meta, as well as models coming from partnerships with OpenAI, Mistral, and Cohere. Each of these models has unique capabilities that we think will inspire developers to build the next generation of copilots.
A screenshot of the Azure AI model catalog displaying the large diversity of models it brings in for customers.
To enable developers to get access to these capabilities consistently, we are launching the Azure AI model inference API, which enables customers to consume the capabilities of those models using the same syntax and the same language. This API introduces a single layer of abstraction, yet it allows each model to expose unique features or capabilities that differentiate them.
Starting today, all language models deployed as serverless APIs support this common API. This means you can interact with GPT-4 from Azure OpenAI Service, Cohere Command R+, or Mistral-Large in the same way, without the need for translations. Soon, these capabilities will also be available on models deployed to our self-hosted managed endpoints, unifying the consumption experience across all our inferencing solutions.
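To make the idea concrete, here is a small JavaScript sketch: the request payload and headers stay identical across models, and only the endpoint changes. The endpoint URLs, api-version string, and key placeholders below are illustrative assumptions, not documented values.

```javascript
// Sketch: one request builder works for any model behind the common API.
// Endpoint URLs and the api-version string are illustrative placeholders.
function buildChatRequest(endpoint, apiKey, messages) {
  return {
    url: `${endpoint}/chat/completions?api-version=<api-version>`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ messages }),
    },
  };
}

// Only the endpoint differs between models; the payload is unchanged.
const messages = [{ role: 'user', content: 'Summarize this in one line.' }];
const mistral = buildChatRequest('https://<mistral-deployment>.example', '<key>', messages);
const cohere = buildChatRequest('https://<cohere-deployment>.example', '<key>', messages);
// A real call would then be: fetch(mistral.url, mistral.options)
console.log(mistral.options.body === cohere.options.body); // true
```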
A graphic depicting that the Azure AI model inference API can be used to consume models from Cohere, Mistral, Meta LLama, Microsoft (including Phi-3) and Core42 JAIS, and it’s also compatible with Azure OpenAI Service model deployments.
This is the same API utilized within Azure AI Studio and Azure Machine Learning. You can use prompt flow to build intelligent experiences that can now leverage various models. Since all the models speak the same language, you can run evaluations to compare them across different tasks, determine which one to use for each use case, exploit their strengths, and build experiences that delight your customers.
A screenshot showing the comparison of 3 different evaluations of a prompt flow chat application that implements the RAG pattern. The evaluation was run using 3 different variations of the same prompt flow, each of them running GPT-3.5 Turbo, Mistral-Large, and Llama2-70B-chat, using the same prompt message for the generation step.
We see more customers eager to combine innovation from across the industry and redefine what’s possible. They are either integrating foundational models as building blocks for their applications or fine-tuning them to achieve niche capabilities in specific use cases. We hope this new set of capabilities unlocks the experimentation and evaluation required to move across models, picking the right one for the right job.
We want to help customers to fulfill that mission, empowering every single AI developer to achieve more with Azure AI.
Resources:
Azure AI Model Inference API
Deploy models as serverless APIs
Model Catalog and Collections in Azure AI Studio
Azure Functions: Support for HTTP Streams in Node.js is Generally Available
Azure Functions support for HTTP streams in Node.js is now generally available. With this feature, customers can stream HTTP requests to and responses from their Node.js Functions Apps. Streaming is a mechanism for transmitting data over HTTP in a continuous and efficient manner. Instead of sending all the data at once, streams allow data to be transmitted in small, manageable chunks, which can be processed as they arrive. They are particularly valuable in scenarios where low latency, high throughput, and efficient resource utilization are crucial.
Ever since the preview release of this feature in February of this year, we’ve heard positive feedback from customers who have used it for a variety of use cases, including streaming OpenAI responses, delivering dynamic content, and processing large data sets. Today, at Microsoft Build 2024, we are announcing the general availability of HTTP streaming for Azure Functions using Node.js.
HTTP streams in Node.js are supported only in the Azure Functions Node.js v4 programming model. Follow these instructions to try out HTTP streams in your Node.js apps.
Prerequisites
Version 4 of the Node.js programming model. Learn more about the differences between v3 and v4 in the migration guide.
Version 4.3.0 or higher of the @azure/functions npm package.
If running in Azure, version 4.28 of the Azure Functions runtime.
If running locally, version 4.0.5530 of Azure Functions Core Tools.
Steps
If you plan to stream large amounts of data, adjust the app setting `FUNCTIONS_REQUEST_BODY_SIZE_LIMIT` in Azure or in your local.settings.json file. The default value is 104857600, which limits requests to a maximum of 100 MB.
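For local development, the setting goes under the Values section of local.settings.json. The sketch below raises the limit to roughly 200 MB; the value and the surrounding settings are illustrative:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "FUNCTIONS_REQUEST_BODY_SIZE_LIMIT": "209715200"
  }
}
```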
Add the following code to your app in any file included by your main field.
JavaScript
const { app } = require('@azure/functions');
app.setup({ enableHttpStream: true });
TypeScript
import { app } from '@azure/functions';
app.setup({ enableHttpStream: true });
That’s it! The existing HttpRequest and HttpResponse types in programming model v4 already support many ways of handling the body, including as a stream. Use request.body to truly benefit from streams, but rest assured you can continue to use methods like request.text(), which will always return the body as a string.
Example code
Below is an example of an HTTP triggered function that receives data via an HTTP POST request and streams this data to a specified output file:
JavaScript
const { app } = require('@azure/functions');
const { createWriteStream } = require('fs');
const { Writable } = require('stream');

app.http('httpTriggerStreamRequest', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const writeStream = createWriteStream('<output file path>');
        await request.body.pipeTo(Writable.toWeb(writeStream));
        return { body: 'Done!' };
    },
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import { createWriteStream } from 'fs';
import { Writable } from 'stream';

export async function httpTriggerStreamRequest(
    request: HttpRequest,
    context: InvocationContext
): Promise<HttpResponseInit> {
    const writeStream = createWriteStream('<output file path>');
    await request.body.pipeTo(Writable.toWeb(writeStream));
    return { body: 'Done!' };
}

app.http('httpTriggerStreamRequest', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: httpTriggerStreamRequest,
});
Below is an example of an HTTP triggered function that streams a file’s content as the response to incoming HTTP GET requests:
JavaScript
const { app } = require('@azure/functions');
const { createReadStream } = require('fs');

app.http('httpTriggerStreamResponse', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const body = createReadStream('<input file path>');
        return { body };
    },
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import { createReadStream } from 'fs';

export async function httpTriggerStreamResponse(
    request: HttpRequest,
    context: InvocationContext
): Promise<HttpResponseInit> {
    const body = createReadStream('<input file path>');
    return { body };
}

app.http('httpTriggerStreamResponse', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: httpTriggerStreamResponse,
});
Try it out!
For a ready-to-run sample app with more detailed code, check out this GitHub repo.
Check out this GitHub repo to discover the journey of building a generative AI application using LangChain.js and Azure. This demo explores the development process from idea to production, using a RAG-based approach for a Q&A system based on YouTube video transcripts.
Do try out this feature and share your valuable feedback with us on GitHub.