Azure Arc-enabled servers unable to assess updates
Starting yesterday, several of my Arc-enabled Win 2019 and 2022 Servers are unable to assess Windows Updates anymore.
Error: “Assessment failed due to this reason: Not able to complete assessment within specified time.”
Is there anything I can do to reinstall “WindowsPatchExtension” as it won’t automatically install itself after removing it from the Extensions?
(It’s not available for manual install, at least not via “Install extension” GUI)
Build 2024 companion guide: Windows developer security resources
Ready to learn more about the topics discussed in our sessions on “Unleash Windows App Security & Reputation with Trusted Signing” and “The Latest in Windows Security for Developers” at Microsoft Build 2024? Here are some resources and tools to help you get started:
Dive deeper into:
Passkeys in Windows – (1 min.) Get a quick overview of passkeys, how they are used in Windows, and how they compare to passwords.
Virtualization-based security (VBS) key protection – (5 min.) Learn how to create, import, and protect your keys using VBS.
NTLM-less – (4 min.) Find the syntax, parameters, return value, and remarks for the AcquireCredentialsHandle (Negotiate) function.
Personal Data Encryption (PDE) – (5 min.) Get information on prerequisites, protection levels, and more for this security feature that provides file-based data encryption capabilities to Windows.
Virtualization-based security (VBS) Enclave – (1 min.) Explore the functions used by System Services and Secure Enclaves.
Trusted Platform Module attestation – (8 min.) Explore key TPM attestation concepts and capabilities supported by Azure Attestation.
Zero Trust DNS – (4 min.) Learn more about Zero Trust DNS (ZTDNS), currently in development for a future version of Windows to help support those trying to lock down devices so that they can access approved network destinations only.
Win32 app isolation repo – Access the documentation and tools you need to help you isolate your applications.
MSIX app packaging – (3 min.) Learn how to use the MSIX Packaging Tool to repackage your existing desktop applications to the MSIX format.
Trusted Signing – Access how-to guides, quickstart tutorials, and other documentation to help you use this fully managed, end-to-end Microsoft signing solution for third-party developers.
Smart App Control – (3 min.) Get to know the requirements and stages for Smart App Control, plus get answers to frequently asked questions.
Coming soon:
Making admins more secure
Granular privacy controls for all Win32 apps
Continue the conversation. Find best practices. Join us on the Windows security discussion board.
Submittable accelerates growth and AI innovation with Microsoft
Submittable’s social impact platform enables foundations to manage the end-to-end process for grants, corporate giving, and awards. With Submittable, customers collect and review applications, award funds, track changes, and report on results. “We enable the social impact sector to work more efficiently and more equitably,” says Sam Caplan, Vice President of Social Impact at Submittable. “We focus on creating a trust-based environment and deepening relationships between funders, nonprofit organizations, and the communities they serve.”
To multiply its impact, Submittable joined the Digital Natives Partner Program, a select group of innovative ISV partners that Microsoft Tech for Social Impact (TSI) has chosen to invest in and grow with. The Digital Natives program helps cloud-first SaaS companies connect with Microsoft’s vast community of nearly 400,000 nonprofits and continuously innovate their Azure-based solutions.
“We’re continually looking for ways to grow and help our customers serve their communities,” Caplan says. “We get a huge jumpstart by working on the Azure platform, taking advantage of the work Microsoft has done around AI, and being closely partnered with TSI.”
“The partnership with Submittable is built on shared values and a shared commitment to bring the very best capabilities our two organizations have to offer,” explains Jeremy Pitman, Director of the Digital Natives Partner Program at Microsoft TSI. He adds, “By working together on AI and modern technology solutions, we are able to accelerate the impact mission-driven organizations are having in their communities.”
About a year ago, Submittable decided to leave its previous cloud platform and move workloads to Microsoft Azure with the support of Redapt, an Azure partner. With this strategic move, Submittable can expand its reach to more mission-driven organizations, develop industry-leading AI-powered features, and lead in the impact technology landscape alongside Microsoft. “Microsoft’s TSI team has some of the world’s deepest knowledge of the nonprofit sector—a level of expertise we couldn’t get anywhere else,” Caplan says. “It made perfect sense for us to be closely connected so we can continue to learn and contribute to this amazing work.”
Submittable always seeks to streamline the grantmaking and giving process for its customers and applicants. They recently worked with a Microsoft Partner Technical Strategist to develop and launch a series of AI-enabled tools that reduce busy work. The tools, which run on Azure, help applicants fill out forms quickly, reviewers understand and synthesize the most vital information from applications, and administrators easily build new application forms using natural language prompts—no coding expertise required.
“Our AI features are becoming some of our most appreciated benefits on the platform,” Caplan says. “I don’t think we could have introduced them as fast or as well as we did without this partnership.”
The AI tools don’t replace but rather complement the people who make decisions and craft creative solutions. By freeing up their time, these features enable Submittable’s customers and grant applicants to focus on work that moves the needle on their most ambitious goals.
“As nonprofits harness the potential of artificial intelligence, Submittable is mindful that technology alone is not the destination—it’s the vehicle,” says Justin Spelhaug, Corporate Vice President of Tech for Social Impact at Microsoft Philanthropies. “In the realm of AI in social impact, Submittable is a leading example of the future where nonprofits can achieve more than ever before.”
Download the full case study to learn how Submittable is boosting social impact with the Digital Natives Partner Program.
Announcing new pub-sub capabilities in Azure Event Grid
Azure Event Grid is a highly scalable, fully managed publish-subscribe message distribution service that offers flexible message consumption patterns using the MQTT and HTTP protocols. Our recent efforts have been dedicated to enhancing MQTT compliance, simplifying security for IoT and event-driven solutions, and facilitating seamless integrations. Today, we announce the newest features in these critical areas and their potential impact on your solutions.
Event Grid’s MQTT Broker capability
The MQTT broker capability leverages standard MQTT features and secure authentication methods to enable your clients to communicate in a compliant, secure, and flexible manner. This capability is vital for IoT solutions where efficient communication is essential for seamless operations and where security is critical to protect sensitive data and maintain device integrity. We are excited to announce the release of the following features, reinforcing our commitment to these goals.
Last Will and Testament (LWT): is now generally available (GA), enabling MQTT clients to notify other MQTT clients of their abrupt disconnections through a will message. You can use LWT to ensure a predictable and reliable flow of communication among MQTT clients during unexpected disconnections, which is valuable for scenarios where real-time communication, system reliability, and coordinated actions are critical. You can now also use the will delay interval to reduce noise from transient disconnections.
OAuth 2.0 authentication: is now in public preview, allowing clients to authenticate and connect with the MQTT broker using JSON Web Tokens (JWT) issued by any third-party OpenID Connect (OIDC) identity provider, aside from Microsoft Entra ID. MQTT clients can get their token from their identity provider (IDP) and provide the token in the MQTTv5 or MQTTv3.1.1 CONNECT packets to authenticate with the MQTT broker. This authentication method provides a lightweight, secure, and flexible option for MQTT clients that are not provisioned in Azure.
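As a sketch of what such a token looks like, the snippet below builds a toy HS256-signed JWT with only the standard library. In practice the token is issued by the OIDC provider (commonly RS256-signed), and the client presents it when connecting; in MQTTv5, enhanced authentication fields of the CONNECT packet carry it. All names and claims here are illustrative:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_demo_jwt(claims: dict, secret: bytes) -> str:
    # Toy HS256 token for illustration only; a real identity provider
    # issues the token and the MQTT client merely presents it.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

# The resulting token would travel in the CONNECT packet's enhanced
# authentication fields when the client connects to the broker.
token = make_demo_jwt({"sub": "device-01", "aud": "mqtt-broker"}, b"demo-secret")
```

The broker validates the signature and claims against the configured identity provider before accepting the connection.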
Custom domain names support: is now public preview, allowing users to assign their own domain names to Event Grid namespace’s MQTT and HTTP endpoints, enhancing security and simplifying client configuration. This feature helps enterprises meet their security and compliance requirements and eliminates the need to modify clients already linked to the domain. Assigning a custom domain name to multiple namespaces can also help enhance availability, manage capacity, and handle cross-region client mobility.
Event Grid Namespace Topic
The namespace topic offers flexible consumption of messages through HTTP Push and HTTP Pull delivery, enabling seamless integration of cloud applications in an asynchronous and decoupled manner. Enterprise applications rely on distributed and asynchronous messaging to scale and evolve independently. Using Event Grid, publishers can send messages to the namespace topic, which subscribers can consume using push or pull delivery. Additionally, you can also configure the MQTT broker to route MQTT messages to the namespace topic to integrate your IoT data with Azure services and your backend applications.
We are thrilled to announce the release of the following features aimed at enhancing integration with Azure services, providing flexibility in consuming messages in any format, and offering a versatile authentication method.
Push delivery to Azure Event Hubs: is now GA, allowing you to configure event subscriptions on namespace topics to send messages to Azure Event Hubs at scale. Event Hubs is a cloud native data streaming service that can stream millions of events per second, with low latency, from any source to any destination.
Push delivery to Webhooks: is now public preview, allowing you to configure event subscriptions on namespace topics to send messages to your application’s public endpoint using a simple, scalable, and reliable delivery mechanism. The webhook doesn’t need to be hosted in Azure to receive events from the namespace topic. You can also use an Azure Automation runbook or an Azure Logic App as an event handler via webhooks. With the support of these push delivery destinations, we are offering more options for you to build integrated solutions and data pipelines using namespace topics.
CloudEvents 1.0 Binary Content Mode: is now GA, offering the ability to produce messages whose payload is encoded in any media type. With this namespace topic feature, you can publish events using the encoding format of your choice like AVRO, Protobuf, XML, or even your own proprietary encoding.
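In binary content mode, the CloudEvents attributes travel as `ce-*` HTTP headers while the body carries the raw payload in whatever media type you choose. The sketch below builds such a publish request with the standard library; the host, path, event type, and payload bytes are illustrative placeholders, not a definitive endpoint shape:

```python
import urllib.request

def build_binary_mode_request(namespace_host: str, topic: str, payload: bytes,
                              content_type: str) -> urllib.request.Request:
    """Build an HTTP publish request in CloudEvents 1.0 binary content mode.

    Event attributes are carried as ce-* headers; the body is the raw
    payload in any encoding (AVRO, Protobuf, XML, ...).
    """
    headers = {
        "ce-specversion": "1.0",
        "ce-type": "com.contoso.telemetry.reading",  # illustrative event type
        "ce-source": "/sensors/plant-a",             # illustrative source
        "ce-id": "1234-abcd",
        "Content-Type": content_type,
    }
    url = f"https://{namespace_host}/topics/{topic}:publish"
    return urllib.request.Request(url, data=payload, headers=headers,
                                  method="POST")

req = build_binary_mode_request("contoso.westus2-1.eventgrid.azure.net",
                                "iot-telemetry", b"\x08\x96\x01",
                                "application/x-protobuf")
```

Because the attributes are headers rather than part of a JSON envelope, the payload stays opaque to the transport, which is what lets you publish in any encoding format.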
Shared Access Signature (SAS) tokens authentication: is now public preview, allowing you to publish or receive (pull delivery) messages using SAS tokens for authentication. SAS token authentication is a simple mechanism to delegate and enforce access control when sending or receiving messages scoped to a specific namespace, namespace topic, or event subscription. While Microsoft Entra ID offers exceptional authentication and access control features, you may still want to use SAS for scenarios where the publisher or subscriber is not protected by Microsoft Entra ID; for example, your client is hosted on another cloud provider, or uses another identity provider.
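For classic Event Grid endpoints, the documented SAS token shape is `r=<resource>&e=<expiry>&s=<signature>`, where the signature is an HMAC-SHA256 over the `r` and `e` parameters using the access key. The sketch below follows that pattern with the standard library; the key and resource URI are placeholders, and the exact scheme for namespace resources may differ, so check the service documentation before relying on it:

```python
import base64
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timedelta, timezone

def build_sas_token(resource_uri: str, b64_key: str, ttl_minutes: int = 60) -> str:
    """Sketch of an Event Grid style SAS token: r=<resource>&e=<expiry>&s=<sig>."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    encoded_r = urllib.parse.quote_plus(resource_uri)
    encoded_e = urllib.parse.quote_plus(expiry.strftime("%m/%d/%Y %H:%M:%S"))
    to_sign = f"r={encoded_r}&e={encoded_e}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(b64_key), to_sign.encode(),
                 hashlib.sha256).digest()
    ).decode()
    return f"{to_sign}&s={urllib.parse.quote_plus(signature)}"

demo_key = base64.b64encode(b"demo-access-key").decode()  # placeholder key
token = build_sas_token("https://contoso.eventgrid.azure.net/api/events", demo_key)
```

Because the token is scoped to a resource and an expiry, it can be handed to a publisher or subscriber that is not protected by Microsoft Entra ID without sharing the underlying access key's full lifetime.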
Event Grid Basic
Event Grid basic tier enables you to build event-driven solutions by sending events to a diverse set of Azure services or webhooks using push event delivery through custom, system, domain, and partner topics. Event sources include your custom applications, Azure services, and partner (SaaS) services that publish events announcing system state changes (also known as “discrete” events). In turn, Event Grid delivers those events to your subscribers, allowing you to filter events and control delivery settings. We are excited to announce the release of the following features to enhance integration among Event Grid resources, Azure services, and partners.
Namespace Topic as a destination: is now GA, enabling you to create an event subscription on custom, system, domain, and partner topics (Event Grid Basic) that forwards events to namespace topics. This feature will enable you to create data integrations using a diverse set of Event Grid resources. Forwarding events to the namespace topic allows you to take advantage of its pull delivery support and flexibility in consumption.
Microsoft Graph API events: is now GA, enabling you to react to resource changes in Microsoft Entra ID, Microsoft Teams, Outlook, SharePoint, etc. This feature is key for enterprise scenarios such as auditing, onboarding, and policy enforcement, to name a few. Now, you can subscribe to Microsoft Entra ID events through a new simplified Azure portal experience.
Sending Azure Resource Notifications health resources events to Azure Monitor alerts: is now public preview, notifying you when your workload is impacted so you can act quickly. Azure Resource Notifications events in Event Grid provide reliable and thorough information on the status of your virtual machines, including single-instance VMs, Virtual Machine Scale Set VMs, and Virtual Machine Scale Sets. With this feature, you can get a better understanding of any service issues that may be affecting your resources.
API Center system topic: is now public preview, enabling you to receive real-time updates when an API definition is added or updated. This means you can keep track of your APIs and ensure they are always up to date, making it easier for stakeholders throughout your organization to discover, reuse, and govern APIs. With this new integration, Event Grid is now even more powerful and versatile, giving you the tools you need to build modern, event-driven applications.
Summary
Event Grid continues to invest in MQTT compliance to ensure interoperability and to support non-Azure providers for flexible IoT and event-driven solutions. Additionally, Event Grid is adding more integrations among Event Grid resources, Azure services, and partners, and providing flexible consumption of messages in any format. We are excited to have you try these new capabilities. To learn more about Event Grid, go to the Event Grid documentation. If you have questions or feedback, you can contact us at askgrid@microsoft.com or askmqtt@microsoft.com.
Microsoft Tech Community – Latest Blogs
Introducing GenAI Gateway Capabilities in Azure API Management
We are thrilled to announce GenAI Gateway capabilities in Azure API Management – a set of features designed specifically for GenAI use cases.
Azure OpenAI Service offers a diverse set of tools, providing access to advanced models such as GPT-3.5 Turbo, GPT-4, and GPT-4 Vision, enabling developers to build intelligent applications that can understand, interpret, and generate human-like text and images.
One of the main resources you manage in Azure OpenAI is tokens. Azure OpenAI assigns quota to your model deployments, expressed in tokens per minute (TPM), which is then distributed across your model consumers: different applications, developer teams, departments within the company, and so on.
Starting with a single application integration, Azure makes it easy to connect your app to Azure OpenAI. Your intelligent application connects to Azure OpenAI directly using an API key, with a TPM limit configured at the model deployment level. However, when your application portfolio starts to grow, you are presented with multiple apps calling a single or even multiple Azure OpenAI endpoints deployed as pay-as-you-go or Provisioned Throughput Units (PTUs) instances. That comes with certain challenges:
How can we track token usage across multiple applications? How can we do cross charges for multiple applications/teams that use Azure OpenAI models?
How can we make sure that a single app does not consume the whole TPM quota, leaving other apps with no option to use Azure OpenAI models?
How can we make sure that the API key is securely distributed across multiple applications?
How can we distribute load across multiple Azure OpenAI endpoints? How can we make sure that PTUs are used first before falling back to Pay-as-you-go instances?
To tackle these operational and scalability challenges, Azure API Management has built a set of GenAI Gateway capabilities:
Azure OpenAI Token Limit Policy
Azure OpenAI Emit Token Metric Policy
Load Balancer and Circuit Breaker
Import Azure OpenAI as an API
Azure OpenAI Semantic Caching Policy (in public preview)
Azure OpenAI Token Limit Policy
Azure OpenAI Token Limit policy allows you to manage and enforce limits per API consumer based on the usage of Azure OpenAI tokens. With this policy you can set limits, expressed in tokens-per-minute (TPM).
This policy provides the flexibility to assign token-based limits on any counter key, such as subscription key, IP address, or any other arbitrary key defined through a policy expression. The Azure OpenAI Token Limit policy also enables pre-calculation of prompt tokens on the Azure API Management side, minimizing unnecessary requests to the Azure OpenAI backend if the prompt already exceeds the limit.
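The policy itself is configured declaratively in API Management, but the underlying idea can be sketched as a per-key sliding window of token usage, with requests rejected up front when their estimated prompt tokens would exceed the remaining budget. Names and the windowing strategy here are illustrative, not the policy's actual implementation:

```python
import time
from collections import defaultdict, deque

class TokenLimiter:
    """Illustrative tokens-per-minute limiter keyed by an arbitrary
    counter key (e.g. subscription key or client IP)."""

    def __init__(self, tpm_limit: int):
        self.tpm_limit = tpm_limit
        self.usage = defaultdict(deque)  # key -> deque of (timestamp, tokens)

    def try_consume(self, key, estimated_tokens, now=None):
        """Return True and record usage if the request fits the budget;
        reject it (before calling the backend) otherwise."""
        now = time.monotonic() if now is None else now
        window = self.usage[key]
        while window and now - window[0][0] >= 60:  # drop entries older than 1 min
            window.popleft()
        used = sum(tokens for _, tokens in window)
        if used + estimated_tokens > self.tpm_limit:
            return False
        window.append((now, estimated_tokens))
        return True

limiter = TokenLimiter(tpm_limit=1000)
```

Keying the counter on an arbitrary value is what lets one gateway enforce fair TPM shares across many apps or teams sharing the same deployment.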
Learn more about this policy here.
Azure OpenAI Emit Token Metric Policy
The Azure OpenAI Emit Token Metric policy enables you to send token usage metrics to Azure Application Insights, providing an overview of the utilization of Azure OpenAI models across multiple applications or API consumers.
This policy captures prompt, completion, and total token usage metrics and sends them to the Application Insights namespace of your choice. Moreover, you can configure or select from pre-defined dimensions to split token usage metrics, enabling granular analysis by subscription ID, IP address, or any custom dimension of your choice.
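Conceptually, splitting token metrics by a dimension is a roll-up like the following. The record and dimension names are illustrative, not the policy's actual schema:

```python
from collections import defaultdict

def aggregate_token_metrics(records, dimension):
    """Roll up prompt/completion/total token counts by a chosen dimension
    (e.g. subscription_id or client_ip), mirroring how the emitted metrics
    can be split for analysis in Application Insights."""
    totals = defaultdict(lambda: {"prompt": 0, "completion": 0, "total": 0})
    for rec in records:
        bucket = totals[rec[dimension]]
        bucket["prompt"] += rec["prompt_tokens"]
        bucket["completion"] += rec["completion_tokens"]
        bucket["total"] += rec["prompt_tokens"] + rec["completion_tokens"]
    return dict(totals)

usage = [
    {"subscription_id": "team-a", "client_ip": "10.0.0.4",
     "prompt_tokens": 120, "completion_tokens": 80},
    {"subscription_id": "team-a", "client_ip": "10.0.0.5",
     "prompt_tokens": 200, "completion_tokens": 100},
    {"subscription_id": "team-b", "client_ip": "10.0.0.6",
     "prompt_tokens": 50, "completion_tokens": 25},
]
by_team = aggregate_token_metrics(usage, "subscription_id")
```

A roll-up like this is what makes cross-charging feasible: each team's share of the shared deployment's token consumption falls out of the aggregation.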
Learn more about this policy here.
Load Balancer and Circuit Breaker
Load Balancer and Circuit Breaker features allow you to spread the load across multiple Azure OpenAI endpoints.
With support for round-robin, weighted (new), and priority-based (new) load balancing, you can now define your own load distribution strategy according to your specific requirements.
Define priorities within the load balancer configuration to ensure optimal utilization of specific Azure OpenAI endpoints, particularly those purchased as PTUs. In the event of any disruption, a circuit breaker mechanism kicks in, seamlessly transitioning to lower-priority instances based on predefined rules.
Our updated circuit breaker now features dynamic trip duration, leveraging values from the retry-after header provided by the backend. This ensures precise and timely recovery of the backends, maximizing the utilization of your priority backends to their fullest.
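A minimal sketch of priority-based selection with a retry-after-driven circuit breaker follows. The backend names and this tiny API are illustrative, not API Management's configuration model:

```python
import time

class Backend:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority   # lower number = higher priority (e.g. PTU)
        self.tripped_until = 0.0   # circuit is open until this timestamp

    def available(self, now):
        return now >= self.tripped_until

def pick_backend(backends, now=None):
    """Prefer the highest-priority healthy backend; fall back to lower
    priorities only while its circuit is open."""
    now = time.monotonic() if now is None else now
    healthy = [b for b in backends if b.available(now)]
    return min(healthy, key=lambda b: b.priority) if healthy else None

def trip(backend, retry_after_seconds, now=None):
    """Open the circuit for the duration the backend suggested via its
    retry-after header, so recovery is timely rather than a fixed guess."""
    now = time.monotonic() if now is None else now
    backend.tripped_until = now + retry_after_seconds

ptu = Backend("aoai-ptu", priority=1)
paygo = Backend("aoai-paygo", priority=2)
```

Driving the trip duration from retry-after is the key design choice: traffic returns to the PTU backend as soon as it signals readiness, maximizing its utilization.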
Learn more about load balancer and circuit breaker here.
Import Azure OpenAI as an API
The new Import Azure OpenAI as an API experience in Azure API Management provides an easy, single-click way to import your existing Azure OpenAI endpoints as APIs.
We streamline the onboarding process by automatically importing the OpenAPI schema for Azure OpenAI and setting up authentication to the Azure OpenAI endpoint using managed identity, removing the need for manual configuration. Additionally, within the same user-friendly experience, you can pre-configure Azure OpenAI policies, such as token limit and emit token metric, enabling swift and convenient setup.
Learn more about Import Azure OpenAI as an API here.
Azure OpenAI Semantic Caching policy
Azure OpenAI Semantic Caching policy empowers you to optimize token usage by leveraging semantic caching, which stores completions for prompts with similar meaning.
Our semantic caching mechanism leverages Azure Cache for Redis Enterprise or any other external cache compatible with RediSearch that has been onboarded to Azure API Management. By leveraging the Azure OpenAI embeddings model, this policy identifies semantically similar prompts and stores their respective completions in the cache. This approach enables completion reuse, resulting in reduced token consumption and improved response performance.
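A toy illustration of the lookup logic: store (embedding, completion) pairs and return a cached completion when a new prompt's embedding is within a similarity threshold. In the real policy, embeddings come from an Azure OpenAI embeddings deployment and the store is a RediSearch-compatible cache; the vectors and threshold below are made up:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

class SemanticCache:
    """Toy semantic cache keyed by embedding similarity."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.entries = []  # list of (embedding, completion)

    def lookup(self, embedding):
        best = max(self.entries, key=lambda e: cosine(e[0], embedding),
                   default=None)
        if best and cosine(best[0], embedding) >= self.threshold:
            return best[1]   # cache hit: reuse the stored completion
        return None          # cache miss: call the model, then store()

    def store(self, embedding, completion):
        self.entries.append((embedding, completion))

cache = SemanticCache(threshold=0.9)
cache.store([1.0, 0.0, 0.2], "Paris is the capital of France.")
```

A hit skips the completion call entirely, which is where the token savings and latency improvement come from; the threshold trades reuse rate against the risk of serving a subtly wrong cached answer.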
Learn more about semantic caching policy here.
Get Started with GenAI Gateway Capabilities in Azure API Management
We’re excited to introduce these GenAI Gateway capabilities in Azure API Management, designed to empower developers to efficiently manage and scale their applications leveraging Azure OpenAI services. Get started today and bring your intelligent application development to the next level with Azure API Management.
Your Guide to Surface at Microsoft Build
Join the Surface team as we kick off Microsoft Build! Here are all the details to check out the latest developer news and announcements happening this week (May 21-23, 2024) live from Seattle and streaming online worldwide.
Whether you’re tuning in online (for free) or joining us in person in Seattle, prepare to be immersed in the latest innovations in Microsoft developer tools and technologies. Microsoft Build also offers unparalleled opportunities to network and create valuable connections with industry leaders and like-minded professionals.
During the keynote and throughout the show, the Surface team will be showcasing the all-new Surface Laptop and Surface Pro designed to accelerate AI in the workplace. The latest Laptop and Pro devices announced this week are designed to revolutionize PC productivity and will be available alongside Surface Pro 10 and Surface Laptop 6, announced earlier this year. By adding to the diversity of hardware within the Surface portfolio, we’re giving customers more choice than ever to select the right devices that meet the unique needs of their organization.
This year’s lineup promises an array of engaging sessions. While this blog post focuses on the Surface presence and experience, there will be a lot more to discover at the conference!
Register for Build
Wherever you are, we’re coming to you! Get ready to connect with Microsoft experts, technology professionals, and developers from around the world.
When: May 21-23 in Seattle & online
Why: Check out Microsoft’s latest developer news & announcements
Link to register: Microsoft Build registration
Build Keynote live from Seattle
Check out the latest Microsoft news and announcements for all developers. Join Microsoft CEO Satya Nadella, as well as Rajesh Jha, Mustafa Suleyman, and Kevin Scott at the opening keynote to learn how this era of AI will unlock new opportunities, transform how developers work, and drive business productivity across industries. Don’t miss the demos from Surface and Windows!
Surface sessions at Build
Join us during the sessions below for an exclusive look into the latest advancements in Microsoft Surface technology led by subject matter experts. (All times Pacific)
STUDIO67 – Tuesday, May 21, 11-11:10 a.m. – Microsoft Surface Innovation (plays immediately after keynote) – Sam Morton and Malex Guinand (Microsoft)
DEM780 – Tuesday, May 21, 1:45-2 p.m. – Microsoft Surface Innovation in Action – Frank Buchholz and Jacob Rhoades (Microsoft)
DEM781 – Wednesday, May 22, 3:45-4 p.m. – Microsoft Surface Innovation in Action – Frank Buchholz and Jacob Rhoades (Microsoft)
Seattle experience
For those joining us in Seattle, this is your opportunity to be the first to get hands-on with the latest devices and meet with Surface experts in the Expert Meet-up, located in the Seattle Convention Center Summit Building on the top floor (Floor 5), the Ballroom, to the right of the Microsoft Build Stage. Explore the possibilities of how AI experiences in Windows can enhance productivity with the latest Surface PCs built for the new era of AI.
Expert meet up hours
May 21: 11 a.m.-6:30 p.m.
May 22: 10 a.m.-6:45 p.m.
May 23: 8:30 a.m.-5 p.m.
Social Media
Amplify Microsoft Surface content using the hashtags #MicrosoftSurface and #MSBuild, and watch the Surface LinkedIn page for Microsoft Build posts.
Windows at Build
And for even more exciting updates happening at Build for Windows developers, be sure to check out the Windows Developer blog post.
Discover How App Modernization on Azure Enables Intelligent App Innovation
AI is accelerating the need for app modernization to drive innovation with AI-powered intelligent apps, while simultaneously transforming the speed and process of modernization itself. All of this speaks to the value of intelligent apps, which enable businesses to deliver differentiated customer experiences, product innovation, and business process efficiencies.
At Microsoft we’re focused on helping every customer modernize their legacy applications as fast and easily as possible, rearchitecting them to a modern platform that enables rapid innovation and an environment that’s purpose built for the wave of AI innovation that is coming to the enterprise. To help with this, Azure offers a comprehensive set of services to build and modernize intelligent applications across Platform as a Service (PaaS), Serverless offerings, and managed Kubernetes, integrated with cloud scale databases, and a broad selection of foundational and open models for AI. At this year’s Microsoft Build conference—May 21-23 in Seattle and online—you’ll have the chance to learn more about exciting new product releases, capabilities and enhancements to help you seamlessly build and modernize intelligent applications.
Modernize your App estate for AI and continuous innovation
Legacy applications, built on outdated technologies, are increasingly becoming a roadblock for businesses in the fast-paced digital world. They struggle to manage growing data volumes and user traffic, posing scalability challenges that can lead to performance bottlenecks and system failures. Additionally, their reliance on unsupported technologies leaves them vulnerable to security threats and compliance issues, while cumbersome manual updates hinder AI innovation and agility.
Modernizing these applications is crucial for businesses to stay competitive and thrive in this era of AI. This involves transitioning to scalable architecture, embracing modern technologies like cloud application, data and AI services, and streamlining development processes. According to a recent survey by IDC Research, 43% of respondents said modernizing applications to a PaaS service improved IT Operations productivity, 36% said it helped with scalability to meet peak demand while reducing costs at low usage times, and 35% said it improved security. You can learn more about these findings in the whitepaper, Exploring the Benefits of Cloud Migration and Modernization for the Development of Intelligent Applications.
Product enhancements to accelerate your App modernization journey
GitHub Copilot skills for Azure Migrate Code assessment
Last November at Microsoft Ignite 2023, we launched a new capability within Azure Migrate to help you quickly assess applications and identify key code changes required before migrating these applications to Azure. At this year’s Build, we’re excited to launch and demo the integration of GitHub Copilot skills for Azure Migrate application and code assessment. With this integration of AI-assisted development, developers can ask questions like “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” and get tailored answers and recommendations.
New Azure App Service features to simplify App Modernization
Azure App Service plays a crucial role in app modernization by offering a platform that simplifies and accelerates the process of modernizing legacy applications for the cloud. By leveraging Azure App Service, you can quickly and efficiently modernize your legacy apps, making them more scalable, reliable, secure, and adaptable.
At this year’s Microsoft Build we’re happy to announce the public preview of some key Azure App Service features:
Sidecar will let customers add new features like logging, monitoring, or caching to their apps without changing the main code.
With WebJobs, customers can run any code and scripts in the language they prefer on different schedules. Now that WebJobs is part of Azure App Service, jobs use the same compute resources as the web app, which helps reduce costs and ensures reliable performance. WebJobs for both Azure App Service on Linux and Windows Containers on Azure App Service is now in public preview.
Other features that are now generally available include automatic scaling, which helps users manage growing site traffic without wasting resources. Automatic scaling improves the performance of any web app without requiring code changes.
Another important update is that Azure App Service now offers 99.99% resiliency when your plan runs in an Availability Zone-based configuration. We encourage you to use four-nines resiliency to bring more complex and more critical workloads to Azure App Service.
Check out this blog for details on these and other exciting Azure App Service updates.
Simplify App Modernization to Kubernetes with AKS Automatic
Now available in public preview, AKS Automatic provides the easiest managed Kubernetes experience for developers, DevOps, and platform engineers. It’s ideal for modern and AI applications, automating AKS cluster setup and management, and embedding best practice configurations. This ensures users of any skill level have security, performance, and dependability for their applications. Check out this blog to learn more.
Modernizing Java applications on Azure
We continue to bring product innovations to the market to enable Java customers to modernize enterprise applications on Azure. Red Hat JBoss EAP is a popular Java application framework used by many enterprise customers. We are excited to share that a free tier and flexible pricing options for Red Hat JBoss EAP on Azure App Service are now generally available, providing customers a low-risk environment to evaluate the technology before committing to a paid subscription.
Azure Spring Apps Enterprise is a fully managed service for Java Spring applications jointly offered in partnership between Microsoft and VMWare Tanzu by Broadcom. We are announcing the public preview of Jobs in Azure Spring Apps to enable you to deploy and scale Spring Batch applications without worrying about job scalability, cost control, lifecycle, infrastructure, security, and monitoring. This makes it easier to handle large-scale data processing efficiently, leveraging the flexibility and scalability of the cloud.
Gain valuable insights into the potential impact of Azure Spring Apps Enterprise on your organization. Download the Azure Spring Apps Economic Validation Report to explore the quantified benefits in development speed, cost reduction, and security enhancement.
Customers see increased cost efficiency and enhanced security
There’s no better showcase for our deep roster of AI and app modernization tools than the success stories told by valued customers.
Del Monte Foods, a global leader in packaged foods, leveraged Azure Migrate to streamline their cloud migration journey. By using Azure Migrate’s discovery and assessment tools, Del Monte gained insights into their on-premises environment, identifying optimal migration paths and dependencies. This streamlined approach enabled them to reduce the complexity and risks associated with moving their workloads to Azure, ensuring a smooth and efficient transition.
“We reduced certain infrastructure costs by 57%, increased system availability by 99.99%, and improved system performance by 40%,” said Hari Ramakrishnan, Del Monte Foods’ VP of Information Technology.
Nexi Group, a major European PayTech company, partnered with us to revolutionize their digital payments platform, eventually building a solution capable of handling billions of transactions annually. Azure App Service and Azure Kubernetes Service provided the scalability and performance needed to meet fluctuating demands, while our robust security features ensured the protection of sensitive financial data. Azure’s cost-effective model also allowed Nexi to optimize their IT spending, freeing up resources for further investment in strategic initiatives.
Jens Barnow, Nexi Group’s Senior VP of Group Technology, said that by using Microsoft technology the company “achieved faster time to market with new customer propositions, empowered our developer teams to do more, time for provisioning in new location, and cost efficiency.”
Scandinavian Airlines wanted to improve its tech infrastructure to better serve the more than 30 million fliers it carries each year. The airline chose to move from an IaaS solution to PaaS and elected to migrate critical databases and applications first, using Microsoft Azure SQL Database, Azure SQL Managed Instance, Azure App Service, and Defender for Cloud. With support from the Microsoft Customer Success Migration Factory, they completed the complex migration quickly, immediately enhancing their security posture and creating an environment for more streamlined DevOps workflows.
“We are now operating in an environment that fosters innovation,” said Daniel Engberg, Head of AI, Data, and Platforms at Scandinavian Airlines. “The capabilities of Azure empower SAS to develop new applications faster and focus on what really matters: simplifying travelers’ lives and enhancing their overall experience.”
Check out our full line-up of modernization sessions at Build 2024
Building a connected vehicle and app experience with BMW and Azure: BMW utilizes Azure Kubernetes Service, GitHub, and other Azure services to power their MyBMW app, which serves over 13 million active users worldwide. In this session, BMW will share their insights on scaling cloud architecture for increased performance and adopting DevOps practices for global deployment. Tuesday, May 21, 11:30 am PDT. In person and online.
App innovation in the AI era: cost, benefits, and challenges: Modernizing existing apps to leverage AI capabilities can be a daunting task due to cost constraints, technical complexities, and compatibility challenges. This session will explore strategies and best practices for overcoming these obstacles, drawing on the real-world experiences of organizations that have successfully navigated app migration projects. Tuesday, May 21, 4:45 pm PDT. In person and online.
Conversational app and code assessment in Azure Migrate: Discover how Azure Migrate’s latest AI-powered assistant, Azure Copilot, can help simplify your cloud migration process. It evaluates your applications for cloud readiness, identifies potential issues, offers optimization recommendations, and helps reduce costs. Wednesday, May 22, 10:30 am PDT. In person only.
Leverage AKS for your enterprise platform: H&M’s journey: This session focuses on strategies and best practices for building scalable, reliable, and developer-friendly platforms on Azure Kubernetes Service. H&M will share their own experience and insights, and the session will also cover the latest AKS features designed to enhance reliability, performance, security, and ease of use. Thursday, May 23, 9:45 am PDT. In person and online.
Using AI with App Service to deploy differentiated web apps and APIs: Explore how to utilize AI-powered Azure App Service capabilities to modernize your web applications, optimize their performance and reliability, and troubleshoot issues more efficiently. You will see real-world examples of integrating generative AI, as well as how Dynatrace and Datadog simplify observability using AI. Thursday, May 23, 12:30 pm PDT. In person and online.
Vision to value—SAS accelerates modernization at scale with Azure: While recovering from COVID-19 travel restrictions, Scandinavian Airlines chose Azure app and database services as the foundation for modernizing their critical operational applications. This session will cover their modernization journey and explore the latest features in Azure App Service and Azure SQL. Thursday, May 23, 1:45 pm PDT. In person and online.
Scaling Spring Batch in the Cloud: This session focuses on Spring Batch, a framework for large-scale data processing, and how it’s used in Azure Spring Apps Enterprise for cloud-based batch jobs. You’ll learn about essential Spring Batch features and how to effectively leverage them in the cloud. Online only.
Spring Unlocks the Power of AI Platform—End-to-End: Discover how AI can elevate your Spring projects, making them more interactive, intelligent, and innovative. Learn how to seamlessly integrate AI into your Spring applications, adding AI-powered features to improve self-service and customer support in existing apps and discover techniques to create AI-driven user interfaces that provide more natural and intuitive interactions with your users. Online only.
Join us at Build and bring your app development into the future
Are you ready to unlock new opportunities for innovation and empower your business with cutting-edge AI? Join us in person or online at this year’s Microsoft Build to discover how modernizing your applications can make them more scalable, reliable, and efficient, better able to handle increasing user demands while reducing operational costs and becoming AI-ready.
Finally, don’t forget about the full suite of robust tools Azure offers to enable your app modernization journey, including Azure Migrate and Modernize, Azure Innovate, Azure Solution Assessments, Azure Landing Zone Accelerators, Reliable Web App Patterns and more!
By embracing app modernization on Azure, your organization can stay competitive, agile, and prepared for the future of Intelligent Apps.
Introducing the Azure AI Model Inference API
We launched the model catalog in early 2023, featuring a curated selection of open-source models that customers can trust and consume in their organizations. The Azure AI model catalog offers around 1,700 models, including the latest open-source innovations like Llama 3 from Meta, as well as models from partnerships with OpenAI, Mistral, and Cohere. Each of these models comes with unique capabilities that we think will inspire developers to build the next generation of copilots.
A screenshot of the Azure AI model catalog displaying the large diversity of models it brings in for customers.
To enable developers to get access to these capabilities consistently, we are launching the Azure AI model inference API, which enables customers to consume the capabilities of those models using the same syntax and the same language. This API introduces a single layer of abstraction, yet it allows each model to expose unique features or capabilities that differentiate them.
Starting today, all language models deployed as serverless APIs support this common API. This means you can interact with GPT-4 from Azure OpenAI Service, Cohere Command R+, or Mistral-Large in the same way, without the need for translations. Soon, these capabilities will also be available on models deployed to our self-hosted managed endpoints, unifying the consumption experience across all our inferencing solutions.
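To illustrate what a single syntax across models means in practice, here is a minimal, hedged sketch that builds a request against a shared chat-completions route. The `api-version` value, route, and auth header shown are assumptions for illustration only; consult the current Azure AI Model Inference API reference before relying on them.

```python
import json
import urllib.request


def build_chat_request(endpoint: str, api_key: str, messages: list,
                       api_version: str = "2024-05-01-preview") -> urllib.request.Request:
    """Build a POST against a common /chat/completions route.

    The same request shape works whether the serverless deployment behind
    `endpoint` hosts GPT-4, Cohere Command R+, or Mistral-Large.
    NOTE: api_version and the auth header are illustrative assumptions.
    """
    url = f"{endpoint.rstrip('/')}/chat/completions?api-version={api_version}"
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
```

Because only `endpoint` (and the key) changes between deployments, swapping one model for another does not require rewriting the calling code.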
A graphic depicting that the Azure AI model inference API can be used to consume models from Cohere, Mistral, Meta LLama, Microsoft (including Phi-3) and Core42 JAIS, and it’s also compatible with Azure OpenAI Service model deployments.
This is the same API utilized within Azure AI Studio and Azure Machine Learning. You can use prompt flow to build intelligent experiences that can now leverage various models. Since all the models speak the same language, you can run evaluations to compare them across different tasks, determine which one to use for each use case, exploit their strengths, and build experiences that delight your customers.
A screenshot showing the comparison of 3 different evaluations of a prompt flow chat application that implements the RAG pattern. The evaluation was run using 3 different variations of the same prompt flow, each of them running GPT-3.5 Turbo, Mistral-Large, and Llama2-70B-chat, using the same prompt message for the generation step.
We see more customers eager to combine innovation from across the industry and redefine what’s possible. They are either integrating foundational models as building blocks for their applications or fine-tuning them to achieve niche capabilities for specific use cases. We hope this new set of capabilities unlocks the experimentation and evaluation required to move across models, picking the right one for the right job.
We want to help customers fulfill that mission, empowering every single AI developer to achieve more with Azure AI.
Resources:
Azure AI Model Inference API
Deploy models as serverless APIs
Model Catalog and Collections in Azure AI Studio
Azure Functions: Support for HTTP Streams in Node.js is Generally Available
Azure Functions support for HTTP streams in Node.js is now generally available. With this feature, customers can stream HTTP requests to and responses from their Node.js Functions Apps. Streaming is a mechanism for transmitting data over HTTP in a continuous and efficient manner. Instead of sending all the data at once, streams allow data to be transmitted in small, manageable chunks, which can be processed as they arrive. They are particularly valuable in scenarios where low latency, high throughput, and efficient resource utilization are crucial.
Ever since the preview release of this feature in February of this year, we’ve heard positive feedback from customers who have used it for various use cases including, but not limited to, streaming OpenAI responses, delivering dynamic content, and processing large data sets. Today, at Microsoft Build 2024, we’re announcing the general availability of HTTP streaming for Azure Functions using Node.js.
HTTP streams in Node.js are supported only in the Azure Functions Node.js v4 programming model. Follow these instructions to try out HTTP streams in your Node.js apps.
Prerequisites
Version 4 of the Node.js programming model. Learn more about the differences between v3 and v4 in the migration guide.
Version 4.3.0 or higher of the @azure/functions npm package.
If running in Azure, version 4.28 of the Azure Functions runtime.
If running locally, version 4.0.5530 of Azure Functions Core Tools.
Steps
If you plan to stream large amounts of data, adjust the app setting `FUNCTIONS_REQUEST_BODY_SIZE_LIMIT` in Azure or in your local.settings.json file. The default value is 104857600 bytes, which limits requests to a maximum of 100 MB.
Add the following code to your app in any file included by your main field.
JavaScript
const { app } = require('@azure/functions');
app.setup({ enableHttpStream: true });
TypeScript
import { app } from '@azure/functions';
app.setup({ enableHttpStream: true });
That’s it! The existing HttpRequest and HttpResponse types in programming model v4 already support many ways of handling the body, including as a stream. Use request.body to truly benefit from streams, but rest assured you can continue to use methods like request.text(), which will always return the body as a string.
Example code
Below is an example of an HTTP triggered function that receives data via an HTTP POST request, and the function streams this data to a specified output file:
JavaScript
const { app } = require('@azure/functions');
const { createWriteStream } = require('fs');
const { Writable } = require('stream');

app.http('httpTriggerStreamRequest', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const writeStream = createWriteStream('<output file path>');
        await request.body.pipeTo(Writable.toWeb(writeStream));
        return { body: 'Done!' };
    },
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import { createWriteStream } from 'fs';
import { Writable } from 'stream';

export async function httpTriggerStreamRequest(
    request: HttpRequest,
    context: InvocationContext
): Promise<HttpResponseInit> {
    const writeStream = createWriteStream('<output file path>');
    await request.body.pipeTo(Writable.toWeb(writeStream));
    return { body: 'Done!' };
}

app.http('httpTriggerStreamRequest', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: httpTriggerStreamRequest,
});
Below is an example of an HTTP triggered function that streams a file’s content as the response to incoming HTTP GET requests:
JavaScript
const { app } = require('@azure/functions');
const { createReadStream } = require('fs');

app.http('httpTriggerStreamResponse', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const body = createReadStream('<input file path>');
        return { body };
    },
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';
import { createReadStream } from 'fs';

export async function httpTriggerStreamResponse(
    request: HttpRequest,
    context: InvocationContext
): Promise<HttpResponseInit> {
    const body = createReadStream('<input file path>');
    return { body };
}

app.http('httpTriggerStreamResponse', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: httpTriggerStreamResponse,
});
Try it out!
For a ready-to-run sample app with more detailed code, check out this GitHub repo.
Check out this GitHub repo to discover the journey of building a generative AI application using LangChain.js and Azure. This demo explores the development process from idea to production, using a RAG-based approach for a Q&A system based on YouTube video transcripts.
Do try out this feature and share your valuable feedback with us on GitHub.
Announcing the 2024 Imagine Cup World Champion!
The Imagine Cup, a visionary global technology competition for student startups building with AI, has just crowned its 2024 World Champion: FROM YOUR EYES!
Left to right: Emre Yildiz, Zülal Tannur, Ege Ketrez
Using GPT-4 and their own image recognition technology, FROM YOUR EYES has built a mobile application and API, which offer real-time visual explanations to users with a vision disability. The mobile application enables users to design their own AI assistant to obtain descriptions of photos, videos, or other visual documents – and works with smart glasses and watches to describe aspects of the users’ environment. FROM YOUR EYES also licenses their technology to other developers and businesses via their API and has already secured partnerships with multiple entities.
The exciting finale of this year’s Imagine Cup took place at Microsoft Build, where the three world championship finalists showcased their groundbreaking innovations on a global stage, hosted by Microsoft CVP of Ecosystems, Annie Pearl, and Principal Cloud Advocate, Dona Sarkar.
The atmosphere at Microsoft Build was electric as the world joined live, culminating in the thrilling announcement of FROM YOUR EYES being crowned champion, earning USD100,000 and an exclusive mentorship session with Microsoft Chairman and CEO, Satya Nadella. JRE and PlanRoadmap were the two runners-up. Each of these startups also pushed the boundaries of what’s possible, impressing judges with their solutions in sustainable manufacturing and accessibility, and each was awarded USD50,000 in prize money to help propel their startups forward.
Runners-up: JRE is using cutting-edge AI to create a greener steel industry and PlanRoadmap has created an AI productivity coach to help people with ADHD overcome task paralysis.
The road to Microsoft Build
This momentous occasion marks the pinnacle of an incredible journey for these talented student entrepreneurs. From an initial pool of more than 20,000 student entrepreneurs from around the world, startups were narrowed down to the prestigious semifinalists, and finally the top three startups emerged, selected by a panel of judges. These judges, including Ali Partovi, CEO of Neo and co-founder of Code.org, and Elnaz Sarraf, founder and CEO of ROYBI, evaluated the startups based on their AI technology, inclusivity, and fundamental business viability. All of these exceptional startups demonstrated creativity, innovation, and impressive expertise in cutting-edge AI technology.
For those who missed the live event, you can catch a recap and learn more about the competition! If you’re inspired by this year’s Imagine Cup, consider joining us next year and take your shot at innovation on a global stage.
Congratulations to FROM YOUR EYES!
FROM YOUR EYES was created out of a profound personal need and a visionary goal. “After losing my vision completely at the age of ten, I knew I would never be able to see biologically again, but I believed it could be possible with technology,” says Zülal Tannur, Founder and CEO of FROM YOUR EYES. She encountered various image processing technologies, though none provided the effective real-time solutions she needed; this sparked the idea for FROM YOUR EYES. Through involvement with Microsoft’s Seeing AI initiative, Zülal met other visually impaired developers from around the world, and they inspired her to delve into coding.
In 2020, FROM YOUR EYES’ journey began. They soon won first place in an idea marathon and, over the next year and a half, continued to innovate. In 2021, the team started prototyping and joined Microsoft for Startups Founders Hub, where they have since received USD150,000 of Azure credits and access to the Microsoft for Startups Expert Network, which helped them continue growing their business. Through rigorous development, they have trained their own custom AI model on over 15 million images, achieving an impressive accuracy rate of 98.03% and an image processing speed of 15 milliseconds, about four times faster than the world standard. Azure Cosmos DB and Blob Storage give users quick access and upload capabilities, and output is sent to GPT-4 for natural language processing.
The team has accessibility at its core. “Being a startup with a visually impaired leader as the Founder and CEO naturally leads us to approach these issues with great sensitivity,” says Zülal. “For example, our CTO, Ege, is a person with autism, ADHD, and dyslexia. We create the most conducive working conditions for him. Prioritizing acceptance of each other with all our differences and unconditional support are fundamental to us.”
It’s clear that FROM YOUR EYES has an exciting path ahead, making an impact not just for FROM YOUR EYES users, but for developers and entrepreneurs worldwide. “I want to prove that a leader who is visually impaired can be strong, independently capable of building groundbreaking technology, and that being a young, female entrepreneur doesn’t hinder you from establishing and managing a company,” Zülal said. With this ethos, Zülal states, “we don’t believe there is anything this team cannot achieve when we’re together.”
+ + + +
Congratulations to all of the incredible student entrepreneurs who joined the Imagine Cup this year. The Imagine Cup is not just a competition; it’s a community of student visionaries, dreamers and bold entrepreneurs who are inspired by the impact that AI and tech can make.
Learn more at ImagineCup.com.
Azure Functions: Support for HTTP Streams in Python is now in Preview!
HTTP streams let you accept and return data from your HTTP endpoints using FastAPI request and response APIs enabled in your functions. These APIs let the host process large data in HTTP messages as chunks instead of reading an entire message into memory.
This feature makes it possible to handle large data streams, OpenAI integrations, dynamic content delivery, and other core HTTP scenarios requiring real-time interactions over HTTP. You can also use FastAPI response types with HTTP streams. Without HTTP streams, the size of your HTTP requests and responses is limited by the memory restrictions that can be encountered when processing entire message payloads in memory.
To get started, the following prerequisites are required:
Azure Functions runtime version 4.34.1, or a later version.
Python version 3.8, or a later supported version.
Python v2 programming model
Then, enable HTTP streaming in your Azure Function app; HTTP streams are disabled by default. You need to enable the feature in your application settings and also update your code to use the FastAPI package.
Add the azurefunctions-extensions-http-fastapi extension package to the requirements.txt file in the project.
Add the following code to the function_app.py file in the project, which imports the FastAPI extension:
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse
When deploying, add these application settings: "PYTHON_ISOLATE_WORKER_DEPENDENCIES": "1" and "PYTHON_ENABLE_INIT_INDEXING": "1".
When running locally, you also need to add these same settings to the local.settings.json project file.
Following are a few example code snippets on using HTTP streams with Azure Functions in Python.
This example is an HTTP triggered function that streams HTTP response data. You might use these capabilities to support scenarios like sending event data through a pipeline for real time visualization or detecting anomalies in large sets of data and providing instant notifications.
import time

import azure.functions as func
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

def generate_count():
    """Generate a stream of chronological numbers."""
    count = 0
    while True:
        yield f"counting, {count}\n\n"
        count += 1

@app.route(route="stream", methods=[func.HttpMethod.GET])
async def stream_count(req: Request) -> StreamingResponse:
    """Endpoint to stream chronological numbers."""
    return StreamingResponse(generate_count(), media_type="text/event-stream")
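Many simple clients buffer the entire response before showing anything, so to watch the counts arrive one at a time you need to read the response incrementally. Here is a minimal stdlib sketch of such a consumer; the URL in the usage comment is a placeholder for your function's local endpoint.

```python
import urllib.request


def consume_stream(url: str, chunk_size: int = 1024):
    """Yield decoded chunks of an HTTP response as they arrive,
    instead of buffering the entire body in memory."""
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            yield chunk.decode("utf-8", errors="replace")


# Example usage (placeholder URL for the function above):
# for piece in consume_stream("http://localhost:7071/api/stream"):
#     print(piece, end="", flush=True)
```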
This example is an HTTP triggered function that receives and processes streaming data from a client in real time. It demonstrates streaming upload capabilities that can be helpful for scenarios like processing continuous data streams and handling event data from IoT devices.
import azure.functions as func
from azurefunctions.extensions.http.fastapi import JSONResponse, Request

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="streaming_upload", methods=[func.HttpMethod.POST])
async def streaming_upload(req: Request) -> JSONResponse:
    """Handle streaming upload requests."""
    # Process each chunk of data as it arrives
    async for chunk in req.stream():
        process_data_chunk(chunk)
    # Once all data is received, return a JSON response indicating successful processing
    return JSONResponse({"status": "Data uploaded and processed successfully"})

def process_data_chunk(chunk: bytes):
    """Process each data chunk."""
    # Add custom processing logic here
    pass
Note: you must use an HTTP client library to make streaming calls to a function’s FastAPI endpoints. The client tool or browser you’re using might not natively support streaming, or could return only the first chunk of data. You can use a client script like this to send streaming data to an HTTP endpoint.
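The client script referenced above isn't reproduced in this post; as a hedged stand-in, here is a minimal stdlib sketch that POSTs an iterable of byte chunks using chunked transfer encoding, so the receiving function can process each chunk as it arrives. The host, port, and path in the usage comment are placeholders for your function's endpoint.

```python
import http.client


def stream_upload(host: str, port: int, path: str, chunks):
    """POST an iterable of byte chunks with chunked transfer encoding,
    letting the server consume each chunk as it arrives."""
    conn = http.client.HTTPConnection(host, port)
    conn.request(
        "POST",
        path,
        body=iter(chunks),
        encode_chunked=True,
        headers={"Transfer-Encoding": "chunked"},
    )
    resp = conn.getresponse()
    status, data = resp.status, resp.read()
    conn.close()
    return status, data


# Example usage (placeholder endpoint for the streaming_upload function above):
# stream_upload("localhost", 7071, "/api/streaming_upload",
#               [b"chunk-1", b"chunk-2", b"chunk-3"])
```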
Finally, the example below is an HTTP triggered function that streams chat completions from Azure OpenAI back to the caller:
import asyncio
import os

import openai
import azure.functions as func
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse

# Azure Function App
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

# Azure OpenAI configuration
endpoint = os.environ["AZURE_OPEN_AI_ENDPOINT"]
api_key = os.environ["AZURE_OPEN_AI_API_KEY"]
deployment = os.environ["AZURE_OPEN_AI_DEPLOYMENT_MODEL"]
temperature = 0.7

client = openai.AsyncAzureOpenAI(
    azure_endpoint=endpoint,
    api_key=api_key,
    api_version="2023-09-01-preview"
)

# Get data from Azure OpenAI
async def stream_processor(response):
    async for chunk in response:
        if len(chunk.choices) > 0:
            delta = chunk.choices[0].delta
            if delta.content:  # Get remaining generated response if applicable
                await asyncio.sleep(0.1)
                yield delta.content

# HTTP streaming Azure Function
@app.route(route="stream-cities", methods=[func.HttpMethod.GET])
async def stream_openai_text(req: Request) -> StreamingResponse:
    prompt = "List the 100 most populous cities in the United States."
    azure_open_ai_response = await client.chat.completions.create(
        model=deployment,
        temperature=temperature,
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )
    return StreamingResponse(
        stream_processor(azure_open_ai_response),
        media_type="text/event-stream"
    )
Discover your next integration inspiration at this year’s Build!
Get ready for an exciting digital experience at Microsoft Build 2024! Running May 21-23 in Seattle and online, this year’s event is all about delving deep into the cutting-edge world of AI and cloud technology. And if you’re eager to dive into the transformative world of Azure Integration Services, get ready for something special.
From seamless application and data integration to API management and powerful workflow automation, Azure Integration Services is revolutionizing the way businesses operate. Kantar, a global leader in marketing data, used Azure Integration Services to create the KantarHub, a centralized platform that simplifies data sharing and enhances security, integrating approximately 150 internal applications. Össur, a prosthetic innovation leader, migrated its diverse legacy apps to the cloud with Azure Integration Services, ensuring uninterrupted operations and improving data security and API access. These examples highlight how Azure Integration Services is transforming customer operations through seamless integration and increased efficiency.
In this blog, we’ll unpack the major announcements for Azure Integration Services from this year’s Build event. Register today to attend!
Azure API Management
With the rise in Gen AI app usage, there’s an urgent need for enterprise-wide, federated access to manage and secure endpoints. This year, we’re excited to announce GenAI Gateway capabilities in Azure API Management to tackle these challenges for Azure OpenAI Services endpoints (general availability).
As a first step, we’ve simplified the onboarding process so you can now import all Azure OpenAI endpoints into the Azure API Management platform with a single click. These endpoints will be protected by Azure API Management’s built-in managed identity authentication. For scaled workloads, we provide load balancing, rate limiting, and out-of-box observability support.
Here’s a rundown of all the policies and features we’ve added:
Import Azure OpenAI as an API: The new Import Azure OpenAI as an API experience in Azure API Management lets you import your existing Azure OpenAI endpoints as APIs with a single click, simplifying the onboarding process.
Azure OpenAI Token Limit Policy: Manage and enforce token-based limits per API consumer to ensure fair usage.
Azure OpenAI Emit Token Metric Policy: Get detailed monitoring and analysis by logging token usage metrics and sending them to Azure Application Insights.
Load Balancer and Circuit Breaker: Distribute load across multiple Azure OpenAI endpoints with support for various load distribution strategies, ensuring optimal performance and reliability.
Azure OpenAI Semantic Caching Policy (public preview): Optimize token usage by caching completions for semantically similar prompts, improving response performance.
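Conceptually, the token-limit policy gives each API consumer a token budget over a time window. The real policy is configured declaratively inside API Management, not in application code; the following is only an illustrative Python sketch of the idea, with all names and numbers made up:

```python
import time
from collections import defaultdict

class TokenRateLimiter:
    """Illustrative per-consumer token budget over a sliding time window."""

    def __init__(self, tokens_per_window: int, window_seconds: float = 60.0):
        self.tokens_per_window = tokens_per_window
        self.window_seconds = window_seconds
        self._usage = defaultdict(list)  # consumer -> [(timestamp, tokens), ...]

    def allow(self, consumer: str, tokens: int, now=None) -> bool:
        now = time.monotonic() if now is None else now
        cutoff = now - self.window_seconds
        # Drop usage records that have aged out of the window
        self._usage[consumer] = [(t, n) for (t, n) in self._usage[consumer] if t > cutoff]
        used = sum(n for _, n in self._usage[consumer])
        if used + tokens > self.tokens_per_window:
            return False  # over budget -> the gateway would reject the call
        self._usage[consumer].append((now, tokens))
        return True

limiter = TokenRateLimiter(tokens_per_window=1000, window_seconds=60)
print(limiter.allow("app-1", 600))  # True: within budget
print(limiter.allow("app-1", 600))  # False: would exceed 1000 tokens in the window
print(limiter.allow("app-2", 600))  # True: budgets are tracked per consumer
```

In API Management itself, the equivalent behavior is declared in policy configuration and enforced at the gateway, returning an error response when a consumer exceeds its budget.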
Click here to learn more about the GenAI Gateway capabilities in Azure API Management. We launched the “Gen AI Gateway Accelerator,” a reference implementation that demonstrates how to provision and interact with Generative AI resources through API Management. This new scenario in the APIM landing zone accelerator helps accelerate our customers on their path to Gen AI production workloads. Learn more about the “Gen AI Gateway Accelerator” here.
In addition, we have two features now in General Availability (GA):
OData API Type: First-class support for OData makes it easier for customers to publish OData APIs in API Management, including the ability to secure them with standard API protections. You can now use Azure API Management for publishing APIs from platforms like SAP, Oracle, Dataverse, and others that expose OData APIs.
gRPC API Type in Self-Hosted Gateway: Seamlessly manage your gRPC services as APIs within Azure API Management.
Azure API Center
Another exciting announcement—Azure API Center is now in General Availability! Complementing Azure API Management, Azure API Center is a centralized solution that offers a unified inventory for seamless discovery, consumption, and governance of APIs, regardless of their type, lifecycle stage, or deployment location. With Azure API Center, your organization can effectively manage your API landscape and promote efficiency, consistency, and innovation across the board.
Key features of Azure API Center include:
API Inventory Management: Create an up-to-date API catalog that includes essential metadata like API names, descriptions, lifecycle stages, and owners. Custom metadata can be added to capture organization-specific API information.
API Cataloging for Azure API Management: Quickly import APIs into API Center via a single CLI command, creating a cohesive center across different API Management services.
API Design Governance: Enable API best practices at scale and enforce design rules across your organization. This empowers API developers to ensure quality and uniformity across all produced APIs.
API Reusability: Foster reusability by empowering consumers to swiftly discover and utilize the appropriate APIs.
API Development Enhancement: Seamlessly integrate with our API Center Visual Studio Code extension, enhancing and simplifying the API development process.
Azure Logic Apps
Azure Logic Apps simplifies and automates how you connect and integrate various applications, services, and data sources in the cloud, letting users create and run automated workflows with little to no code. Recent updates to the platform include new features that enhance seamless management of integration flows, simplify legacy integration, and enable efficient B2B integration.
Seamless Management of Integration Flows
Efficiently monitoring, troubleshooting, and updating automated workflows can be challenging, especially when dealing with multiple integrations. To address these pain points, we’ve introduced:
Support for Zero Downtime deployment scenarios in the portal (public preview for Logic Apps Standard): Zero downtime deployment is a technique that allows updating an application without affecting its availability or performance. Logic Apps Standard now supports zero downtime deployment by using deployment slots, which are isolated environments that can host different versions of the application and can be swapped with the production slot without any interruption. Click here for more details.
Logic Apps Monitoring dashboard for workflow monitoring, troubleshooting and resubmissions (public preview): We have released UI dashboards for Logic Apps Standard to help with diagnosis and troubleshooting of Logic Apps workflow runs and failures. The dashboard also offers the ability to take actions such as bulk resubmission of failed runs.
Advanced Development and Customization
Developers need the flexibility to customize workflows and integrate the latest technologies seamlessly, while also benefiting from efficient debugging and development environments.
.NET 8 Custom Code Support (public preview for Logic Apps Standard): We’ve extended our built-in action capabilities to include support for calling .NET 8 custom code. Within a Logic Apps workspace, you can now effortlessly develop and debug your custom code right alongside your workflows, streamlining your development process with the most up-to-date .NET technology.
Improved Onboarding Experience on VS Code for Logic Apps Standard (general availability): Extends the Logic Apps designer so users can transition from developing workflows in the cloud to a local environment. The intuitive no-code designer of Logic Apps, combined with the powerful pro-code capabilities of VS Code, enables developers to build, run, and test their Logic Apps workflows locally, with features such as breakpoint debugging.
Logic Apps Standard Deployment Scripting Tools in VS Code (public preview for Logic Apps Standard): For Standard logic app workflows that run in single-tenant Azure Logic Apps, you can use Visual Studio Code with the Azure Logic Apps Standard extension to locally develop, test, and store your logic app project using any source control system. You can also use the extension to streamline the creation of deployment pipelines, automating the deployment of your Logic Apps Standard infrastructure and code. Click here for more technical details.
B2B Integration
Managing complex B2B transactions and integrations requires robust, scalable solutions and efficient management tools. And we have new features to help with these transactions:
EDI (X12/EDIFACT) processing with built in actions (general availability): Run B2B workloads at scale with connectors that can process single or batched EDI messages and larger payloads, providing greater control over performance.
Integration Account Enhancements (public preview): Integration Account Premium offers UI-based trading partner management capabilities and a centralized store for artifacts, including maps and schemas. With this release, we have enabled Availability Zone support for Integration Accounts.
Mainframe and Midrange Integration
Extending the functionality of legacy systems to the cloud without extensive re-investment can be difficult. That’s why we have connectors for IBM mainframes and midranges.
Azure Logic Apps connectors for IBM Mainframe and Midranges: Preserve the value of your workloads running on mainframes and midranges by extending them to the Azure cloud with Azure Logic Apps, without investing more resources in the mainframe or midrange environments. Click here for more technical details.
Azure Service Bus
Azure Service Bus is a fully managed enterprise message broker that ensures secure and efficient delivery of data messages between different parts of your system, even when they’re disconnected or processing tasks at different speeds. At Build, we’re thrilled to announce a new feature: batch delete. Currently in preview, this feature empowers customers to delete messages on the service side from an entity or the dead letter queue in batches of up to 4,000 messages.
Azure Event Grid
Like an event dispatcher for your cloud, Azure Event Grid triggers actions across your applications and services in near real-time whenever something significant happens. New features are generally available that are tailored to customers who are looking for a pub-sub message broker that can enable Internet of Things (IoT) solutions using MQTT protocol and can help build event-driven applications.
These capabilities enhance Event Grid’s MQTT broker capability, make it easier to transition to Event Grid namespaces for push and pull delivery of messages, and integrate new sources. Customers can now:
Use the Last Will and Testament feature in compliance with the MQTT v5 and MQTT v3.1.1 specifications, so applications can get notifications when clients are disconnected, enabling management of downstream tasks to prevent performance degradation.
Create data pipelines that utilize both Event Grid Basic resources and Event Grid Namespace Topics (supported in Event Grid Standard). This means customers can utilize Event Grid namespace capabilities, such as the MQTT broker, without needing to reconstruct existing workflows.
Support new event sources, such as Microsoft Entra ID and Microsoft Outlook, leveraging Event Grid's support for the Microsoft Graph API. This means customers can use Event Grid for new use cases, such as when a new employee is hired or a new email is received, to process that information and send it to other applications for further action.
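Events flowing through Event Grid namespaces use the CloudEvents 1.0 schema. Purely for illustration (the source URI, event type, and payload below are hypothetical), a minimal CloudEvents envelope for a new-employee event could be assembled like this:

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloud_event(source: str, event_type: str, data: dict) -> dict:
    """Build a minimal CloudEvents 1.0 envelope (illustrative only)."""
    return {
        "specversion": "1.0",            # required by CloudEvents 1.0
        "id": str(uuid.uuid4()),         # must be unique per event from a source
        "source": source,                # identifies the event producer
        "type": event_type,              # used by consumers for routing
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloud_event(
    source="/contoso/hr",                        # hypothetical source URI
    event_type="Contoso.HR.EmployeeHired",       # hypothetical event type
    data={"employeeId": "E-1001", "department": "Engineering"},
)
print(json.dumps(event, indent=2))
```

Subscribers receive this envelope via push or pull delivery and can dispatch on the `type` attribute.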
For more technical details on these announcements, click here.
See you at Build for these exciting sessions!
Don’t miss the chance to explore these exciting updates at Microsoft Build 2024. Register now and if you’re attending in-person, be sure to stop by the Azure API Management booth in The Hub! You can meet with the engineering and product teams behind API Management and API Center, and further explore Azure Integration Services capabilities to discover exciting new solutions.
Join us for these breakout sessions both in-person or online:
Unleash the Potential of APIs with Azure API Management: Through practical demos we’ll show how to use Azure API Management to expose Azure OpenAI services, manage OpenAI tokens allocation, distribute load across multiple model deployments and gain valuable insights into token usage throughout your intelligent applications portfolio. Explore how Azure API Center revolutionizes API governance and discoverability, driving innovation and efficiency in your organization’s operations.
GenAI Gateway Capabilities in Azure API Management: We will demonstrate how API Management can be configured for authentication and authorization for OpenAI endpoints, enforcing rate limits based on OpenAI tokens used, load balancing across multiple OpenAI endpoints, and more.
Azure Functions: SDK type bindings for Azure Blob Storage with Azure Functions in Python (Preview)
Azure Functions triggers and bindings enable you to easily integrate event and data sources with function applications. With SDK type bindings, you can use types from service SDKs and frameworks, providing more capability beyond what is currently offered. SDK type bindings for Azure Storage Blob when using Python in Azure Functions is now in Preview.
SDK type bindings for Azure Storage Blob enable the following key scenarios:
Downloading and uploading blobs of large sizes, reducing current memory limitations and gRPC limits.
Improved performance when using blobs with Azure Functions.
To get started using SDK type bindings for Azure Storage Blob, the following prerequisites are required:
Azure Functions runtime version 4.34.1, or a later version.
Python version 3.9, or a later supported version.
Python v2 programming model
Note that currently, only synchronous SDK types are supported.
Then, enable the feature in your Azure Function app:
Add the azurefunctions-extensions-bindings-blob extension package to the requirements.txt file in the project.
Add this code to the function_app.py file in the project, which imports the SDK type bindings:
import azurefunctions.extensions.bindings.blob as blob
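For reference, a minimal requirements.txt for a project using these bindings might look like the following (versions omitted; pin them as appropriate for your project, and note that azure-functions is the standard worker package already present in most Python function apps):

```
azure-functions
azurefunctions-extensions-bindings-blob
```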
This example shows how to get the BlobClient from both a Blob storage trigger (blob_trigger) and from the input binding on an HTTP trigger (blob_input).
import logging
import azure.functions as func
import azurefunctions.extensions.bindings.blob as blob

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.blob_trigger(
    arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_trigger(client: blob.BlobClient):
    logging.info(
        f"Python blob trigger function processed blob\n"
        f"Properties: {client.get_blob_properties()}\n"
        f"Blob content head: {client.download_blob().read(size=1)}"
    )

@app.route(route="file")
@app.blob_input(
    arg_name="client", path="PATH/TO/BLOB", connection="AzureWebJobsStorage"
)
def blob_input(req: func.HttpRequest, client: blob.BlobClient):
    logging.info(
        f"Python blob input function processed blob\n"
        f"Properties: {client.get_blob_properties()}\n"
        f"Blob content head: {client.download_blob().read(size=1)}"
    )
    return "ok"
You can view other SDK type bindings samples for Blob storage in the Python extensions repository:
ContainerClient type
StorageStreamDownloader type
Excel macro: create a workbook for each city and send it to the corresponding email address
Hello, can you help me please? Using an Excel VBA macro, I would like to create an independent workbook for each city and send it to the corresponding email address in the sheet.
For example, in the following test file, I would like to create a workbook for each of the cities and send it to the following email addresses; each workbook should only include the data for its own city:
GE SOTR (email address removed for privacy reasons), AD MAD NF (email address removed for privacy reasons), IALTE (email address removed for privacy reasons), LATTE (email address removed for privacy reasons), MOP DOT (email address removed for privacy reasons)
(So it will basically create 5 workbooks and send them to these email addresses.)
Thank you in advance for your help 🙂
Teams: Enable organization-wide
Hello everyone,
We have the problem that in the admin center, under Settings / Org settings, we want to enable Microsoft Teams for all users, but we receive the following message and don't know what to do:
"We can't save your license changes. Close this setting, refresh the page, and try again."
We have already tried several browsers and disabled the virus scanner and the firewall, all to no avail.
Does anyone have another idea?
Images are not displayed in incoming emails
Hello,
For the last 10 days or so, Outlook 365 has not displayed images anymore. See below.
Note these images are displayed alright in the New Outlook, in https://outlook.live.com/ and in Mail under iOS.
Of course I have already unchecked both options in File/Options/Trust Center/Automatic Download. See below.
What else should I do to get those images displayed?
Thank you!
Stefano
Policy Tip Text not working for some policies
Hi,
I have an issue with the Policy Tip Text not showing for a specific rule in Outlook. It does however show in OWA.
My rule is quite simple. Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation.
The policy tip should appear and allow the user to override by providing a business justification.
The policy tip does not appear in the Outlook client, but does appear in OWA.
If I change the condition to Recipient Domain is hotmail.com, gmail.com and Content is shared from Microsoft 365 with people outside my organisation then the policy tip does appear.
Does the policy tip not work when using file extensions in the Outlook Client?
Secondly, when the rule is configured as Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation, and the email is sent through OWA (with the user overriding the policy restriction), the email is still blocked from reaching the external recipient: the user sends the email, but the DLP rule blocks it and the user receives the email block notification.
I have other policies where the override works successfully.
Any ideas on how to fix these issues?
Thanks,
Ben
Feature request: ability to submit feedback on EDR blocks
Good day,
I’d like to suggest adding the ability to report behavioral blocks to Microsoft for review.
The current reporting feature is focused on files and hashes and requires a file or hash in order to submit something, which does not make any sense for behavioral detections.
To make it a bit clearer, I attached a screenshot of a behavioral false positive. There is currently no lightweight way to report these, I believe, but we’d really love to be able to provide quicker feedback on behavioral blocks.
App running as background process
I use this to open the Company Portal via PowerShell:
start-process companyportal:
Is there any way to run the Company Portal in the background (visible only in Task Manager) without displaying the Company Portal window on the desktop?
Thanks.
Identifier(s) in API calls to load mail folders and mails from folders
Hi all.
I am trying to load user folders with a call to list user folders, and later the emails for a given folder with a call to list emails in the given folder.
In both calls common part is:
GET /users/{id | userPrincipalName}…
On the Azure portal, the userPrincipalName parameter is editable:
Is it a must to use the Object ID (below) for accessing a user, and so on?
For my use case it would be of great benefit to use the User principal name, but what happens if someone changes it?
Thanks in advance,
Dragan