Category: Microsoft
Category Archives: Microsoft
Discover How App Modernization on Azure Enables Intelligent App Innovation
AI is accelerating the need for app modernization to drive innovation with AI-powered intelligent apps while simultaneously transforming the speed and process of modernization itself. All of this speaks to the value of Intelligent apps, enabling businesses to deliver differentiated customer experiences, product innovation and business process efficiencies.
At Microsoft we’re focused on helping every customer modernize their legacy applications as fast and easily as possible, rearchitecting them to a modern platform that enables rapid innovation and an environment that’s purpose built for the wave of AI innovation that is coming to the enterprise. To help with this, Azure offers a comprehensive set of services to build and modernize intelligent applications across Platform as a Service (PaaS), Serverless offerings, and managed Kubernetes, integrated with cloud scale databases, and a broad selection of foundational and open models for AI. At this year’s Microsoft Build conference—May 21-23 in Seattle and online—you’ll have the chance to learn more about exciting new product releases, capabilities and enhancements to help you seamlessly build and modernize intelligent applications.
Modernize your App estate for AI and continuous innovation
Legacy applications, built on outdated technologies, are increasingly becoming a roadblock for businesses in the fast-paced digital world. They struggle to manage growing data volumes and user traffic, posing scalability challenges that can lead to performance bottlenecks and system failures. Additionally, their reliance on unsupported technologies leaves them vulnerable to security threats and compliance issues, while cumbersome manual updates hinder AI innovation and agility.
Modernizing these applications is crucial for businesses to stay competitive and thrive in this era of AI. This involves transitioning to scalable architecture, embracing modern technologies like cloud application, data and AI services, and streamlining development processes. According to a recent survey by IDC Research, 43% of respondents said modernizing applications to a PaaS service improved IT Operations productivity, 36% said it helped with scalability to meet peak demand while reducing costs at low usage times, and 35% said it improved security. You can learn more about these findings in the whitepaper, Exploring the Benefits of Cloud Migration and Modernization for the Development of Intelligent Applications.
Product enhancements to accelerate your App modernization journey
GitHub Copilot skills for Azure Migrate Code assessment
Last November at Microsoft Ignite 2023, we launched a new capability within Azure Migrate to help you quickly assess applications and identify key code changes required before migrating these applications to Azure. At this year’s Build, we’re excited to launch and demo the integration of GitHub Copilot skills for Azure Migrate application and code assessment. With this integration of AI-assisted development, developers can ask questions like “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” and get tailored answers and recommendations.
New Azure App Service features to simplify App Modernization
Azure App Service plays a crucial role in app modernization by offering a platform that simplifies and accelerates the process of modernizing legacy applications to cloud. By leveraging Azure App Service, you can quickly and efficiently modernize your legacy apps, making them more scalable, reliable, secure, and adaptable.
At this year’s Microsoft Build we’re happy to announce the public preview of some key Azure App Service features:
Sidecar will let customers add new features like logging, monitoring, or caching to their apps without changing the main code.
With Webjobs, customers can run any code and scripts in the language they prefer on different schedules. Now that WebJobs is part of Azure App Service, they use the same compute resources as the web app to help reduce costs and ensure reliable performance. Webjobs for both Azure App Service on Linux and Windows Containers on Azure App Service is widely available in public preview.
Other features that are now generally available include automatic scaling, which helps users manage growing site traffic without wasting resources. Automatic scaling improves the performance of any web app without requiring new code or code changes.
Another important update is that Azure App Service now offers 99.99% resiliency when your plan runs in an Availability Zone-based configuration. We encourage you to use four-nines resiliency to bring more complex and more critical workloads to Azure App Service.
Check out this blog for details on these and other exciting Azure App Service updates.
Simplify App Modernization to Kubernetes with AKS Automatic
Now available in public preview, AKS Automatic provides the easiest managed Kubernetes experience for developers, DevOps, and platform engineers. It’s ideal for modern and AI applications, automating AKS cluster setup and management, and embedding best practice configurations. This ensures users of any skill level have security, performance, and dependability for their applications. Check out this blog to learn more.
Modernizing Java applications on Azure
We continue to bring product innovations to the market to enable Java customers to modernize enterprise applications on Azure. Red Hat JBoss EAP is a popular Java application framework used by many enterprise customers. We are excited to share that a free tier and flexible pricing options for Red Hat JBoss EAP on Azure App Service are now generally available, providing customers a low-risk environment to evaluate the technology before committing to a paid subscription.
Azure Spring Apps Enterprise is a fully managed service for Java Spring applications jointly offered in partnership between Microsoft and VMWare Tanzu by Broadcom. We are announcing the public preview of Jobs in Azure Spring Apps to enable you to deploy and scale Spring Batch applications without worrying about job scalability, cost control, lifecycle, infrastructure, security, and monitoring. This makes it easier to handle large-scale data processing efficiently, leveraging the flexibility and scalability of the cloud.
Gain valuable insights into the potential impact of Azure Spring Apps Enterprise on your organization. Download the Azure Spring Apps Economic Validation Report to explore the quantified benefits in development speed, cost reduction, and security enhancement.
Customers see increased cost efficiency and enhanced security
There’s no better showcase for our deep roster of AI and app modernization tools than the success stories told by valued customers.
Del Monte Foods, a global leader in packaged foods, leveraged Azure Migrate to streamline their cloud migration journey. By using Azure Migrate’s discovery and assessment tools, Del Monte gained insights into their on-premises environment, identifying optimal migration paths and dependencies. This streamlined approach enabled them to reduce the complexity and risks associated with moving their workloads to Azure, ensuring a smooth and efficient transition.
“We reduced certain infrastructure costs by 57%, increased system availability by 99.99%, and improved system performance by 40%,” said Hari Ramakrishnan, Del Monte Foods’ VP of Information Technology.
Nexi Group, a major European PayTech company, partnered with us to revolutionize their digital payments platform, eventually building a solution capable of handling billions of transactions annually. Azure App Service and Azure Kubernetes Service provided the scalability and performance needed to meet fluctuating demands, while our robust security features ensured the protection of sensitive financial data. Azure’s cost-effective model also allowed Nexi to optimize their IT spending, freeing up resources for further investment in strategic initiatives.
Jens Barnow, Nexi Group’s Senior VP of Group Technology, said that by using Microsoft technology the company “achieved faster time to market with new customer propositions, empowered our developer teams to do more, time for provisioning in new location, and cost efficiency.”
Scandinavian Airlines wanted to improve its tech infrastructure to better serve over 30 million fliers it serves each year. The airline chose to move from an IaaS solution to PaaS and elected to migrate critical databases and applications first, using Microsoft Azure SQL Database, Azure SQL Managed Instance, Azure App Service, and Defender for Cloud. With support from Microsoft Customer Success Migration Factory, they completed the complex migration quickly, immediately enhancing their security posture and creating an environment for more streamlined DevOps workflows.
“We are now operating in an environment that fosters innovation,” said Daniel Engberg, Head of AI, Data, and Platforms at Scandinavian Airlines. “The capabilities of Azure empower SAS to develop new applications faster and focus on what really matters: simplifying travelers’ lives and enhancing their overall experience.”
Check out our full line-up of modernization sessions at Build 2024
Building a connected vehicle and app experience with BMW and Azure: BMW utilizes Azure Kubernetes Service, GitHub, and other Azure services to power their MyBMW app, which serves over 13 million active users worldwide. In this session, BMW will share their insights on scaling cloud architecture for increased performance and adopting DevOps practices for global deployment. Tuesday, May 21, 11:30 am PDT. In person and online.
App innovation in the AI era: cost, benefits, and challenges: Modernizing existing apps to leverage AI capabilities can be a daunting task due to cost constraints, technical complexities, and compatibility challenges. This session will explore strategies and best practices for overcoming these obstacles, drawing on the real-world experiences of organizations that have successfully navigatedapp migration projects. Tuesday, May 21, 4:45 pm PDT. In person and online.
Conversational app and code assessment in Azure Migrate: Discover how Azure Migrate’s latest AI-powered assistant, Azure Copilot, can help simplify your cloud migration process. It evaluates your applications for cloud readiness, identifies potential issues, offers optimization recommendations, and helps reduce costs. Wednesday, May 22, 10:30 am PDT. In person only.
Leverage AKS for your enterprise platform: H&M’s journey: This session focuses on strategies and best practices for building scalable, reliable, and developer-friendly platforms on Azure Kubernetes Service. H&M will share their own experience and insights, and the session will also cover the latest AKS features designed to enhance reliability, performance, security, and ease of use. Thursday, May 23, 9:45 am. In person and online.
Using AI with App Service to deploy differentiated web apps and APIs: Explore how to utilize AI-powered Azure App Service capabilities to modernize your web applications, optimize their performance and reliability, and troubleshoot issues more efficiently. You will see real-world examples of integrating generative AI, as well as how Dynatrace and Datadog simplify observability using AI. Thursday, May 23, 12:30 pm PDT. In person and online.
Vision to value—SAS accelerates modernization at scale with Azure: While recovering from COVID-19 travel restrictions, Scandinavian Airlines chose Azure app and database services as the foundation for modernizing their critical operational applications. This session will cover their modernization journey and explore the latest features in Azure App Service and Azure SQL. Thursday, May 23, 1:45 pm PDT. In person and online.
Scaling Spring Batch in the Cloud: This session focuses on Spring Batch, a framework for large-scale data processing, and how it’s used in Azure Spring Apps Enterprise for cloud-based batch jobs. You’ll learn about essential Spring Batch features and how to effectively leverage them in the cloud. Online only.
Spring Unlocks the Power of AI Platform—End-to-End: Discover how AI can elevate your Spring projects, making them more interactive, intelligent, and innovative. Learn how to seamlessly integrate AI into your Spring applications, adding AI-powered features to improve self-service and customer support in existing apps and discover techniques to create AI-driven user interfaces that provide more natural and intuitive interactions with your users. Online only.
Join us at Build and bring your app development into the future
Are you ready to unlock new opportunities for innovation and empower your business with cutting-edge AI? Join us in person or online at this year’s Microsoft Build to discover how modernizing your applications can make them more scalable, reliable, and efficient to better handle increasing user demands while reducing operational costs and be AI ready.
Finally, don’t forget about the full suite of robust tools Azure offers to enable your app modernization journey, including Azure Migrate and Modernize, Azure Innovate, Azure Solution Assessments, Azure Landing Zone Accelerators, Reliable Web App Patterns and more!
By embracing app modernization on Azure, your organization can stay competitive, agile, and prepared for the future of Intelligent Apps.
Microsoft Tech Community – Latest Blogs –Read More
Introducing the Azure AI Model Inference API
We launched the model catalog in early 2023, featuring a curated selection of open-source models that customers can trust and consume in their organizations. The Azure AI model catalog offers around ~1700 models, including the latest open-source innovations like Llama3 from Meta, but also models coming from partnerships like OpenAI, Mistral, and Cohere. Each of these models with unique capabilities that we think will inspire developers to build the next generation of copilots.
A screenshot of the Azure AI model catalog displaying the large diversity of models it brings in for customers.
To enable developers to get access to these capabilities consistently, we are launching the Azure AI model inference API, which enables customers to consume the capabilities of those models using the same syntax and the same language. This API introduces a single layer of abstraction, yet it allows each model to expose unique features or capabilities that differentiate them.
Starting today, all language models deployed as serverless API support this common API. This means you can interact with GPT-4 from Azure OpenAI Service, Cohere Command R+, or Mistral-Large, in the same way without the need for translations. Soon, these capabilities will also be available on models deployed to our self-hosted managed endpoints, unifying the consumption experience across all our inferencing solutions.
A graphic depicting that the Azure AI model inference API can be used to consume models from Cohere, Mistral, Meta LLama, Microsoft (including Phi-3) and Core42 JAIS, and it’s also compatible with Azure OpenAI Service model deployments.
This is the same API utilized within Azure AI Studio and Azure Machine Learning. You can use prompt flow to build intelligent experiences that can now leverage various models. Since all the models speak the same language, you can run evaluations to compare them across different tasks, determine which one to use for each use case, exploit their strengths, and build experiences that delight your customers.
A screenshot showing the comparison of 3 different evaluations of a prompt flow chat application that implements the RAG pattern. The evaluation was run using 3 different variations of the same prompt flow, each of them running GPT-3.5 Turbo, Mistral-Large, and Llama2-70B-chat, using the same prompt message for the generation step.
We see more customers eager to combine the innovation from across the industry and redefine what’s possible. They are either integrating foundational models as building blocks for their applications or by fine-tuning them to achieve niche capabilities in specific use cases. We hope these new set of capabilities unlock the experimentation and evaluation required to move across models, picking the right one for the right job.
We want to help customers to fulfill that mission, empowering every single AI developer to achieve more with Azure AI.
Resources:
Azure AI Model Inference API
Deploy models as serverless APIs
Model Catalog and Collections in Azure AI Studio
Microsoft Tech Community – Latest Blogs –Read More
Azure Functions: Support for HTTP Streams in Node.js is Generally Available
Azure Functions support for HTTP streams in Node.js is now generally available. With this feature, customers can stream HTTP requests to and responses from their Node.js Functions Apps. Streaming is a mechanism for transmitting data over HTTP in a continuous and efficient manner. Instead of sending all the data at once, streams allow data to be transmitted in small, manageable chunks, which can be processed as they arrive. They are particularly valuable in scenarios where low latency, high throughput, and efficient resource utilization are crucial.
Ever since the preview release of this feature in February of this year, we’ve heard positive feedback from customers that have used this feature for various use cases including, but not limited to, streaming OpenAI responses, delivering dynamic content, processing large data etc. Today, at MS Build 2024, we announce General Availability of HTTP Streaming for Azure Functions using Node.js.
HTTP Streams in Node.js is supported only in the Azure Functions Node.js v4 programming model. Follow these instructions to try out HTTP Streams for your Node.js apps.
Prerequisites
Version 4 of the Node.js programming model. Learn more about the differences between v3 and v4 in the migration guide.
Version 4.3.0 or higher of the @azure/functions npm package.
If running in Azure, version 4.28 of the Azure Functions runtime.
If running locally, version 4.0.5530 of Azure Functions Core Tools.
Steps
If you plan to stream large amounts of data, adjust the app setting `FUNCTIONS_REQUEST_BODY_SIZE_LIMIT` in Azure or in your local.settings.json file. The default value is 104857600, i.e., limiting your request to 100mb maximum.
Add the following code to your app in any file included by your main field.
JavaScript
const { app } = require(‘@azure/functions’);
app.setup({ enableHttpStream: true });
TypeScript
import { app } from ‘@azure/functions’;
app.setup({ enableHttpStream: true });
3. That’s it! The existing HttpRequest and HttpResponse types in programming model v4 already support many ways of handling the body, including as a stream. Use request.body to truly benefit from streams, but rest assured you can continue to use methods like request.text() which will always return the body as a string.
Example code
Below is an example of an HTTP triggered function that receives data via an HTTP POST request, and the function streams this data to a specified output file:
JavaScript
const { app } = require(‘@azure/functions’);
const { createWriteStream } = require(‘fs’);
const { Writable } = require(‘stream’);
app.http(‘httpTriggerStreamRequest’, {
methods: [‘POST’],
authLevel: ‘anonymous’,
handler: async (request, context) => {
const writeStream = createWriteStream(‘<output file path>’);
await request.body.pipeTo(Writable.toWeb(writeStream));
return { body: ‘Done!’ };
},
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from ‘@azure/functions’;
import { createWriteStream } from ‘fs’;
import { Writable } from ‘stream’;
export async function httpTriggerStreamRequest(
request: HttpRequest,
context: InvocationContext
): Promise<HttpResponseInit> {
const writeStream = createWriteStream(‘<output file path>’);
await request.body.pipeTo(Writable.toWeb(writeStream));
return { body: ‘Done!’ };
}
app.http(‘httpTriggerStreamRequest’, {
methods: [‘POST’],
authLevel: ‘anonymous’,
handler: httpTriggerStreamRequest,
});
Below is an example of an HTTP triggered function that streams a file’s content as the response to incoming HTTP GET requests:
JavaScript
const { app } = require(‘@azure/functions’);
const { createReadStream } = require(‘fs’);
app.http(‘httpTriggerStreamResponse’, {
methods: [‘GET’],
authLevel: ‘anonymous’,
handler: async (request, context) => {
const body = createReadStream(‘<input file path>’);
return { body };
},
});
TypeScript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from ‘@azure/functions’;
import { createReadStream } from ‘fs’;
export async function httpTriggerStreamResponse(
request: HttpRequest,
context: InvocationContext
): Promise<HttpResponseInit> {
const body = createReadStream(‘<input file path>’);
return { body };
}
app.http(‘httpTriggerStreamResponse’, {
methods: [‘GET’],
authLevel: ‘anonymous’,
handler: httpTriggerStreamResponse,
});
Try it out!
For a ready-to-run sample app with more detailed code, check out this GitHub repo.
Check out this GitHub repo to discover the journey of building a generative AI application using LangChain.js and Azure. This demo explores the development process from idea to production, using a RAG-based approach for a Q&A system based on YouTube video transcripts.
Do try out this feature and share your valuable feedback with us on GitHub.
Microsoft Tech Community – Latest Blogs –Read More
Announcing the 2024 Imagine Cup World Champion!
The Imagine Cup, a visionary global technology competition for student startups building with AI, has just crowned its 2024 World Champion: FROM YOUR EYES!
Left to right: Emre Yildiz, Zülal Tannur, Ege Ketrez
Using GPT-4 and their own image recognition technology, FROM YOUR EYES has built a mobile application and API, which offer real-time visual explanations to users with a vision disability. The mobile application enables users to design their own AI assistant to obtain descriptions of photos, videos, or other visual documents – and works with smart glasses and watches to describe aspects of the users’ environment. FROM YOUR EYES also licenses their technology to other developers and businesses via their API and has already secured partnerships with multiple entities.
The exciting finale of this year’s Imagine Cup took place at Microsoft Build, where the three world championship finalists showcased their groundbreaking innovations on a global stage, hosted by Microsoft CVP of Ecosystems, Annie Pearl, and Principal Cloud Advocate, Dona Sarkar.
The atmosphere at Microsoft Build was electric as the world joined live, culminating in the thrilling announcement of FROM YOUR EYES being crowned champion, earning USD100,000 and an exclusive mentorship session with Microsoft Chairman and CEO, Satya Nadella. JRE and PlanRoadmap were the two runners-up – each of these startups also pushed the boundaries of what’s possible, impressing judges with their solutions in sustainable manufacturing and accessibility, and was each awarded USD50,000 in prize money to help propel their startups forward.
Runners-up: JRE is using cutting-edge AI to create a greener steel industry and PlanRoadmap has created an AI productivity coach to help people with ADHD overcome task paralysis.
The road to Microsoft Build
This momentous occasion marks the pinnacle of an incredible journey for these talented student entrepreneurs. From an initial pool of more than 20,000 student entrepreneurs from around the world, startups were narrowed down to the prestigious semifinalists, and finally, the top three startups emerged – selected from a panel of judges. These judges, including CEO of Neo and Co-founder of Code.org, Ali Partovi, and Founder and CEO ROYBI, Elnaz Sarraf, evaluated the startups based on their AI technology, inclusivity, and fundamental business viability. All these exceptional startups demonstrated creativity, innovation and impressive expertise in cutting edge AI technology.
For those who missed the live event, you can catch a recap and learn more about the competition! If you’re inspired by this year’s Imagine Cup, consider joining us next year and take your shot at innovation on a global stage.
Congratulations to FROM YOUR EYES!
FROM YOUR EYES was created out of a profound personal need and a visionary goal. “After losing my vision completely at the age of ten, I knew I would never be able to see biologically again, but I believed it could be possible with technology,” says Zülal Tannur, Founder and CEO of FROM YOUR EYES. She encountered various image processing technologies, though none provided the effective real-time solutions she needed; this sparked the idea for FROM YOUR EYES. Through involvement with Microsoft’s Seeing AI initiative, Zülal met other developers that were visually impaired from around the world, and they inspired her to delve into coding.
In 2020, FROM YOUR EYES’ journey began. They soon won first place in an idea marathon, and over the next year and a half, continued to innovate. In 2021, the team started prototyping and joined Microsoft for Startups Founders Hub where they have since received USD $150,000 of Azure credits and access to the Microsoft for Startups Expert Network, which helped them continue growing their business. Through rigorous development, they have trained their own custom AI model with over 15 million images, achieving an impressive accuracy rate of 98.03% and an image processing speed of 15 milliseconds, which is about four times faster than the world standard. Azure Cosmos DB and Blog Storage give users quick access and upload capabilities, and output is sent to GPT-4 for Natural Language Processing.
The team has accessibility at its core. “Being a startup with a visually impaired leader as the Founder and CEO naturally leads us to approach these issues with great sensitivity,” says Zülal. “For example, our CTO, Ege, is a person with autism, ADHD, and dyslexia. We create the most conducive working conditions for him. Prioritizing acceptance of each other with all our differences and unconditional support are fundamental to us.”
It’s clear that FROM YOUR EYES has an exciting path ahead, making an impact not just for FROM YOUR EYES users, but for developers and entrepreneurs worldwide. “I want to prove that a leader who is visually impaired can be strong, independently capable of building groundbreaking technology, and that being a young, female entrepreneur doesn’t hinder you from establishing and managing a company,” Zülal said. With this ethos, Zülal states, “we don’t believe there is anything this team cannot achieve when we’re together.”
+ + + +
Congratulations to all of the incredible student entrepreneurs who joined the Imagine Cup this year. The Imagine Cup is not just a competition; it’s a community of student visionaries, dreamers and bold entrepreneurs who are inspired by the impact that AI and tech can make.
Learn more at ImagineCup.com.
Microsoft Tech Community – Latest Blogs –Read More
Azure Functions: Support for HTTP Streams in Python is now in Preview!
HTTP streams lets you accept and return data from your HTTP endpoints using FastAPI request and response APIs enabled in your functions. These APIs lets the host process large data in HTTP messages as chunks instead of reading an entire message into memory.
This feature makes it possible to handle large data stream, OpenAI integrations, deliver dynamic content, and support other core HTTP scenarios requiring real-time interactions over HTTP. You can also use FastAPI response types with HTTP streams. Without HTTP streams, the size of your HTTP requests and responses are limited by memory restrictions that can be encountered when processing entire message payloads all in memory.
To get started, the following prerequisites are required:
Azure Functions runtime version 4.34.1, or a later version.
Python version 3.8, or a later supported version.
Python v2 programming model
Then, enable HTTP streaming in your Azure Function app. HTTP streams are disabled by default. You need to enable this feature in your application settings and also update your code to use the FastAPI package.
Add the azurefunctions-extensions-http-fastapi extension package to the requirements.txt file in the project.
Add the following code to the function_app.py file in the project, which imports the FastAPI extension:
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse
When deploying, add these application settings: “PYTHON_ISOLATE_WORKER_DEPENDENCIES”: “1” “PYTHON_ENABLE_INIT_INDEXING”: “1”
When running locally, you also need to add these same settings to the local.settings.json project file.
Following are a few example code snippets on using HTTP streams with Azure Functions in Python.
This example is an HTTP triggered function that streams HTTP response data. You might use these capabilities to support scenarios like sending event data through a pipeline for real time visualization or detecting anomalies in large sets of data and providing instant notifications.
import time
import azure.functions as func
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
def generate_count():
“””Generate a stream of chronological numbers.”””
count = 0
while True:
yield f”counting, {count}nn”
count += 1
@app.route(route=”stream”, methods=[func.HttpMethod.GET])
async def stream_count(req: Request) -> StreamingResponse:
“””Endpoint to stream of chronological numbers.”””
return StreamingResponse(generate_count(), media_type=”text/event-stream”)
This example is an HTTP triggered function that receives and processes streaming data from a client in real time. It demonstrates streaming upload capabilities that can be helpful for scenarios like processing continuous data streams and handling event data from IoT devices.
import azure.functions as func
from azurefunctions.extensions.http.fastapi import JSONResponse, Request
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
@app.route(route=”streaming_upload”, methods=[func.HttpMethod.POST])
async def streaming_upload(req: Request) -> JSONResponse:
“””Handle streaming upload requests.”””
# Process each chunk of data as it arrives
async for chunk in req.stream():
process_data_chunk(chunk)
# Once all data is received, return a JSON response indicating successful processing
return JSONResponse({“status”: “Data uploaded and processed successfully”})
def process_data_chunk(chunk: bytes):
“””Process each data chunk.”””
# Add custom processing logic here
pass
Note, you must use an HTTP client library to make streaming calls to a function’s FastAPI endpoints. The client tool or browser you’re using might not natively support streaming or could only return the first chunk of data. You can use a client script like this to send streaming data to an HTTP endpoint.
import openai
from azurefunctions.extensions.http.fastapi import Request, StreamingResponse
import asyncio
import os
# Azure Function App
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
endpoint = os.environ[“AZURE_OPEN_AI_ENDPOINT”]
api_key = os.environ[“AZURE_OPEN_AI_API_KEY”]
# Azure Open AI
deployment = os.environ[“AZURE_OPEN_AI_DEPLOYMENT_MODEL”]
temperature = 0.7
client = openai.AsyncAzureOpenAI(
azure_endpoint=endpoint,
api_key=api_key,
api_version=”2023-09-01-preview”
)
# Get data from Azure Open AI
async def stream_processor(response):
async for chunk in response:
if len(chunk.choices) > 0:
delta = chunk.choices[0].delta
if delta.content: # Get remaining generated response if applicable
await asyncio.sleep(0.1)
yield delta.content
# HTTP streaming Azure Function
@app.route(route=”stream-cities”, methods=[func.HttpMethod.GET])
async def stream_openai_text(req: Request) -> StreamingResponse:
prompt = “List the 100 most populous cities in the United States.”
azure_open_ai_response = await client.chat.completions.create(
model=deployment,
temperature=temperature,
max_tokens=1000,
messages=[{“role”: “user”, “content”: prompt}],
stream=True
)
return StreamingResponse(stream_processor(azure_open_ai_response), media_type=”text/event-stream”)
Microsoft Tech Community – Latest Blogs –Read More
Discover your next integration inspiration at this year’s Build!
Get ready for an exciting digital experience at Microsoft Build 2024! Running May 21-23 in Seattle and online, this year’s event is all about delving deep into the cutting-edge world of AI and cloud technology. And if you’re eager to dive into the transformative world of Azure Integration Services, get ready for something special.
From seamless application and data integration to API management and powerful workflow automation, Azure Integration Services is revolutionizing the way businesses operate. Kantar, a global leader in marketing data, used Azure Integration Services to create the KantarHub, a centralized platform that simplifies data sharing and enhances security, integrating approximately 150 internal applications. Össur, a prosthetic innovation leader, migrated its diverse legacy apps to the cloud with Azure Integration Services, ensuring uninterrupted operations and improving data security and API access. These examples highlight how Azure Integration Services is transforming customer operations through seamless integration and increased efficiency.
In this blog, we’ll unpack the major announcements for Azure Integration Services from this year’s Build event. Register today to attend!
Azure API Management
With the rise in Gen AI app usage, there’s an urgent need for enterprise-wide, federated access to manage and secure endpoints. This year, we’re excited to announce GenAI Gateway capabilities in Azure API Management to tackle these challenges for Azure OpenAI Services endpoints (general availability).
As a first step, we’ve simplified the onboarding process so you can now import all Azure OpenAI endpoints into the Azure API Management platform with a single click. These endpoints will be protected by Azure API Management’s built-in managed identity authentication. For scaled workloads, we provide load balancing, rate limiting, and out-of-box observability support.
Here’s a rundown of all the policies and features we’ve added:
Import Azure OpenAI as an API: New Import Azure OpenAI as an API in Azure API management provides an easy single click experience to import your existing Azure OpenAI endpoints as APIs and simplifies the onboarding process.
Azure OpenAI Token Limit Policy: Manage and enforce token-based limits per API consumer to ensure fair usage.
Azure OpenAI Emit Token Metric Policy: Get detailed monitoring and analysis by logging the token usage metrics and sending those to Azure Application Insights.
Load Balancer and Circuit Breaker: Distribute the load across multiple Azure OpenAI endpoints with support for various load distribution strategies ensuring optimal performance and reliability.
Azure OpenAI Semantic Caching Policy (public preview): Optimize token usage by caching completions for semantically similar prompts improving response performance.
Click here to learn more about the GenAI Gateway capabilities in Azure API Management. We launched the “Gen AI Gateway Accelerator,” a reference implementation that demonstrates how to provision and interact with Generative AI resources through API Management. This new scenario in the APIM landing zone accelerator helps accelerate our customers on their path to Gen AI production workloads. Learn more about the “Gen AI Gateway Accelerator” here.
In addition, we have two features now in General Availability (GA):
OData API Type : First-class support for OData makes it easier for customers to publish OData APIs in API Management, including the ability to secure them with standard API protections. You can now use Azure API Management for publishing APIs from platforms like SAP, Oracle, Dataverse, and others that expose OData APIs.
gRPC API Type in Self-Hosted Gateway: Seamlessly manage your gRPC services as APIs within Azure API Management.
Azure API Center
Another exciting announcement—Azure API Center is now in General Availability! Complementing Azure API Management, Azure API Center is a centralized solution that offers a unified inventory for seamless discovery, consumption, and governance of APIs, regardless of their type, lifecycle stage, or deployment location. With Azure API Center, your organization can effectively manage your API landscape and promote efficiency, consistency, and innovation across the board.
Key features of Azure API Center include:
API Inventory Management: Create an up-to-date API catalog that includes essential metadata like API names, descriptions, lifecycle stages, and owners. Custom metadata can be added to capture organization-specific API information.
API Cataloging for Azure API Management: Quickly import APIs into API Center via a single CLI command, creating a cohesive center across different API Management services.
API Design Governance: Enable API best practices at scale and enforce design rules across your organization. This empowers API developers to ensure quality and uniformity across all produced APIs.
API Reusability: Foster reusability by empowering consumers to swiftly discover and utilize the appropriate APIs.
API Development Enhancement: Seamlessly integrate with our API Center Visual Studio Code extension, enhancing and simplifying the API development process.
Azure Logic Apps
By simplifying and automating how you connect and integrate various applications, services, and data sources in the cloud, Azure Logic Apps users can create and run automated workflows with little to no code. Recent updates to the platform include new features that enhance seamless management of integration flows, simplify legacy integration, and enable efficient B2B integration.
Seamless Management of Integration Flows
Efficiently monitoring, troubleshooting, and updating automated workflows can be challenging, especially when dealing with multiple integrations. To address these pain points, we’ve introduced:
Support for Zero Downtime deployment scenarios in the portal (public preview for Logic Apps Standard): Zero downtime deployment is a technique that allows updating an application without affecting its availability or performance. Logic Apps Standard now supports zero downtime deployment by using deployment slots, which are isolated environments that can host different versions of the application and can be swapped with the production slot without any interruption. Click here more details.
Logic Apps Monitoring dashboard for workflow monitoring, troubleshooting and resubmissions (public preview): We have released UI dashboards for Logic Apps Standard to help with diagnosis and troubleshooting of Logic Apps workflow runs and failures. The dashboard also offers the ability to take actions such as bulk resubmission of failed runs.
Advanced Development and Customization
Developers need the flexibility to customize workflows and integrate the latest technologies seamlessly, while also benefiting from efficient debugging and development environments.
.NET 8 Custom Code Support (public preview for Logic Apps Standard): We’ve extended our built-in action capabilities to include support for calling .NET 8 custom code. Within a Logic Apps workspace, you can now effortlessly develop and debug your custom code right alongside your workflows, streamlining your development process with the most up-to-date .NET technology.
Improved Onboarding Experience on VS Code for Logic Apps Standard (general availability): Extend the Logic App designer to empower users to transition from developing workflows in the cloud to a local environment. The intuitive no-code designer of Logic Apps combined with the powerful pro-code capabilities of VS Code has enabled developers to build, run and test their Logic App workflows locally with features such as breakpoint debugging.
Logic Apps Standard Deployment Scripting Tools in VS Code (public preview for Logic Apps Standard): For Standard logic app workflows that run in single-tenant Azure Logic Apps, you can use Visual Studio Code with the Azure Logic Apps Standard extension to locally develop, test, and store your logic app project using any source control system. You can also use the extension to streamline the creation of deployment pipelines, automating the deployment of your Logic Apps Standard infrastructure and code. Click here for more technical details.
B2B Integration
Managing complex B2B transactions and integrations requires robust, scalable solutions and efficient management tools. And, we have new features to help with these transactions:
EDI (X12/EDIFACT) processing with built in actions (general availability): Run B2B workloads at scale with connectors that can process single or batched EDI messages and larger payloads, providing greater control over performance.
Integration Account Enhancements (public preview): Integration Account Premium offers UI based Trading Partner management capabilities and centralized store for artifacts including maps and schemas. With this release, we have enabled Availably Zone support for Integration Account.
Mainframes and midranges Integration
Extending the functionality of legacy systems to the cloud without extensive re-investment can be difficult. That’s why we have connectors for IBM mainframes and midranges.
Azure Logic Apps connectors for IBM Mainframe and Midranges: Preserve the value of your workloads running on mainframes and midranges by extending them to the Azure Cloud, without investing more resources on the mainframe or midrange environments using Azure Logic Apps. Click here for more technical details.
Azure Service Bus
Azure Service Bus is a fully managed enterprise message broker that ensures secure and efficient delivery of data messages between different parts of your system, even when they’re disconnected or processing tasks at different speeds. At Build, we’re thrilled to announce a new feature: batch delete. Currently in preview, this feature empowers customers to delete messages on the service side from an entity or the dead letter queue in batches of up to 4,000 messages.
Azure Event Grid
Like an event dispatcher for your cloud, Azure Event Grid triggers actions across your applications and services in near real-time whenever something significant happens. New features are generally available that are tailored to customers who are looking for a pub-sub message broker that can enable Internet of Things (IoT) solutions using MQTT protocol and can help build event-driven applications.
These capabilities enhance Event Grid’s MQTT broker capability, make it easier to transition to Event Grid namespaces for push and pull delivery of messages, and integrate new sources. Customers can now:
Use the Last Will Testament feature in compliance with MQTT v5 and MQTT v.3.1.1 specifications, so applications can get notifications when clients get disconnected, enabling management of downstream tasks to prevent performance degradation.
Create data pipelines that utilize both Event Grid Basic resources and Event Grid Namespace Topics (supported in Event Grid Standard). This means customers can utilize Event Grid namespace capabilities such as MQTT broker without needing to reconstruct existing workflows.
Support new event sources, such as Microsoft Entra ID and Microsoft Outlook, leveraging Event Grid’s support for the Microsoft Graph API. This means customers can use Event Grid for new use cases, like when a new employee is hired or a new email is received, to process that information and send to other applications for more action.
For more technical details on these announcements, click here.
See you at Build for these exciting sessions!
Don’t miss the chance to explore these exciting updates at Microsoft Build 2024. Register now and if you’re attending in-person, be sure to stop by the Azure API Management booth in The Hub! You can meet with the engineering and product teams behind API Management and API Center, and further explore Azure Integration Services capabilities to discover exciting new solutions.
Join us for these breakout sessions both in-person or online:
Unleash the Potential of APIs with Azure API Management: Through practical demos we’ll show how to use Azure API Management to expose Azure OpenAI services, manage OpenAI tokens allocation, distribute load across multiple model deployments and gain valuable insights into token usage throughout your intelligent applications portfolio. Explore how Azure API Center revolutionizes API governance and discoverability, driving innovation and efficiency in your organization’s operations.
GenAI Gateway Capabilities in Azure API Management: We will demonstrate how API Management can be configured for authentication and authorization for OpenAI endpoint, enforcing rate limits based on OpenAI tokens used, load balancing across multiple OpenAI endpoints and more.
Microsoft Tech Community – Latest Blogs –Read More
Azure Functions: SDK type bindings for Azure Blob Storage with Azure Functions in Python (Preview)
Azure Functions triggers and bindings enable you to easily integrate event and data sources with function applications. With SDK type bindings, you can use types from service SDKs and frameworks, providing more capability beyond what is currently offered. SDK type bindings for Azure Storage Blob when using Python in Azure Functions is now in Preview.
SDK type bindings for Azure Storage Blob enable the following key scenarios:
Downloading and uploading blobs of large sizes, reducing current memory limitations and GRPC limits.
Improved performance by using blobs with Azure Functions
To get started using SDK type bindings for Azure Storage Blob, the following prerequisites are required:
Azure Functions runtime version 4.34.1, or a later version.
Python version 3.9, or a later supported version.
Python v2 programming model
Note that currently, only synchronous SDK types are supported.
Then, enable the feature in your Azure Function app:
Add the azurefunctions-extensions-bindings-blob extension package to the requirements.txt file in the project.
Add this code to the function_app.py file in the project, which imports the SDK type bindings:
import azurefunctions.extensions.bindings.blob as blob
This example shows how to get the BlobClient from both a Blob storage trigger (blob_trigger) and from the input binding on an HTTP trigger (blob_input).
import logging
import azure.functions as func
import azurefunctions.extensions.bindings.blob as blob
app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)
@app.blob_trigger(
arg_name=”client”, path=”PATH/TO/BLOB”, connection=”AzureWebJobsStorage”
)
def blob_trigger(client: blob.BlobClient):
logging.info(
f”Python blob trigger function processed blob n”
f”Properties: {client.get_blob_properties()}n”
f”Blob content head: {client.download_blob().read(size=1)}”
)
@app.route(route=”file”)
@app.blob_input(
arg_name=”client”, path=”PATH/TO/BLOB”, connection=”AzureWebJobsStorage”
)
def blob_input(req: func.HttpRequest, client: blob.BlobClient):
logging.info(
f”Python blob input function processed blob n”
f”Properties: {client.get_blob_properties()}n”
f”Blob content head: {client.download_blob().read(size=1)}”
)
return “ok”
You can view other SDK type bindings samples for Blob storage in the Python extensions repository:
ContainerClient type
StorageStreamDownloader type
Microsoft Tech Community – Latest Blogs –Read More
Macro Excel : create an excel for each city and send it to the email adresse
Hello can you help me please ? i would like to create using excel VBA macro a code wich will create an Independent excel from with each city and send it to the corresponding mail in the excel.
For exemple in the following testing file : i would like to create an excel for each of the cities and send it to the following emails : also each of the excel will only include data of the city corresponding to the excel
GE SOTRemail address removed for privacy reasonsAD MAD NFemail address removed for privacy reasonsIALTERemail address removed for privacy reasonsLATTEemail address removed for privacy reasonsMOP DOTemail address removed for privacy reasons
(so it will basically create 5 excel and send them to these emails)
Thank you in advance for your help 🙂
Hello can you help me please ? i would like to create using excel VBA macro a code wich will create an Independent excel from with each city and send it to the corresponding mail in the excel. For exemple in the following testing file : i would like to create an excel for each of the cities and send it to the following emails : also each of the excel will only include data of the city corresponding to the excelGE SOTRemail address removed for privacy reasonsAD MAD NFemail address removed for privacy reasonsIALTERemail address removed for privacy reasonsLATTEemail address removed for privacy reasonsMOP DOTemail address removed for privacy reasons (so it will basically create 5 excel and send them to these emails)Thank you in advance for your help 🙂 Read More
Teams: Organisationsweit aktivieren
Hallo zusammen,
wir haben das Problem, dass wir im Admin-Center unter Einstellungen/Einstellungen der OrganisationseinstellungenMicrosoft Teams für alle benutzer aktivieren möchten und dabei
folgende Meldung erhalten und nicht wissen, was wir tun können:
Wir können Ihre Lizenzänerungen nicht speichern. Schließen Sie diese Einstellung, aktualisieren Sie die Seite und versuchen Sie es noch mal.
Wir haben bereits mehrere Browser verwendet, den Virenscanner und die Firewall deaktiviert – alles ohne Nutzen.
Hat hier jemand noch eine Idee?
Hallo zusammen, wir haben das Problem, dass wir im Admin-Center unter Einstellungen/Einstellungen der OrganisationseinstellungenMicrosoft Teams für alle benutzer aktivieren möchten und dabeifolgende Meldung erhalten und nicht wissen, was wir tun können: Wir können Ihre Lizenzänerungen nicht speichern. Schließen Sie diese Einstellung, aktualisieren Sie die Seite und versuchen Sie es noch mal. Wir haben bereits mehrere Browser verwendet, den Virenscanner und die Firewall deaktiviert – alles ohne Nutzen. Hat hier jemand noch eine Idee? Read More
Images are not displayed in incoming emails
Hello,
Since 10 days or so, Outlook 365 does not display images anymore. See below.
Note these images are displayed alright in the New Outlook, in https://outlook.live.com/ and in Mail under iOS.
Of course I have already unchecked both options in File/Options/Trust Center/Automatic Download. See below.
What else should I do to get those images painted?
Thank you!
Stefano
Hello,Since 10 days or so, Outlook 365 does not display images anymore. See below.Note these images are displayed alright in the New Outlook, in https://outlook.live.com/ and in Mail under iOS.Of course I have already unchecked both options in File/Options/Trust Center/Automatic Download. See below.What else should I do to get those images painted? Thank you!Stefano Read More
Policy Tip Text not working for some policies
Hi,
I have an issue with the Policy Tip Text not showing for a specific rule in Outlook. It does however show in OWA.
My rule is quite simple. Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation.
The policy tip should appear and allow the user to override by providing a business justification.
The policy tip does not appear in the Outlook client, but does appear in OWA.
If I change the condition to Recipient Domain is hotmail.com, gmail.com and Content is shared from Microsoft 365 with people outside my organisation then the policy tip does appear.
Does the policy tip not work when using file extensions in the Outlook Client?
Secondly, when the rule is configured as Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation, sending through OWA (with the user overriding the policy restriction), the email is still blocked from being sent to an external recipient. The email is sent, but the DLP rule still blocks the email and the user receives the email block notification.
I have other policies where the override works successfully.
Any ideas on how to fix this issues?
Thanks,
Ben
Hi,I have an issue with the Policy Tip Text not showing for a specific rule in Outlook. It does however show in OWA.My rule is quite simple. Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation. The policy tip should appear and allow the user to override by providing a business justification. The policy tip does not appear in the Outlook client, but does appear in OWA. If I change the condition to Recipient Domain is hotmail.com, gmail.com and Content is shared from Microsoft 365 with people outside my organisation then the policy tip does appear.Does the policy tip not work when using file extensions in the Outlook Client? Secondly, when the rule is configured as Condition = File extension is docx, pdf, xlsx, pptx and Content is shared from Microsoft 365 with people outside my organisation, sending through OWA (with the user overriding the policy restriction), the email is still blocked from being sent to an external recipient. The email is sent, but the DLP rule still blocks the email and the user receives the email block notification. I have other policies where the override works successfully. Any ideas on how to fix this issues? Thanks,Ben Read More
Feature request: ability to submit feedback on EDR blocks
Good day,
I’d like to suggest to add the ability to report behavioral blocks to Microsoft for a review.
The current reporting feature is focused on files and hashes and requires a file or hash to be able to submit something – which does not make any sense for behavioral detections.
To make it a bit clearer I attached a screenshot of a behavioral false positive – there is currently no lightweight approach to report that I believe – but we’d really love to be able to provide quicker feedback on behavioral blocks.
Good day, I’d like to suggest to add the ability to report behavioral blocks to Microsoft for a review.The current reporting feature is focused on files and hashes and requires a file or hash to be able to submit something – which does not make any sense for behavioral detections. To make it a bit clearer I attached a screenshot of a behavioral false positive – there is currently no lightweight approach to report that I believe – but we’d really love to be able to provide quicker feedback on behavioral blocks. Read More
App running as background process
I use this to open my company portal via powershell :
start-process companyportal:
please is there any way to run company portal on background (task manager) without display the company portal on desktop.
thanks.
I use this to open my company portal via powershell :start-process companyportal:please is there any way to run company portal on background (task manager) without display the company portal on desktop.thanks. Read More
Identifier(s) in API calls to load mail folders and mails from folders
Hi all.
I am trying to load user folders with call to list user folders , and later emails for given folder with call to list emails in given folder .
In both calls common part is:
GET /users/{id | userPrincipalName}…
On Azure portal userPrincipalName parameter is editable:
Is it a must to use Object ID (below) for accessing user and so forth ?
For my use case it would be of great benefit to use User principal name , but what happens if someone changes it ?
Thanks in advance,
Dragan
Hi all. I am trying to load user folders with call to list user folders , and later emails for given folder with call to list emails in given folder .In both calls common part is:GET /users/{id | userPrincipalName}…On Azure portal userPrincipalName parameter is editable:Is it a must to use Object ID (below) for accessing user and so forth ?For my use case it would be of great benefit to use User principal name , but what happens if someone changes it ? Thanks in advance,Dragan Read More
Syncing Project to Planner
I’d like to use a PowerAutomate flow to sync my MS Project Online to MS Planner (it would be amazing if I could sync so that the tasks are copied to the same buckets that I defined in Project Online and in Planner too). I haven’t been able to find a flow that will help me achieve this Project Online to Planner synchronization. Does anyone have suggestions?
I’d like to use a PowerAutomate flow to sync my MS Project Online to MS Planner (it would be amazing if I could sync so that the tasks are copied to the same buckets that I defined in Project Online and in Planner too). I haven’t been able to find a flow that will help me achieve this Project Online to Planner synchronization. Does anyone have suggestions? Read More
Top Stories: May 21, 2024
Take a look!
English Top Stories: May 21, 2024 | Microsoft
Français À la une : 21 mai 2024 | Microsoft
Español Novedades más relevantes: 21 de mayo de 2024 | Microsoft
Português Blog de parceiro das Américas | Microsoft
Take a look!
English Top Stories: May 21, 2024 | Microsoft
Français À la une : 21 mai 2024 | Microsoft
Español Novedades más relevantes: 21 de mayo de 2024 | Microsoft
Português Blog de parceiro das Américas | Microsoft Read More
steps to transfer 365 accounts with email from service provider to self manage
We have our 365 accounts managed by a service provider. we want to stop using their services so we need to transfer administration of our accounts to self administor. where can I find instructions so I can make sure we don’t have any loss of data. The microsoft account subscriptions are through the service provider so we need to purchase our own subscriptions for email and office (desktop). I know how to self manage 365 accounts, but I want to make sure with the transfer we don’t lose our email data with the transfer from their subscriptions to our own.
We have our 365 accounts managed by a service provider. we want to stop using their services so we need to transfer administration of our accounts to self administor. where can I find instructions so I can make sure we don’t have any loss of data. The microsoft account subscriptions are through the service provider so we need to purchase our own subscriptions for email and office (desktop). I know how to self manage 365 accounts, but I want to make sure with the transfer we don’t lose our email data with the transfer from their subscriptions to our own. Read More
Announcing key updates to Responsible AI features and content filters in Azure OpenAI Service
We’re excited to announce the release of new Responsible AI features and content filter improvements in Azure OpenAI Service (AOAI) and AI Studio, spanning from new unified content filters, to customizable content filters for DALL-E and GPT-4 Turbo Vision deployments, safety system message templates in the AOAI Studio, asynchronous filters now available for all AOAI customers, and updates to protected material and image generation features.
Unified content filters
We are excited to announce that a new unified content filter experience is coming soon to Azure AI. This update will streamline the process of setting up content filters across different deployments and various products such as Azure AI Studio, AOAI, and Azure AI Content Safety for a more uniform user experience. Content filters enable users to effectively block harmful content, whether it’s text, images, or multimodal forms. With this unified approach, users have the flexibility to establish a content filtering policy tailored to their particular needs and scenarios.
Configurable content filters for DALL-E and GPT-4 Turbo with Vision GA
The integrated content filtering system in AOAI provides Azure AI Content Safety content filters by default, and they detect and the output of harmful content. Furthermore, we also provide a range of different content safety customization options for the AOAI GPT model series. Today, we are releasing configurable content filters for DALL-E 2 and 3, and GPT-4 Turbo Vision GA deployments, enabling content filter customization based on specific use case needs. Customers can configure input and output filters, adjust severity levels for the content harms categories and add additional applicable RAI models and capabilities such as Prompt Shields and custom blocklists. Customers who have been approved for modified content filters can turn the content filters off or use annotate mode to return annotations via API response, without blocking content. Learn more.
Asynchronous Filters
In addition to the default streaming experience in AOAI – where completions are vetted before they are returned to the user, or blocked in case of a policy violation – we’re excited to announce that all customers now have access to the Asynchronous Filter feature. Content filters are run asynchronously, and completion content is returned immediately with a smooth and fast token-by-token streaming experience. No content is buffered, which allows for a faster streaming experience at zero latency associated with content safety. Customers must be aware that while the feature improves latency, it’s a trade-off against the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and policy violation signals are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user. Content that is retroactively flagged as protected material may not be eligible for Customer Copyright Commitment coverage. Read more about Asynchronous Filter and how to enable it.
Safety System Messages
System messages for generative AI models are an effective strategy for additional AI content safety. The AOAI Studio and AI Studio are now supporting safety system message templates directly in the playground that can be quickly tested and deployed, covering a range of different safety related topics such as preventing harmful content, jailbreak attempts, as well as grounding instructions. Learn more.
Protected Materials
Protections for Azure OpenAI GPT-based models
In November 2023, Microsoft announced the release of Protected Material Detection for Text in AOAI and Azure AI Content Safety. Soon, this model will upgrade to version 2.0 and identifies content that highly resembles pre-existing content. This update also prevents attempts to subvert the filter by asking for known modifications of the original text, e.g. the original text with repeated characters or more whitespace. Soon, the Protected Material Detection for Code model version 2.0 will update its attribution feature to flag 2023 public GitHub repository code from flagging 2021 repository code.
Updated Features in Azure OpenAI Service DALL-E
AOAI now prevents DALL-E from generating works that closely resemble certain types of known creative content, such as studio characters and contemporary artwork. It does this by re-interpreting the text prompt to DALL-E, removing keywords or phrases associated with creative content categories. Below are examples showing image outputs before and after the modification is applied. Please note that the DALL-E model is non-deterministic and so is likely not going to generate the same image with the same prompt each time.
New Responsible AI features in Azure AI Content Safety & Azure AI Studio
Custom Categories
This week at Build 2024 we also previewed other important features for responsible AI, one of which will be coming soon to Azure OpenAI Service: Custom Categories. Learn more about Custom Categories.
Get started today
Visit Azure OpenAI Service Studio: oai.azure.com
Visit Azure AI Studio: ai.azure.com
Visit Azure AI Content Safety Studio: aka.ms/contentsafetystudio
Microsoft Tech Community – Latest Blogs –Read More
Gen AI simplified: The azure_ai extension now generally available on Azure Database for PostgreSQL
We are thrilled to announce the general availability of the azure_ai extension on Azure Database for PostgreSQL. The azure_ai extension allows developers to seamlessly integrate Azure AI services from within their database using SQL queries. In conjunction with vector data this simplifies building Gen AI applications on Azure Database for PostgreSQL.
Features and Capabilities
With the azure_ai extension, you can now access Azure OpenAI, Azure AI Language services, Azure Translation and Azure Machine learning services with simple function calls from within SQL.
The azure_ai extension enables
Generation of embeddings with embedding models of creating embeddings with dimensions ranging from 384 to 3072. Embeddings can be generated as a scalar single embedding or as a batch for a set of them. Along with native vector data type using vector extension, embeddings can be generated as data is inserted or updated.
Calling into Azure AI Language services to perform summarization, sentiment analysis , Key phrase extraction or PII detection on your data.
Real-time text translation within your database with Azure AI translator simplifies building multi-lingual applications.
Real-time predictions enable many scenarios such as fraud detection, product recommendations, predictive maintenance or predictive healthcare. You can invoke custom trained models or pre-trained models from Azure Machine learning catalog that are hosted on online endpoints. Online inferencing endpoints are a highly scalable way to operationalize models for real-time low latency requests with features such as auto-scale and rich monitoring and debugging support.
Getting Started
To learn more about the azure_ai extension, and how it simplifies building GenAI applications on, visit our documentation below:
Azure AI Extension.
Azure AI Language Services integration
Azure AI Text Translation
Azure AI real-time machine learning scoring.
Vectors on Azure Database for PostgreSQL
Generative AI Overview
To learn even more about our Flexible Server managed service, see the Azure Database for PostgreSQL Flexible Server.
You can always find the latest features added to Flexible server in this release notes page.
Microsoft Tech Community – Latest Blogs –Read More
Build 2024: Unveiling performance and AI innovations in Azure Database for MySQL
Today, we’re thrilled to announce a suite of new features for Azure Database for MySQL that focus on performance enhancements, enterprise capabilities, and cutting-edge AI functionality designed to revolutionize your database management experience and efficiency. Read on to see how these innovations can elevate your workflows!
Microsoft Copilot in Azure: Unlock the benefits of Azure Database for MySQL with your AI companion (Public Preview)
We’re excited to announce Microsoft Copilot in Azure extends capabilities to Azure Database for MySQL. Microsoft Copilot in Azure is an AI-powered tool that leverages Large Language Models (LLMs) and the Azure control plane to help you get answers to your general questions and receive high quality recommendations to real-time problems. With this new integration with Azure Database for MySQL, you can converse with Microsoft Copilot in Azure to discover new features, determine when to enable new features to supplement your own scenarios, learn from summarized tutorials to enable features or build applications, and obtain tips and best practice recommendations to avoid issues.
Learn more: Documentation | Announcement blog with demo video coming soon!
Build RAG applications with Azure OpenAI and MySQL with Azure AI Search
We’re excited to announce that you can now create Retrieval-Augmented Generation (RAG) applications using Azure OpenAI and Azure Database for MySQL with Azure AI Search.
You can combine the smart, human-like responses of Azure OpenAI with MySQL’s powerful database management and Azure AI Search’s advanced search capabilities, making it easier to build apps that deliver relevant info quickly and efficiently. If you’re running applications (content management systems (CMS), e-commerce applications, or gaming sites) with data hosted in Azure Database for MySQL, enhance your user experience by building generative AI search and chat applications using LLMs available in Azure OpenAI and vector storage and indexing provided by Azure AI Search. Unleash the power of your data hosted on MySQL with the simple and seamless AI integrations on Azure!
Learn more: Demo video and sample architecture coming soon! | RAG in Azure AI Search documentation
Advancements in Azure Database for MySQL – Business Critical service tier (General Availability)
Achieve a 2x increase in throughput using Accelerated Logs (General Availability): We’re excited to announce the General Availability of Accelerated Logs, a feature that significantly boosts performance for write heavy workloads, offering up to a 2x improvement in throughput, out of the box, with no additional cost or application changes required. By reducing latency and enhancing data access speeds, the Accelerated Logs feature ensures that your mission-critical applications run more efficiently and smoothly on the Business Critical service tier. Try out this new feature to experience the difference in your workload performance!
Expand storage up to 32TB (General Availability) for your workloads using the Business Critical service tier. With storage auto-grow up to 32TB and auto-scale IOPs up to 80K, you can now run your large, growing mission-critical workloads worry-free on Azure!
Learn more: Documentation | Announcement blog with demo video coming soon!
Enhance data redundancy, availability, and auditing capabilities with on-demand backup and export (Public Preview)
With Public Preview of the on-demand backup and export feature, you can now easily export a physical backup of your MySQL flexible server to an Azure storage account (Azure blob storage) with just a few clicks on the Azure portal or with a single CLI command whenever you want. After exporting backups to blob storage, you can use them for multiple purposes, including:
Data recovery, redundancy, and availability. In addition to the automated backups managed by the service, you can export backups on-demand and use them for data recovery. In case of data corruption, accidental deletion, or hardware failure, simply restore the server to its previous state using this copy of your data.
Auditing. You can use exported physical backup files to restore on-premises MySQL servers to address the auditing, compliance, and archival requirements of an organization.
Compliance. Regulated industries must be able to export any data hosted by a cloud provider.
Avoid vendor lock in. Thake advantage of this solution to export data from MySQL flexible server or avoid vendor lock-in.
Learn more: Documentation | Announcement blog with demo video coming soon!
Simplify security management with Microsoft Defender for Cloud support (General Availability)
Last month, we announced the general availability of Microsoft Defender for Cloud support for Azure Database for MySQL – Flexible Server. The Defender for Cloud Advanced Threat Protection (ATP) feature simplifies security management of your MySQL flexible server by enabling effortless threat prevention, detection, and mitigation through increased visibility into and control over harmful events.
With the Defender for Cloud ATP feature, there’s no need to be a security expert to safeguard your MySQL flexible server against today’s growing threat landscape. ATP uses integrated security monitoring to detect anomalous database access and query patterns, as well as suspicious database activities, and provides targeted security recommendations and alerts.
Learn more: Demo video | Announcement blog
Conclusion
With the release of these capabilities, Azure Database for MySQL continues to be an industry leader for hosting your mission-critical applications on the cloud, offering top-tier performance for your workloads, enterprise capabilities and scale, enhanced monitoring, and robust backup and restore capabilities. The service seamlessly integrates with cutting-edge AI technologies through OpenAI, Azure Copilot, and Azure AI Search to deliver advanced functionalities and insights. Security is paramount, and with Microsoft Defender, your applications are protected by Microsoft’s expertise in cybersecurity, ensuring peace of mind against increasingly sophisticated threats. Azure Database for MySQL combines performance, innovation, and security to support your most demanding applications, while remaining on the open-source community MySQL version to protect against lock-ins.
To learn more about what’s new with Flexible Server, see What’s new in Azure Database for MySQL – Flexible Server. Stay tuned for more updates and announcements by following us on social media: YouTube | LinkedIn | X.
If you have any suggestions for or queries about our service, please let us know by emailing us at AskAzureDBforMySQL@service.microsoft.com. Thank you!
Microsoft Tech Community – Latest Blogs –Read More