New controls for model governance and secure access to on-premises or custom VNET resources
New enterprise security and governance features in Azure AI for October 2024
At Microsoft, we’re focused on helping customers build and use AI that is trustworthy, meaning AI that is secure, safe, and private. This month, we’re pleased to highlight new security capabilities that support enterprise readiness, so organizations can build and scale GenAI solutions with confidence:
Enhanced model governance: Control which GenAI models are available for deployment from the Azure AI model catalog with new built-in and custom policies
Secure access to hybrid resources: Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway for your training, fine-tuning, and inferencing needs
Below, we share more information about these enterprise features and guidance to help you get started.
Control which GenAI models are available for deployment from the Azure AI model catalog with new built-in and custom policies (public preview)
The Azure AI model catalog offers over 1,700 models for developers to explore, evaluate, customize, and deploy. While this vast selection empowers innovation and flexibility, it can also present significant challenges for enterprises that want to ensure all deployed models align with their internal policies, security standards, and compliance requirements. Now, Azure AI administrators can use new Azure policies to restrict which models from the Azure AI model catalog can be deployed, for greater control and compliance.
With this update, organizations can use pre-built policies for Model as a Service (MaaS) and Model as a Platform (MaaP) deployments or create custom policies for Azure OpenAI Service and other AI services using detailed guidance:
1) Apply a built-in policy for MaaS and MaaP
Admins can now use the “[Preview] Azure Machine Learning Deployments should only use approved Registry Models” built-in policy in the Azure portal. This policy enables admins to specify which MaaS and MaaP models are approved for deployment. When developers access the Azure AI model catalog from Azure AI Studio or Azure Machine Learning, they will only be able to deploy approved models. See the documentation here: Control AI model deployment with built-in policies – Azure AI Studio.
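To make the shape of such an assignment concrete, here is a minimal sketch that builds a policy-assignment body restricting deployments to an approved model list. The definition ID placeholder, the `allowedModels` parameter name, and the registry model IDs are illustrative assumptions; check the built-in policy’s definition in the Azure portal for the real values.

```python
import json

def build_policy_assignment(definition_id, allowed_models):
    """Return a policy-assignment body that restricts registry model deployments."""
    return {
        "properties": {
            "displayName": "Restrict Azure ML deployments to approved registry models",
            "policyDefinitionId": definition_id,  # ID of the built-in policy
            "parameters": {
                # Parameter name is a hypothetical stand-in for this sketch
                "allowedModels": {"value": allowed_models}
            },
            "enforcementMode": "Default",  # deny non-compliant deployments
        }
    }

assignment = build_policy_assignment(
    "/providers/Microsoft.Authorization/policyDefinitions/<built-in-policy-guid>",
    [
        "azureml://registries/azureml/models/Phi-3-mini-4k-instruct",
        "azureml://registries/azureml-meta/models/Llama-2-7b",
    ],
)
print(json.dumps(assignment, indent=2))
```

The assignment body would then be applied at the subscription or resource-group scope through the portal, CLI, or an ARM/Bicep template.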
2) Build a custom policy for AI Services and Azure OpenAI Service
Admins can now create custom policies for Azure AI Services and models in Azure OpenAI Service using detailed guidance. With custom policies, admins can tailor which services and models are accessible to their development teams, helping to align deployments with their organization’s compliance requirements. See the documentation here: Control AI model deployment with custom policies – Azure AI Studio.
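A custom policy of this kind follows the standard Azure Policy if/then rule structure. The sketch below denies Azure OpenAI model deployments outside an approved list; the field alias and model names are assumptions for illustration, so confirm the exact alias against the Azure Policy documentation before use.

```python
import json

# Models this hypothetical organization has approved for deployment
ALLOWED_MODELS = ["gpt-4o", "text-embedding-3-large"]

custom_policy = {
    "properties": {
        "displayName": "Allowed Azure OpenAI models",
        "policyType": "Custom",
        "mode": "All",
        "policyRule": {
            "if": {
                "allOf": [
                    # Target Azure OpenAI / AI Services model deployments
                    {
                        "field": "type",
                        "equals": "Microsoft.CognitiveServices/accounts/deployments",
                    },
                    # Deny when the deployed model is not on the approved list
                    # (field alias below is an illustrative assumption)
                    {
                        "not": {
                            "field": "Microsoft.CognitiveServices/accounts/deployments/model.name",
                            "in": ALLOWED_MODELS,
                        }
                    },
                ]
            },
            "then": {"effect": "deny"},
        },
    }
}
print(json.dumps(custom_policy, indent=2))
```

Once defined, the policy is assigned at the desired scope like any other custom policy, and non-approved deployment requests fail at creation time.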
Together, these policies provide comprehensive coverage for creating an allowed model list and enforcing it across Azure Machine Learning and Azure AI Studio.
Securely access on-premises and custom VNET resources from your managed VNET with Application Gateway (public preview)
Virtual networks keep your network traffic securely isolated in your own tenant, even when other customers use the same physical servers. Previously, Azure AI customers could only access Azure resources from their managed virtual network (VNET) that were supported by private endpoints (see a list of supported private endpoints here). This meant hybrid cloud customers using a managed VNET could not access machine learning resources that were not within an Azure subscription, such as resources located on-premises, or resources located in their custom Azure VNET but not supported with a private endpoint.
Now, Azure Machine Learning and Azure AI Studio customers can securely access on-premises or custom VNET resources for their training, fine-tuning, and inferencing scenarios from their managed VNET using Application Gateway. Application Gateway is a load balancer that makes routing decisions based on the URL of an HTTPS request. Application Gateway supports a private connection from a managed VNET to any resource that uses the HTTP or HTTPS protocol. With this capability, customers can access the machine learning resources they need from outside their Azure subscription without compromising their security posture.
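To illustrate the URL-based routing decision Application Gateway makes, here is a conceptual sketch in Python. The path patterns and backend pool names are hypothetical, not a real gateway configuration; in practice these rules live in the gateway’s URL path map.

```python
from fnmatch import fnmatch

# Hypothetical path-based routing rules: (URL pattern, backend pool)
PATH_RULES = [
    ("/artifactory/*", "jfrog-backend-pool"),        # on-prem JFrog Artifactory
    ("/api/v1/*",      "private-api-backend-pool"),  # private API in a custom VNET
]
DEFAULT_POOL = "default-backend-pool"

def route(request_path: str) -> str:
    """Pick a backend pool for an incoming HTTPS request path."""
    for pattern, pool in PATH_RULES:
        if fnmatch(request_path, pattern):
            return pool
    return DEFAULT_POOL

print(route("/artifactory/docker/my-image"))  # jfrog-backend-pool
print(route("/health"))                       # default-backend-pool
```

Traffic from the managed VNET reaches the gateway over a private endpoint, and the gateway forwards each request to the matching on-premises or custom VNET backend, so no hop is exposed to the public Internet.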
Supported scenarios for Azure AI customers using hybrid cloud
Today, Application Gateway is verified to support a private connection to JFrog Artifactory, Snowflake Database, and Private APIs, supporting critical enterprise use cases:
JFrog Artifactory is used to store custom Docker images for training and inferencing pipelines, store trained models ready to deploy, and for security and compliance of ML models and dependencies used in production. JFrog Artifactory may be in another Azure VNET, separate from the VNET used to access the ML workspace or AI Studio project. Thus, a private connection is necessary to secure the data transferred from a managed VNET to the JFrog Artifactory resource.
Snowflake is a cloud data platform where users may store their data for training and fine-tuning models on managed compute. To securely send and receive data, a connection to a Snowflake database should be entirely private and never exposed to the Internet.
Private APIs are used for managed online endpoints. Managed online endpoints are used to deploy machine learning models for real-time inferencing. Certain private APIs may be required to deploy managed online endpoints and must be secured through a private network.
Get started with Application Gateway
To get started with Application Gateway in Azure Machine Learning, see How to access on-premises resources – Azure Machine Learning | Microsoft Learn. To get started with Application Gateway in Azure AI Studio, see How to access on-premises resources – Azure AI Studio | Microsoft Learn.
How to use Microsoft Cost Management to analyze and optimize your Azure OpenAI Service costs
One more thing… As organizations increasingly rely on AI for core operations, it has become essential to closely track and manage AI spend. In this month’s blog, the Microsoft Cost Management team does a great job highlighting tools to help you analyze, monitor, and optimize your costs with Azure OpenAI Service. Read it here: Microsoft Cost Management updates.
Build secure, production-ready GenAI apps with Azure AI Studio
Ready to go deeper? Check out these top resources:
Azure security baseline for Azure AI Studio
5 Ways to Implement Enterprise Security with Azure AI Studio
Bicep template – Azure AI Studio basics
Whether you’re joining in person or online, we can’t wait to see you at Microsoft Ignite 2024! We’ll share the latest from Azure AI and go deeper into enterprise-grade security capabilities with these sessions:
Keynote: Microsoft Ignite Keynote
Breakout: Trustworthy AI: Future trends and best practices
Breakout: Secure and govern custom AI built on Azure AI and Copilot Studio
Breakout: Build secure GenAI apps with Azure AI Studio (in-person only)
Demo: Secure your GenAI project in 15 minutes with Azure AI Studio (in-person only)