Phi-3 fine-tuning and new generative AI models are available for customizing and scaling AI apps
Developing and deploying AI applications at scale requires a robust and flexible platform that can handle the complex and diverse needs of modern enterprises. This is where Azure AI services come into play, offering developers the tools they need to create customized AI solutions grounded in their organizational data.
One of the most exciting updates in Azure AI is the recent introduction of serverless fine-tuning for Phi-3-mini and Phi-3-medium models. This feature enables developers to quickly and easily customize models for both cloud and edge scenarios without the need for extensive compute resources. Additionally, updates to Phi-3-mini have brought significant improvements in core quality, instruction-following, and structured output, allowing developers to build more performant models without additional costs.
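Serverless fine-tuning is driven by a supervised training dataset. As an illustration of what preparing one might look like, the sketch below writes chat-style examples to a JSONL file, one JSON object per line — a common upload format for fine-tuning chat models. The exact schema Azure expects, the file name, and the example content are assumptions here; consult the Azure AI documentation for the authoritative format.

```python
import json

# Illustrative chat-format training examples. The "messages" schema shown
# here is a common convention for fine-tuning chat models; verify the exact
# fields required by the Azure fine-tuning service before uploading.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Account > Reset password, then follow the emailed link."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Where can I download invoices?"},
        {"role": "assistant", "content": "Invoices are under Billing > History; click any row to download a PDF."},
    ]},
]

# JSONL: one complete JSON object per line.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Keeping each example self-contained (system prompt included) makes the dataset easy to audit and to split into train/validation sets later.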
Azure AI continues to expand its model offerings, with the latest additions including OpenAI’s GPT-4o mini, Meta’s Llama 3.1 405B, and Mistral Large 2. These models provide customers with greater choice and flexibility, enabling them to leverage the best tools for their specific needs. The introduction of Cohere Rerank further enhances Azure AI’s capabilities, offering enterprise-ready language models that deliver superior search results in production environments.
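A reranker is typically called with a query and a list of candidate documents, and returns the most relevant subset. The sketch below only assembles such a request payload; the endpoint URL, field names, and `top_n` parameter follow the common rerank-API shape and are assumptions, not the documented Azure/Cohere contract.

```python
import json

# Hypothetical values -- replace with the endpoint URL and key from your
# own serverless deployment of Cohere Rerank in Azure AI.
ENDPOINT = "https://<your-deployment>.<region>.models.ai.azure.com/v1/rerank"
API_KEY = "<your-api-key>"

def build_rerank_request(query, documents, top_n=3):
    """Assemble a rerank payload: the service scores each document against
    the query and returns the top_n most relevant ones."""
    return {
        "query": query,
        "documents": documents,
        "top_n": min(top_n, len(documents)),
    }

payload = build_rerank_request(
    "How do I rotate storage account keys?",
    [
        "Rotating keys in the Azure portal: step-by-step guide.",
        "Pricing overview for storage accounts.",
        "Key rotation via the CLI with az storage account keys renew.",
    ],
)
print(json.dumps(payload, indent=2))
```

In a retrieval pipeline, this call would sit between the initial keyword or vector search and the generation step, trimming a broad candidate list down to the few passages worth passing to the model.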
The Phi-3 family of small language models (SLMs) developed by Microsoft has been a game-changer in the AI landscape. These models are not only cost-effective but also outperform other models of the same size and even larger ones. Developers can fine-tune Phi-3-mini and Phi-3-medium with their data to build AI experiences that are more relevant to their users, safely and economically. The small compute footprint and cloud and edge compatibility of Phi-3 models make them ideal for a variety of scenarios, from tutoring to enhancing the consistency and quality of responses in chat and Q&A applications.
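Once a fine-tuned Phi-3 model is deployed to a serverless endpoint, a chat or Q&A application would call it with an OpenAI-style chat-completions request. The sketch below builds such a request with the standard library only; the endpoint URL, key, and payload fields are placeholders assumed from the common chat-completions shape, and the actual network call is deliberately omitted.

```python
import json
import urllib.request

# Placeholder values -- use the endpoint URL and key from your own
# Phi-3 serverless deployment.
ENDPOINT = "https://<your-endpoint>.<region>.models.ai.azure.com/chat/completions"
API_KEY = "<your-api-key>"

def chat_request(user_message, temperature=0.2):
    """Build an OpenAI-style chat-completions request for a serverless
    Phi-3 endpoint (payload shape assumed, not taken from Azure docs)."""
    body = json.dumps({
        "messages": [
            {"role": "system", "content": "Answer using the company knowledge base."},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,  # low temperature for consistent answers
        "max_tokens": 256,
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = chat_request("Summarize our returns policy in two sentences.")
# urllib.request.urlopen(req) would send the call; omitted here because the
# endpoint above is a placeholder.
print(req.get_method(), req.full_url)
```

A low temperature and a fixed system prompt are simple levers for the response consistency the paragraph above describes; the fine-tuned weights do the rest.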
Microsoft’s collaboration with Khan Academy is a testament to the potential of Phi-3 models. Khan Academy uses Azure OpenAI Service to power Khanmigo for Teachers, an AI-powered teaching assistant that helps educators across 44 countries. Initial data shows that Phi-3 outperforms most other leading generative AI models in identifying and correcting student mistakes in math tutoring scenarios.
Azure AI’s commitment to innovation is further demonstrated by the introduction of Phi Silica, a powerful model designed specifically for the Neural Processing Unit (NPU) in Copilot+ PCs. This model empowers developers to build apps with safe, secure AI experiences, making Microsoft Windows the first platform to have a state-of-the-art SLM custom-built for the NPU.
The Azure AI model catalog now boasts over 1,600 models from various providers, including AI21, Cohere, Databricks, Hugging Face, Meta, Mistral, Microsoft Research, OpenAI, Snowflake, and Stability AI. This extensive selection ensures that developers have access to the best tools for their AI projects, whether they are working on traditional machine learning or generative AI applications.
Building AI solutions responsibly is at the core of AI development at Microsoft. Azure AI evaluations enable developers to iteratively assess the quality and safety of models and applications, informing mitigations and ensuring responsible AI deployment. Additional Azure AI Content Safety features, such as prompt shields and protected material detection, are now “on by default” in Azure OpenAI Service, providing an extra layer of security for developers.
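Iterative evaluation, at its simplest, means scoring model outputs against a small test set before each release. The toy sketch below illustrates the loop with a keyword-coverage metric of my own devising; Azure AI evaluations provide far richer quality and safety metrics, so treat this purely as a stand-in for the workflow, not the service.

```python
# Minimal evaluation-loop sketch: score answers on a small test set and
# report an aggregate metric. The metric here (keyword coverage) is
# illustrative only.

def keyword_coverage(answer, required_keywords):
    """Fraction of required keywords present in the answer (case-insensitive)."""
    answer_lower = answer.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in answer_lower)
    return hits / len(required_keywords)

test_set = [
    {"answer": "Rotate keys monthly via the portal or CLI.",
     "keywords": ["rotate", "monthly"]},
    {"answer": "Invoices live under Billing.",
     "keywords": ["billing", "pdf"]},
]

scores = [keyword_coverage(case["answer"], case["keywords"]) for case in test_set]
print(f"mean coverage: {sum(scores) / len(scores):.2f}")  # prints "mean coverage: 0.75"
```

Running a loop like this on every model or prompt change gives a concrete signal for the "assess, mitigate, redeploy" cycle the paragraph describes.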
Learn more about these recent exciting developments by checking out this blog: Announcing Phi-3 fine-tuning, new generative AI models, and other Azure AI updates to empower organizations to customize and scale AI applications | Microsoft Azure Blog