Fine-Tune GPT-4o on Azure OpenAI Service
Get excited – you can now fine-tune GPT-4o using the Azure OpenAI Service!
We’re thrilled to announce the public preview of fine-tuning for GPT-4o on Azure OpenAI Service. After a successful private preview, GPT-4o fine-tuning is now available to all of our Azure OpenAI customers, offering unparalleled customization and performance.
Why fine-tuning matters
Fine-tuning is a powerful tool that allows you to tailor our advanced models to your specific needs. Whether you’re looking to enhance the accuracy of responses, ensure outputs align with your brand voice, reduce token consumption or latency, or optimize the model for a particular use case, fine-tuning lets you customize best-in-class models with your own proprietary data.
GPT-4o: Make a great model even better with your own training data
GPT-4o matches the performance of GPT-4 Turbo with improved efficiency – and delivers the best performance on non-English language content of any OpenAI model. With the launch of fine-tuning for GPT-4o, you can now customize it for your unique needs. Fine-tuning GPT-4o enables developers to train the model on domain-specific data, creating outputs that are more relevant, accurate, and contextually appropriate.
This release marks a significant milestone for Azure OpenAI Service, as it allows you to build highly specialized models that drive better outcomes, use fewer tokens with greater accuracy, and create truly differentiated models to support your use cases.
Fine-tuning capabilities
Today, we’re announcing the availability of text-to-text fine-tuning for GPT-4o. In addition to basic customization, we support advanced features to help you create models tailored to your needs:
Tool Calling: Include function and tool calls in your training data to empower your custom models to do even more! See the example after this list.
Continuous Fine-Tuning: Fine-tune previously fine-tuned models with new or additional data to update or improve accuracy
Deployable Snapshots: No need to worry about overfitting – you can now deploy snapshots, preserved at each epoch, and evaluate them alongside your final model
Built-in Safety: GPT-4, GPT-4o, and GPT-4o mini models have automatic guardrails in place to ensure that your fine-tuned models cannot generate harmful content.
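To make the tool-calling capability concrete, here’s a minimal sketch of one training example that includes a tool call, built with plain Python and the standard json module. The tool name, its schema, and the file name are illustrative assumptions rather than an official sample, and the exact field names should be verified against the Azure OpenAI fine-tuning documentation for your model version.

```python
# Minimal sketch (not an official sample): one JSONL training example that teaches
# the model when and how to call a tool. The tool "get_current_weather" and its
# schema are hypothetical; verify the exact fields in the Azure OpenAI docs.
import json

example = {
    "messages": [
        {"role": "system", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Seattle?"},
        {
            "role": "assistant",
            "tool_calls": [
                {
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",  # hypothetical tool
                        "arguments": json.dumps({"location": "Seattle, WA"}),
                    },
                }
            ],
        },
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a location.",
                "parameters": {
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
}

# Each training example becomes one line of the JSONL file you upload for fine-tuning.
with open("training_data.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```

Including both the tool definition and the assistant’s tool call in each example is what teaches the fine-tuned model when to invoke the tool and how to format its arguments.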
Fine-tuning for GPT-4o is available to customers using Azure OpenAI resources in North Central US and Sweden Central. Stay tuned as we add support in additional regions.
Lowering prices to make experimentation accessible
We’ve heard your feedback about the costs of fine-tuning and hosting models. To make it easier for you to experiment and deploy fine-tuned models, we’ve updated our pricing structure to:
Bill for training based on the total tokens trained – not the number of hours (see the estimation sketch below)
Reduce hosting charges by ~40% for some of our most popular models, including the GPT-35-Turbo family.
These changes make experimentation easier (and less expensive!) than ever before. You can find the updated pricing for fine-tuning models at Azure OpenAI Service – Pricing | Microsoft Azure.
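As a rough illustration of token-based training billing, the sketch below assumes that total tokens trained equals the number of tokens in your training file multiplied by the number of epochs. The per-1K-token rate is a placeholder, not a real price; always check the Azure OpenAI pricing page for actual rates.

```python
# Rough estimate of training cost under token-based billing.
# Assumption: total tokens trained = tokens in the training file x number of epochs.
# The rate used in the example is a PLACEHOLDER, not a published price.
def estimate_training_cost(tokens_in_file: int, epochs: int, price_per_1k_tokens: float) -> float:
    total_tokens_trained = tokens_in_file * epochs
    return total_tokens_trained / 1000 * price_per_1k_tokens

# Example: a 500,000-token training file run for 3 epochs at a hypothetical $0.01 per 1K tokens.
print(f"${estimate_training_cost(500_000, 3, 0.01):,.2f}")  # -> $15.00
```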
Get started today!
Whether you’re new to fine-tuning or an experienced developer, getting started with Azure OpenAI Service has never been easier. Fine-tuning is available through both Azure OpenAI Studio and Azure AI Studio, providing a user-friendly interface for those who prefer a GUI and robust APIs for advanced users.
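If you prefer the API route, here’s a minimal sketch of uploading training data and creating a fine-tuning job with the openai Python SDK against an Azure OpenAI resource. The endpoint, API version, file name, and model name are placeholders; the exact fine-tunable model name and supported API version for your region are listed in the fine-tuning how-to guide linked below.

```python
# Minimal sketch: create a GPT-4o fine-tuning job on an Azure OpenAI resource.
# Endpoint, API version, and model name below are placeholders -- confirm them
# against the Azure OpenAI fine-tuning documentation for your region.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",  # example version; use one that supports fine-tuning
)

# Upload the JSONL training file prepared earlier.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Create the fine-tuning job. For continuous fine-tuning, pass the name of a
# previously fine-tuned model here instead of the base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # illustrative base model name
)
print(job.id, job.status)
```

When the job finishes, you can deploy the resulting model – or one of its per-epoch snapshots – and evaluate it like any other deployment.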
Ready to get started?
Learn more about Azure OpenAI Service
Check out our How-To Guide for Fine Tuning with Azure OpenAI
Try it out with Azure AI Studio