AI Innovation Continues: Introducing Mistral Large 2 and Mistral Nemo in Azure
Mistral Large 2 & Mistral Nemo: These models offer state-of-the-art reasoning, multilingual support, and coding capabilities, enhancing our AI offerings.
And coming soon to the catalog:
Mistral’s Codestral models, which are designed specifically for code generation tasks and are trained on 80+ programming languages, including Python, Java, C, C++, JavaScript, and Bash.
Fine-tuning for Mistral Large 2 and Mistral Nemo.
Mistral Large 2
According to Mistral AI, Mistral Large 2 is an advanced Large Language Model (LLM) known for its state-of-the-art reasoning, knowledge, and coding capabilities.
Multilingual Support: Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
Proficient in Coding: Trained on 80+ coding languages, such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran.
Agent-centric: Features best-in-class agentic capabilities with native function calling and JSON outputting (see the sketch after this section).
Advanced Reasoning: Demonstrates state-of-the-art mathematical and reasoning capabilities.
Context Length: Supports a context length of 128K.
Input/Output: Accepts text input only and generates text output only.
Source: Large Enough | Mistral AI | Frontier AI in your hands
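To give a flavor of the agent-centric capabilities above, here is a minimal, illustrative function-calling sketch using the azure-ai-inference Python package against a deployed serverless endpoint. The endpoint and key environment variable names and the get_weather tool are placeholders, and the exact request shape may differ across SDK versions.

```python
# Minimal function-calling sketch (assumes a deployed Mistral Large 2 serverless endpoint).
# AZURE_INFERENCE_ENDPOINT / AZURE_INFERENCE_KEY and the get_weather tool are placeholders.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ChatCompletionsToolDefinition,
    FunctionDefinition,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

# Describe a tool the model may choose to call.
weather_tool = ChatCompletionsToolDefinition(
    function=FunctionDefinition(
        name="get_weather",
        description="Get the current weather for a city.",
        parameters={
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    )
)

response = client.complete(
    messages=[UserMessage(content="What's the weather in Paris right now?")],
    tools=[weather_tool],
)

message = response.choices[0].message
if message.tool_calls:
    # The model asked us to call the tool; arguments arrive as a JSON string.
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```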
Mistral Nemo
According to Mistral AI, Mistral Nemo is a cutting-edge Large Language Model (LLM) developed in collaboration with Nvidia, boasting state-of-the-art reasoning, world knowledge, and coding capabilities within its size category.
Joint Development with Nvidia: Resulting in a powerful 12B model.
Multilingual Proficiency: Equipped with the Tekken tokenizer, supporting over 100 languages and outperforming the Llama 3 tokenizer in efficiency for approximately 85% of all languages.
Agent-centric: Possesses top-tier agentic capabilities, including native function calling and JSON outputting (illustrated after this section).
Advanced Reasoning: Demonstrates state-of-the-art mathematical and reasoning capabilities within its size category.
Context Length: Supports a context length of 128K.
Number of Parameters: A 12B model, making it a powerful drop-in replacement for systems using Mistral 7B.
Input/Output: Accepts text input only and generates text output only.
Source: Mistral NeMo | Mistral AI | Frontier AI in your hands
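Because the serverless endpoints speak an OpenAI-style chat completions protocol, you can also call Mistral Nemo over plain HTTP. The sketch below is illustrative only: the /chat/completions path, the Authorization header, and the response_format field for JSON output are assumptions; confirm the exact URL, headers, and supported parameters on your deployment page and in the API reference.

```python
# Illustrative raw-HTTP call to a Mistral Nemo serverless endpoint.
# ENDPOINT, KEY, the /chat/completions path, and the response_format field
# are assumptions -- check your deployment page and the API reference.
import os
import requests

ENDPOINT = os.environ["AZURE_INFERENCE_ENDPOINT"]  # e.g. the serverless endpoint URL from your deployment page
KEY = os.environ["AZURE_INFERENCE_KEY"]

payload = {
    "messages": [
        {"role": "system", "content": "Reply with a JSON object only."},
        {"role": "user", "content": "List three French cities with their populations."},
    ],
    "response_format": {"type": "json_object"},  # ask for strict JSON output
    "temperature": 0.3,
    "max_tokens": 256,
}

resp = requests.post(
    f"{ENDPOINT}/chat/completions",
    headers={"Authorization": f"Bearer {KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```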
With the introduction of these advanced models, we continue to support developers and enterprises in leveraging AI to build complex applications efficiently. The transition to GA for our Models as a Service product marks a significant step forward in our mission to provide cutting-edge AI solutions.
Why Azure AI for Mistral Large 2 and Mistral Nemo?
Enhanced Security and Data Privacy: Azure places a strong emphasis on data privacy and security to protect customer data. Check out this article to learn more about data handling when you deploy models from the Azure AI Model Catalog.
Provision an API Endpoint: Create your Mistral Large 2 and Mistral Nemo API endpoint in seconds.
Experiment: Try it out in the Azure AI Studio playground or integrate it with popular LLM app development tools.
Build Safely: Leverage Azure AI Content Safety to create reliable and secure generative AI applications (see the sketch below).
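If you want a moderation layer of your own around the model, one option is the azure-ai-contentsafety Python package. The snippet below is a minimal sketch that assumes a separate Azure AI Content Safety resource; the endpoint and key variable names are placeholders.

```python
# Minimal Azure AI Content Safety sketch: screen a prompt before sending it to the model.
# CONTENT_SAFETY_ENDPOINT / CONTENT_SAFETY_KEY are placeholder names for a Content Safety resource.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="User prompt to screen goes here."))

# Each harm category comes back with a severity score you can threshold on.
for item in result.categories_analysis:
    print(item.category, item.severity)
```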
Get started with Mistral Large 2 and Mistral Nemo on Azure AI
Prerequisites:
If you don’t have an Azure subscription, get one here: https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go
Familiarize yourself with the Azure AI Model Catalog.
Create an Azure AI Studio hub and project. Make sure you pick East US, West US3, South Central US, West US, North Central US, East US 2 or Sweden Central as the Azure region for the hub.
Create a deployment to obtain the inference API and key:
Open the model card in the model catalog on Azure AI Studio.
Click on Deploy and select the Pay-as-you-go option.
Subscribe to the Marketplace offer and deploy. You can also review the API pricing at this step.
In less than a minute, you should land on the deployment page, which shows you the API endpoint and key. You can try out your prompts in the playground.
The prerequisites and deployment steps are explained in the product documentation. You can use the API and key with various clients. Check out the samples to get started.
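As an example, a minimal Python client built on the azure-ai-inference package might look like the sketch below; the environment variable names are placeholders for the endpoint URL and key shown on your deployment page.

```python
# Minimal chat completion against a deployed Mistral Large 2 or Mistral Nemo serverless endpoint.
# AZURE_INFERENCE_ENDPOINT and AZURE_INFERENCE_KEY are placeholder variable names;
# use the endpoint URL and key from your deployment page.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="Summarize the benefits of serverless model endpoints in two sentences."),
    ],
    temperature=0.7,
    max_tokens=256,
)

print(response.choices[0].message.content)
print(response.usage)  # prompt and completion token counts used for billing
```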
FAQ
1. Do we have fine-tuning on Mistral Large 2 and Mistral Nemo?
Fine-tuning of these models is not supported yet but is coming soon. Stay tuned!
2. What does it cost to use Mistral Large 2 and Mistral Nemo on Azure?
You are billed based on the number of prompt and completion tokens. You can review the pricing for the Mistral Large 2 and Mistral Nemo offers on the Azure Marketplace offer details tab when deploying the model. You can also find the pricing on the Azure Marketplace.
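As a rough illustration of how token-based billing adds up (the per-1,000-token prices below are placeholders, not actual offer prices):

```python
# Illustrative cost estimate from a chat completion's usage data.
# The per-1,000-token prices are placeholders; take real prices from the Azure Marketplace offer.
PRICE_PER_1K_INPUT = 0.003   # placeholder USD per 1K prompt tokens
PRICE_PER_1K_OUTPUT = 0.009  # placeholder USD per 1K completion tokens

prompt_tokens = 1_200      # e.g. response.usage.prompt_tokens
completion_tokens = 350    # e.g. response.usage.completion_tokens

cost = (prompt_tokens / 1000) * PRICE_PER_1K_INPUT + (completion_tokens / 1000) * PRICE_PER_1K_OUTPUT
print(f"Estimated request cost: ${cost:.6f}")
```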
3. Are the Mistral models region-specific on Azure?
Mistral Large 2 and Mistral Nemo are available through MaaS as serverless API endpoints.
These endpoints can be created in Azure AI Studio projects or Azure Machine Learning workspaces. Cross-regional support for these endpoints is available in the following regions in the US: East US, West US3, South Central US, West US, North Central US, and East US 2.
4. Do I require GPU capacity quota in my Azure subscription to deploy Mistral Large 2 and Mistral Nemo?
Mistral Large 2 and Mistral Nemo are available through MaaS as serverless API endpoints. You don’t require GPU capacity quota in your Azure subscription to deploy these models.
5. Mistral Large 2 and Mistral Nemo are listed on the Azure Marketplace. Can I purchase and use these models directly from Azure Marketplace?
Azure Marketplace is our foundation for commercial transactions for models built on or built for Azure. The Azure Marketplace enables the purchasing and billing of Mistral models, while model discoverability occurs in both the Azure Marketplace and the Azure AI Model Catalog, meaning you can search for and find Mistral models in both places.
If you search for Mistral Large 2 in the Azure Marketplace, you can subscribe to the offer before being redirected to the Azure AI Model Catalog in Azure AI Studio, where you can complete the subscription and deploy the model.
If you search for Mistral Large 2 in the Azure AI Model Catalog, you can subscribe and deploy the model from the Azure AI Model Catalog without starting from the Azure Marketplace. The Azure Marketplace still tracks the underlying commerce flow.
6. Given that Mistral Large 2 and Mistral Nemo are billed through the Azure Marketplace, does the spend retire my Microsoft Azure Consumption Commitment (MACC)?
Yes, Mistral Large 2 and Mistral Nemo are “Azure benefit eligible” Marketplace offers, which indicates MACC eligibility. Learn more about MACC here: https://learn.microsoft.com/en-us/marketplace/azure-consumption-commitment-benefit
7. Is my inference data shared with Mistral?
No, Microsoft does not share the content of any inference request or response data with any model provider.
Microsoft acts as the data processor for prompts and outputs sent to and generated by a model deployed for pay-as-you-go inferencing (MaaS). Microsoft doesn’t share these prompts and outputs with the model provider, and Microsoft doesn’t use these prompts and outputs to train or improve Microsoft’s, the model provider’s, or any third party’s models. Read more on data, security and privacy for Models-as-a-Service.
8. Are there rate limits for the Mistral models on Azure?
Yes, Mistral models come with a limit of 400K tokens per minute and 1K requests per minute. Reach out to Azure customer support if this doesn’t suffice.
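A common way to stay within these limits is client-side retry with exponential backoff when the endpoint returns HTTP 429. The sketch below shows one possible pattern with the azure-ai-inference client from the earlier examples.

```python
# Retry a chat completion with exponential backoff when the serverless endpoint
# returns HTTP 429 (rate limited). Client setup is as in the earlier sketches.
import time

from azure.core.exceptions import HttpResponseError


def complete_with_backoff(client, messages, max_retries=5):
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return client.complete(messages=messages)
        except HttpResponseError as err:
            if err.status_code != 429 or attempt == max_retries - 1:
                raise  # not a rate-limit error, or out of retries
            time.sleep(delay)  # back off before retrying
            delay *= 2
```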
9. Can I use MaaS models in any Azure subscription types?
Customers can use MaaS models in all Azure subscription types with a valid payment method, except for the CSP (Cloud Solution Provider) program. Free or trial Azure subscriptions are not supported.