Mistral AI builds open-source, portable generative AI for developers and businesses. Its models achieve top-tier reasoning performance on common benchmarks and are designed to be as unbiased and useful as possible. La Plateforme, Mistral AI's developer platform, serves its open and optimized models for building fast, intelligent applications. The company focuses on large language models (LLMs) and socially responsible AI, with an emphasis on open-source technology.
- Mistral Large - cutting-edge text generation model.
- Le Chat - conversational assistant built on Mistral models.
- La Plateforme - the developer platform and API serving Mistral models.
- Frontier performance.
- Open and portable technology.
- Flexible deployment.
- Customizable.
- Multilingual proficiency: English, French, Italian, German, and Spanish.
- Code comprehension: Strong understanding and processing of code.
- Extensive context window: Considers 32,000 tokens of text for comprehensive understanding, aiding retrieval tasks.
- Retrieval augmentation: Excellent recall over its context window makes it effective at retrieving relevant information.
- Function calling capabilities: Can interact with external functions and services, expanding its potential applications.
- JSON output format: Provides structured data for easy integration with various systems.
- Focus on conciseness and usefulness: Aims to deliver factual and relevant information without personal opinions.
- Modular moderation control: Offers customizable control over content moderation for specific use cases.
- High retrieval score: Achieved a score of 55.26 on the MTEB benchmark, indicating strong performance in retrieving relevant information.
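To make the function calling and JSON output features above concrete, here is a minimal sketch of a request body for Mistral's chat completions endpoint. The field names (`response_format`, `tools`) follow Mistral's published API at the time of writing, and the `get_weather` tool is a hypothetical example; verify against the current API docs before use.

```python
import json

# Endpoint for Mistral's hosted chat API (no request is sent in this sketch).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(user_prompt: str) -> dict:
    """Assemble a chat request that asks for structured JSON output and
    exposes one callable tool (a hypothetical weather lookup)."""
    return {
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": user_prompt}],
        # JSON output mode: constrains the model to emit valid JSON.
        "response_format": {"type": "json_object"},
        # Function calling: the model may choose to invoke this tool.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

body = build_request("What is the weather in Paris? Reply as JSON.")
print(json.dumps(body["response_format"]))  # {"type": "json_object"}
```

In practice this body would be POSTed to the endpoint with an `Authorization: Bearer <api key>` header; the structured `tools` schema is what lets the model return a machine-readable function call instead of free text.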
Engage with AI: Try Le Chat from Mistral AI (Chat Interface)
Mistral AI: Open-Source, High-Performance, and State-of-the-Art
Figure 1: A comparison was conducted on MMLU (Measuring Massive Multitask Language Understanding) among several leading models, including GPT-4, Mistral Large (pre-trained), Claude 2, Gemini Pro 1.0, GPT 3.5, and LLaMA 2 70B.
Figure 3: A comparison between Mistral Large, Mixtral 8x7B, and LLaMA 2 70B across HellaSwag, Arc Challenge, and MMLU benchmarks in French, German, Spanish, and Italian.
Image Credit: Mistral AI Website & X/GuillaumeLample
Mistral AI: Powering Businesses with Performance-Driven LLMs
AI models:
Mistral AI's optimized commercial models are designed for performance.
- Mistral Small: Affordable reasoning for quick-response tasks.
- Mistral Large: High-performance reasoning for complex tasks.
- Mistral Embed: Extracting semantic representations of text data.
Mistral AI open models are available for free, with a fully permissive license.
Mistral 7B:
- Mistral 7B is a 7-billion-parameter transformer model.
- Fast deployment.
- Easy customization.
- Fluency in English.
- Supports an 8,000-token context window for comprehensive understanding.
Mixtral 8x7B:
- Mixtral 8x7B is a premier choice for cutting-edge AI applications.
- It uses a sparse Mixture-of-Experts (SMoE) architecture built from eight 7-billion-parameter experts; only about 12 billion of its 45 billion total parameters are active per token.
- Proficiency in English, French, Italian, German, and Spanish.
- Strong in code.
- Supports a 32,000-token context window for deeper understanding.
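The sparse Mixture-of-Experts design above can be illustrated with a toy top-2 router: a gating network scores all eight experts for each token, but only the two highest-scoring experts actually run, which is why per-token compute tracks the ~12B active parameters rather than the 45B total. The numbers below are illustrative; real routing operates on learned projections.

```python
import math

NUM_EXPERTS = 8  # Mixtral 8x7B has eight experts per MoE layer
TOP_K = 2        # only two experts are activated per token

def route(gate_scores):
    """Pick the top-k experts and softmax-normalize their gate scores,
    returning (expert_index, weight) pairs."""
    top = sorted(range(len(gate_scores)),
                 key=lambda i: gate_scores[i], reverse=True)[:TOP_K]
    exps = [math.exp(gate_scores[i]) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

# Toy example: the router happens to favor experts 3 and 6 for this token.
scores = [0.1, -0.5, 0.2, 2.0, 0.0, -1.0, 1.5, 0.3]
selected = route(scores)
print([i for i, _ in selected])  # [3, 6]
```

The token's output is then the weighted sum of the two selected experts' outputs; the other six experts contribute no compute for that token.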
Both models are released under the Apache 2.0 license. Both offer concise, useful, and impartial outputs, with modular moderation control for enhanced customization.
Mistral AI Powers Innovation: Top Clients Leading the AI Charge
MongoDB, OctoAI, Lamini, Arcane, Lindy, HuggingFace, BNP Paribas, Orange, Brave, Cloudflare, Pretto
Mistral AI pricing
- Pricing varies by model and is based on the number of tokens processed.
- Mistral Large (Input): $8 / 1M tokens.
- Mistral Large (Output): $24 / 1M tokens.
- Mistral 7B (Input): $0.25 / 1M tokens.
- Mistral 7B (Output): $0.25 / 1M tokens.
- Mixtral 8x7B (Input): $0.7 / 1M tokens.
- Mixtral 8x7B (Output): $0.7 / 1M tokens.
- Mistral Small (Input): $2 / 1M tokens.
- Mistral Small (Output): $6 / 1M tokens.
- Mistral Medium (Input): $2.7 / 1M tokens.
- Mistral Medium (Output): $8.1 / 1M tokens.
- Mistral Embed: $0.1 / 1M tokens.
- All endpoints have a rate limit of 2 requests per second, 2 million tokens per minute, and 200 million tokens per month.
- Embedding models: increased rate limits are planned.
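Since billing is per token processed, a monthly bill is straightforward to estimate from the price table above. The sketch below hardcodes those per-1M-token prices; they may change, so check Mistral's pricing page for current figures.

```python
# USD per 1M tokens, (input, output), as listed in the pricing table above.
PRICES_PER_1M = {
    "mistral-large":  (8.00, 24.00),
    "mistral-medium": (2.70, 8.10),
    "mistral-small":  (2.00, 6.00),
    "mixtral-8x7b":   (0.70, 0.70),
    "mistral-7b":     (0.25, 0.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for the given token usage."""
    in_price, out_price = PRICES_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 2M input tokens + 0.5M output tokens on Mistral Large.
print(estimate_cost("mistral-large", 2_000_000, 500_000))  # 28.0
```

Note that output tokens cost three times as much as input tokens on Mistral Large, so output-heavy workloads (e.g., long generations) dominate the bill.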