Meta Llama 4 Scout and Maverick AI Models With MoE Architecture Released

Meta introduced the first artificial intelligence (AI) models in the Llama 4 family on Saturday. The Menlo Park-based tech giant released two models — Llama 4 Scout and Llama 4 Maverick — with native multimodal capabilities to the open community. The company says these are the first models in the Llama family built on a Mixture-of-Experts (MoE) architecture. Compared to their predecessors, they offer larger context windows and better power efficiency. Meta also previewed Llama 4 Behemoth, the largest AI model in the family unveiled so far.

Meta Llama 4 AI Models Arrive With MoE Architecture

In a blog post, the tech giant detailed its new AI models. Like the previous Llama models, Llama 4 Scout and Llama 4 Maverick are open models that can be downloaded from Meta's Hugging Face listing or the dedicated Llama website. Starting today, users can also try the Llama 4 AI models in WhatsApp, Messenger, Instagram Direct, and on the Meta AI website.

Llama 4 Scout is a 17-billion-active-parameter model with 16 experts, while the Maverick model also has 17 billion active parameters but uses 128 experts. Meta says Scout can fit on a single Nvidia H100 GPU with Int4 quantisation. Additionally, the company claimed that the previewed Llama 4 Behemoth outperforms GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on several benchmarks. Behemoth, which has 288 billion active parameters and 16 experts, has not been released as it is still being trained.
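
To see why the single-GPU claim is plausible, here is a quick back-of-the-envelope memory estimate. The 109-billion total-parameter count for Scout and the Int4 quantisation come from Meta's announcement; ignoring activation and KV-cache overhead is a simplifying assumption, so the figure below is only a lower bound on real memory use.

```python
# Back-of-the-envelope check: do Llama 4 Scout's weights fit on one H100?
# Inputs from Meta's announcement: 109B total parameters, Int4 quantisation.
# Activation and KV-cache overhead are ignored, so this is a lower bound.

TOTAL_PARAMS = 109e9        # Scout's total (not just active) parameter count
BYTES_PER_PARAM_INT4 = 0.5  # 4 bits per weight
H100_MEMORY_GB = 80         # HBM capacity of a standard H100

weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM_INT4 / 1e9
print(f"Int4 weights: ~{weights_gb:.1f} GB of {H100_MEMORY_GB} GB")  # ~54.5 GB
```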

[Image: The MoE architecture in Llama 4 AI models. Photo Credit: Meta]

Coming to the architecture, the Llama 4 models are built on an MoE design, which activates only a fraction of the total parameters for each input token, making both training and inference more compute-efficient. In the pre-training phase, Meta also used new techniques such as early fusion, which integrates text and vision tokens into a unified model backbone from the start, and MetaP, which reliably sets critical model hyper-parameters such as per-layer learning rates and initialisation scales.
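
To make the active-versus-total parameter distinction concrete, below is a minimal sketch of top-k expert routing. This is not Meta's implementation: the layer sizes, top-1 routing, and use of PyTorch are illustrative assumptions, and Llama 4 additionally interleaves dense layers and uses a shared expert, which this sketch omits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoELayer(nn.Module):
    """Minimal mixture-of-experts feed-forward layer.

    A router scores all experts for each token, but only the top-k experts
    actually run, so per-token compute scales with k, not the expert count.
    """

    def __init__(self, d_model: int, d_ff: int, n_experts: int, k: int = 1):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep the k best per token.
        weights, chosen = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in chosen[:, slot].unique().tolist():
                mask = chosen[:, slot] == e  # tokens routed to expert e
                out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out


# 16 experts with top-1 routing: each token activates roughly 1/16 of the
# expert weights, mirroring the "active vs total parameters" distinction.
layer = TopKMoELayer(d_model=64, d_ff=256, n_experts=16, k=1)
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```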

For post-training, Meta chose to start with lightweight supervised fine-tuning (SFT), followed by online reinforcement learning (RL) and lightweight direct preference optimisation (DPO). The sequence was chosen so as not to over-constrain the model. For the same reason, the researchers pruned more than 50 percent of the prompts tagged as “easy” and ran SFT only on the harder remainder.
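
The staged order can be summarised as a skeleton. Every function here is a stub standing in for a full training loop; the names and structure are illustrative assumptions, not Meta's actual pipeline code.

```python
from dataclasses import dataclass

# Illustrative skeleton of the staged recipe described above. Each stage
# function is a stub standing in for a full training loop.


@dataclass
class Model:
    stages: tuple = ()


def supervised_fine_tune(model: Model, prompts: list) -> Model:
    return Model(model.stages + (f"lightweight SFT on {len(prompts)} prompts",))


def online_rl(model: Model) -> Model:
    return Model(model.stages + ("online RL",))


def lightweight_dpo(model: Model) -> Model:
    return Model(model.stages + ("lightweight DPO",))


def post_train(model: Model, prompts: list, is_hard) -> Model:
    # Drop the prompts a judge tags as easy, keeping only the harder set
    # for SFT so the model is not over-constrained before RL and DPO.
    hard_prompts = [p for p in prompts if is_hard(p)]
    model = supervised_fine_tune(model, hard_prompts)
    model = online_rl(model)
    return lightweight_dpo(model)


print(post_train(Model(), ["p1", "p2", "p3"], is_hard=lambda p: p != "p1").stages)
# ('lightweight SFT on 2 prompts', 'online RL', 'lightweight DPO')
```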

Based on internal testing, the company claimed that the Maverick model outperforms Gemini 2.0 Flash, DeepSeek v3.1, and GPT-4o on the MMMU (image reasoning), ChartQA (image understanding), GPQA Diamond (reasoning and knowledge), and MTOB (long context) benchmarks.

On the other hand, the Scout model is said to outperform Gemma 3, Mistral 3.1, and Gemini 2.0 Flash-Lite on the MMMU, ChartQA, MMLU (reasoning and knowledge), GPQA Diamond, and MTOB benchmarks.

Meta has also taken steps to make the AI models safer in both the pre-training and post-training processes. In pre-training, the researchers used data filtering methods to keep harmful content out of the training corpus. For post-training, Meta provides open-source safety tools such as Llama Guard, which screens inputs and outputs for unsafe content, and Prompt Guard, which detects prompt injections and jailbreak attempts. Additionally, the researchers stress-tested the models internally and have allowed red-teaming of the Llama 4 Scout and Maverick models.
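
As a rough illustration of how such tools slot into a deployment, here is a guard-then-generate pattern. The two check functions are hypothetical stubs standing in for what Llama Guard (input/output safety classification) and Prompt Guard (prompt-injection detection) do; this is a sketch of the pattern, not their actual APIs.

```python
# Guard-then-generate deployment sketch. Both check functions are stubs
# standing in for the real Llama Guard and Prompt Guard classifiers.


def llama_guard_verdict(text: str) -> str:
    return "safe"  # stub: the real classifier returns a safety verdict


def prompt_guard_flags_injection(text: str) -> bool:
    return False  # stub: the real model flags jailbreaks and injections


def guarded_generate(prompt: str, generate) -> str:
    # Screen the incoming prompt before it ever reaches the main model.
    if prompt_guard_flags_injection(prompt) or llama_guard_verdict(prompt) != "safe":
        return "Request declined by safety filters."
    reply = generate(prompt)
    # Screen the output too, since unsafe content can surface in responses.
    if llama_guard_verdict(reply) != "safe":
        return "Response withheld by safety filters."
    return reply


print(guarded_generate("Hello!", generate=lambda p: f"Echo: {p}"))
```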

Notably, the models are available to the open community under the permissive Llama 4 licence. It allows both academic and commercial usage of the models; however, companies with more than 700 million monthly active users must request a separate licence from Meta, which Meta may grant or withhold at its discretion.
