Llama 4 is the newest large language model (LLM) family from Meta. The Meta AI assistant is now powered by these models on the web, Instagram, WhatsApp, and Messenger. Two models in this release, Llama 4 Scout and Llama 4 Maverick, are available immediately, while a larger variant, Llama 4 Behemoth, is still in training.
Llama 4 Scout
Meta designed Llama 4 Scout as the compact model in the family: it can run on a single Nvidia H100 GPU. With a context window of 10 million tokens, it can process very large amounts of input at once. According to Meta, Scout beats Google’s Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 on a number of widely used benchmarks. Scout can be downloaded from Hugging Face and from Meta’s own platform.
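For developers, access follows the usual Hugging Face workflow. The snippet below is a minimal sketch using the Transformers library; the repository name is assumed for illustration, and downloading the weights requires accepting Meta’s license on the Hugging Face hub.

```python
# Minimal sketch: loading Llama 4 Scout from Hugging Face with Transformers.
# The model ID below is an assumed, illustrative repository name; check the
# official meta-llama organization on the Hub for the exact identifier.
# Note: the weights are large and require substantial GPU memory.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the Llama 4 release in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```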
Llama 4 Maverick
Positioned as the more capable option, Llama 4 Maverick competes with Google’s Gemini 2.0 Flash and OpenAI’s GPT-4o. According to Meta, Maverick delivers comparable results on coding and reasoning tasks while using fewer active parameters, a design that may make it more efficient for some applications. Meta specifically cites DeepSeek-V3 as a point of comparison.
Llama 4 Behemoth
Meta’s next flagship model, Llama 4 Behemoth, is still in training. It has 2 trillion parameters in total, of which 288 billion are active. Although it is not yet available, Meta CEO Mark Zuckerberg has promoted it as the “highest performance base model in the world.” According to Meta’s early claims, Behemoth outperforms models such as Claude Sonnet 3.7 and GPT-4.5 on a range of STEM benchmarks.
Mixture-of-Experts Design
Meta has adopted a mixture-of-experts (MoE) architecture for the Llama 4 series. Instead of running every parameter for every input, an MoE model routes each token to a small subset of specialized expert sub-networks, which keeps compute costs down while preserving overall capacity. According to Meta, this change has allowed it to build models that are both efficient and adaptable.
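To make the routing idea concrete, here is a simplified PyTorch sketch of top-k expert routing. It illustrates the general MoE technique, not Meta’s implementation; the class name, layer sizes, and expert count are invented for the example.

```python
# Illustrative top-k mixture-of-experts layer (not Meta's actual code).
# Each token is processed by only top_k of num_experts feed-forward blocks,
# so the parameters used per token stay far below the model's total size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model) flattened token representations
        scores = F.softmax(self.router(x), dim=-1)             # routing probabilities
        weights, chosen = scores.topk(self.top_k, dim=-1)      # keep only the top-k experts
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                    # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Example: 16 tokens, model width 64; only 2 of 8 experts run per token.
layer = MoELayer(d_model=64, d_hidden=256)
tokens = torch.randn(16, 64)
print(layer(tokens).shape)  # torch.Size([16, 64])
```

Because only a few experts run per token, a model’s active parameter count can be far smaller than its total parameter count, which is the distinction behind Behemoth’s 288 billion active versus 2 trillion total parameters.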
Conditional Open Source
Although Meta describes Llama 4 as open source, the license terms remain a point of concern in the developer community. Under the current license, any company with more than 700 million monthly active users must request permission from Meta before using the models. The Open Source Initiative has argued that this restriction means Llama does not meet the conventional definition of open source.
Insights
Meta plans to share more details at LlamaCon, scheduled for April 29, including more information about its roadmap for AI models and related products.