Technology · Apr 9, 2026 · 5 min read

Muse Spark is here: Meta abandons open-source to compete with the giants of AI

Meta has introduced Muse Spark, the first public model developed by Meta Superintelligence Labs (MSL), the AI division founded by Mark Zuckerberg in the second half of 2025 with the declared goal of achieving "personal superintelligence". The launch caps nine months of intense work during which the team rebuilt the company's entire AI stack from the ground up, at a pace that Meta claims is unprecedented in its development cycles.

Alexandr Wang, Scale AI and the $14 billion

To understand what Muse Spark represents, one must start in June 2025, when Zuckerberg hired Alexandr Wang, founder and CEO of Scale AI, as Meta's Chief AI Officer, as part of a $14 billion investment in Scale AI itself. Wang, 29, is known as one of the most fervent advocates of closed models in the industry, and it is no coincidence that Muse Spark marks a sharp break from Meta's traditional open-source strategy, which had built its reputation precisely on the Llama family. MSL was conceived as a separate unit, with an independent budget and hiring policies, where Wang gathered some of the highest-paid researchers in the field with the explicit mandate to close the gap with OpenAI, Anthropic, and Google.

The result of this cycle is Muse Spark (internally known by the codename Avocado), the first release of the Muse family, which takes a deliberately incremental approach to scaling: each generation validates hypotheses before moving to the next scale. The initial model is intentionally compact and fast by design, not a maximalist bet but a proof of concept for the new framework. Meta itself admits that the next generation is already in development.

Architecture and technical capabilities

Muse Spark is a natively multimodal model, trained to reason over text, images, and complex visual data without relying on separate pipelines. It supports tool use, visual chain of thought, and multi-agent orchestration, the last of which allows it to launch multiple sub-agents in parallel to tackle complex problems. Meta's own example is illustrative: while planning a trip to Florida, one agent drafts the itinerary, a second compares Orlando and the Keys, and a third identifies child-friendly activities. All simultaneously, not sequentially.
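The parallel sub-agent pattern Meta describes can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of the orchestration idea, not Meta's actual API: the `run_agent` helper and the task strings are assumptions standing in for real model calls.

```python
import asyncio

async def run_agent(task: str) -> str:
    # Stand-in for a call to a sub-agent model; a real orchestrator
    # would dispatch the task to an LLM and await its answer.
    await asyncio.sleep(0.01)  # simulate model latency
    return f"result for: {task}"

async def plan_trip() -> list[str]:
    tasks = [
        "draft a Florida itinerary",
        "compare Orlando and the Keys",
        "find child-friendly activities",
    ]
    # gather() runs all sub-agents concurrently rather than one by one.
    return await asyncio.gather(*(run_agent(t) for t in tasks))

results = asyncio.run(plan_trip())
```

The key point is that the three sub-tasks overlap in time, so the total latency is roughly that of the slowest sub-agent rather than the sum of all three.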

The model is available in two operational modes: Instant for quick responses and Thinking for tasks that require deeper reasoning. On a technical level, Meta stated that the new training methodologies and updated infrastructure, developed entirely within the new AI stack of MSL, enable performance comparable to the previous Llama 4 with an order of magnitude less compute.
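The two-mode split can be pictured as a simple dispatch on a reasoning budget. The parameter names and values below are purely illustrative assumptions; Meta has not published an API for Muse Spark.

```python
def answer(prompt: str, mode: str = "instant") -> dict:
    # Hypothetical request shape: 'instant' replies quickly,
    # 'thinking' allots a larger reasoning budget before responding.
    if mode not in ("instant", "thinking"):
        raise ValueError("mode must be 'instant' or 'thinking'")
    budget = 1 if mode == "instant" else 8  # illustrative values only
    return {"prompt": prompt, "mode": mode, "reasoning_steps": budget}

quick = answer("What's the capital of Florida?")
deep = answer("Plan a week-long trip.", mode="thinking")
```

In practice the trade-off is latency versus depth: the same model serves both modes, with the mode flag controlling how much deliberation happens before the reply.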

Visual perception and healthcare

One area where Meta has invested most explicitly is visual perception applied to the real world. With Muse Spark, the Meta AI assistant can analyze photos in real time: scanning a shelf of snacks and classifying the products by protein content, reading labels, and comparing products with alternatives. Vision is treated as a native input layer, not an optional extension.

The healthcare sector receives special attention: Meta has worked with a team of doctors to refine the model's ability to answer complex clinical questions, including those involving diagnostic images and charts. Health is indicated as one of the main use cases for which users turn to AI, which explains the investment in this direction. When Muse Spark arrives on Meta's AI glasses, this perceptual capability will make even more sense: the assistant will be able to see the user's physical environment and contextualize responses in real time.

Closed strategy and distribution

Muse Spark, as we mentioned, is a closed model: neither the weights nor the architecture are publicly accessible, marking a clear break from the Llama strategy that had made Meta a benchmark for the open-source community. This choice is largely consistent with the philosophy of Wang, who has always been skeptical about the total openness of frontier models. However, Meta does not completely close the door: the company has explicitly stated that it wants to release future versions of the model as open source, framing the current choice as a temporary measure tied to the development phase, not a permanent shift. Meta's current strategy and its historical approach therefore coexist as two parallel tracks: closed for frontier MSL models, open for subsequent iterations once the architecture is solidified.

At launch, Muse Spark is available on meta.ai and in the Meta AI app in the United States, with a private preview API for selected partners. The rollout to WhatsApp, Instagram, Facebook, Messenger, and the AI glasses is expected in the coming weeks. The new app interface also includes a Shopping mode that aggregates style inspirations and branded content already present in Meta apps, and a contextual layer that integrates public posts and local content directly into the assistant's responses.

Benchmark and gaps with competitors

On the performance front, Meta does not hide that there are structural areas for improvement, particularly in coding and complex agent tasks, where Anthropic has built a consistent advantage. The company frames these gaps not as permanent weaknesses but as explicit priorities for the next development cycle: the scaling framework validated with Muse Spark is precisely the mechanism to iterate on to make up the lost ground. In writing and reasoning tasks, however, the model comes close to the leading models from Google and OpenAI. The most significant signal of the launch is not so much the model itself as the confirmation that the new AI stack works: the next generation of the Muse family is already in the works.