Equinox IT Blog

Rethinking openness: Meta's shift and the blind spot in New Zealand’s AI Strategy


Meta’s Llama models have helped define the open-weight1 AI era. It is understandable, then, that recent commentary from Mark Zuckerberg, suggesting that Meta may be entering a more cautious phase in its AI openness, shaped by safety concerns and doubts about competitive benefit, has raised questions about Meta’s continued commitment to open-weight AI.


AI-generated image (OpenAI DALL-E)

In Meta’s 30 July 2025 Personal Superintelligence post, Zuckerberg framed future model releases through the lens of superintelligence safety considerations, saying:

“We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source.”

That remark ignited speculation on the likes of Reddit and Hugging Face about Meta’s continued support of open-weight AI.

Later that same day, during Meta’s Q2 2025 earnings call, Zuckerberg was asked about open-sourcing AI. He acknowledged the trade-offs more directly:

“…we kind of wrestle with whether it’s productive or helpful to share that or if that’s, you know, really just primarily helping competitors.”

These remarks contrast sharply with his July 2024 post Open Source AI is the Path Forward, fuelling speculation that Meta is reevaluating the openness of its LLM roadmap.

Why it Matters

Llama is Meta’s primary general-purpose, (mostly) open-weight LLM family. As of March 2025 it had surpassed a billion downloads, and it has spawned tens of thousands of derivatives. It underpins technology ecosystems such as Llama.cpp and is a frequent base for fine-tuned models.

The original LLaMA research has also been influential: Alibaba Group’s Qwen team, for instance, acknowledged adopting the LLaMA approach to training LLMs in their original Qwen model (noting that Qwen was trained independently and does not use Meta’s weights).

And there continue to be notable derivative models based on Llama, including:

  • Cisco’s FoundationAI‑SecurityLLM‑8B model (April 2025): a cybersecurity model designed to support tasks such as threat analysis, vulnerability triage and secure code and configuration review.
  • NVIDIA’s Llama 3.1 Nemotron family of models (March 2025): a suite of open-weight reasoning models designed for code generation, instruction following and agentic platform development, with variants optimised for inference on both enterprise-scale infrastructure and consumer-grade hardware.
  • The Deepseek-R1 Distill series (January 2025): a set of models that used Llama (and Qwen) as student architectures, transferring R1’s reasoning capabilities into smaller variants optimised for inference on VRAM-constrained hardware.
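The distillation step mentioned in the last bullet can be illustrated in general terms. The sketch below shows the standard knowledge-distillation objective, where a smaller student model is trained to match the temperature-softened output distribution of a larger teacher. This is a generic, minimal illustration of the technique, not DeepSeek’s actual training recipe; the function names and temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits.

    Higher temperatures flatten the distribution, exposing the
    teacher's 'dark knowledge' about less-likely tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions -- the core objective in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]          # hypothetical teacher logits for 3 tokens
aligned = [2.0, 1.0, 0.1]          # student that matches the teacher: loss ~ 0
misaligned = [0.1, 1.0, 2.0]       # student that disagrees: positive loss

print(distillation_loss(teacher, aligned))
print(distillation_loss(teacher, misaligned))
```

In full-scale training this loss is computed per token over the vocabulary and minimised by gradient descent; the sketch only shows the objective being minimised.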

Whilst the open-weight AI community's focus has broadened to include model families such as Qwen (Alibaba Cloud), Mistral and Mixtral (Mistral AI), DeepSeek (DeepSeek AI), and Phi (Microsoft Research), Llama-based models - such as Llama 4 Maverick and NVIDIA’s Nemotron Ultra - remain competitive, even if they do not lead consistently. For example, on Vellum AI’s Open LLM Leaderboard (as of July 2025), these models perform strongly on key reasoning benchmarks like GPQA and GRIND, though they are now outpaced on maths benchmarks like AIME by newer families such as DeepSeek-R1.

However, when considered against closed-weight models from providers such as OpenAI, Google, Anthropic and xAI, the Llama family looks increasingly outclassed. This situation could well be fuelling Zuckerberg’s concern that Meta’s openness might be “just primarily helping competitors.”

Yes, Llama helped seed a vibrant open-weight ecosystem. However, Meta appears not to have captured a proportionate return from that openness. Zuckerberg’s July 2025 remarks suggest a strategic recalibration: perhaps not abandoning open-weight releases, but certainly applying more scrutiny to them.

Meta’s more measured approach to releasing increasingly powerful open-weight AI models may also be prudent. Geoffrey Hinton, the so-called godfather of AI, reportedly warned that open-sourcing big models is akin to allowing someone to buy nuclear weapons at RadioShack. That said, recent advances - such as the Hierarchical Reasoning Model open-sourced by Sapient Intelligence - suggest that model size may no longer be the best (or only) heuristic for assessing these types of risks.

Questions around New Zealand's AI Strategy

This shift also exposes a gap in New Zealand's AI Strategy. The strategy emphasises AI adoption and application, rather than building foundational models to rival the likes of Google or OpenAI. What it does not make clear is how Aotearoa might navigate the supplier concentration risk that arises when AI ecosystems are shaped by a small number of offshore technology providers, such as the ecosystem built around Meta's Llama. In those ecosystems, shifting commercial or geopolitical priorities could carry material consequences for Aotearoa’s ability to maintain and shape its own AI ecosystem.

TL;DR

Meta’s Llama models helped lay the foundations of the open-weight AI movement, spawning entire ecosystems and derivatives. But with Mark Zuckerberg now signalling a more cautious approach – framed around safety concerns and fears of aiding competitors – those foundations may be starting to erode. 

And while New Zealand’s AI Strategy doesn’t directly rely on open-weight models, Meta’s shift highlights a broader gap: how we manage supplier concentration risk in ecosystems dominated by a few offshore technology providers. 

 

1. “Open-weight” refers to the release of model parameters, enabling others to fine-tune or build upon the model. This is distinct from “open source”, a term often misused in this context. See also Meta’s LLaMa license is not Open Source.

 


 
