Rethinking openness: Meta's shift and the blind spot in New Zealand’s AI Strategy
by Deane Sloan on 06 August 2025
Meta’s Llama models have helped define the open-weight1 AI era. So it is understandable that recent commentary from Mark Zuckerberg has raised questions about Meta’s continued support of open-weight AI: he has suggested that Meta may be moving into a more cautious phase in its AI openness, shaped by safety concerns and questions around competitive benefit.
AI-generated image (OpenAI DALL-E)
In Meta’s 30 July 2025 Personal Superintelligence post, Zuckerberg framed future model releases through the lens of superintelligence safety considerations, saying:
“We’ll need to be rigorous about mitigating these risks and careful about what we choose to open‑source.”
That remark ignited speculation on the likes of Reddit and Hugging Face about Meta’s continued support of open-weight AI.
Later that same day, during Meta’s Q2 2025 earnings call, Zuckerberg was asked about open-sourcing AI. He acknowledged the trade-offs more directly:
“…we kind of wrestle with whether it’s productive or helpful to share that or if that’s, you know, really just primarily helping competitors.”
These remarks contrast sharply with his July 2024 post Open Source AI is the Path Forward, deepening speculation that Meta is re-evaluating the openness of its LLM roadmap.
Why it Matters
Llama is Meta’s primary general-purpose, (mostly) open-weight LLM family. As of March 2025 it had surpassed a billion downloads, and it has spawned tens of thousands of derivatives. It underpins technology ecosystems such as llama.cpp and is a frequent base for fine-tuned models.
The original LLaMA research has itself been influential: Alibaba Group’s Qwen team, for example, acknowledged adopting the LLaMA approach to training LLMs for their original Qwen model (noting that Qwen was trained independently and does not use Meta’s weights).
And there continue to be notable derivative models based on Llama, including:
- Cisco’s FoundationAI‑SecurityLLM‑8B model (April 2025): a cybersecurity model designed to support tasks such as threat analysis, vulnerability triage and secure code and configuration review.
- NVIDIA’s Llama 3.1 Nemotron family of models (March 2025): a suite of open-weight reasoning models designed for code generation, instruction following and agentic platform development, with variants optimised for inference on both enterprise-scale infrastructure and consumer-grade hardware.
- The DeepSeek-R1 Distill series (January 2025): a set of models that used Llama (and Qwen) as student architectures, transferring R1’s reasoning capabilities into smaller variants optimised for inference on VRAM-constrained hardware.
Whilst the open-weight AI community's focus has broadened to include model families such as Qwen (Alibaba Cloud), Mistral and Mixtral (Mistral AI), DeepSeek (DeepSeek AI), and Phi (Microsoft Research), Llama-based models - such as Llama 4 Maverick and NVIDIA’s Nemotron Ultra - remain competitive, even if they do not lead consistently. For example, on Vellum AI’s Open LLM Leaderboard (as of July 2025), these models perform strongly on key reasoning benchmarks like GPQA and GRIND, though they are now outpaced in math benchmarks like AIME by newer families such as DeepSeek-R1.
However, when set against closed-weight models from providers such as OpenAI, Google, Anthropic and xAI, the Llama family increasingly looks outclassed. This situation may well be fuelling Zuckerberg’s concern that Meta’s openness is “just primarily helping competitors.”
Yes, Llama helped seed a vibrant open-weight ecosystem. However, Meta appears not to have captured a proportionate return from that openness. The July 2025 remarks from Zuckerberg suggest a strategic recalibration - perhaps not abandoning open-weight releases but certainly applying more scrutiny to them.
Meta’s more measured approach to releasing increasingly powerful open-weight AI models may also be prudent. Geoffrey Hinton, the so-called godfather of AI, reportedly warned that open-sourcing big models is akin to allowing someone to buy nuclear weapons at RadioShack. That said, recent advances - such as the Hierarchical Reasoning Model open-sourced by Sapient Intelligence - suggest that model size may no longer be the best (or only) heuristic for assessing these types of risks.
Questions around New Zealand's AI Strategy
This shift also exposes a gap in New Zealand's AI Strategy. The strategy emphasises AI adoption and application rather than building foundational models to rival the likes of Google or OpenAI. But it is not clear how Aotearoa would navigate the supplier concentration risk that arises when AI ecosystems are shaped by a small number of offshore technology providers - such as those built around Meta's Llama - where shifting commercial or geopolitical priorities could carry material consequences for our ability to maintain and shape a local AI ecosystem.
TL;DR
Meta’s Llama models helped lay the foundations of the open-weight AI movement, spawning entire ecosystems and derivatives. But with Mark Zuckerberg now signalling a more cautious approach – framed around safety concerns and fears of aiding competitors – those foundations may be starting to erode.
And while New Zealand’s AI Strategy doesn’t directly rely on open-weight models, Meta’s shift highlights a broader gap: how we manage supplier concentration risk in ecosystems dominated by a few offshore technology providers.
1. “Open-weight” refers to the release of model parameters enabling others to fine-tune or build upon the model. This is distinct from “open source”, which is often misused in this context. See also Meta’s LLaMa license is not Open Source.