The gpt-oss Blossom
by Deane Sloan on 27 August 2025
One of the most inspiring facets of an open weight model release isn’t that it sparks innovation - it’s that creativity is almost guaranteed. With each open release we don’t just see usage grow, we see an organic ecosystem blossom - an apt metaphor here, given the examples below relate to OpenAI’s recent gpt-oss release and its blossom logo.
AI-generated image (OpenAI DALL-E)
OpenAI gpt-oss
OpenAI released gpt-oss-120b and gpt-oss-20b on 5 August 2025, its first open-weight models since GPT-2, under the Apache 2.0 license and OpenAI’s gpt-oss usage policy.
Both are Mixture of Experts (MoE) models. Instead of activating every parameter for every token, MoE models route each input through a small subset of “experts.”
This makes them far more efficient to train as well as run, and opens new ways to scale models without the linear compute cost of traditional dense LLMs.
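The routing idea above can be sketched in a few lines. This is a toy, single-token illustration (assuming NumPy; the function and variable names are mine, not gpt-oss internals) - real MoE layers route whole token batches inside a transformer, but the efficiency argument is the same: only the top-k experts execute per token.

```python
import numpy as np

def moe_forward(x, router_w, experts, top_k=2):
    """Toy Mixture of Experts forward pass for one input vector.

    router_w: (d, n_experts) router weight matrix
    experts:  list of n_experts callables, each mapping (d,) -> (d,)
    Only top_k experts run; the rest stay idle - that is the MoE saving.
    """
    logits = x @ router_w                       # score every expert
    top = np.argsort(logits)[-top_k:]           # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                    # softmax over the selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

With, say, 128 experts and top_k=4, roughly 3% of the expert parameters are active for any given token, which is why MoE models can grow total parameter count without a matching growth in per-token compute.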
These models seem to be sized for cloud inference, with OpenAI stating that the 120b parameter variant runs “efficiently on a single 80 GB GPU”. Even the 20b parameter variant looks too large for most consumer hardware, with OpenAI noting it can run within 16 GB of VRAM.
But predictably, the community responded with ingenuity.
Mixture of Experts (MoE) Offload
Thanks to llama.cpp’s Mixture of Experts (MoE) offload (--cpu-moe flag), the community are running gpt-oss-120b on systems with as little as 8–9 GB of VRAM, streaming expert layers to RAM - albeit requiring 64 GB+ of system RAM.
Whilst actual throughput will vary with CPU speed, RAM bandwidth, storage, and context length, performance looks to be usable, with reports of ~17–25 tokens/sec.
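For the curious, invoking the offload looks something like the following. The --cpu-moe and --n-cpu-moe flags are real llama.cpp options; the model filename here is a placeholder for whatever GGUF file you have locally.

```shell
# Keep attention and dense layers on the GPU; stream MoE expert
# weights from system RAM instead of VRAM.
llama-server -m gpt-oss-120b.gguf --cpu-moe -c 8192

# Finer-grained alternative: offload only the experts of the first
# N layers to CPU, trading system RAM use against VRAM headroom.
llama-server -m gpt-oss-120b.gguf --n-cpu-moe 30 -c 8192
```

Because only a small subset of experts is active per token, streaming expert weights over the PCIe/RAM path hurts far less than it would for a dense model of the same size.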
This is unexpected enough that it might even cannibalise a slice of ChatGPT usage at the margins.
Expert Pruning
Aman Priyanshu and Supriti Vijay analysed expert activations in gpt-oss-20b and pruned under-utilised experts across domain-specialised variants, producing ~4.2b to ~20b models spanning 1–32 experts.
The result is a family of gpt-oss variants that look to remain performant while being considerably lighter weight.
Interestingly, quantisation and distillation usually get most of the attention when it comes to compressing models for smaller inference footprints. With this work I imagine pruning will get more attention, especially coupled with Nvidia’s use of pruning in their Minitron work.
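The pruning workflow can be sketched simply: run the model over a calibration set, count how often the router selects each expert, then keep only the most-used experts and slice the router to match. This is an assumed, simplified version of that idea (assuming NumPy; it is not Priyanshu and Vijay's actual pipeline, which builds domain-specialised variants):

```python
import numpy as np

def prune_experts(router_w, activation_counts, keep=8):
    """Toy expert-pruning pass.

    router_w:          (d, n_experts) router weight matrix
    activation_counts: how often the router picked each expert
                       on a calibration set
    Keeps the `keep` most-used experts and drops the router
    columns (and, in a real model, the expert weights) for the rest.
    """
    order = np.argsort(activation_counts)[::-1]  # most-used experts first
    kept = np.sort(order[:keep])                 # surviving expert indices
    pruned_router = router_w[:, kept]            # drop columns of removed experts
    return kept, pruned_router
```

On a domain-specific calibration set the activation counts skew heavily, which is what makes domain-specialised pruned variants viable: the experts a maths workload never touches can be removed with little loss on maths tasks.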
Why This Matters
Open weight models provide public access to their trained parameters, allowing users to download, adapt, and fine-tune them for specific tasks without needing the original training data or code.
These models transform flagship releases into fertile ground for community experimentation. The base models are often excellent themselves - but it is the downstream community effort - offloading hacks, pruning pipelines, distillation tangents - that can unlock unexpected uses.
References and Links
OpenAI release
MoE offloading (--cpu-moe)
- https://github.com/ggml-org/llama.cpp/discussions/15396
- https://www.reddit.com/r/LocalLLaMA/comments/1mke7ef/120b_runs_awesome_on_just_8gb_vram/
(Note - given this is a Reddit reference, I wouldn’t click through if “direct” language isn’t something you’re comfortable with)
Pruning and expert fingerprinting
- https://github.com/AmanPriyanshu/GPT-OSS-MoE-ExpertFingerprinting
- https://huggingface.co/collections/AmanPriyanshu/gpt-oss-pruned-experts-42b-20b-if-science-math-etc-689c380a366950b1787a20c6
- https://huggingface.co/AmanPriyanshu/collections
Nvidia Minitron