Is generative AI about to have its PC Moment?
by Deane Sloan on 25 August 2025
We may be seeing the start of another shift in generative AI: away from predominantly cloud-focused inference and towards a more hybrid ecosystem, with viable on-premises and portable AI compute options starting to land.
AI-generated image (OpenAI DALL-E)
This year has seen a steady stream of NVIDIA GB10-based “AI supercomputer” announcements, with examples from MSI, GIGABYTE and ASUS now available for pre-order in New Zealand. Each delivers ~1,000 TOPS (FP4)* with 128 GB of unified memory, though on comparatively slow LPDDR5X at ~273 GB/s.
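Memory bandwidth matters here because LLM decoding is largely bandwidth-bound: each generated token must stream roughly the full set of model weights from memory, so peak bandwidth caps token throughput regardless of TOPS. A back-of-envelope sketch (the model sizes and quantisation levels below are illustrative assumptions, not vendor figures):

```python
# Rough ceiling on decode throughput for a bandwidth-bound workload:
# each generated token streams (approximately) all model weights once.
def max_tokens_per_second(params_billion: float, bytes_per_param: float,
                          bandwidth_gb_s: float) -> float:
    """Upper bound on decode tokens/s = bandwidth / model weight size."""
    model_gb = params_billion * bytes_per_param  # weights in GB
    return bandwidth_gb_s / model_gb

# GB10-class unified memory at ~273 GB/s (vendor-quoted peak):
for params, bpp, label in [(70, 0.5, "70B @ 4-bit"),
                           (8, 0.5, "8B @ 4-bit"),
                           (8, 2.0, "8B @ FP16")]:
    print(f"{label}: ~{max_tokens_per_second(params, bpp, 273):.0f} tokens/s ceiling")
```

On these assumptions, a 4-bit 70B model tops out near 8 tokens/s on a 273 GB/s part, which is why the comparatively slow LPDDR5X is worth flagging alongside the headline TOPS.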
Dell has also flagged a GB10-based Pro Max workstation, while reports suggest some variants of its Pro Max 18 Plus laptop may ship with a discrete Qualcomm AI 100 Inferencing Card, estimated at over ~450 TOPS (INT8)* with 64 GB of dedicated memory. That is a major step up from today’s laptop NPUs, which mostly sit in the 38–50 TOPS (INT8)* range.
Meanwhile, a rumoured NVIDIA-MediaTek ARM-based N1 laptop and N1-X desktop SoC promises 180–200 TOPS (INT8)* and up to 128 GB of unified memory, targeting the thin-and-light segment, though timelines appear to have slipped more than once.
This is not a new thing
We have seen this kind of shift before: from mainframes to PCs, and later to a hybrid of edge devices connected to the cloud.
Generative AI first made its mark in the cloud, but with open-weight models enabling local inference and new classes of on-premises and portable AI compute becoming available, it could be about to have its own “PC moment.”
If that is the case, the implications could be significant. Organisations may begin to rely less on hyperscaler-only AI compute and more on a hybrid of local and cloud AI compute.
What happens now?
Such a shift could reshape the economics, security and governance of AI investments, with procurement conversations moving beyond cloud consumption costs to also include fleet refresh decisions.
At the same time, hyperscalers are not about to walk away from their specialised infrastructure, custom silicon and cost optimisation at scales most organisations cannot match. For many workloads, the economics of cloud-based AI, with its elasticity and fully managed services, will likely remain compelling.
Organisations that can successfully anticipate if or when a shift to more hybridised LLM inference happens will be better positioned to optimise their AI investments.
*: FP4 and INT8 TOPS are not directly comparable. They reflect vendor-quoted peak performance in different precision modes. Memory bandwidth also varies significantly and can strongly influence real-world throughput.