Six ways to manage the hidden AI costs
by Deane Sloan on 09 July 2025
Organisations are increasingly turning to generative artificial intelligence (generative AI) to help turn ideas into reality. These generative AI tools rely on foundation models (FMs) and can be applied to a wide range of use cases including language, coding, genomics and much more.
Creating these models is resource-intensive work that requires specialised skills. You can sidestep these requirements and move directly to building and scaling generative AI applications by using Foundation Model as a Service (FMaaS).
What is Foundation Model as a Service?
FMaaS provides API-based access to frontier or enterprise‑tuned models, together with security controls and optional fine‑tuning. This allows you to integrate generative AI capabilities into your products or services without the need to manage the underlying model infrastructure.
Think of it as the generative AI analogue of Software as a Service (SaaS).
Examples include Microsoft Azure OpenAI Service, Amazon Bedrock, Google Vertex AI and Anthropic Claude API.
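To make the "SaaS-like" integration model concrete, the sketch below assembles a typical chat-completion request body. The model name and field names are illustrative placeholders that mirror the common shape of FMaaS chat APIs, not any specific provider's schema — check your provider's API reference for the exact fields and authentication headers.

```python
import json

def build_chat_request(model: str, system_prompt: str, user_message: str,
                       max_tokens: int = 256) -> str:
    """Assemble a generic chat-completion request body.

    The field names are illustrative only; real FMaaS providers each
    define their own schema, endpoints and auth mechanisms.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Build (but do not send) a request body for a hypothetical model.
body = build_chat_request("example-model-v1",
                          "You are a helpful assistant.",
                          "Summarise FinOps in one sentence.")
```

Because the heavy lifting (model hosting, scaling, patching) sits behind the API, this request body is essentially all your application needs to manage.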
In my role as Co-CEO and as a consultant, I understand the importance of using these powerful services while balancing cost optimisation, efficiency and value growth.
The six key strategies to keep FMaaS costs under control
Effective cost management becomes crucial when adopting FMaaS at scale. Runaway costs can quickly add up and result in a shock bill.
Here are six key strategies to ensure you get the most value from your investment:
- Provisioned throughput: A fixed-cost, fixed-term subscription that reserves resources and ensures specific throughput for generative AI services.
  - Cost benefit: Up to 70% discount compared to on-demand usage.
  - Best practice: Match provisioned throughput to stable workloads and monitor usage to avoid over-provisioning.
- Batch inference: Making predictions or running inference on a large set of data points, instead of processing each data point individually.
  - Cost benefit: Around 50% cheaper than real-time calls, ideal for large asynchronous jobs.
  - Best practice: Use batch inference for high-volume, non-real-time tasks to minimise idle time and reduce costs.
- Token / prompt caching: Reuse previously processed prompts to reduce latency and computational costs.
  - Cost benefit: Up to 90% discount on cached input tokens and latency improvements of up to 80‑85%. In Microsoft Azure OpenAI Service, if the call runs under a Provisioned Throughput Unit (PTU), the cached input tokens can receive up to a 100% discount.
  - Best practice: Cache repeated context (system prompts, RAG prefixes) across calls to cut both cost and response time.
- Real-time API usage: Interact with models through an API that provides immediate responses with minimal latency.
  - Cost benefit: Pay-as-you-go model allows for flexibility.
  - Best practice: Right-size models and optimise prompts to ensure efficient usage.
- Model selection: Select the most suitable model for a specific task, based on performance metrics, complexity and generalisation ability.
  - Cost benefit: Balances cost and performance.
  - Best practice: Choose models that are fit-for-purpose to avoid unnecessary expenses.
- Cross-team centralisation: Integrate AI tools and data across multiple teams to streamline collaboration, improve efficiency and ensure consistent decision-making.
  - Cost benefit: Pooled savings and scale.
  - Best practice: Enable shared FMaaS platforms and use tagging to track and manage costs across teams.
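The combined effect of these discounts is easy to estimate with simple arithmetic. The sketch below models monthly input-token spend under the batch (~50% cheaper) and caching (up to 90% discount) strategies above. The dollar prices are invented for illustration — they are not any provider's real pricing:

```python
# Illustrative per-million-token price -- NOT real provider pricing.
ON_DEMAND_INPUT_PRICE = 3.00  # $ per 1M input tokens, on-demand

BATCH_MULTIPLIER = 0.5    # batch inference: ~50% of the on-demand price
CACHED_MULTIPLIER = 0.1   # cached input tokens: up to 90% discount

def monthly_input_cost(tokens_millions: float,
                       cached_fraction: float = 0.0,
                       batch: bool = False) -> float:
    """Estimate monthly input-token spend for a workload.

    cached_fraction: share of input tokens served from the prompt cache.
    batch: route the workload through batch inference instead of real-time.
    """
    per_million = ON_DEMAND_INPUT_PRICE * (BATCH_MULTIPLIER if batch else 1.0)
    cached = tokens_millions * cached_fraction * per_million * CACHED_MULTIPLIER
    uncached = tokens_millions * (1.0 - cached_fraction) * per_million
    return cached + uncached

baseline = monthly_input_cost(100)                       # pure on-demand
with_cache = monthly_input_cost(100, cached_fraction=0.8)  # 80% cache hits
as_batch = monthly_input_cost(100, batch=True)           # asynchronous batch
```

Under these assumed prices, 100M input tokens a month drop from $300 on-demand to roughly $84 with an 80% cache-hit rate, or $150 via batch — which is why modelling the blend of strategies before committing to provisioned throughput is worthwhile.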
FMaaS platforms are perfect candidates for FinOps
FinOps, an operational framework and set of practices for maximising the value of cloud and technology spend, can be applied directly to FMaaS platforms.
FMaaS platforms align well with the FinOps principles of financial accountability, cost optimisation and real-time decision making in cloud environments.
Adopting an "optimisation, efficiency, growth" mindset can help you keep the cost structure under control. In turn this can ensure that your AI initiatives are both innovative and cost-effective.
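A minimal sketch of the FinOps showback idea mentioned above: rolling tagged FMaaS usage records up into per-team spend. The record shape and team names are invented for illustration — in practice the records would come from your provider's billing or usage export:

```python
from collections import defaultdict

# Hypothetical tagged usage records, e.g. exported from a billing API.
usage_records = [
    {"team": "search", "cost_usd": 120.50},
    {"team": "support-bot", "cost_usd": 310.00},
    {"team": "search", "cost_usd": 79.50},
]

def cost_by_team(records: list[dict]) -> dict[str, float]:
    """Aggregate spend by the 'team' tag for showback/chargeback reports."""
    totals: dict[str, float] = defaultdict(float)
    for rec in records:
        totals[rec["team"]] += rec["cost_usd"]
    return dict(totals)

team_costs = cost_by_team(usage_records)
```

Even this simple roll-up gives each team real-time visibility of their share of FMaaS spend — the financial accountability at the heart of FinOps.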
Cloud spending continues to surge globally, but most organisations haven’t made the changes necessary to maximise the value and cost-efficiency benefits of their cloud investments. Download the whitepaper From Overspend to Advantage to learn about our proven approach to optimising cloud value.