The Runbook | Fission's Blog

AI costs are rising. Your operations are the moat.

Written by Connor Skelly | May 15, 2026

Three things happened in the AI market over the last few weeks. Separately, they look like product announcements. Together, they tell a single story about where AI is headed and what it means for companies running on HubSpot.

Anthropic announced that starting June 15, programmatic usage of Claude (automated agents, CI pipelines, scheduled tasks built on Claude Code and the Agent SDK) is moving to a separate, credit-based billing system. Pro plan users get $20 per month in credits. Max users get $100 to $200. For teams that built automated workflows on top of Claude, this turns projects that felt free into projects with a real monthly bill. Many of those projects become difficult to justify once you're paying per token.

Perplexity's CTO announced they're moving away from MCP, the Model Context Protocol that Anthropic positioned as the universal standard for connecting AI tools. MCP's overhead consumes up to 72% of available context before an agent even processes a request. The protocol that was supposed to make everything interoperable turned out to be too expensive and too unreliable for production use. Perplexity is going back to traditional APIs.

Meanwhile, HubSpot published their vision for the agent era, and the entire thesis is built on APIs. Their position: anything you can do inside HubSpot should be doable through an API. They're opening up both their data layer and an intelligence layer (scores, assessments, benchmarks drawn from patterns across 280,000+ customer portals) to any agent or integration that wants to use it. We wrote about what this means for HubSpot customers here.

These three moves point in the same direction. The free-and-easy AI era is correcting, and the correction favors platforms with established APIs over custom AI builds on top of expensive language models.

The cheap AI window is closing

For the last 18 months, a certain category of AI project felt almost free to build. You could spin up a Claude subscription, wire it to your data, automate a workflow, and run it indefinitely for the cost of the subscription. The model providers were subsidizing usage to drive adoption. That subsidy is ending.

The Claude credit change is the clearest signal. A team that was running automated prospecting agents, data enrichment pipelines, or scheduled analysis jobs on Claude is about to see those costs itemized. $20 per month in API credits does not go far when you're running agents that consume thousands of tokens per execution. The Max plans give you more room, but the principle is the same: usage that was bundled is now metered.

This matters for anyone evaluating AI projects, because the cost model for "build it on Claude" or "build it on GPT" just changed. The total cost of ownership for custom AI workflows went up, and it went up on a timeline that gives teams about a month to figure out what they're going to do about it.

Why platform APIs win this round

The Perplexity and HubSpot stories are two sides of the same coin.

Perplexity tried MCP (the protocol designed to let AI agents talk to any tool through a standard interface) and found it too expensive for what it delivered. The context overhead alone ate most of the available context window before the agent could do any real work. They went back to APIs because APIs are cheaper, faster, and more predictable. The universal connector dream is a nice idea, but the economics favor purpose-built integrations with platforms that have mature API surfaces.

HubSpot was one of the first major platforms to ship an MCP server, and it's gotten real traction. Partners are building on it, the use cases fill the LinkedIn feed weekly, and there are genuinely useful applications already in production (with several notable gaps that HubSpot is still closing). But the interesting thing about HubSpot's approach is that MCP is one access method, not the strategy. The strategy is APIs.

HubSpot is betting on exactly that. Their agent-era vision is about making HubSpot the operating layer for AI agents, where agents can read, write, and act on CRM data through documented APIs. The intelligence layer they're building on top (deal risk scores, customer health assessments, conversion benchmarks) is the kind of thing companies have been trying to build from scratch by piping CRM data into language models and asking for analysis. HubSpot is going to offer that natively, built on patterns from hundreds of thousands of portals, available through an API call. When the announcement came out, I wrote about what this means for HubSpot customers and where I think the AI training approach still has room to mature. The API-first direction is the right call, and it's one of the reasons I'm bullish on building AI workflows on HubSpot's infrastructure.

The implication is straightforward. It is significantly cheaper and technically more reliable to build AI workflows on top of robust platform APIs like HubSpot's than to build them from scratch using expensive language model subscriptions. The per-query cost of hitting a HubSpot API endpoint is a fraction of the cost of running the same analysis through a language model. The reliability is higher because APIs return structured data with defined schemas, while language models return probabilistic text that has to be parsed and validated.
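To make the reliability point concrete, here's a small sketch contrasting the two paths. The payload shape mirrors HubSpot's CRM v3 object format (an "id" plus a "properties" map, with "amount" and "dealstage" as standard deal properties); the free-text parsing side is a deliberately simplified stand-in for what validating a language model's answer looks like.

```python
import re
from typing import Optional

def amount_from_api(payload: dict) -> float:
    # Structured path: one typed lookup against a defined schema.
    # If the field is missing or malformed, this fails loudly and immediately.
    return float(payload["properties"]["amount"])

def amount_from_text(answer: str) -> Optional[float]:
    # Probabilistic path: parse prose, validate, and hope the phrasing holds.
    # Every new phrasing the model produces is a new parsing edge case.
    match = re.search(r"\$([\d,]+(?:\.\d+)?)", answer)
    return float(match.group(1).replace(",", "")) if match else None

api_payload = {"id": "512", "properties": {"amount": "45000", "dealstage": "contractsent"}}
print(amount_from_api(api_payload))  # 45000.0, every time
print(amount_from_text("The deal is worth roughly $45,000, I believe."))  # 45000.0, if the phrasing cooperates
print(amount_from_text("It's a mid-six-figure opportunity."))  # None: the parser gives up
```

The asymmetry is the whole argument: the API path has one failure mode you can test for, while the text path has as many failure modes as the model has phrasings.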

And most of these capabilities run within the AI credits already included in your HubSpot subscription. When usage limits tighten everywhere else, the platform with built-in credits becomes the cost-effective default.

Data quality is the prerequisite either way

This is where the conversation always comes back to operations.

Whether you're building on HubSpot's APIs, running agents through Claude, or using any other AI tool, the output quality depends entirely on the data quality underneath. We wrote about this in depth: AI on top of bad data produces confident wrong answers at scale. AI on top of a well-structured CRM produces genuine operational advantage.

The companies that invested in data integrity, pipeline discipline, and integration governance are now sitting on the most cost-effective AI infrastructure available. Their CRM data is clean enough to feed AI workflows directly. Their processes are defined enough that AI can operate within them. Their properties and lifecycle stages mean something, so when an agent reads a deal record or a contact timeline, it's reading signal instead of noise.

The companies that skipped that work are in a harder position. They can't take advantage of HubSpot's native AI capabilities because the data those capabilities read isn't trustworthy. They can't build cost-effective workflows on platform APIs because the APIs return garbage if the underlying records are garbage. And building custom AI on top of language models just got more expensive.

The foundation work was always the right investment. The economics now make that undeniable.

What AI-enabled projects actually look like on a clean foundation

When the operational layer is solid, AI projects built on HubSpot APIs become practical in ways that custom builds can't match on cost or reliability.

Enriched lead magnets move beyond static PDFs and generic quizzes. AI delivers personalized, data-backed assessments to each respondent based on their actual inputs, scored against patterns from your CRM data. Higher conversion, higher quality leads, and an immediate value exchange that earns the contact info.

Proactive prospecting replaces manual target account research with AI-powered identification, enrichment, scoring, and routing. Your team works a prioritized list based on real fit signals, scored against your actual ICP and routed to the right rep automatically. The scoring model improves with every closed-won and closed-lost deal because it's reading from your pipeline data.
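A toy version of the scoring-and-routing step described above, to show how little magic is involved once the data is clean. The property names (industry, numberofemployees, hubspot_owner_id) match HubSpot's default company properties; the ICP definition, weights, and thresholds are purely illustrative, not a real scoring model.

```python
# Illustrative ICP definition: target industries and company-size band.
ICP = {"industries": {"SOFTWARE", "FINTECH"}, "min_employees": 50, "max_employees": 1000}

def fit_score(company: dict) -> int:
    # Score one CRM company record against the ICP. Weights are made up.
    props = company["properties"]
    score = 0
    if props.get("industry") in ICP["industries"]:
        score += 50
    employees = int(props.get("numberofemployees", 0))
    if ICP["min_employees"] <= employees <= ICP["max_employees"]:
        score += 30
    if props.get("hubspot_owner_id"):
        score -= 20  # already owned by a rep: deprioritize for prospecting
    return score

companies = [
    {"id": "1", "properties": {"industry": "SOFTWARE", "numberofemployees": "200"}},
    {"id": "2", "properties": {"industry": "RETAIL", "numberofemployees": "12"}},
]
prioritized = sorted(companies, key=fit_score, reverse=True)
print([c["id"] for c in prioritized])  # ['1', '2']
```

Notice that every input here is a CRM property. If industry is blank or numberofemployees is stale on half your records, the prioritized list is noise, which is exactly the data-quality point below.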

Sales readiness at the opportunity level gives reps a pre-call briefing that connects enrichment data about the account to what your team actually sells. Contextualized to your offerings and your sales process. Prep that maps to your conversations.

Post-discovery follow-up takes the call transcript, layers in all upstream enrichment data, and generates specific follow-up content. Follow-up that references what was actually discussed and connects it to what matters for that account.

Customer agents for support connect to your existing support channels, source answers from your knowledge base, and manage the full ticket lifecycle from creation to resolution. Routine conversations are handled automatically. Complex issues route to your team for human escalation. The agent learns from your resolved tickets and your knowledge base content, so its accuracy improves as your support documentation improves.

Each of these works with most HubSpot subscriptions, your existing AI credits, and your existing data and processes. If the data and process layer isn't ready, that's where you need to start. 

The sequence still matters

AI costs rising is a market correction, and market corrections reward discipline. The teams that sequenced their work correctly (foundation first, AI on top) are positioned to build on stable, cost-effective infrastructure. The teams that jumped to AI before the foundation was ready are now paying more for less reliable results.

The sequencing question for any company on HubSpot is the same one it has always been: can you trust your data? Are your processes defined and enforced? Do your properties, stages, and lifecycle definitions actually mean something? If yes, you're ready to layer AI on top and it will compound. If not, every AI project you build will inherit the problems underneath it, and those problems just got more expensive.

The strategy call (which is really a diagnostic) is where we figure out which layer you're actually on. Some teams are ready for AI now. Most need foundational work first, and the ones who do that work are the ones who get the most out of AI when they get there.

Book a strategy call and we'll take an honest look at where your operations are today.