
HubSpot reporting and attribution: building dashboards your team actually trusts

Written by Connor Skelly | Apr 27, 2026

If you're exporting to Excel to double-check your HubSpot numbers, the reports aren't trusted. That's the simplest diagnostic we have for reporting maturity. Not whether dashboards exist. Not whether someone built a report last quarter. Whether the people using the data actually believe it.

In most portals, the answer is some version of "mostly" or "for the big stuff" or "we always cross-reference with the spreadsheet." That hedging tells you everything. The reporting exists but the confidence doesn't. And when confidence is low, people stop using the reports and start assembling their own numbers from whatever sources they personally trust. Now you have three people presenting three different versions of the same metric in the same meeting, and the conversation becomes about which number is right instead of what to do about the number.

That's not a dashboard design problem. It's a process problem. And it starts well before anyone opens the report builder.

Here's how we think about it. A good report comes down to the question you're trying to answer. Behind that question are specific datapoints. And behind each datapoint is a process, automated or manual, that makes the data accurate and available. That process layer is where most companies get stuck. They skip straight from "what question do we want to answer?" to "let's build a report," without asking "is the data this report needs actually being captured, and is it being captured correctly?" The answer is usually no, or "sort of, in three different fields."

As an operations agency, we live in the process layer. Not the dashboard. Not the chart. The system that makes the data trustworthy before it ever reaches a report.

Why the numbers don't agree with each other

Reports pull from properties. If the properties are unreliable, the reports are unreliable. This is the same dependency chain from data integrity, but it shows up most painfully in reporting because reporting is where leadership looks.

Here's the most common version. Marketing tracks lead source using one property. Sales has a different field for the same concept. The attribution model in HubSpot references a third. When the VP asks "what's our best-performing channel?" the answer depends entirely on which field the report is built on. And the three fields disagree because they're populated by different systems at different times.

The person who built the report picked one field. They probably picked the one that was easiest to filter on, or the one that had the most data, or the one they were told to use by whoever set up the portal two years ago. But nobody documented the decision. So when someone else builds a report next quarter using a different field, the numbers won't match. Both reports are technically correct. They're just measuring different things with the same label.

This is where the process question becomes concrete. It's not enough to decide which field is the source of truth (though that's step one). You also have to ask: what process populates that field? Is it automated through a workflow, manually entered by a rep, or synced from an integration? How reliable is that process? If it's manual and reps skip it 40% of the time, the field is 40% empty, and any report built on it is 40% fiction. The reporting work starts with the process that makes the data real, then the property architecture that organizes it, then the dashboard that displays it. Most teams work that sequence in reverse.
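If you want to put a number on that fiction percentage, HubSpot's CRM search API will give you fill rates directly. Here's a minimal Python sketch; the lead_source property name is a placeholder for whichever field you've designated as the source of truth, and you'd authenticate with a private app token.

```python
import requests

# Placeholders: a private app token, and whichever property you've
# designated as the source of truth for lead source.
TOKEN = "YOUR_PRIVATE_APP_TOKEN"
PROPERTY = "lead_source"

SEARCH_URL = "https://api.hubapi.com/crm/v3/objects/contacts/search"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def count(filters: list[dict]) -> int:
    """Return the total number of contacts matching the filters."""
    body = {"filterGroups": [{"filters": filters}], "limit": 1}
    resp = requests.post(SEARCH_URL, headers=HEADERS, json=body)
    resp.raise_for_status()
    return resp.json()["total"]

filled = count([{"propertyName": PROPERTY, "operator": "HAS_PROPERTY"}])
empty = count([{"propertyName": PROPERTY, "operator": "NOT_HAS_PROPERTY"}])
total = filled + empty

rate = filled / total if total else 0.0
print(f"{PROPERTY}: {filled}/{total} contacts populated ({rate:.0%})")
print(f"Any report grouped by {PROPERTY} is blind to {empty} contacts.")
```

Run that against the field your channel reporting is built on before the next leadership meeting. The fill rate is the ceiling on how much the report can be trusted.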

Cross-object relationships are the hidden dependency

The other structural issue most teams miss is association completeness. HubSpot is a relational database. Contacts associate to companies. Deals associate to contacts and companies. Tickets associate to contacts and companies. When those associations are incomplete, your cross-object reporting breaks in ways that are hard to spot.

A company record that's missing deal associations can't show accurate revenue at the account level. If your sales team creates deals without associating them to a company (which happens constantly when deals are created from the contact record and HubSpot's auto-association doesn't fire), your company-level revenue reporting undercounts. You might look at a key account and see $50K in lifetime revenue when the real number is $180K, because three deals were never associated.

Orphaned contacts are the same problem in reverse. A contact without a company association can't roll up into account-level reporting. If you're running a report on "companies with more than 5 contacts," you're undercounting every company that has contacts floating without associations. Your ABM targeting, your account scoring, your company-level engagement metrics are all working with an incomplete picture.

The fix isn't complicated, but it requires a defined process. Auto-association rules for contacts to companies based on email domain. Required company association on deal creation. Regular audits for orphaned records (an active list that filters for contacts with no company association is one of the most useful lists you can build). Some of this is automated, some of it is a manual review cadence. Either way, it's a process that runs consistently. Without it, your cross-object reporting is working with an incomplete dataset and nobody knows by how much.
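Here's what that orphaned-contact audit can look like outside the list tool: a rough sketch against the same search API, using HubSpot's built-in associatedcompanyid contact property. Grouping the orphans by email domain shows which accounts are silently missing from company-level reporting. (The search API only pages through 10,000 records, which is plenty for an audit sample.)

```python
from collections import Counter
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"
SEARCH_URL = "https://api.hubapi.com/crm/v3/objects/contacts/search"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def orphaned_contacts():
    """Yield contacts that have no associated company record."""
    after = None
    while True:
        body = {
            "filterGroups": [{"filters": [
                {"propertyName": "associatedcompanyid",
                 "operator": "NOT_HAS_PROPERTY"},
            ]}],
            "properties": ["email"],
            "limit": 100,
        }
        if after:
            body["after"] = after
        resp = requests.post(SEARCH_URL, headers=HEADERS, json=body)
        resp.raise_for_status()
        data = resp.json()
        yield from data["results"]
        after = data.get("paging", {}).get("next", {}).get("after")
        if not after:
            return

# Group orphans by email domain to see which accounts are silently
# missing from company-level reporting.
domains = Counter()
for contact in orphaned_contacts():
    email = contact["properties"].get("email") or ""
    if "@" in email:
        domains[email.split("@", 1)[1].lower()] += 1

for domain, n in domains.most_common(20):
    print(f"{domain}: {n} unassociated contact(s)")
```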

Role-based dashboards vs. the everything dashboard

Most HubSpot portals have one of two problems with dashboards: too few or too many. Either there's a single dashboard that tries to answer every question for every role, or there are 40 dashboards that nobody maintains, so nobody can find the right one.

The fix is designing dashboards by role and cadence. What questions does each person need answered, and how often?

An executive opening a dashboard before a board meeting needs to see "how are we doing?" in under 10 seconds. Revenue, pipeline coverage, conversion rates, retention. Trends over time, not individual records. No clicking required. If they have to ask someone to interpret it, the dashboard failed.

A sales leader has a completely different question. They need "what needs my attention this week?" Pipeline health, deal velocity, close rates by rep, forecast vs. actual. They're looking at this before every pipeline review, and the dashboard should surface the exceptions (stale deals, slipped close dates, rep outliers) without making them hunt.

Individual reps care about none of that. They need their pipeline by stage, overdue tasks, and today's meetings. The question is "what do I do first this morning?" and the dashboard should answer it in one glance.

Marketing has its own version: MQLs generated, MQL-to-SQL conversion, source performance, content engagement. The question changes week to week between "is this working?" and "where do we shift budget?"

When you try to serve all four of these from one dashboard, everyone gets overwhelmed and nobody gets what they need. The executive scrolls past rep-level detail. The rep ignores company-wide metrics that have nothing to do with their day. Four roles, four dashboards, four different questions.

Attribution is the hardest version of this problem

Attribution gets more attention than almost any other reporting topic, and most of the conversation focuses on the wrong thing: which model to use. First touch, last touch, multi-touch in its linear, time-decay, and W-shaped variants. Teams spend weeks debating model selection when the real question is whether their data can support any model at all.

Attribution requires specific data to be captured at specific moments by specific processes. And once those moments pass, you can't backfill them. Every attribution gap traces back to a process that wasn't in place when the data needed to be captured.

Original source has to be captured at the moment of first contact creation and never overwritten. This is the one most teams get wrong, because an integration or import quietly overwrites original source, and now your first-touch attribution is permanently damaged for those records. You won't notice for months. By then the historical data is gone.

UTM parameters are a governance problem disguised as a technical one. If your marketing team uses "google" in one campaign, "Google" in another, and "google-ads" in a third, your source reporting fragments into variations that should be the same bucket. The fix is a naming convention doc. It takes an hour to create and saves hundreds of hours of cleanup later. Most teams don't have one.
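The naming convention doc is the real fix, but you can also enforce it in code wherever UTM values enter your systems. A sketch, with an illustrative variant map you'd extend from the drift in your own data:

```python
# Illustrative variant map; extend it from the drift in your own data.
CANONICAL_SOURCES = {
    "google": "google",
    "google-ads": "google",
    "googleads": "google",
    "adwords": "google",
    "fb": "facebook",
    "facebook": "facebook",
    "linkedin": "linkedin",
    "linked-in": "linkedin",
}

def normalize_source(raw: str) -> str:
    """Map a raw utm_source value to its canonical bucket.

    Unknown values raise instead of passing through silently, so
    naming-convention violations surface before they fragment your
    source reporting.
    """
    key = raw.strip().lower()
    if key not in CANONICAL_SOURCES:
        raise ValueError(f"utm_source '{raw}' is not in the naming convention")
    return CANONICAL_SOURCES[key]
```

The design choice that matters is the ValueError: a normalizer that silently passes unknown values through just hides the governance problem it was built to catch.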

Lifecycle stage transitions need timestamps, and this is where the connection to automation becomes concrete. When did a contact become an MQL? When did they become an SQL? When was the deal created? If lifecycle stages change manually and inconsistently, the timestamps are meaningless. You can't measure funnel velocity if you can't trust when each transition happened.
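HubSpot stamps each transition in calculated date properties, which is what makes velocity measurable at all. Here's a sketch of the MQL-to-SQL calculation; the two internal property names below are our assumption of the relevant pair, so confirm them in your portal's property settings before trusting the output.

```python
from datetime import datetime
from statistics import median

# Internal names of HubSpot's calculated transition-date properties.
# These are our assumption of the relevant pair; confirm them in your
# portal's property settings before relying on this.
MQL_PROP = "hs_lifecyclestage_marketingqualifiedlead_date"
SQL_PROP = "hs_lifecyclestage_salesqualifiedlead_date"

def parse(ts: str) -> datetime:
    """Parse the ISO 8601 timestamps the v3 API returns."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def mql_to_sql_days(contacts: list[dict]) -> float:
    """Median days from MQL to SQL across contacts that made both
    transitions. Contacts missing either timestamp are skipped, and
    how many get skipped is itself a number worth reporting."""
    deltas = []
    for contact in contacts:
        props = contact["properties"]
        mql, sql = props.get(MQL_PROP), props.get(SQL_PROP)
        if mql and sql:
            deltas.append((parse(sql) - parse(mql)).days)
    if not deltas:
        raise ValueError("no contacts with both transition timestamps")
    return median(deltas)
```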

There's also a subtlety most teams miss around source tracking. A contact's first visit might come from organic search, but the form submission that converted them might come from a paid ad two weeks later. If you only track original source, you're giving organic credit for a conversion that paid media drove. Session source and form source need to exist as separate properties from original source.
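Creating those as custom contact properties is a one-time API call (or two minutes in settings). A sketch using the v3 properties API; the property names here are hypothetical, and contactinformation is the default contact property group:

```python
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"
URL = "https://api.hubapi.com/crm/v3/properties/contacts"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Hypothetical property names. The point is that these live alongside,
# not instead of, HubSpot's built-in original source properties.
for name, label in [
    ("session_source", "Session Source"),
    ("form_source", "Form Source"),
]:
    resp = requests.post(URL, headers=HEADERS, json={
        "name": name,
        "label": label,
        "type": "string",
        "fieldType": "text",
        "groupName": "contactinformation",  # default contact group
    })
    resp.raise_for_status()
    print(f"created {name}")
```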

Here's our practical advice on attribution: pick a model, any model, and commit to it for at least two quarters. The model matters less than most people think. What matters is that the data feeding it is consistent, the properties are governed, and you're measuring the same thing the same way over time. You can always change models later. You can't retroactively capture data you didn't track.

One more thing on attribution that most teams overcomplicate: it's okay to put humans in the loop. Not everything needs to be automated. At the deal level, your rep is already in the system working the deal. Before closed-won, require them to verify deal source. "How did this opportunity actually originate?" That's a 10-second field update from the person who knows the answer, and it produces cleaner attribution data than any automated model trying to stitch together touchpoints after the fact.

The same applies to disqualification reasons, competitive intel, and buying committee mapping. Reps have context that the system doesn't. Asking them to confirm a field at a stage transition is a lightweight process that produces high-quality data. The goal isn't to automate every datapoint. It's to make sure every datapoint has a process behind it, and sometimes that process is a human who's already there.
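The enforcement half of that process is a simple audit: closed-won deals where the verified source field is still empty. A sketch, assuming a custom deal_source property and the default pipeline's closedwon stage ID; swap in your own.

```python
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"
SEARCH_URL = "https://api.hubapi.com/crm/v3/objects/deals/search"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# "closedwon" is the default pipeline's stage ID; custom pipelines use
# their own IDs. "deal_source" stands in for whatever field your reps
# verify before close.
body = {
    "filterGroups": [{"filters": [
        {"propertyName": "dealstage", "operator": "EQ", "value": "closedwon"},
        {"propertyName": "deal_source", "operator": "NOT_HAS_PROPERTY"},
    ]}],
    "properties": ["dealname", "hubspot_owner_id"],
    "limit": 100,
}
resp = requests.post(SEARCH_URL, headers=HEADERS, json=body)
resp.raise_for_status()
data = resp.json()

print(f"{data['total']} closed-won deals with no verified deal source:")
for deal in data["results"]:
    props = deal["properties"]
    print(f"  {props.get('dealname')} (owner {props.get('hubspot_owner_id')})")
```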

The attribution data you should be capturing now

Even if you're not running attribution reports today, there are properties you should be populating now because you can't backfill them later.

Original source and original source drill-down (HubSpot defaults, but make sure integrations aren't overwriting them). First conversion (the form or action that turned an anonymous visitor into a known contact). Lifecycle stage timestamps for every transition. Deal source (how the opportunity was generated, distinct from how the contact was generated). Campaign membership with first and last touch dates.
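The overwrite check is the one worth automating first, and the v3 API makes it straightforward because you can request property history. If original source (hs_analytics_source internally) has more than one history entry, something rewrote it. A sketch over the first page of contacts:

```python
import requests

TOKEN = "YOUR_PRIVATE_APP_TOKEN"
URL = "https://api.hubapi.com/crm/v3/objects/contacts"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# hs_analytics_source is the internal name of Original Source. The API
# caps the page size at 50 when property history is requested.
params = {"propertiesWithHistory": "hs_analytics_source", "limit": 50}
resp = requests.get(URL, headers=HEADERS, params=params)
resp.raise_for_status()

for contact in resp.json()["results"]:
    history = contact["propertiesWithHistory"]["hs_analytics_source"]
    if len(history) > 1:
        # Entries appear most-recent-first in our experience; the
        # sourceType field tells you what wrote the value.
        culprit = history[0].get("sourceType", "unknown")
        print(f"contact {contact['id']}: original source overwritten "
              f"{len(history) - 1} time(s), most recently by {culprit}")
```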

If those properties are being captured cleanly, you can run any attribution model you want when you're ready. If they're not, you'll have gaps in your historical data that no reporting tool can fix.

From "someone builds a report" to self-serve

The last piece of reporting architecture is governance. Who builds reports, who maintains them, and who has access.

In most teams, one person becomes the accidental BI analyst. They know the report builder, so every data question gets routed to them. They become a bottleneck. Simple questions that should take five minutes take three days because they're queued behind six other requests.

The goal is self-serve reporting for recurring questions and managed reporting for complex analysis. Standing dashboards, once built and validated, should answer 80% of regular questions. The remaining 20% goes to whoever owns reporting.

That requires documentation: what each dashboard measures, what properties it references, when it was last validated. It requires permissions so nobody accidentally edits a shared dashboard. And it requires a quarterly review, because properties change, processes change, and reports that were accurate six months ago might be pulling from deprecated fields.
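The documentation doesn't need to be fancy. Even a registry this small makes the quarterly review mechanical; a sketch with illustrative entries:

```python
from datetime import date

# A minimal registry. Ours lives in a shared doc; the shape is what
# matters: owner, the properties each dashboard depends on, and a
# last-validated date. Entries here are illustrative.
DASHBOARDS = [
    {"name": "Exec overview", "owner": "ops",
     "properties": ["amount", "dealstage", "lifecyclestage"],
     "last_validated": date(2026, 1, 15)},
    {"name": "Rep daily", "owner": "sales ops",
     "properties": ["dealstage", "hs_task_status"],
     "last_validated": date(2025, 9, 2)},
]

REVIEW_EVERY_DAYS = 90  # quarterly

for d in DASHBOARDS:
    age = (date.today() - d["last_validated"]).days
    if age > REVIEW_EVERY_DAYS:
        print(f"'{d['name']}' ({d['owner']}) last validated {age} days ago; "
              f"re-check properties: {', '.join(d['properties'])}")
```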

The first question we ask about reporting is whether you trust the numbers. The second is whether the processes behind those numbers, the ones that capture, validate, and maintain the data, are actually running. Usually those are two different conversations. The diagnostic call is where we figure out which one you need first.