For digital marketers, questions like “Where’s this number coming from?” or “Why doesn’t this platform match the other one?” sound all too familiar. With multiple platforms and cross-channel performance to track, analyzing data – and its discrepancies – is just part of the job. We spend an inordinate amount of time trying to understand which number is “right.” And the numbers between platforms inevitably never match. So, what do we do? We pick a source of truth – our holy grail. Then we work tirelessly to ensure all reported numbers match this self-defined source of truth. But we’ve been doing it all wrong.
You might be shaking your head but hear me out. A single source of truth can actually be shortsighted. Having multiple sources that measure results differently should be viewed (and utilized) as our biggest asset. And it’s time we capitalize on it.
No Match, No Panic: Why Are We Seeing Data Discrepancies?
Before determining that a number is “wrong” simply because it’s different from the source of truth, let’s take a step back. Tracking between different platforms is always slightly different, and it’s important to understand why. The usual culprits? Lookback windows and attribution models.
Across Google’s owned platforms, the defaults for lookback windows and attribution models differ. For example, Google Ads defaults to a last-click attribution model, while Google Analytics defaults to a last non-direct click model. Google Analytics reports a conversion on the day the conversion occurred, whereas Google Ads reports it on the day of the click that led to it. Where this leaves you: confused, and with more conversions in Google Analytics than in Google Ads for a given date range. What actually happened? Google Ads retroactively applied those conversions to the days the clicks happened.
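To make those mechanics concrete, here is a minimal Python sketch. The dates and the reporting logic are hypothetical simplifications of how each platform keys a conversion to a date, not calls to any real API.

```python
from datetime import date

# Hypothetical conversion path: the ad click and the conversion fall in
# different months, which is exactly where date-based reports diverge.
conversion = {
    "click_date": date(2023, 3, 28),       # day the ad click happened
    "conversion_date": date(2023, 4, 2),   # day the purchase/lead happened
}

report_start, report_end = date(2023, 4, 1), date(2023, 4, 30)

# Analytics-style view: the conversion is reported on the day it occurred.
ga_count = int(report_start <= conversion["conversion_date"] <= report_end)

# Ads-style view: the conversion is reported back on the day of the click.
ads_count = int(report_start <= conversion["click_date"] <= report_end)

print(f"Analytics-style count for April: {ga_count}")   # 1
print(f"Ads-style count for April:       {ads_count}")  # 0 -- credited to March instead
```

Neither count is wrong; the two views simply key the same conversion to different dates.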
Don’t Force It: You Can Make Numbers Match, But Why Bother?
Lookback windows and attribution models are customizable, but why bother? Embrace the discrepancies instead of forcing all the numbers to perfectly match. Having multiple sources of truth can provide a better picture of your path to purchase. Focus on understanding each source of truth and lean on the one that fits the insight you’re looking for.
Take your performance data from ads, for example. You could certainly use Google Analytics as the source of truth. This would de-duplicate any conversions that multiple publishers are taking credit for. You’d have an understanding of the true number of conversions and see the dates those conversions happened. However, while this option provides clean data, it can be misleading. Knowing the date a conversion occurred is beneficial, but isn’t it equally important to understand (and, not to mention, optimize for) when the click occurred? It’s also important to understand whether multiple channels are taking credit for a conversion – not to inflate your conversion metrics, but to understand the touchpoints in the purchase journey. By using Google Analytics as the sole source of truth, marketers could end up optimizing away from clicks that lead to conversions further down the funnel.
Use different sources of truth depending on the question you’re trying to answer. Wondering how many leads your marketing efforts drove that can be followed up on this month? Use analytics. Wondering how effective your marketing efforts were at driving leads? Use publisher data.
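As a rough illustration of those two views, here is a short Python sketch over hypothetical conversion paths. The analytics-style count de-duplicates each conversion to its last non-direct touchpoint, while the publisher-style count lets every platform that touched the path claim the conversion.

```python
# Toy conversion paths (hypothetical data). Each path lists the channels
# that touched the user before converting.
paths = [
    ["google_ads", "direct"],
    ["facebook_ads", "google_ads", "direct"],
    ["facebook_ads"],
]

def last_non_direct(path):
    """Return the last non-direct channel in a path, or 'direct' if none."""
    non_direct = [ch for ch in path if ch != "direct"]
    return non_direct[-1] if non_direct else "direct"

# Analytics-style view: one credit per conversion, de-duplicated.
deduped = {}
for path in paths:
    channel = last_non_direct(path)
    deduped[channel] = deduped.get(channel, 0) + 1

# Publisher-style view: every publisher on the path claims the conversion.
publisher_credit = {}
for path in paths:
    for channel in set(path) - {"direct"}:
        publisher_credit[channel] = publisher_credit.get(channel, 0) + 1

print("De-duplicated (analytics-style):", deduped)           # 3 total credits
print("Per-publisher (platform-style): ", publisher_credit)  # 4 total credits
```

The totals don’t match – and that’s the point: one answers how many conversions happened, the other answers which channels played a part in them.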
Analyzing across different lookback windows, attribution models, and conversion data is definitely more challenging than the alternative. However, it will ultimately provide a more realistic view of how people interact with your brand and make your marketing dollars work harder and smarter.
The Impact in Real Scenarios
For brands with higher-priced items and/or long purchase journeys, utilizing multiple sources of truth can be especially pivotal in shaping marketing strategies.
Let’s take a look at luxury home goods, an industry where the consumer decision journey typically takes more time. Say a consumer visits a website from a Google search campaign and browses, but ultimately does not make a purchase. The consumer then returns directly to the website multiple times over a 90-day period and finally makes a $10,000 purchase. Because Google Analytics uses a last non-direct click model, it credits the purchase to the Google Ads click and reports the $10,000 in revenue on the day the purchase occurred. Within Google Ads, however, no revenue appears on that date: Google Ads attributes the revenue back to the original date the click happened, 90 days earlier (assuming the conversion window has been extended from its 30-day default to cover the full journey).
By fully understanding this discrepancy, one could report on revenue when it came in but still optimize toward the click that introduced the consumer to the brand. By identifying the clicks that drove delayed purchases, one could work to increase purchases and conversion rates. In comparison, relying solely on Google Analytics as the source of truth would have meant optimizing away from valuable Google Ads clicks, simply because those clicks take longer to convert.
Before You Go: Here’s an Important Consideration
Be communicative with the greater team. Avoid confusion by proactively telling them you’re using multiple sources of truth. Of course, this doesn’t mean your information needs to live in silos. In fact, using a tool like Tableau, Power BI, or Qlik will ensure multiple sources are easily accessible in one spot – making your life easier. When discussing data discrepancies, being able to compare numbers side by side is especially impactful and provides a strong case for using multiple sources of data in the team’s analysis and optimization.
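As an example of the kind of side-by-side view that makes those conversations easier, here is a small Python sketch using pandas. The numbers are made up and simply stand in for your actual platform exports; the resulting table is the sort of thing you might feed into Tableau, Power BI, or Qlik.

```python
import pandas as pd

# Hypothetical daily conversion exports from two sources. In practice these
# would come from your Google Ads and Google Analytics reports.
ads = pd.DataFrame({
    "date": ["2023-04-01", "2023-04-02", "2023-04-03"],
    "ads_conversions": [12, 9, 15],
})
analytics = pd.DataFrame({
    "date": ["2023-04-01", "2023-04-02", "2023-04-03"],
    "ga_conversions": [10, 11, 13],
})

# One tidy table with both sources side by side, plus the gap between them.
comparison = ads.merge(analytics, on="date", how="outer")
comparison["difference"] = comparison["ads_conversions"] - comparison["ga_conversions"]
print(comparison)
```

Seeing both numbers (and the gap between them) on the same row keeps the conversation about what each source measures, rather than about which one is broken.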
Nice contrarian perspective. I definitely get that these are estimates at best, that each estimate reflects the potential influences and/or biases of its provenance, and that understanding the differences yields valuable insight into the originating source. However, there are a couple of reasons to reconcile different estimates of the same or similar metrics:
1) Dashboarding/reporting – multiple metrics may create confusion. To your point, though, it’s pointless if the benchmark source of truth is arbitrarily selected; ideally you’d create a composite weighted by the importance of each source.
2) Reconciling differences is extremely critical for distinguishing meaningful differences from data collection/processing errors.