1. Introduction
Marketing attribution, the practice of determining which marketing touches deserve credit for driving conversions, is fundamentally broken for most businesses. Not broken in the sense that it needs minor adjustments or better tools, but broken in the sense that the underlying assumptions and technical infrastructure that made deterministic attribution possible have been systematically dismantled over the past five years. iOS privacy restrictions eliminated cross-app and cross-website tracking on Apple devices, which represent 50 to 60 percent of mobile traffic for most US businesses according to StatCounter data. Google announced and then repeatedly delayed the deprecation of third-party cookies in Chrome, but the intent is clear and the eventual execution is inevitable. Walled garden platforms like Facebook, Google, and Amazon hoard user data and refuse to share granular conversion paths with advertisers. The regulatory environment in Europe, California, and increasingly across US states imposes consent requirements and data minimization practices that further limit tracking capabilities. Privacy advocates, browser vendors, and platform incentives all point in the same direction, toward a future where marketers can measure aggregate performance but cannot track individual users across properties with the precision that attribution models require.
This transition has left most small business owners in a state of measurement paralysis. They know that their marketing campaigns generate customers, but they cannot confidently say which campaigns deserve credit or how much to invest in each channel. Facebook reports one set of conversion numbers. Google reports different numbers. Their website analytics show yet another picture. When they try to reconcile these platform-reported metrics, the totals exceed their actual customer count by 30 to 80 percent because each platform takes credit using attribution windows and methodologies that inflate their contribution. According to a 2025 Gartner survey of over 400 marketing leaders, 68 percent said they have low or very low confidence in their marketing attribution data, and 71 percent said that attribution challenges negatively impact their ability to allocate budget effectively. The problem is not lack of tools. The market offers dozens of attribution platforms, multi-touch attribution models, and analytics dashboards. The problem is that the data these tools depend on is increasingly unavailable, incomplete, or unreliable.
The good news is that perfect attribution was never necessary to make good marketing decisions. The belief that you need to know exactly which ad impression caused which conversion is a fantasy perpetuated by analytics vendors and the remnants of a tracking-rich era that is not coming back. What businesses actually need is directionally accurate insight into which channels drive most of their results, which audiences respond best to which messages, and where incremental budget will generate positive returns. That level of insight remains achievable even in a privacy-first world, but it requires abandoning attribution models built for tracking conditions that no longer exist and adopting frameworks that work with the data you can actually collect. This article explains exactly what broke, why it is not fixable using traditional methods, and what measurement approaches actually work for small businesses navigating the post-tracking landscape.
2. What Broke and Why It Matters
Marketing attribution depends on the ability to connect marketing exposures to conversion outcomes at the individual user level. If you can observe that Jane Doe saw your Facebook ad on Monday, clicked your Google ad on Wednesday, visited your website directly on Friday, and purchased on Saturday, you can build models that assign credit to each of those touches based on various rules. The problem is that in 2026, you cannot reliably observe that sequence for most users. The tracking infrastructure that made it possible has been systematically dismantled.
The first major break was Apple's iOS 14.5 App Tracking Transparency (ATT) update released in April 2021. This update required apps to ask users for explicit permission before tracking their activity across other companies' apps and websites. The opt-in rate stabilized between 15 and 25 percent according to Flurry Analytics, meaning that 75 to 85 percent of iOS users became functionally invisible to cross-app and cross-website tracking. Before ATT, when a user clicked a Facebook ad, visited a website, and made a purchase, Facebook could track the entire journey using the Facebook Pixel combined with the Identifier for Advertisers (IDFA) that allowed cross-app attribution. After ATT, that tracking chain broke for the majority of iOS users. Facebook introduced Aggregated Event Measurement as a workaround, but it uses probabilistic modeling and aggregate data instead of deterministic person-level tracking. Conversion windows shrank from 28 days to 7 days. Attribution became modeled estimates rather than measured facts.
The second break was the gradual deprecation of third-party cookies, the small text files that websites use to track users across different domains. Third-party cookies enabled retargeting, cross-site conversion tracking, and multi-touch attribution for desktop web traffic. Safari and Firefox blocked them by default starting in 2019 and 2020 respectively. Google announced plans to deprecate them in Chrome, the dominant browser with approximately 65 percent global market share, though the timeline has been repeatedly delayed due to regulatory scrutiny and advertiser pushback. As of early 2026, Chrome still supports third-party cookies for most users, but Google's stated direction is elimination, and the industry is operating under the assumption that third-party cookie tracking will be severely limited or eliminated within one to three years. When that happens, the primary mechanism for tracking users across websites disappears, and with it goes most cross-site attribution for desktop traffic.
The third break is structural rather than technical. Walled garden platforms like Facebook, Google, and Amazon operate closed ecosystems where they collect user data but share only limited aggregated insights with advertisers. When a user clicks your Facebook ad, you know that someone clicked. If they convert, Facebook reports the conversion if it happens within the attribution window and the user is trackable. But Facebook does not tell you whether that user also saw your Google ad, your CTV ad, your direct mail piece, and your organic search result before converting. Each platform reports conversions in isolation using attribution rules that favor their own contribution. Google uses last-click attribution by default, crediting the final ad click. Facebook uses 7-day click and 1-day view attribution, crediting conversions that happen within those windows. When you aggregate these platform reports, the total attributed conversions often exceed your actual conversion count by 30 to 100 percent because multiple platforms claim credit for the same conversions.
These three breaks, iOS tracking restrictions, cookie deprecation, and walled garden data silos, collectively destroyed the technical foundation that traditional attribution models depend on. The models still exist. Attribution platforms still offer sophisticated algorithms with names like data-driven attribution, time decay, position-based, and algorithmic attribution. But these models require complete or near-complete visibility into user journeys across devices and platforms. When 60 to 80 percent of the user journey is invisible due to privacy restrictions and platform data hoarding, the models produce outputs that look mathematically precise but bear limited relationship to reality. Garbage in, garbage out, as the saying goes, and the inputs are increasingly garbage.
3. Why Multi-Touch Attribution Is a Mathematical Fantasy
Multi-touch attribution (MTA) emerged in the mid-2010s as the gold standard for marketing measurement. The concept was elegant. Track every marketing touch a user experiences across all channels and devices. When they convert, use statistical models to allocate fractional credit to each touch based on its position in the journey, the time elapsed between touches, and learned patterns from thousands of similar journeys. The promise was that you could finally know exactly how much credit Facebook awareness ads, Google search ads, email touches, and retargeting impressions each deserved, allowing you to optimize budget allocation with scientific precision.
The reality is that MTA was always more aspiration than achievement, even in the tracking-rich era before iOS 14.5 and cookie deprecation. The first problem is the identity graph, the system that connects a user's behavior across devices and platforms into a unified profile. Building an accurate cross-device identity graph requires observing the same user on multiple devices through deterministic identifiers like logins or probabilistic signals like device fingerprints, IP addresses, and behavioral patterns. According to research by Tapad and Drawbridge, two leading identity resolution providers before their acquisitions, cross-device match rates for most businesses ranged from 40 to 70 percent pre-ATT, meaning that 30 to 60 percent of users could not be reliably tracked across devices even under ideal conditions. Post-ATT, those match rates dropped to 15 to 40 percent for most businesses because iOS tracking restrictions eliminated the primary signals used for probabilistic matching. An attribution model that only sees 30 percent of the full customer journey cannot possibly allocate credit accurately across the other 70 percent.
The second problem is causal inference. Even if you could track every marketing touch perfectly, determining which touches caused the conversion versus which touches merely correlated with conversion is mathematically intractable without controlled experiments. A user who sees your display ad, your Facebook ad, and your search ad before converting might have converted from any one of those touches alone, from the combination of all three, or from none of them because they were already planning to buy and the ads just happened to be present. Attribution models use rules or machine learning to allocate credit, but these are fundamentally assumptions about causation, not measurements of causation. Different models produce wildly different credit allocations for the same data. According to a 2024 study by C3 Metrics analyzing attribution models across 150 advertisers, first-click attribution, last-click attribution, linear attribution, and algorithmic attribution produced credit allocations that differed by an average of 43 percent for the same campaign data. All of these models claimed to represent truth, but they could not all be correct because they disagreed so dramatically.
The third problem is channel coverage. MTA systems can only attribute credit to channels they can track. For most businesses, that includes paid search, paid social, display advertising, and owned properties like your website and email. It does not include offline channels like direct mail, TV, radio, or out-of-home advertising because those channels cannot drop tracking pixels. It often does not include organic search, direct traffic, or word-of-mouth because those sources do not have trackable marketing touches even though they represent massive drivers of conversions for most businesses. An attribution system that only measures 50 or 60 percent of the customer journey will systematically over-attribute credit to the tracked channels and under-credit the untracked channels, creating a bias toward paid digital channels and away from brand-building activities and offline marketing that are equally or more important but harder to measure.
Industry practitioners have known these limitations for years, but the combination of iOS restrictions, cookie deprecation, and regulatory pressure has made them impossible to ignore. What was previously a measurement challenge, imperfect tracking and causal inference problems, has become a measurement impossibility. You simply cannot build accurate multi-touch attribution models when you cannot track users across platforms, when platform data is siloed, and when the majority of the customer journey is invisible. Businesses that continue investing in MTA platforms are paying for mathematical sophistication applied to incomplete and biased data, producing false precision that hurts decision-making more than it helps.
4. Incrementality Testing as Ground Truth
If traditional attribution cannot reliably tell you which channels drive conversions, what can? The most credible alternative is incrementality testing, also known as holdout testing or geo-experiments. Incrementality testing measures the causal impact of marketing activity by comparing outcomes between groups that are exposed to marketing and control groups that are not, using randomized controlled trials or statistical methods that approximate randomization.
The simplest form of incrementality testing is a channel on-off test. You run your Facebook campaign in January and measure conversions. You turn off Facebook in February and measure conversions again. The difference, adjusted for seasonality and other variables, represents the incremental contribution of Facebook advertising. If you generated 120 conversions in January with Facebook running and 90 conversions in February with Facebook off, Facebook contributed approximately 30 incremental conversions. This is not perfect because other factors might have changed between January and February, but it provides directionally accurate insight that is causally grounded rather than based on attribution rules that inflate platform contribution.
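The on-off arithmetic above is simple enough to encode directly. Here is a minimal sketch using the January/February numbers from the example; the flat seasonality factor is an illustrative assumption, not a full statistical adjustment:

```python
def incremental_conversions(conversions_on: int, conversions_off: int,
                            seasonality_factor: float = 1.0) -> float:
    """Estimate a channel's incremental contribution from an on-off test.

    seasonality_factor scales the 'off' period to what it would have looked
    like under the 'on' period's conditions (1.0 = treat months as comparable).
    """
    baseline = conversions_off * seasonality_factor
    return conversions_on - baseline

# January with Facebook running (120) vs. February with it paused (90)
lift = incremental_conversions(120, 90)
print(lift)  # prints 30.0 -- the channel's approximate incremental conversions
```

The seasonality factor is where the real work hides: if February is historically 10 percent slower than January, you would pass a factor below 1.0 rather than comparing raw counts.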
A more sophisticated version is geo-testing, where you divide your market into matched geographic areas and run campaigns in some areas while holding out others. If you operate in ten cities, you might run Facebook ads in five randomly selected cities and not run them in the other five, then compare conversion rates between the two groups. The difference represents the incremental impact of Facebook advertising while controlling for brand awareness, seasonality, and economic conditions that affect all cities equally. According to a 2025 study by Meta (yes, Facebook's parent company), advertisers who conducted geo-experiments found that platform-reported attributed conversions overstated actual incremental conversions by an average of 35 percent across all industries. In other words, Facebook's attribution system claimed to drive 135 conversions when the actual incremental impact measured through holdout testing was 100 conversions. The difference reflects conversions that would have happened anyway even without Facebook advertising, but which Facebook claimed credit for because they fell within the attribution window.
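A geo-experiment readout follows the same logic on a per-city basis. The sketch below uses hypothetical conversion counts for five test cities and five matched holdout cities; a real analysis would also test the difference for statistical significance before acting on it:

```python
from statistics import mean

def geo_lift(test_conversions, control_conversions):
    """Average per-city lift from a geo-experiment: test cities ran the
    campaign, control cities were held out. Returns (absolute lift,
    relative lift vs. the control baseline)."""
    test_avg = mean(test_conversions)
    control_avg = mean(control_conversions)
    return test_avg - control_avg, (test_avg - control_avg) / control_avg

# Hypothetical weekly conversions: five campaign cities vs. five holdouts
test = [48, 52, 45, 50, 55]       # campaign running
control = [40, 42, 38, 41, 44]    # campaign held out
absolute, relative = geo_lift(test, control)
print(f"{absolute:.1f} extra conversions per city ({relative:.0%} lift)")
# prints: 9.0 extra conversions per city (22% lift)
```

Because assignment to test and control is randomized, the lift is causally grounded rather than inferred from attribution windows.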
Incrementality testing is not practical for every campaign or every channel. It requires sufficient scale to detect statistically significant differences between test and control groups, which typically means spending at least several thousand dollars per month per channel and running tests for 30 to 90 days. Small businesses with monthly marketing budgets under $5,000 often cannot afford to hold out entire channels for extended periods just to measure incrementality. But even occasional incrementality tests, perhaps once per quarter or twice per year, provide anchor points that validate or challenge platform-reported attribution. If Facebook claims to drive 40 percent of your conversions based on last-click attribution but incrementality testing reveals that Facebook actually drives 20 percent of incremental conversions, you have ground truth data that should inform budget allocation more than platform-reported metrics.
Technology platforms are beginning to build incrementality testing into their products, making it more accessible to small businesses. Google has offered geo-experiments through their Ads platform since 2017, allowing advertisers to run controlled tests without sophisticated statistical expertise. Meta introduced Conversion Lift Studies and GeoLift for incrementality measurement. According to a 2025 survey by AdExchanger, 42 percent of advertisers now conduct some form of incrementality testing, up from 18 percent in 2020, and those who test report 27 percent higher confidence in their marketing spend allocation compared to those relying solely on attribution models. The barrier is no longer technical capability but awareness and willingness to accept that platform-reported metrics systematically overstate performance.
5. Media Mix Modeling for Cross-Channel Budget Allocation
While incrementality testing answers "does this channel work," media mix modeling (MMM) answers "how should I allocate budget across all channels." MMM uses statistical regression to analyze the relationship between marketing spend across different channels and business outcomes like revenue, leads, or conversions over time. The models account for seasonality, economic conditions, competitive activity, and lagged effects where marketing in one period drives outcomes in future periods. The output is an estimate of how much each channel contributes to outcomes and, critically, how changing budget allocation would impact results.
MMM has been around since the 1980s when it was primarily used for TV, radio, and print advertising. It fell out of favor in the 2000s and 2010s as digital attribution promised more granular user-level tracking. But MMM is experiencing a renaissance precisely because it does not require user-level tracking. MMM operates on aggregate data: total spend by channel by week or month, total conversions, total revenue. It does not need cookies, device IDs, or cross-platform identity graphs. It works with the data that businesses can still reliably collect in a privacy-first world, aggregate performance metrics and spend data.
The classic use case for MMM is evaluating offline channels like TV, radio, direct mail, and out-of-home advertising alongside digital channels in a unified model. A home services company might spend $10,000 per month on Google Ads, $8,000 on direct mail, $5,000 on local radio, and $7,000 on Facebook. Traditional attribution would only measure Google and Facebook because they offer pixel-based tracking. MMM incorporates all four channels by analyzing how weekly spend variation across channels correlates with weekly conversion variation after controlling for external factors. The model might reveal that direct mail has a three-week lag before conversions peak, that radio amplifies the effectiveness of search advertising when run simultaneously, and that Facebook delivers 60 percent of its attributed conversions but only 35 percent of incremental conversions. This kind of cross-channel insight is impossible to obtain from platform-specific attribution systems.
The challenge with MMM is that it requires substantial data to produce reliable results. Most implementations recommend at least 18 to 24 months of historical data across all channels with enough weekly or monthly variation in spend to isolate the effect of each channel. That means you cannot run MMM immediately when launching a new business or new channel. You need to accumulate performance history first. Additionally, MMM requires statistical expertise to build and interpret the models, which historically limited access to large enterprises with analytics teams or budgets to hire specialized agencies. According to Forrester's 2025 Marketing Measurement Survey, 73 percent of enterprise companies use some form of MMM, but only 14 percent of small businesses do, primarily due to perceived complexity and cost.
That accessibility gap is closing. Marketing technology vendors including Senova's analytics platform, Rockerbox, and Measured have built automated MMM tools that require minimal statistical expertise and work with smaller data sets than traditional implementations. These platforms ingest spend and performance data from advertising platforms, CRM systems, and website analytics, run the regression models automatically, and present results in dashboards that show channel contribution and budget optimization recommendations. According to user reviews aggregated by G2, businesses using modern MMM platforms report 22 to 38 percent improvement in marketing efficiency, measured as cost per acquisition or revenue per marketing dollar, within six months of implementation. The improvement comes not from discovering entirely new channels but from reallocating budget away from over-credited channels toward under-credited channels based on actual contribution rather than last-click attribution.
6. Unified Identity Tracking as the First-Party Data Solution
Both incrementality testing and media mix modeling operate on aggregate data, which makes them robust to privacy restrictions but limits granular insight. If you want to understand individual customer journeys, segment performance by audience characteristics, or personalize messaging based on prior interactions, you need some form of user-level tracking. The solution is not returning to third-party cookies or trying to circumvent iOS restrictions. The solution is building first-party data infrastructure that tracks users you have permission to track across properties you own, using identity resolution techniques that do not depend on third-party tracking mechanisms.
Visitor identification is the foundation of this approach. When someone visits your website, visitor identification systems use IP address intelligence, device fingerprints, and identity resolution across commercial databases to match anonymous visitors to known consumer profiles. According to vendor benchmarks from providers including Senova, Clearbit, and LeadFeeder, match rates for business websites range from 30 to 65 percent depending on traffic sources and device mix. For matched visitors, the system appends contact information, demographic data, and behavioral attributes. The matched visitor becomes part of your CRM, allowing you to track their subsequent behavior across your owned properties like your website, email, and physical locations if applicable.
The key distinction from third-party tracking is consent and ownership. You are not tracking users across the web. You are identifying visitors to your own property and tracking their behavior within your ecosystem, which is permissible under GDPR, CCPA, and most privacy regulations as long as you disclose the practice and allow opt-out. The data you collect is first-party data that belongs to your business, not third-party data purchased from brokers or shared across platforms. According to a 2025 Gartner survey, 86 percent of marketing leaders said that first-party data is their most valuable data asset, up from 54 percent in 2021, driven primarily by the unreliability of third-party data and platform-reported metrics.
The strategic value of unified identity tracking is that it enables measurement that platform attribution cannot provide. When you identify a visitor, you can see that they arrived from a Facebook ad, browsed three pages, left without converting, returned five days later from a Google search, downloaded a resource, received four emails over two weeks, and eventually converted. You tracked this journey using your own systems across your own properties, so you know it is accurate. You can use this data to build custom attribution models that reflect your business reality rather than platform defaults. You can segment audiences based on which combination of touches drives highest conversion rates. You can personalize subsequent marketing based on prior interactions because you own the interaction history.
The limitation of first-party identity tracking is that it only works within your ecosystem. You cannot see what happens on other websites, other apps, or inside walled gardens like Facebook and Amazon. If a user sees your display ad on a news website, clicks it, visits your website where you identify them, then later sees your Facebook ad and clicks it before converting, you will know they visited your website twice and converted, but you will not know whether the display ad or Facebook ad drove the second visit. That ambiguity is unavoidable in a privacy-first world, but it is far superior to relying entirely on platform-reported attribution that inflates each platform's contribution by ignoring the existence of other platforms.
According to a 2025 study by Winterberry Group, businesses that implemented first-party identity infrastructure reported 34 percent improvement in customer lifetime value, 28 percent reduction in customer acquisition costs, and 41 percent higher confidence in marketing attribution compared to businesses relying on platform-reported metrics. The improvement stems from better audience segmentation, more effective personalization, and budget allocation based on actual observed behavior rather than algorithmic estimates from platforms with incentives to inflate their contribution.
7. The 80/20 Rule of Marketing Measurement
The uncomfortable truth is that perfect attribution was never necessary, is not currently achievable, and will not be achievable in the foreseeable future. The question is not how to measure everything perfectly but how to measure enough to make directionally correct decisions. The 80/20 rule, also known as the Pareto principle, suggests that 80 percent of your results come from 20 percent of your activities. Applied to marketing measurement, this means that knowing which channels, audiences, and campaigns drive the majority of your results is vastly more valuable than precisely attributing every conversion to every contributing touch.
Practical measurement for small businesses should focus on three questions. First, which channels drive the most conversions at acceptable cost? You can answer this using platform-reported metrics as directional indicators even if the absolute numbers are inflated, because the relative performance across channels is more reliable than absolute attribution. If Facebook reports 100 conversions and Google reports 80 conversions, Facebook probably drives more conversions than Google even if the actual numbers are 60 and 50. That directional insight is sufficient for most budget allocation decisions.
Second, which audience segments respond best? Use first-party data from your CRM to segment customers by demographic, behavioral, and firmographic attributes, then analyze which segments have the highest conversion rates, highest lifetime value, and lowest acquisition cost. This analysis does not require perfect attribution. It just requires knowing who converted and how much they spent, both of which are directly observable in your CRM and transaction systems. According to HubSpot's 2025 State of Marketing Report, businesses that segment campaigns by audience characteristics report 32 percent higher conversion rates and 24 percent higher customer lifetime value compared to businesses running generic undifferentiated campaigns.
Third, are your marketing investments generating positive returns? Calculate blended customer acquisition cost across all channels, not channel-specific CAC from attribution models that double-count conversions. Total marketing spend divided by total new customers equals blended CAC. Compare that to customer lifetime value. If LTV exceeds CAC by a comfortable margin, typically 3:1 or better, your marketing is working regardless of which specific channels deserve credit. If LTV falls below a 2:1 multiple of CAC, your marketing is not working and you need to reduce spend, improve conversion rates, or increase prices. This holistic metric does not require attribution models or cross-channel tracking. It just requires basic business math: total spend and total customers.
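The basic business math described above fits in a few lines. The spend, customer, and lifetime-value figures below are hypothetical:

```python
def blended_cac(total_marketing_spend: float, new_customers: int) -> float:
    """Blended CAC: total spend across ALL channels divided by total new
    customers -- no attribution model and no double-counting involved."""
    return total_marketing_spend / new_customers

def ltv_to_cac(ltv: float, cac: float) -> float:
    """LTV:CAC ratio; roughly 3:1 or better is the conventional healthy range."""
    return ltv / cac

cac = blended_cac(30_000, 150)   # e.g. $30k across all channels, 150 new customers
ratio = ltv_to_cac(600, cac)     # assume $600 average customer lifetime value
print(f"CAC ${cac:.0f}, LTV:CAC {ratio:.1f}:1")  # prints: CAC $200, LTV:CAC 3.0:1
```

Both inputs come straight from your accounting and CRM systems, which is exactly why this metric survives the loss of cross-channel tracking.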
These three questions, which channels perform best directionally, which audiences convert best, and is the overall investment profitable, provide sufficient insight to make good decisions for the vast majority of small businesses. Pursuing perfect attribution beyond this baseline is usually an academic exercise that consumes resources without improving decision quality. According to a 2025 Deloitte study surveying 300 marketing leaders, companies that focused measurement efforts on these three core questions reported equivalent or higher marketing ROI compared to companies that invested heavily in sophisticated multi-touch attribution platforms, suggesting that measurement complexity beyond a certain threshold provides diminishing returns.
8. Building a Measurement Framework That Works in 2026
If traditional attribution is broken and perfect measurement is unattainable, what should a small business actually do? The answer is a pragmatic measurement framework that combines aggregate analysis, first-party identity tracking, and periodic incrementality testing.
Start with platform dashboards for directional insight. Google Ads, Facebook Ads, and other platforms provide conversion tracking and attribution reports. Use these as directional indicators of relative performance, not absolute truth. If Facebook reports that Campaign A generated 50 conversions and Campaign B generated 30 conversions, Campaign A probably outperformed Campaign B even if the absolute numbers are inflated. Make decisions based on relative performance, directional trends, and cost efficiency metrics like CPA and ROAS, understanding that these metrics over-report performance but still provide useful signals.
Layer in first-party identity tracking using visitor identification and CRM integration. When you identify website visitors and track their behavior across your properties, you build a proprietary view of the customer journey that is not subject to platform attribution rules or privacy restrictions. This data provides ground truth about which traffic sources drive qualified visitors, which visitor behaviors predict conversion, and which post-visit touchpoints like email nurture or retargeting are most effective. According to Segment's 2025 State of Personalization Report, businesses using first-party identity infrastructure report 47 percent more accurate customer journey visibility compared to those relying solely on platform analytics.
Conduct periodic incrementality tests to validate platform-reported performance. Once per quarter or twice per year, run a simple on-off test or geo-experiment for your largest marketing channel to measure incremental contribution. If the test reveals that platform-reported conversions overstate incremental impact by 30 to 50 percent, apply that discount factor when evaluating performance and allocating budget. You do not need to test every channel every month. A handful of calibration tests per year provide sufficient ground truth to adjust your interpretation of platform metrics.
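Applying a test-derived discount factor can be sketched as follows, reusing the 135-reported-versus-100-incremental example from earlier in this article; the 50-conversion campaign figure is hypothetical:

```python
def calibration_factor(platform_reported: float, incremental_measured: float) -> float:
    """Ratio learned from a periodic incrementality test, then applied to
    day-to-day platform-reported conversions between tests."""
    return incremental_measured / platform_reported

# Quarterly geo-test: platform claimed 135 conversions, holdout measured 100
factor = calibration_factor(135, 100)

# Between tests, discount everyday platform reports by the same factor
adjusted = 50 * factor   # platform says a campaign drove 50 conversions
print(round(adjusted, 1))  # prints 37.0 -- the calibrated estimate
```

The factor is channel-specific and drifts over time, which is why the quarterly or semiannual re-test cadence matters.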
Use media mix modeling if your budget and data history support it. If you spend $5,000 or more per month across at least three channels and have 18-plus months of historical data, MMM platforms can provide cross-channel budget optimization recommendations based on observed relationships between spend and outcomes. If you do not meet those thresholds, skip MMM for now and revisit it when you scale.
Focus on business outcomes, not attribution precision. Track total leads, total customers, total revenue, and blended customer acquisition cost across all marketing activities. If these metrics are moving in the right direction, your marketing is working even if you cannot precisely attribute which channels deserve credit. If they are moving in the wrong direction, you have a problem that requires fixing regardless of what attribution models claim. According to a 2025 CMO Survey by Deloitte and Duke University, CMOs who reported high confidence in their marketing performance measurement spent 27 percent less time on attribution analysis and 39 percent more time on business outcome tracking compared to CMOs with low confidence, suggesting that outcome focus is more valuable than attribution complexity.
The measurement challenge in 2026 is not lack of tools or technology. It is the gap between what attribution models promise, perfect visibility into causal relationships across channels and devices, and what privacy-first infrastructure can actually deliver, directionally accurate insights based on incomplete data. Businesses that accept this reality and build measurement frameworks appropriate for current conditions will make better decisions and achieve better results than businesses that continue pursuing attribution perfection using methods that are no longer viable. The future of marketing measurement is not more sophisticated models applied to increasingly invisible data. It is simpler frameworks, grounded in first-party data, validated by controlled experiments, and focused relentlessly on business outcomes rather than mathematical precision.
About the Author
Senova Research Team
Marketing Intelligence at Senova
The Senova research team publishes data-driven insights on visitor identification, programmatic advertising, CRM strategy, and marketing analytics for growth-focused businesses.