The most common post-acquisition reporting failure is assuming that if you can see the acquired company's numbers in your dashboard, the integration is working.
Why seeing numbers is not the same as trusting numbers
PE platforms live and die by their reporting. Investors want to see roll-up performance. Operating partners need to spot trends across the portfolio. And lenders require a consistent financial close. So after an acquisition, the first thing most platforms do is build a consolidated dashboard.
I call this 'dashboard theater.' It looks impressive. The acquired company's revenue appears alongside the platform's revenue. Graphs trend upward. Colors are consistent. But underneath, the numbers are stitched together with assumptions, manual adjustments, and definition mismatches that no one has documented.
The problem shows up when someone asks a simple question: 'Why is the acquired company's revenue per job 40% lower than ours?' Is it a pricing problem? A data quality problem? Or does the acquired company define 'revenue' at a different point in the job lifecycle? Without documented definitions, you cannot answer the question. And without the answer, you cannot manage the business. Dashboard theater wastes executive attention and creates false confidence that delays real fixes.
The four alignment requirements
Real reporting integration happens upstream of the dashboard. These four alignments must be in place before any visualization is built. Skip any one of them and your reports will mislead you.
Revenue definitions
When does revenue get recognized? At job completion? At invoice creation? At payment? At the start of a recurring maintenance agreement? If the platform recognizes revenue at invoice creation and the acquired company recognizes it at payment, your consolidated revenue is meaningless until you reconcile the timing difference.
Also consider what counts as revenue. Does a warranty callback count as revenue or a cost of service? Does a supplement in roofing add to the original job revenue or create a separate line? Does a cancellation fee count? These are not accounting questions — they are operational questions that affect how leadership understands performance.
The alignment process is simple but rarely done: document how each company defines revenue for operational reporting, identify the differences, and create a consolidated definition that both companies can produce from their systems. Only then build the dashboard. I have seen platforms discover six months after close that their 'revenue growth' was entirely due to recognizing revenue earlier, not selling more.
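The timing mismatch described above can be sketched in a few lines. This is a hypothetical illustration, not any specific system's schema: the field names, job IDs, and amounts are all invented. The point is that a single documented recognition point, applied to both companies' raw events, is what makes the consolidated number mean one thing.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Job:
    job_id: str
    amount: float
    completed_on: Optional[date]
    invoiced_on: Optional[date]
    paid_on: Optional[date]

def recognized_revenue(jobs, as_of, recognition_field="invoiced_on"):
    """Sum revenue using one documented recognition point for all sources."""
    total = 0.0
    for job in jobs:
        recognized = getattr(job, recognition_field)
        if recognized is not None and recognized <= as_of:
            total += job.amount
    return total

jobs = [
    Job("J-1", 1200.0, date(2024, 3, 28), date(2024, 3, 31), date(2024, 4, 10)),
    Job("J-2", 800.0, date(2024, 3, 30), date(2024, 4, 2), None),
]

# Same jobs, different definitions, different March revenue:
march = date(2024, 3, 31)
by_invoice = recognized_revenue(jobs, march, "invoiced_on")  # 1200.0
by_payment = recognized_revenue(jobs, march, "paid_on")      # 0.0
```

Two invented jobs produce a 1,200-dollar swing in "March revenue" depending purely on which recognition point is used; at portfolio scale that swing is the phantom "growth" described above.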
Job status definitions
Job status is the backbone of operational reporting. If the platform and the acquired company use different status taxonomies, every operational metric is compromised.
I worked with a platform where 'completed' meant the technician finished the work and the invoice was generated. Their acquired company used 'completed' to mean the technician left the site — billing happened days later. When we first rolled up performance, it looked like the acquired company had a 98% completion rate but terrible collections. In reality, they just called the same stage by a different name.
Aligning job status requires mapping every status in both companies to a consolidated taxonomy. It is tedious work. But without it, your pipeline reports, technician productivity metrics, and revenue forecasting are built on sand. I recommend creating a shared status dictionary with definitions, ownership rules, and system triggers before any reporting is built.
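A shared status dictionary can be as simple as a lookup table that maps every (source system, legacy status) pair to one consolidated taxonomy. The sketch below is illustrative; every status name in it is invented. The one design choice worth copying is that unmapped statuses raise an error rather than being silently bucketed, so gaps in the mapping surface immediately instead of corrupting the roll-up.

```python
# Consolidated taxonomy both companies report into (names are illustrative).
CONSOLIDATED_STATUSES = {"scheduled", "in_progress", "work_done", "invoiced", "paid"}

STATUS_MAP = {
    # (source_system, legacy_status) -> consolidated status
    ("platform", "Completed"): "invoiced",   # platform: work done AND invoice generated
    ("acquired", "Completed"): "work_done",  # acquired co: technician left the site only
    ("platform", "Dispatched"): "in_progress",
    ("acquired", "On Site"): "in_progress",
}

# Sanity-check the dictionary itself: every target must exist in the taxonomy.
for target in STATUS_MAP.values():
    assert target in CONSOLIDATED_STATUSES, f"{target} missing from taxonomy"

def consolidate_status(source, legacy_status):
    try:
        return STATUS_MAP[(source, legacy_status)]
    except KeyError:
        # Surface unmapped statuses instead of silently bucketing them.
        raise ValueError(f"Unmapped status {legacy_status!r} from {source!r}")
```

Note that the same word, "Completed", maps to two different consolidated stages depending on the source. That is exactly the 98%-completion-rate illusion from the story above.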
Customer matching logic
Customer data is the thread that connects CRM, dispatch, billing, and reporting. If the acquired company uses different customer IDs, field structures, or matching rules, downstream reports will double-count, miss, or misattribute customers.
Common mismatch scenarios: the acquired company matches customers by phone number, the platform matches by email. The acquired company treats 'Bill Smith' and 'William Smith' as different customers; the platform deduplicates them. The acquired company stores multiple properties under one customer record; the platform creates a new record per property.
Before building consolidated customer reports, define the matching rules. Choose which fields identify a unique customer. Document how to handle name variations, multiple properties, and business versus residential accounts. And validate the matching logic by spot-checking a sample of records from both systems. A 10% customer duplication rate can make your customer acquisition cost look twice as good as it really is.
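The matching rules above can be made concrete with a small sketch. Everything here is a hypothetical example of one possible rule set, not a recommended standard: match on normalized phone number first, fall back to normalized email, then to a nickname-aware name key. Real implementations need far more cases, but the shape is the same.

```python
import re

# Tiny illustrative nickname table; a real one would be much larger.
NICKNAMES = {"bill": "william", "bob": "robert", "liz": "elizabeth"}

def normalize_phone(phone):
    """Strip formatting and keep the last 10 digits, or None if too short."""
    digits = re.sub(r"\D", "", phone or "")
    return digits[-10:] if len(digits) >= 10 else None

def normalize_name(name):
    first, _, rest = (name or "").strip().lower().partition(" ")
    return f"{NICKNAMES.get(first, first)} {rest}".strip()

def customer_key(record):
    """Return the documented match key for one customer record."""
    phone = normalize_phone(record.get("phone"))
    if phone:
        return ("phone", phone)
    email = (record.get("email") or "").strip().lower()
    if email:
        return ("email", email)
    return ("name", normalize_name(record.get("name")))

# 'Bill Smith' at (555) 123-4567 and 'William Smith' at 555.123.4567
# resolve to the same key instead of inflating the customer count:
a = {"name": "Bill Smith", "phone": "(555) 123-4567"}
b = {"name": "William Smith", "phone": "555.123.4567"}
assert customer_key(a) == customer_key(b)
```

Spot-checking is just running `customer_key` over a sample from both systems and having a human review the collisions and the near-misses before the rules are frozen.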
Close timing
Month-end close timing affects every trend line in your dashboard. If the platform closes on the last calendar day and the acquired company closes on the Friday before, several days of revenue will land in different periods, and because the size of that gap shifts from month to month, the variance never reconciles the same way twice.
Close timing also affects year-over-year comparisons. If the acquired company had a different close calendar before acquisition, historical trend lines will not align with the platform's. You will see a 'drop' in performance that is actually a calendar mismatch.
The fix is to align close timing as early as possible. If full alignment is not practical, document the difference and build calendar-adjustment logic into your reporting pipeline. Do not let the dashboard hide the mismatch. I have seen operating partners spend hours in board meetings explaining variances that were purely calendar artifacts.
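When full alignment is not practical, the calendar-adjustment logic can start as small as this. The sketch assumes one invented convention: the platform closes on the last calendar day, the acquired company on the last Friday on or before it. Tagging every period with both actual cutoffs keeps the gap visible in the pipeline instead of letting the dashboard average it away.

```python
from datetime import date, timedelta

def last_calendar_day(year, month):
    """Platform close: last calendar day of the month."""
    next_month = date(year + (month == 12), month % 12 + 1, 1)
    return next_month - timedelta(days=1)

def last_friday_on_or_before(d):
    """Acquired-company close: Monday=0 ... Friday=4."""
    return d - timedelta(days=(d.weekday() - 4) % 7)

def close_cutoffs(year, month):
    """Tag a period with both cutoffs so the gap is explicit, not hidden."""
    platform_close = last_calendar_day(year, month)
    acquired_close = last_friday_on_or_before(platform_close)
    return {
        "platform": platform_close,
        "acquired": acquired_close,
        "gap_days": (platform_close - acquired_close).days,
    }

# March 2024 ends on a Sunday, so the acquired company's books stop two days early.
# The gap changes every month, which is why it never reconciles the same way twice.
print(close_cutoffs(2024, 3))
```

An operating partner reading a variance report can then see "gap_days: 2" next to the number instead of reverse-engineering the calendar in a board meeting.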
When to build the dashboard (hint: not first)
I know the pressure. Leadership wants visibility. Investors want roll-ups. The operating partner wants to show progress. But building the dashboard before aligning definitions creates false confidence. You will spend six months explaining variances that are not real problems — they are definition mismatches.
The right sequence is: align definitions, validate data quality, build a prototype report with one metric, test it with both operations teams, fix the mismatches, and only then build the full dashboard. This takes longer upfront. It saves months of confusion downstream.
If someone demands a dashboard in Week 1, build a simple operational health view: revenue trend, job count, technician count, and customer count. Label every metric with its definition and its data source. Make the limitations visible. Honest reporting with caveats is more valuable than integrated reporting with hidden assumptions. The board will respect the honesty more than they will respect a pretty chart they later discover is wrong.
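A Week-1 health view with visible limitations can literally be a table where every metric carries its definition and source. The sketch below is one hypothetical shape for that; the metric names match the four above, but the definitions, caveats, and sources are all invented placeholders for whatever is actually true at a given platform.

```python
# Every metric ships with its definition and data source, caveats included.
HEALTH_VIEW = {
    "revenue_trend": {
        "definition": "Invoiced amount by month; acquired co. still recognizes at payment",
        "source": "platform ERP + acquired co. accounting export",
    },
    "job_count": {
        "definition": "Jobs marked complete in either system; status taxonomies not yet mapped",
        "source": "both field-service systems, nightly export",
    },
    "technician_count": {
        "definition": "Active employed technicians; subcontractors excluded pending alignment",
        "source": "both payroll systems",
    },
    "customer_count": {
        "definition": "Raw record count; matching rules not yet applied, likely overstated",
        "source": "both CRMs",
    },
}

def render_health_view(view):
    """One line per metric, caveats inline, so the limitations travel with the number."""
    return "\n".join(
        f"{metric}: {meta['definition']} (source: {meta['source']})"
        for metric, meta in view.items()
    )

print(render_health_view(HEALTH_VIEW))
```

The rendering is deliberately boring. The value is not the chart; it is that no one can read a number without reading its caveat.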
What good reporting integration looks like in practice
Good reporting integration is invisible. The platform leadership opens their dashboard and sees reliable numbers without knowing which system they came from. The acquired company's operations manager pulls their branch reports and sees the same numbers the platform sees. No one is running manual reconciliations. No one is debating which number is right.
This happens when definitions are aligned, data quality is validated, and the integration layer handles exceptions without hiding them. It does not happen because someone bought a better BI tool. It happens because someone did the hard, tedious work of making two companies speak the same operational language.
Common reporting integration myths
There are a few myths that lead platforms astray when integrating reporting after acquisition.
- Myth: A BI tool will fix our reporting problems. A BI tool cannot create consistency where none exists. It visualizes data faster. It does not make the data correct.
- Myth: If the numbers look close enough, the definitions are aligned. 'Close enough' is dangerous in PE reporting. Small definition differences compound over time and create large variances.
- Myth: The acquired company's team can clean their data after we build the dashboard. Data cleanup must happen before dashboard build. Building on dirty data institutionalizes the errors.
- Myth: Real-time dashboards are always better. Real-time reporting on inconsistent data is real-time confusion. Batch reporting with clean data beats real-time reporting with dirty data.
If the problem is recurring, treat it as a systems problem before adding more manual process around it.