Your enterprise “average” cost per unit is probably wrong. Here’s why — and what to do about it.
The Number Everyone Trusts (That Nobody Should)
Every multi-site manufacturer has a cost-per-unit number at the enterprise level. It shows up in board packs, quarterly reviews, and budget planning sessions. And in most organisations, it’s calculated by averaging the site-level numbers.
Here’s the problem: averaging ratios across sites with unequal volumes is mathematically wrong.
Consider two sites producing the same product:
- Site A: Cost per unit = $120/tonne. Produced 10,000 tonnes.
- Site B: Cost per unit = $150/tonne. Produced 2,000 tonnes.
The average? ($120 + $150) / 2 = $135/tonne.
The actual number? ($1,200,000 + $300,000) / 12,000 tonnes = $125/tonne.
That’s a $10/tonne discrepancy — or $120,000 in phantom costs across 12,000 tonnes of production. Multiply that across a full year of operations, across every KPI that involves a ratio, percentage, or rate, and you start to understand the scale of the problem.
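The arithmetic above takes four lines of code to get right. A minimal sketch using the two-site figures:

```python
# Simple vs. volume-weighted cost per unit, using the two sites above.
sites = [
    {"name": "Site A", "cost_per_tonne": 120.0, "tonnes": 10_000},
    {"name": "Site B", "cost_per_tonne": 150.0, "tonnes": 2_000},
]

# The tempting (wrong) summary: average the site-level ratios.
naive_average = sum(s["cost_per_tonne"] for s in sites) / len(sites)

# The correct summary: recompute the ratio from aggregated inputs.
total_cost = sum(s["cost_per_tonne"] * s["tonnes"] for s in sites)
total_tonnes = sum(s["tonnes"] for s in sites)
actual = total_cost / total_tonnes

print(naive_average)  # 135.0
print(actual)         # 125.0
```

The two answers only agree when every site produces the same volume — which, across a real enterprise, never happens.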
This Isn’t a Rounding Error. It’s a Structural Flaw.
The issue isn’t that someone made a mistake. It’s that spreadsheets — by their very nature — encourage this mistake. When each site submits their numbers in separate tabs or files, the natural instinct is to summarize by averaging. It feels right. It looks reasonable. But it’s wrong every time the underlying volumes are different.
And it’s not just cost per unit. The same error applies to:
- Gross margin percentage across business units with different revenue volumes
- OEE (Overall Equipment Effectiveness) across lines running at different capacities
- Safety incident rates across sites with different workforce sizes
- Energy consumption per unit across facilities with different output levels
- Yield percentages across processes with different batch sizes
Every ratio, rate, or percentage that gets “averaged” across unequal volumes produces a number that doesn’t represent reality.
The Hidden Consequence: Bad Decisions
Wrong numbers don’t just sit in reports — they drive decisions.
Capital allocation: If your enterprise cost-per-unit is overstated by 8% because of faulty aggregation, you might approve capital expenditure to “fix” a cost problem that’s smaller than it appears — or miss the actual problem hiding beneath the averages.
Site benchmarking: Averaging masks which sites are truly efficient. A small site with excellent unit economics gets drowned out by a large site with mediocre performance, because the weighted impact isn’t visible.
Pricing decisions: If your cost basis is wrong, your pricing model inherits the error. At scale, even a 2-3% distortion in cost-per-unit translates to mispriced contracts and compressed margins.
Investor reporting: Public companies reporting averaged KPIs across divisions are presenting a simplified number that may not survive scrutiny.
The Root Cause: Spreadsheets Don’t Understand Hierarchy
A spreadsheet is fundamentally flat. It doesn’t know that Site A and Site B roll up to a Region, which rolls up to an Enterprise. It doesn’t know that cost-per-unit at the Region level must be recalculated from the sum of costs divided by the sum of units — not averaged from the sites below.
This is the key distinction:
- Additive metrics (costs, volumes, revenue, headcount) can be summed up the hierarchy.
- Ratio metrics (rates, percentages, per-unit calculations) must be recomputed from the aggregated additive inputs at each level.
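One way to make that rule enforceable is to attach the aggregation method to the metric definition itself, so no human has to remember it per metric. A sketch — the registry structure and names here are illustrative, not taken from any particular product:

```python
# Each metric declares how it aggregates; the summary code just follows the rule.
ADDITIVE = "additive"  # safe to sum up the hierarchy
RATIO = "ratio"        # must be recomputed from additive inputs

METRICS = {
    "total_cost":     {"kind": ADDITIVE},
    "total_tonnes":   {"kind": ADDITIVE},
    "cost_per_tonne": {"kind": RATIO,
                       "numerator": "total_cost",
                       "denominator": "total_tonnes"},
}

def aggregate(site_records, metric):
    spec = METRICS[metric]
    if spec["kind"] == ADDITIVE:
        return sum(r[metric] for r in site_records)
    # Ratio: roll up the additive inputs first, then divide.
    return (aggregate(site_records, spec["numerator"])
            / aggregate(site_records, spec["denominator"]))

records = [
    {"total_cost": 1_200_000, "total_tonnes": 10_000},  # Site A
    {"total_cost": 300_000,   "total_tonnes": 2_000},   # Site B
]
print(aggregate(records, "cost_per_tonne"))  # 125.0
```

Note that site-level ratios never enter the calculation at all — the ratio metric only ever sees the rolled-up numerator and denominator.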
When you build reporting in spreadsheets, this distinction is invisible. There’s no mechanism to enforce it. The person building the summary sheet has to know the mathematical rule for every single metric — and apply it correctly every time, across every consolidation cycle.
At three sites, this is manageable. At thirty, it’s a governance risk. At three hundred, it’s a certainty that your numbers are wrong.
What “Correct” Aggregation Looks Like
In a properly structured data model, the aggregation engine knows the difference between additive and ratio metrics. Here’s how it works:
Step 1: Additive inputs roll up naturally. Total costs across all sites = sum of site-level costs. Total production = sum of site-level production.
Step 2: Ratio metrics are recomputed, not rolled up. Cost per unit at the enterprise level = total enterprise costs ÷ total enterprise production. The formula re-executes using the rolled-up inputs — it never touches the site-level ratios.
Step 3: This recalculation happens at every level of the hierarchy. Region, division, business unit, enterprise — at each node, ratios are fresh calculations from that node’s aggregated inputs.
The result: every number in every report, at every level, is mathematically defensible.
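The three steps can be sketched as a single recursive roll-up over the hierarchy. The tree shape and the Site C figures below are hypothetical, added only to show a second level:

```python
# Recompute the ratio at every node: leaves carry additive inputs only,
# and cost_per_unit is a fresh calculation at each level.
def roll_up(node):
    """Return (total_cost, total_units) for a node, summing its children."""
    if "children" in node:
        totals = [roll_up(child) for child in node["children"]]
        cost = sum(c for c, _ in totals)
        units = sum(u for _, u in totals)
    else:
        cost, units = node["cost"], node["units"]
    node["cost_per_unit"] = cost / units  # step 2: recomputed, not rolled up
    return cost, units

enterprise = {
    "children": [
        {"children": [                              # Region 1
            {"cost": 1_200_000, "units": 10_000},   # Site A
            {"cost": 300_000,   "units": 2_000},    # Site B
        ]},
        {"children": [                              # Region 2 (hypothetical)
            {"cost": 800_000,   "units": 5_000},    # Site C (hypothetical)
        ]},
    ]
}
roll_up(enterprise)
print(enterprise["cost_per_unit"])  # 2,300,000 / 17,000 ≈ 135.29
```

Region 1 lands at exactly $125/unit — the correct figure from the opening example — while the enterprise number reflects Region 2's costs as well, each computed from its own aggregated inputs.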
The OEE Dollar Bridge: A Practical Example
Consider OEE — the metric every plant manager knows. Site A reports 92% OEE. Site B reports 78% OEE. What’s the enterprise OEE?
If you average: 85%. That sounds acceptable.
But Site A runs a high-capacity line producing 50,000 units per day. Site B runs a smaller line at 8,000 units per day. The weighted OEE, properly calculated from total effective output versus total theoretical output, might be 90.1% — because Site A’s larger volume dominates the calculation.
More importantly: the 9.9-point gap between that enterprise OEE and a theoretical 100% represents a different dollar amount depending on which site’s losses you’re looking at. Converting OEE percentage points into dollar impact requires knowing the revenue per unit, the cost structure, and the production volume at each site.
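The weighted calculation is the same pattern as cost per unit. A sketch, treating the daily figures above as each line's theoretical capacity (an assumption — the article gives them as production volumes):

```python
# Naive vs. capacity-weighted OEE for the two sites above.
sites = [
    {"oee": 0.92, "theoretical_units_per_day": 50_000},  # Site A
    {"oee": 0.78, "theoretical_units_per_day": 8_000},   # Site B
]

# Averaging the percentages: looks fine, means nothing.
naive = sum(s["oee"] for s in sites) / len(sites)

# Recomputing from total effective vs. total theoretical output.
effective = sum(s["oee"] * s["theoretical_units_per_day"] for s in sites)
theoretical = sum(s["theoretical_units_per_day"] for s in sites)
weighted = effective / theoretical

print(f"{naive:.1%}")     # 85.0%
print(f"{weighted:.1%}")  # 90.1%
```

Site A's capacity is more than six times Site B's, so its 92% dominates — the naive average understates enterprise performance by five points here, and in other mixes it will overstate it just as easily.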
At NxGN, we call this the OEE Dollar Bridge — translating operational percentages into financial impact. It’s one of the first things our clients build, because it turns a metric that operations teams debate into a number that the CFO acts on.
What’s the Fix?
The answer isn’t “be more careful with spreadsheets.” The answer is to stop using spreadsheets for multi-site KPI consolidation. Not because spreadsheets are bad tools — they’re excellent for analysis, modelling, and ad-hoc work. But they were never designed to enforce aggregation rules across a hierarchical data model.
What you need is a data layer that:
- Understands your organisational hierarchy — sites, regions, divisions, enterprise — and can recalculate metrics at each level
- Distinguishes between additive and ratio metrics — automatically applying the correct aggregation method
- Maintains audit trails — so every number is traceable to its source inputs
- Connects to your existing systems — pulling production data from historians, costs from ERP, and targets from planning systems
This is exactly what NxGN Capstone does. It sits above your existing data infrastructure and creates a unified operational model where every metric is correctly computed, aggregated, and visualised — from equipment level to enterprise.
Start With One Question
If you want to test whether this problem exists in your organisation, ask one question at your next management review:
“How was the enterprise cost-per-unit calculated?”
If the answer involves averaging site-level numbers, you know the number is wrong. The question is how wrong — and what decisions have been made based on it.
NxGN Solutions builds intelligent, cloud-based platforms to help organisations see clearly, act faster, and scale with confidence. NxGN Capstone is our data management platform for integrated reporting — connecting production data to financial decisions. Learn more →