The pattern is familiar to anyone who has sat through enough board presentations. The management team walks in with a single projection — a new production line, a geographic expansion, an acquisition of a smaller competitor. The numbers are clean. The internal rate of return is attractive. The payback period is reasonable. The board asks a few questions, receives confident answers, and approves the investment.
Six months later, the assumptions have shifted. The revenue ramp took longer than modelled. The working capital requirement was larger than projected. The margin contribution was thinner because the pricing assumption — the one nobody stress-tested — turned out to be the one that mattered most. The board approved a decision that was never tested against the conditions that actually materialised.
Graham and Harvey's landmark survey of corporate finance practice found that only 26.6 percent of firms consistently use formal sensitivity or simulation analysis in capital budgeting decisions — and that figure drops sharply at smaller firm sizes. A 2013 McKinsey survey of corporate executives found that forty percent described their scenario planning as having little or no effectiveness. In the Greek mid-market, where capital allocation decisions tend to be made with even less analytical infrastructure, the number is almost certainly lower. The decision is made on a base case. The base case is a single line through an uncertain future. And nobody models what happens when it bends.
Why single-scenario decision-making fails
A single-scenario projection is not a forecast — it is a narrative. It tells a story about how an investment will perform if a set of assumptions all hold simultaneously. The problem is not that the assumptions are wrong individually. It is that their joint probability — the likelihood that revenue, cost, timing, and working capital all behave as modelled — is far lower than any individual assumption suggests.
This plays out in Greek portfolio companies with predictable regularity. A food manufacturer models a second production line based on current demand curves and current raw material costs. Both assumptions are reasonable in isolation. But the scenario in which demand softens by fifteen percent while input costs rise by ten percent — a combination that has occurred three times in the last decade — was never modelled. The board approved an investment whose downside was invisible because nobody built the model that would show it.
The EIB's 2023 Investment Survey for Greece found that a significant share of Greek SMEs lack long-term investment planning, with thirty-four percent still reporting severe access-to-finance issues alongside persistent underinvestment relative to EU peers. When capital is scarce and hard-won, the cost of allocating it to the wrong project is not just a return shortfall — it is an opportunity that the business cannot afford to repeat.
Anatomy of a decision simulation
The decision simulator is not a generic scenario-planning tool. It is a purpose-built model for a specific investment decision, anchored in the portfolio company's governed actuals. It is the decision-support layer that becomes possible once operational-financial linkage, FP&A core, and performance intelligence are already in place.
The revenue tree
The simulation begins with revenue decomposition. Instead of a single revenue line with a growth rate, the model breaks revenue into the drivers that produce it: volume by product or service line, price per unit or contract, conversion rate from pipeline to close, retention rate for recurring revenue, and ramp timing for new capacity or new markets. Each driver carries three values: a base case (the management assumption), a bull case (the upside the team believes is plausible), and a bear case (the downside that history or market conditions suggest is possible).
The discipline is in the bear case. Management teams are systematically optimistic about revenue timing; few board presentations model the revenue ramp arriving later than management hopes. The bear case is not a catastrophe. It is the scenario in which the two or three most sensitive assumptions move against the plan simultaneously. Defining it forces the management team to name what they are most uncertain about.
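A revenue tree of this kind can be sketched as a small driver model. The drivers, values, and the deliberately simple volume × price × retention structure below are all hypothetical, chosen only to show how base, bull, and bear values attach to each driver rather than to the revenue line as a whole:

```python
from dataclasses import dataclass

@dataclass
class Driver:
    name: str
    base: float
    bull: float
    bear: float

    def value(self, case: str) -> float:
        return getattr(self, case)

def revenue(drivers: dict[str, Driver], case: str) -> float:
    """Revenue = volume * price * retention (an intentionally simple tree)."""
    d = {k: v.value(case) for k, v in drivers.items()}
    return d["volume"] * d["price"] * d["retention"]

# Illustrative figures only, not drawn from any real portfolio company.
drivers = {
    "volume":    Driver("units sold", base=120_000, bull=140_000, bear=95_000),
    "price":     Driver("net price per unit", base=14.0, bull=14.5, bear=12.8),
    "retention": Driver("recurring retention", base=0.92, bull=0.95, bear=0.85),
}

for case in ("base", "bull", "bear"):
    print(case, round(revenue(drivers, case)))
```

Even this toy tree makes the joint-probability point from above concrete: each bear value is individually modest, but multiplied together they pull revenue roughly a third below base.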
Cost structure and capital phasing
On the cost side, the simulation separates fixed commitments from variable and step-function costs. Capex phases across the project timeline with defined trigger points — the second tranche of equipment spend does not release until the first production milestone is met. Working capital models the cash cycle impact: how much additional receivables, inventory, and payables the investment generates at each revenue level, and how long the cash conversion cycle extends during the ramp.
The working capital model is where most single-scenario projections fail. Management teams model the P&L accurately but underestimate the balance sheet impact. A geographic expansion that is profitable on an accrual basis can consume cash for eighteen months if the new market's payment terms are longer than the home market's. The simulation makes this visible by modelling the cash position — not just the margin — at each point in the scenario timeline.
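The cash-versus-margin distinction can be sketched in a minimal monthly model. All figures here are hypothetical; the model tracks margin inflow, the cash absorbed by growing receivables at an assumed DSO, and capex tranches that release only at their gate month:

```python
# A minimal monthly cash model; all figures are hypothetical (€k).
def cash_trajectory(monthly_revenue, margin, dso_days, capex_tranches):
    """Cumulative cash by month: margin earned, minus the cash absorbed
    by the receivables swing, minus capex released at its trigger month."""
    cash, prior_ar, path = 0.0, 0.0, []
    for month, rev in enumerate(monthly_revenue, start=1):
        ar = rev * dso_days / 30          # receivables tied up at this run-rate
        cash += rev * margin              # accrual margin earned this month
        cash -= ar - prior_ar             # cash consumed (or freed) by AR growth
        prior_ar = ar
        cash -= capex_tranches.get(month, 0.0)  # tranche releases at its gate
        path.append(round(cash))
    return path

ramp = [50, 80, 120, 160, 200, 220]       # €k revenue per month during ramp
# Second tranche assumed contingent on a month-4 production milestone.
tranches = {1: 150, 4: 100}               # €k capex by release month
print(cash_trajectory(ramp, margin=0.30, dso_days=90, capex_tranches=tranches))
```

Running this shows exactly the pattern described above: the project earns positive margin every month, yet the cumulative cash position stays deeply negative throughout the ramp because 90-day terms keep three months of revenue locked in receivables.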
The sensitivity grid
Once the model is built, the sensitivity grid identifies which assumptions matter most. The grid runs pairwise combinations of the key drivers — revenue ramp versus input cost, price realisation versus volume, working capital cycle versus capex timing — and shows the impact on three outputs: cumulative cash position, IRR, and payback period.
The grid is the artefact that the board reviews. It answers the question that a single-scenario projection cannot: "If we are wrong about X, how wrong does Y need to be before this investment destroys value?" Typically, two to three cells in the grid reveal the decision's critical assumptions — the combinations where the investment moves from value-creating to value-destroying. Those are the assumptions that need monitoring once the investment is approved, and they feed directly into the early warning wire.
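A pairwise grid is straightforward to generate once the drivers carry case values. The sketch below uses hypothetical price and volume cases and a deliberately crude outcome metric (cumulative margin over a three-year horizon minus upfront capex) in place of a full IRR calculation:

```python
# Pairwise sensitivity grid over two hypothetical drivers: price
# realisation and annual volume. Outcome is cumulative 3-year cash (€k).
def cumulative_cash(price, volume, unit_cost=10.0, capex=900.0, years=3):
    """Crude outcome metric: margin over the horizon minus upfront capex."""
    return (price - unit_cost) * volume * years - capex

price_cases  = {"bear": 12.8, "base": 14.0, "bull": 14.5}   # € per unit
volume_cases = {"bear": 95, "base": 120, "bull": 140}       # k units / year

print(f"{'':>6}" + "".join(f"{name:>8}" for name in volume_cases))
for p_name, p in price_cases.items():
    row = [cumulative_cash(p, v) for v in volume_cases.values()]
    print(f"{p_name:>6}" + "".join(f"{c:>8.0f}" for c in row))
```

With these illustrative numbers, only the bear-price, bear-volume cell turns negative, which is precisely the kind of result the grid exists to surface: the single assumption combination under which the project destroys value, and therefore the pair that needs post-approval monitoring.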
Anchoring the simulation in governed actuals
A simulation is only as trustworthy as the numbers it starts from. This is where the decision simulator connects to the broader infrastructure stack.
The base-case assumptions for an existing business line — current margin, current DSO, current cost structure — are not management estimates. They are governed actuals, drawn from the same data architecture that feeds the rolling forecast and the early warning wire. When the simulation says "current gross margin at this product line is thirty-eight percent," that number is the trailing twelve-month actual from the governed P&L, not a management assertion.
This anchoring changes the conversation in the board room. The simulation is not a standalone spreadsheet that lives on someone's laptop. It is a model whose starting point the board can verify, whose assumptions are transparent, and whose outputs connect to the monitoring infrastructure that will track the investment's actual performance once approved. The gap between "the model" and "reality" narrows because they share the same data foundation.
What the board sees and how the decision changes
Gompers, Kaplan and Mukharlyamov's survey of seventy-nine PE investors managing over $750 billion in assets found that PE firms rank operational engineering as a primary source of value — yet relatively few use scenario-based frameworks when evaluating portfolio investments. The decision simulator is the discipline that closes that gap.
The board presentation changes from a single projection with a recommendation to a structured decision package. The package contains the base-case economics, the sensitivity grid with the critical assumption pairs highlighted, the cash-position trajectory under the bear case, and a proposed monitoring plan — which triggers from the early warning wire will track the investment's key assumptions post-approval.
The decision itself changes. Approval is no longer binary — "yes" or "no." It becomes conditional: "yes, with a review gate at month six if revenue ramp is below the bear-case threshold," or "yes, with the second capex tranche contingent on the first milestone." The simulation gives the board the vocabulary to approve with conditions rather than approve with hope.
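A conditional approval can be represented as a small data structure: each gate pairs a simulated threshold with a monitored actual. The gate names, metrics, and thresholds below are illustrative, as is the idea of pulling actuals from the governed data layer:

```python
# Sketch of a conditional-approval record; names and values are hypothetical.
def evaluate_gates(gates, actuals):
    """Return the names of gates whose actual sits below its threshold."""
    return [g["name"] for g in gates
            if actuals.get(g["metric"], 0) < g["threshold"]]

gates = [
    {"name": "month-6 revenue review",  "metric": "ramp_revenue_m6", "threshold": 160},
    {"name": "tranche-2 capex release", "metric": "units_certified", "threshold": 1000},
]

# Hypothetical month-6 actuals, as the monitoring layer would supply them.
actuals = {"ramp_revenue_m6": 140, "units_certified": 1200}
print(evaluate_gates(gates, actuals))
```

The point of the structure is that each gate is machine-checkable: the same thresholds the board approved can be wired into the monitoring infrastructure rather than living in the minutes of a board meeting.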
For the PE fund, the change is strategic. The operating partner moves from reviewing investment proposals against management's confidence to reviewing them against a quantified risk surface. The fund's own portfolio risk — the aggregate of capital allocation decisions across a portfolio of five or twelve companies — becomes measurable at the individual decision level.
From conviction to evidence
This is where the Fortivis stack points next: from governed actuals and performance intelligence into simulation infrastructure that connects portfolio-company data to capital allocation decisions.
The decision simulator is not a replacement for management judgment. It is a discipline that ensures judgment is applied to the right question. The question is not "will this investment succeed?" — every management team believes it will. The question is "under what conditions does it fail, how likely are those conditions, and what will we do if they materialise?" A board that can answer those three questions is making a decision. A board that cannot is making a bet.
Key terms
Decision simulator
A purpose-built scenario model for a specific capital allocation decision, anchored in governed actuals, that tests the investment against multiple assumption combinations and identifies the conditions under which it creates or destroys value.
Sensitivity grid
A matrix that runs pairwise combinations of key assumptions and shows their joint impact on decision outputs (cash position, IRR, payback). Identifies the critical assumption pairs that determine whether an investment is value-creating or value-destroying.
Revenue tree
A decomposition of revenue into its constituent drivers (volume, price, conversion, retention, ramp timing), each independently forecastable under base, bull, and bear assumptions.
Conditional approval
A board decision structure where investment approval carries defined review gates and contingencies tied to the simulation's key assumptions, replacing binary yes/no decisions with monitored commitments.
Sources
- Graham, J.R. & Harvey, C.R. (2001). The Theory and Practice of Corporate Finance: Evidence from the Field. Journal of Financial Economics, 60(2-3), 187–243. Only 26.6% of firms consistently use formal sensitivity or simulation analysis; adoption drops sharply at smaller firm sizes.
- Gompers, P.A., Kaplan, S.N. & Mukharlyamov, V. (2016). What Do Private Equity Firms Say They Do? Journal of Financial Economics, 121(3), 449–476. Survey of 79 PE investors with $750B+ AUM; PE firms rank operational engineering as primary value source but few use scenario-based frameworks.
- Dye, R., Sibony, O. & Viguerie, S.P. (2015). Overcoming Obstacles to Effective Scenario Planning. McKinsey on Finance, No. 55. 40% of executives describe their scenario planning as having little or no effectiveness.
- European Investment Bank (2023). EIB Investment Survey 2023 — Greece Country Overview. 34% of Greek SMEs report severe access-to-finance issues alongside persistent underinvestment relative to EU peers.
- Graham, J.R. (2022). Corporate Finance and Reality. Journal of Finance, 77(4). Presidential address; simulation techniques remain underused outside large-cap firms, with the widest adoption gap in the mid-market segment PE funds typically back.
Sophia Rizopoulou is an Associate at Fortivis, where she develops performance dashboards and analytical frameworks that transform operational data into actionable insights for portfolio companies. She studied Economics, Management and Computer Science at Bocconi University on an International Award Scholarship.
