Round Ready Academy — Lesson 2 of 11
In Lesson 1 we said your valuation is a conclusion underwritten by evidence. This lesson is the field guide to the twelve categories of evidence that institutional investors actually look at.
The Opagio 12 is not a theoretical framework. It is the output of mapping what Series A and Series B diligence teams ask about across hundreds of rounds against the intangible asset taxonomies in IFRS 3 and the Corrado-Hulten-Sichel (CHS) growth accounting literature. Every driver in it is there because it has shown up in more than one real diligence checklist.
The Opagio 12 is a shared language between founders and investors. Founders who speak it enter diligence with a prepared map. Founders who do not end up answering the same questions in twelve different ways across twelve different partner meetings.
Why Twelve and Not Five
There is a reasonable question here: why twelve? Why not the five-class IFRS 3 taxonomy, or the six CHS categories?
The short answer: because the IFRS 3 classes are designed for post-acquisition purchase price allocation and the CHS categories are designed for national accounting. Neither is designed for the working question "what makes this business defensibly more valuable in a diligence room?"
The Opagio 12 preserves a cross-walk to both frameworks — every driver maps cleanly to CHS and, where applicable, to IFRS 3 — but the level of granularity is the level at which investors actually ask questions. Three of the twelve (Human Capital, Organisational Capital, Culture) have no IFRS 3 home and are typically the drivers most responsible for the gap between accounting book value and enterprise value.
Driver 1 — Brand and Reputation
The question the investor asks: Does a customer choose you over an unbranded alternative, and would they pay more for it?
The evidence that answers it: unprompted recall in your target segment, NPS, share of branded versus non-branded search, the ratio of inbound enquiries to outbound activity, the language customers use when they describe you.
Worked example: a UK B2B SaaS in the HR space found that 62% of its qualified inbound enquiries used its product name (not category name) in the first contact. That single figure, embedded in the diligence pack, materially changed how its Series A partner framed market position.
How Brand Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Unprompted recall | Survey output in named target ICPs, year-on-year trend | "We're well known in our space" |
| Branded search share | Branded ÷ non-branded ratio rising over twelve months | No branded-vs-generic split tracked |
| Inbound mix | Documented inbound-to-outbound ratio with conversion by source | "Most of our deals come in warm" |
Driver 2 — Customer Capital
The question: What is the quality of your revenue base?
This is the driver on which the most diligence time is spent. The evidence breaks into four layers: concentration (what share of revenue comes from the top 10 customers), cohort retention (how each vintage retains over time), contract quality (term, auto-renew, price escalators), and expansion (net revenue retention, expansion ARR by cohort).
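The concentration layer is simple arithmetic, but diligence teams expect it computed and tracked, not estimated. A minimal sketch, using an invented long-tail customer book (all figures and customer names are illustrative):

```python
# Hypothetical sketch: top-10 customer revenue concentration.
# Customer names and revenue figures are invented for illustration.

def top_n_concentration(revenue_by_customer: dict[str, float], n: int = 10) -> float:
    """Share of total revenue held by the n largest customers."""
    totals = sorted(revenue_by_customer.values(), reverse=True)
    total = sum(totals)
    return sum(totals[:n]) / total if total else 0.0

# A 50-customer book with a long tail of smaller accounts:
book = {f"cust_{i}": 100_000 / (i + 1) for i in range(50)}
share = top_n_concentration(book)
print(f"Top-10 share: {share:.0%}")  # Top-10 share: 65%
```

The same function run quarter by quarter gives the "over time" view the strong-evidence column below asks for.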
At Series A, the bar is typically 100%+ net retention in your best cohorts and evidence that retention is not carried by one or two anchor customers. At Series B, 120%+ NRR is the price of entry.
Net revenue retention (NRR) is a cohort's total revenue in the current period divided by that same cohort's revenue one year earlier, including expansion, contraction, and churn. It is the single most-cited metric in Series A and B IC memos for recurring revenue businesses.
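The NRR definition above reduces to a single ratio. A minimal sketch with invented figures, showing how expansion, contraction, and churn net out inside the current-period number:

```python
# Hypothetical sketch of the NRR formula described above. All figures invented.

def net_revenue_retention(cohort_now: float, cohort_year_ago: float) -> float:
    """Cohort revenue this period divided by the same cohort's revenue
    twelve months earlier. Expansion, contraction, and churn are all
    already netted into cohort_now."""
    return cohort_now / cohort_year_ago

# A cohort worth £500k ARR a year ago, now worth £560k after
# expansion (+£120k), contraction (-£30k), and churn (-£30k):
print(f"NRR: {net_revenue_retention(560_000, 500_000):.0%}")  # NRR: 112%
```

Note that the ratio is computed per cohort, not blended: a single company-wide NRR can hide a weak recent vintage behind a strong early one.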
How Customer Capital Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Concentration | Top-10 customer revenue share over time, with named-customer view | "We have hundreds of customers" |
| Cohort retention | Vintage-by-vintage retention curves, twelve+ months deep | A single blended retention number |
| Contract quality | Auto-renew clauses, multi-year terms, price escalators documented | "Annual contracts with most customers" |
Driver 3 — Network Effects and Platforms
The question: Does value compound as the user base grows?
Most businesses that claim network effects do not have them. Real network effects show up in measurable form: lower CAC at scale, higher retention for later cohorts, a per-user value metric that rises with density, a two-sided exchange where one side attracts the other.
Worked example: a marketplace business presenting a Series B showed cohort graphs where the second-year retention of Year-3 cohorts was higher than the second-year retention of Year-1 cohorts. That is a network effect in evidence form. It is not a claim; it is a gradient.
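The gradient test in that worked example can be stated precisely: second-year retention, compared across signup vintages, should rise monotonically. A minimal sketch with invented cohort figures:

```python
# Hypothetical sketch of the retention "gradient" described above.
# Vintages and retention figures are invented for illustration.

# Second-year (month-24) logo retention by signup vintage:
retention_m24_by_vintage = {
    "2021": 0.58,
    "2022": 0.64,
    "2023": 0.71,
}

vintages = sorted(retention_m24_by_vintage)
values = [retention_m24_by_vintage[v] for v in vintages]

# Network effect in evidence form: each later cohort retains strictly
# better than the one before it at the same point in its life.
gradient = all(a < b for a, b in zip(values, values[1:]))
print(f"Later cohorts retain better: {gradient}")  # Later cohorts retain better: True
```

The comparison must hold at the same cohort age (here month 24); comparing a young cohort's early retention against an old cohort's late retention proves nothing.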
How Network Effects Show Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| CAC trend at scale | CAC falling as user base grows, by named segment | CAC quoted as a single blended figure |
| Cohort retention by vintage | Later cohorts retaining better than earlier ones | "Retention is good across the board" |
| Per-user value | A measurable metric (transactions, density, engagement) rising with size | "More users means more value" |
Driver 4 — Technology and Innovation
The question: What technical capability is proprietary, and what is commoditised?
The question is not "do you use AI" or "is your stack modern." It is: what would a competent engineering team need to replicate — and in what time and at what cost? The answer almost always touches on architecture decisions made early, proprietary tooling, and accumulated calibration data that a new entrant cannot download.
How Technology Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Replication cost | Documented architecture, components a competent team would take 12–24 months to rebuild | "Our codebase is mature" |
| Proprietary tooling | Internal frameworks, named build/test infrastructure, deployment automation | "We use modern tooling" |
| Calibration data | Production data that improves the product (recommendations, models, defaults) | "We have lots of telemetry" |
Driver 5 — Data and Intelligence
The question: What datasets do you own, and what decisions do they enable?
Data as an intangible asset does not mean "we have lots of data." It means: a dataset whose structure and coverage are not available to a competitor, and where holding it changes what your product can do. In a SaaS business, this is often a labelled dataset of customer behaviour captured over years. In a vertical tools business, it might be a benchmarking corpus built from every customer's submissions.
How Data Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Coverage | Volume + breadth + depth, with a named contrast to what is publicly available | "We have a lot of data" |
| Decision use | Specific product features that depend on the dataset, with usage telemetry | "We could use it for AI later" |
| Rights | Clean ownership and licensing trail; no third-party ambiguity | Mixed customer-data and licensed-data with unclear terms |
Driver 6 — Human Capital
The question: Who knows the things that make this business work?
Human Capital is where the Opagio 12 starts showing drivers that IFRS 3 does not recognise. Diligence teams ask about attrition rates, key-person concentration, internal promotion rates, and — when they are thorough — the specific knowledge that, if a person left tomorrow, would take six months to rebuild.
Series A and B partners pay for teams that have already scaled once through this stage, and for evidence that the team can attract its next ten hires.
How Human Capital Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Attrition | Twelve-month rolling figure, regretted-versus-non-regretted split | "Our retention is good" |
| Key-person concentration | Documented succession on top-3 roles, codified knowledge transfer | "Sarah holds the customer relationships" |
| Hiring pipeline | Active offer pipeline, named senior hires closed in the last six months | "We can hire when we need to" |
Driver 7 — Organisational Capital
The question: What survives if any one person leaves?
Organisational Capital is the set of processes, playbooks, and institutional routines that allow the business to keep functioning when a key individual goes on holiday, leaves, or gets promoted. The evidence is unglamorous: onboarding documentation, decision rights, a coherent operating cadence, sales processes that are replicable across hires.
How Organisational Capital Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Sales process | Named stages, conversion rates per stage, documented qualification criteria | "We do it differently for each deal" |
| Onboarding | 30-60-90 plan, time-to-productive-metric, role-specific playbooks | "Sit next to Sarah for two weeks" |
| Decision rights | RACI or equivalent, documented approval thresholds | "Everything goes through the founder" |
Organisational Capital is often the single biggest difference between a Series A company that is ready to scale and one that is not.
Driver 8 — Ecosystem and Partnerships
The question: Who else is invested in your success?
Not every partnership counts. The ones that matter are: channel partners who source revenue, technology partners whose product depends on or complements yours, industry bodies whose endorsement changes buying decisions. One signed letter of intent from an ecosystem anchor is worth more than a logo slide of 30 "partners" who have never produced a lead.
How Ecosystem Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Channel partners | Signed agreements, sourced-revenue figures, joint-go-to-market plans | A logo slide |
| Technology partners | Live integrations with usage telemetry; co-engineering commitments | "We integrate with the major platforms" |
| Industry bodies | Named endorsements, standard-setting roles, paid-for advisory boards | "We attend the relevant conferences" |
Driver 9 — Content and IP
The question: What is legally protectable or already protected?
Registered trademarks, granted patents, filed applications, copyright-protected proprietary content, documented trade secrets. Increasingly for Series B and beyond, this driver also covers the evidence needed to support IP-backed lending — which we cover in depth in Lesson 10.
How Content and IP Show Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Registered IP | Granted patents and trademarks; named jurisdictions; renewal status | "We have applications pending" |
| Trade secrets | Documented practice register, NDAs, access controls, incident log | "Everyone signs an NDA" |
| Lending qualification | Asset-by-asset valuation backing IP-backed loan facility (see Lesson 10) | No view of which IP could collateralise |
Driver 10 — Regulatory and Compliance
The question: What permissions do you hold that others would need years to earn?
In regulated sectors — fintech, healthtech, insurtech, regulated B2B — permissions are an asset. FCA authorisation, clinical approval, ISO 27001, SOC 2, industry-specific licences. These are moats measured in concrete time-to-compete (the months or years a new entrant would need), not in the platitude of "regulatory expertise".
How Regulatory Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Authorisations held | Named licences and certifications; renewal calendar; jurisdiction map | "We are compliant" |
| Time-to-compete | A documented account of what a new entrant would need (months, cost) | "It would take a competitor years" |
| Audit trail | External audit reports, findings closed, controls testing logs | "We pass our audits" |
Driver 11 — Switching Costs and Lock-In
The question: How painful is it for a customer to leave?
Switching costs are evidence-measurable. Time to migrate, depth of integrations, workflow embedding, contractual friction, data lock-in, trained-user costs. Businesses with genuinely high switching costs tend to show it in low gross churn and long average contract lives.
How Switching Costs Show Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Migration time | A named figure for time-to-migrate, sourced from real customer-leave events | "It would take customers a long time to leave" |
| Integration depth | Number of customer systems integrated; API call volume per customer | "Our product is sticky" |
| Gross churn | Logo churn under 5% with cohort-level breakdown | A blended retention number |
Driver 12 — Culture and Ways of Working
The question: What habits produce outcomes that numbers alone cannot explain?
Culture is the driver most often dismissed as soft. It is also the driver that, when evidenced well, produces the clearest articulation of why this team is different. Evidence includes: Glassdoor scores, internal engagement survey outcomes, public commitments the team has lived up to, and the specific practices — code review cultures, shipping rituals, customer-contact norms — that show up in how things get done.
How Culture Shows Up in Diligence
| Signal | Strong evidence | Weak evidence |
|---|---|---|
| Engagement | Internal survey data over time; named-team granularity; action log | "Our team is engaged" |
| External signals | Glassdoor / repeat-hire rate / candidate NPS | A glossy values page |
| Lived practice | Specific rituals (code review, shipping cadence, customer-contact norms) documented and dated | "We have a strong culture" |
The three drivers without an IFRS 3 home (Human Capital, Organisational Capital, Culture) are typically responsible for the bulk of enterprise value above and beyond what the balance sheet records. Founders who ignore these drivers in their diligence prep are leaving their most differentiated value undocumented.
Sector Patterns — What Each Type of Business Leads With
The twelve drivers apply to every business, but the ones that lead the diligence conversation vary by sector.
Driver Emphasis by Sector
| Sector | Typically strongest drivers | Typically under-evidenced drivers |
|---|---|---|
| B2B SaaS | Customer Capital, Switching Costs, Technology | Organisational Capital, Data |
| Marketplace | Network Effects, Data, Ecosystem | Regulatory, IP |
| Deeptech | Technology, Content and IP, Human Capital | Customer Capital, Organisational Capital |
| Fintech | Regulatory, Technology, Customer Capital | Culture, Organisational Capital |
| Consumer brand | Brand, Customer Capital, Content | Technology, Data |
| Healthtech | Regulatory, Technology, Data | Switching Costs, Ecosystem |
The point of the table is not to tell you which drivers to prioritise. It is to show you which drivers your sector's diligence teams will expect to see evidenced first — and which they will be mildly surprised to find in good shape.
Using the Twelve as a Diagnostic
The Opagio 12 becomes most useful when you score your own business honestly against each driver and compare that profile to what your sector typically leads with.
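The comparison can be as simple as scoring each driver and flagging where your sector's expected leaders are thin. A minimal sketch: the driver names come from this lesson, but the scores, the 0 to 5 scale, and the sector expectations are invented for illustration (the actual diagnostic's scoring model is not described here):

```python
# Hypothetical sketch of a driver-gap diagnostic. Scores (0-5 scale) and
# sector expectations are invented; only the driver names come from the lesson.

my_scores = {"Customer Capital": 4, "Switching Costs": 2, "Technology": 4,
             "Organisational Capital": 1, "Data": 2}

# Drivers a B2B SaaS diligence team typically expects evidenced first
# (per the sector table above):
sector_leads = ["Customer Capital", "Switching Costs", "Technology"]

# Flag expected-lead drivers scoring below 3, weakest first:
gaps = sorted((d for d in sector_leads if my_scores.get(d, 0) < 3),
              key=lambda d: my_scores.get(d, 0))
print("Priority gaps:", gaps)  # Priority gaps: ['Switching Costs']
```

The output is the prioritisation logic in miniature: fix the drivers your sector's diligence teams will probe first, before polishing the ones they will only be mildly surprised by.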
The Short Version
Run the Round Readiness Diagnostic. You will get a twelve-driver radar chart that shows where your evidence is strong, where it is thin, and where your sector peers typically lead. That output tells you which of the following lessons is most worth starting with.
In Lesson 3, we cover the specific blind spots founders at £1M+ ARR tend to have across these twelve drivers — the places where what founders think is strongest does not match what diligence will pay for. For a deeper reference on individual drivers, the Value Drivers Academy covers each one in full. Individual drivers also have glossary entries — for example Customer Capital and Organisational Capital.
Primary CTA: Run the Round Readiness Diagnostic to score your business across The Opagio 12 and get the prioritised gap list for your sector.
David Stroll is Chief Scientist at Opagio. PhD productivity economist, published researcher on intangible capital and growth accounting, with 30+ years in systems architecture, AI, and productivity measurement. Meet the team.