This note sets out the full methodology underlying the PIPE University Impact Index: how each scoring dimension is constructed, what data sources are used, how institutions are normalised against one another, the specific model used to calculate friction, and the limitations that apply to the current dataset. It is intended to allow independent scrutiny of the index and to support any institution that wishes to query or contextualise its position.

The index covers 346 universities across 13 countries, scored on six equal-weight dimensions. It is updated as new data becomes available. Questions about individual institution data should be directed via the Contact Us page.

Overview of the scoring model

Each institution receives a composite impact score on a 0–100 scale. The score is derived from six dimensions, each carrying equal weight of 16.67 points. Scores are normalised within currency peer groups rather than globally, so that a Belgian university is compared against other EUR-group institutions and not against MIT. This removes the distortion that would otherwise arise from comparing absolute income figures across currencies and institutional sizes.

The seven currency peer groups are: GBP (140 UK institutions), EUR (101 institutions across Belgium, France, Germany, Italy, the Netherlands, Portugal and Spain), USD (45 US institutions), AUD (27 Australian institutions), CAD (15 Canadian institutions), SEK (10 Swedish institutions), and NZD (8 New Zealand institutions).

Within each peer group, the maximum value observed across all institutions sets the normalisation ceiling for that dimension. An institution scoring at the peer-group maximum receives the full 16.67 points for that dimension; one scoring at zero receives none. Because the normalisation is within peer group, a score of 65 at a UK institution means something different from a score of 65 at a US institution; the two are not directly comparable across currencies.
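As a sketch, the within-group normalisation described above can be expressed as follows. The institution names and values are hypothetical, and the actual pipeline is not published in this note:

```python
def normalise_dimension(values, max_points=100 / 6):
    """Scale raw dimension values within one currency peer group.

    The peer-group maximum sets the ceiling: the top institution
    receives the full per-dimension weight (16.67 points), an
    institution at zero receives nothing, and everything else
    scales linearly between the two.
    """
    ceiling = max(values.values())
    return {inst: (v / ceiling) * max_points if ceiling else 0.0
            for inst, v in values.items()}

# Hypothetical GBP peer group: raw IP revenue in £m.
gbp_group = {"Inst A": 50.0, "Inst B": 25.0, "Inst C": 0.0}
points = normalise_dimension(gbp_group)
# Inst A receives the full 16.67 points; Inst B half of that; Inst C none.
```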

The six scoring dimensions

1. IP utilisation

Measured as the number of licences issued plus spinouts formed, expressed as a percentage of patents held: (licences + spinouts) ÷ patents × 100. This dimension operationalises the principle that a patent is a commercialisation stage, not a commercialisation outcome. A patent that generates neither a licence agreement nor underpins a spinout company represents annual renewal cost without commercial return — and may create friction for other institutions that must route around it to avoid infringement.

Licences are derived as patents × licRate, where licRate is the institution's licence-to-patent ratio. Spinouts are the annual spinout count, derived as spinRate × Texp / 100, where Texp is the senior academic staff headcount. Values are capped at 100 in scoring to prevent near-zero patent bases from generating spuriously high ratios.
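A minimal sketch of the IP utilisation calculation, assuming Texp is the senior academic staff headcount; all input values below are hypothetical:

```python
def ip_utilisation(patents, lic_rate, spin_rate, t_exp):
    """IP utilisation: (licences + spinouts) / patents x 100, capped at 100.

    licences = patents x licRate; spinouts = spinRate x Texp / 100,
    as defined in the methodology. The cap prevents near-zero patent
    bases from producing spuriously high ratios.
    """
    if patents == 0:
        return 0.0
    licences = patents * lic_rate
    spinouts = spin_rate * t_exp / 100
    return min((licences + spinouts) / patents * 100, 100.0)

# Hypothetical institution: 40 patents held, 30% licence rate,
# 1.5 spinouts per 100 senior staff, 200 senior staff (Texp).
score = ip_utilisation(patents=40, lic_rate=0.3, spin_rate=1.5, t_exp=200)
# (12 licences + 3 spinouts) / 40 patents x 100 = 37.5
```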

Data sources: UK: HESA HE-BCI (Higher Education Business and Community Interaction) survey, covering patents filed, spinouts formed, and licence agreements. United States: AUTM annual licensing survey. Australia: HERDC (Higher Education Research Data Collection). Other countries: national equivalents and institutional publications where available.

Limitation: The metric measures the ratio of commercial outcomes to patents held, not the absolute volume of either. An institution filing very few patents but licensing all of them will score highly. An institution with a large, productive patent portfolio that is well-licensed will also score highly. The metric does not distinguish between these profiles. It should be read alongside the IP revenue dimension, which captures the absolute commercial value generated.

2. Spinout formation rate

Measured as new spinout companies formed per 100 senior academic staff per year. Senior academic staff is used as the denominator rather than total research FTE because spinouts typically originate with principal investigators and senior researchers rather than with the full research workforce.

Data sources: As for patents above. For UK institutions, HESA HE-BCI provides spinout formation data directly.

Limitation: Spinout definitions vary between institutions and reporting systems. Some institutions count only majority-owned spinouts; others include minority-stake ventures and licence-based companies. The index uses the figures as reported in the primary data sources without adjustment for definitional variation. Institutions that apply a more inclusive definition will tend to report higher spinout rates.

3. Spinout three-year survival rate

Measured as the percentage of spinout companies still operating three years after formation. This dimension is designed to separate institutions that form durable commercial ventures from those that form spinouts for reporting or reputational purposes. A spinout that ceases operations within three years has consumed resources without generating sustained commercial value.

Data sources: HESA HE-BCI for UK. AUTM for US. Equivalent national sources for other countries. Where three-year survival data is not directly reported, it is estimated from two-year and five-year cohort data using a linear interpolation.
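Where interpolation is needed, the estimate can be sketched as follows; the cohort values are hypothetical:

```python
def estimate_three_year_survival(surv_2yr, surv_5yr):
    """Estimate three-year survival by linear interpolation between
    two-year and five-year cohort survival rates (both percentages),
    as used where three-year data is not directly reported.
    """
    # Year three sits one third of the way from year 2 to year 5.
    return surv_2yr + (surv_5yr - surv_2yr) * (3 - 2) / (5 - 2)

# Hypothetical cohort: 90% of spinouts operating at two years,
# 60% at five years.
est = estimate_three_year_survival(90.0, 60.0)  # 80.0
```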

Limitation: The index does not differentiate between the commercial performance or quality of surviving spinout companies, only their survival. A spinout generating £500m in revenue and one generating £50k in its third year both count as surviving. The survival rate is therefore a floor indicator of spinout quality, not a ceiling. Institutions with exceptionally high-performing spinout portfolios may be underrepresented by this dimension relative to their true commercialisation impact.

4. IP revenue

Measured as total income from IP activities in local currency, combining surviving spinout revenue and licence income. The formula is:

ipRevAdj = (spinRate × Texp/100 × spinSurv/100 × spinIncome) + (patProd × researchStaff/100 × licRate × licIncome/1000)

The first term is the survival-adjusted spinout stock (spinRate × Texp/100 × spinSurv/100) multiplied by average surviving-spinout income; the second is the annual licence count derived from the patent portfolio and licence rate (patProd × researchStaff/100 × licRate) multiplied by average licence income, which is reported in thousands, hence the division by 1,000. This approach prevents institutions with large spinout portfolios of mostly failed companies from scoring artificially high on this dimension.
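The ipRevAdj formula can be transcribed directly into code. Variable names follow the formula above; the example values are hypothetical:

```python
def ip_revenue_adjusted(spin_rate, t_exp, spin_surv, spin_income,
                        pat_prod, research_staff, lic_rate, lic_income):
    """ipRevAdj = survival-adjusted spinouts x spinout income
                + annual licences x licence income / 1000."""
    # First term: spinouts formed, discounted by three-year survival.
    surviving_spinouts = spin_rate * t_exp / 100 * spin_surv / 100
    # Second term: annual licences derived from the patent portfolio.
    annual_licences = pat_prod * research_staff / 100 * lic_rate
    return (surviving_spinouts * spin_income
            + annual_licences * lic_income / 1000)

# Hypothetical: 2 spinouts per 100 senior staff, 300 senior staff,
# 80% survival, average spinout income 500; 5 patents per 100 research
# staff, 1,000 research staff, 40% licence rate, licence income 100.
rev = ip_revenue_adjusted(2.0, 300, 80.0, 500.0, 5.0, 1000, 0.4, 100.0)
```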

Data sources: HESA HE-BCI for UK. AUTM for US. HERDC for Australia. Institutional annual reports for other countries.

Limitation: IP revenue figures are not cross-currency comparable. A Canadian institution reporting C$50m is not directly comparable to a UK institution reporting £50m. This is why the index normalises IP revenue within currency peer groups only. Any cross-country comparison on this dimension should be treated with caution.

Total revenue figures used in this model exclude endowment income and consultancy receipts. For institutions with substantial endowments, notably Oxford (endowment approximately £6bn) and Cambridge (approximately £3.5bn), actual institutional income is considerably higher than the model reflects. This does not affect the IP revenue dimension directly but contextualises the IP to staff cost calculation below.

5. IP yield ratio

Measured as IP revenue (as defined above) divided by TTO operating cost. A ratio above 1× means the TTO generates more in IP income than it costs to operate. A ratio below 1× means the TTO is a net cost centre relative to the IP income it generates.

This is the most diagnostic dimension in the index for assessing the structural efficiency of the commercialisation function, independent of research scale. Across the 140 UK institutions in the index, only 32 achieve an IP yield ratio above 1×. Cambridge leads the UK at 31.4×; MIT leads the global dataset at 81.1×.

Data sources: TTO operating costs from HESA HE-BCI (UK), AUTM (US), and equivalent national sources. Where operating costs are not separately reported, they are estimated from total knowledge exchange expenditure using a sector-standard proportion derived from institutions where both figures are available.

Limitation: Some institutions route contract research and consultancy income through their TTO or its commercial subsidiary. In these cases, the IP yield ratio figure may overstate or understate the commercialisation-specific return depending on how income is attributed. The index uses the figures as reported and does not adjust for institutional accounting practice.

6. IP to staff cost

Measured as IP income expressed as a percentage of research-attributed staff cost. Research-attributed staff cost is calculated as:

researchCost = (Texp × costTexp × resEffort + Tjun × costTjun × 0.2 + postdoc × costPostdoc) / 1000

Where staff costs are in thousands of local currency per year. This is the closest publicly available equivalent to ROCE (Return on Capital Employed) for university research commercialisation. It measures what proportion of the institution's investment in its research workforce is being recovered through IP activity.
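A sketch of the researchCost calculation. The division by 1,000 rescales the total (to millions, if per-head costs are in thousands); all example values are hypothetical:

```python
def research_staff_cost(t_exp, cost_t_exp, res_effort,
                        t_jun, cost_t_jun, postdoc, cost_postdoc):
    """researchCost: senior staff weighted by research effort (resEffort),
    junior staff at a fixed 0.2 weighting, postdocs counted in full.
    Per-head costs are annual salaries in thousands of local currency.
    """
    return (t_exp * cost_t_exp * res_effort
            + t_jun * cost_t_jun * 0.2
            + postdoc * cost_postdoc) / 1000

# Hypothetical: 200 senior staff at 80k with 40% research effort,
# 300 junior staff at 50k, 150 postdocs at 45k.
cost = research_staff_cost(200, 80.0, 0.4, 300, 50.0, 150, 45.0)
# IP to staff cost is then ip_revenue / cost x 100, a percentage.
```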

Values above 100% are valid and economically significant: MIT recovers approximately 492% of its research staff cost through IP income, meaning its IP activities generate nearly five times what the institution spends on the research staff who produce the underlying science. The UK median is approximately 0.8%.

EUR-group limitation: For the 101 institutions in the EUR currency group (Belgium, France, Germany, Italy, the Netherlands, Portugal and Spain), salary data was not available for five of the seven countries. For institutions in Belgium, France, Italy, Portugal and Spain (and likewise for Swedish institutions in the SEK group), the research staff cost is estimated using a sector-median multiplier derived from the ratio of research staff cost to TTO operating cost observed across all institutions where both figures are available (median ratio: 47.2×). The IP to staff cost figures for these 73 institutions should therefore be read as directional estimates rather than precise calculations. German, Dutch and Canadian institutions have full salary data and their IP to staff cost figures are calculated exactly.

The friction model

Friction is presented as a separate diagnostic metric rather than a scored dimension. It measures the volume of commercially viable ideas generated each month that do not reach a commercial output: a spinout formation or a licence agreement.

The friction model has two components: idea generation and idea realisation.

Idea generation

The model estimates the monthly volume of commercially viable ideas generated by three staff groups at each institution: senior research-active academics, postdoctoral researchers, and doctoral students. For each group, two calibrated rates are applied: the rate at which ideas are generated, and the proportion of those ideas judged commercially viable.

The resulting figure — totalGoodIdeasPerMonth — represents the estimated monthly flow of ideas that could, in principle, reach a commercial output if the institution had unlimited commercialisation capacity.

Idea realisation

Ideas that actually reach a commercial output in a given year are counted as the sum of annual spinout formations and annual licence agreements. Patent filings are excluded: a filed patent is an intermediate stage that may enable commercial outcomes, but does not constitute one. Dividing by 12 gives a monthly realisation rate. Friction is then:

friction = totalGoodIdeasPerMonth − (spinouts + licences) / 12

The friction rate, used in scoring, is friction expressed as a proportion of total good ideas per month. Scoring is inverted: a lower friction rate produces a higher score.
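The two friction quantities can be sketched as follows; the example values are hypothetical:

```python
def friction_metrics(good_ideas_per_month, annual_spinouts, annual_licences):
    """Friction per the model: monthly good ideas minus monthly realised
    outputs (spinouts + licences; patent filings excluded).

    Returns (friction, friction_rate). Scoring inverts the rate:
    a lower friction rate produces a higher score.
    """
    realised_per_month = (annual_spinouts + annual_licences) / 12
    friction = good_ideas_per_month - realised_per_month
    rate = friction / good_ideas_per_month if good_ideas_per_month else 0.0
    return friction, rate

# Hypothetical institution: 20 good ideas per month, 12 spinouts and
# 36 licences per year, i.e. 4 realised outputs per month.
f, rate = friction_metrics(20, 12, 36)  # friction 16, rate 0.8
```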

Interpretation and limitations

Friction scores are model-derived estimates, not independently observed figures. The idea generation rates are calibrated from sector benchmarks and applied uniformly across all institutions; the model does not adjust rates for institutional type, research intensity, or disciplinary mix.

A critical distinction: the model estimates commercially viable ideas generated across all research-active staff, including those whose ideas are pursued through routes that do not involve the TTO — consultancy, collaborative research agreements, KTPs, and informal partnerships. The model's good idea count for an institution will therefore substantially exceed the number of ideas handled by the TTO in any given year. A TTO handling 250 disclosures annually at an institution where the model estimates 500 good ideas per year is not failing to capture the other 250 — many of those ideas are reaching commercial application through other channels entirely. The friction metric measures the gap between commercially viable ideas and commercial outcomes broadly defined, not TTO throughput capacity.

The disclosure gap

A further structural limitation affects any index calibrated from TTO-reported or HESA-derived data: the disclosure gap. TTOs do not see all commercially viable research generated at their institution, for two distinct reasons.

First, a significant proportion of researchers bypass the TTO deliberately. Research using data from a large European public research organisation found that researchers in physical and life sciences, those with doctoral degrees, and those with extensive industry interaction experience are all more likely to route their IP outside the TTO — precisely the researchers most likely to generate commercially valuable inventions (Goel & Göktepe-Hultén, Journal of Technology Transfer, 2018). Industry consulting arrangements and collaborative research agreements often result in IP being filed by the industry partner as lead applicant, which does not appear in HESA data at all.

Second, many researchers bypass the TTO not deliberately but because they are simply unaware it exists. A study of 3,250 researchers across 24 European universities found that only a minority were aware of the existence of a TTO at their institution, and that TTO awareness was concentrated among researchers who already had entrepreneurial experience or industry contacts (Huyghe, Knockaert, Piva & Wright, Small Business Economics, 2016). Researchers without prior commercialisation experience are least likely to know the TTO exists — a self-reinforcing gap.

TTO managers at US universities estimate that fewer than half of patentable inventions with commercial potential are ever disclosed to the TTO. If this applies to UK institutions, HESA HE-BCI figures systematically understate actual IP generation — and the model's good idea estimates, which attempt to capture the full staff-level idea flow, may be closer to the true figure than the HESA counts alone suggest.

The practical implication is that the friction metric should not be read as a measure of TTO capacity failure. A substantial portion of what the model identifies as friction represents ideas never disclosed to the TTO — not because the TTO failed to act, but because the researcher did not know they could, or routed the opportunity through an industry partner. Improving disclosure rates through active engagement programmes, embedded research liaison roles, and simplified disclosure processes is therefore a distinct lever from improving TTO capacity.

An institution with a strong applied science focus may generate commercially viable ideas at a higher rate than the model assumes; one with a predominantly humanities focus may generate fewer. The relative ranking between institutions of similar type is more reliable than comparisons across very different institution types.

The friction metric captures the quantity of unrealised potential, not its quality. It is important to distinguish friction from the patent efficiency metric (patEff), which measures patent filings as a percentage of good ideas and is retained separately as a contextual indicator. A high patent efficiency score alongside high friction means an institution is converting ideas into patents but not into commercial outcomes, which is precisely the behaviour the index now penalises through the IP utilisation dimension.

The distinction between absolute and proportional friction also matters. An institution that generates 500 good ideas per month and realises 100 of them carries far more absolute friction (400 unrealised ideas per month) than one that generates 50 and realises 25, yet the latter is still realising only 50% of its potential against the former's 20%. Both have friction problems, but of different characters. Users of this metric should read it alongside the IP yield ratio and IP to staff cost dimensions to form a complete picture of commercialisation efficiency.

Finally, ideas that do not result in spinouts or licences are not necessarily lost: many are pursued through consultancy engagements, collaborative research partnerships, or Knowledge Transfer Partnerships (KTPs). The friction metric does not capture value created through these routes, and institutions with strong non-IP commercialisation activity may appear to have higher friction than their true unrealised potential would suggest. This is a known limitation of the current index, which is scoped to IP-based commercial outputs only.

The monthly idea figures across all 140 UK institutions in the index sum to approximately 2,740 commercially viable ideas per month. Of these, approximately 270 reach a commercial output (a spinout formation or a licence agreement). The remaining 2,470 (approximately 90%) represent the sector-wide friction estimate under the revised definition, which excludes patent filings. This is higher than the previous estimate of 78% because filing a patent that is never commercialised is no longer counted as a commercial output. This figure should be treated as a model-derived estimate calibrated from sector benchmarks, not as a directly observed measure of unrealised commercial potential.

Data sources summary

The primary sources used across the index are the HESA HE-BCI survey (UK), the AUTM annual licensing survey (US), HERDC (Australia), and national equivalents and institutional publications for the remaining countries. Where primary data is unavailable, estimated figures are derived using the methods described above and flagged in the methodology.

How primary data feeds the model

Primary data sources such as HESA HE-BCI do not feed directly into the index as raw counts. The calibration chain works as follows:

  1. Primary sources (HESA, AUTM, HERDC) provide observed absolute counts — actual patents filed, spinouts formed, licences issued, IP income received, and TTO operating costs — for each institution in a given year.
  2. The UEM divides those counts by the institution's relevant staff population to derive normalised rates — patents per 100 research FTE, spinouts per 100 senior FTE, and so on. These rates are stored in the model.
  3. At computation time, those rates are applied to the model's own staff population estimates to produce model-level annual counts used in income and efficiency calculations.

This means that model-derived absolute counts — annual patent totals, spinout numbers, licence volumes, and the income figures derived from them — will generally differ from the primary source figures. They are not intended to replicate reported figures; they are internally consistent outputs that preserve the relative performance relationships between institutions and enable like-for-like scoring within peer groups.
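The three-step chain above can be illustrated with the spinout rate; the counts below are hypothetical:

```python
def calibrate_rate(observed_count, source_staff_population):
    """Step 2: convert an observed absolute count from a primary source
    into a per-100-staff rate, which is what the model stores."""
    return observed_count / source_staff_population * 100

def apply_rate(rate, model_staff_population):
    """Step 3: apply the stored rate to the model's own (typically
    broader) staff population estimate to get a model-level count."""
    return rate * model_staff_population / 100

# Hypothetical: 30 spinouts observed against 600 senior FTE in the
# primary source; the model's broader population estimate is 900.
rate = calibrate_rate(30, 600)        # 5 spinouts per 100 senior FTE
model_count = apply_rate(rate, 900)   # 45: differs from the reported 30
```

The relative ordering between institutions is preserved because the same chain is applied to every institution; only the absolute level shifts with the population estimate.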

As a concrete example: Oxford University Innovation reported filing 93 new patent applications in 2023–24. The model's internal patent count for Oxford is substantially higher, because the model applies Oxford's patent productivity rate to a broader research-active population that includes all postdoctoral researchers and research-effort-weighted senior staff, rather than the subset engaged in IP-generating activity. The relative ranking between Oxford and other institutions is preserved; the absolute count is not intended as a restatement of OUI's reported figure.

This distinction matters for the IP yield ratio and IP to staff cost dimensions. Users who wish to compare the model's income figures against publicly reported IP income should note that the model figures will typically exceed reported figures by a consistent multiplier reflecting the broader staff population used. The relative ordering of institutions is unaffected.

Peer group normalisation

Scores are normalised within seven currency peer groups (GBP, EUR, USD, AUD, CAD, SEK, NZD) rather than globally. This decision reflects the reality that IP revenue, TTO costs, and salary costs are not meaningfully comparable across currencies without adjustment for purchasing power parity, institutional funding models, and national research policy environments. Normalising within currency peer groups is an imperfect but practical approach that allows meaningful comparison among institutions operating in the same economic context.

EUR-group limitation: The EUR peer group contains 101 institutions across seven countries with substantially different research funding models, salary scales, and commercialisation cultures. A Belgian institution, one of only eight from its country in the group, is being ranked against German research universities operating at a considerably different scale. This introduces within-group distortion that is not present in the GBP or USD groups, which are more internally homogeneous. Users comparing EUR-group institutions across countries should treat those comparisons as indicative rather than definitive. A future version of the index will normalise by country within the EUR group where data permits.

What the index does not measure

The index is explicitly limited to the efficiency and output of the research commercialisation function. It does not measure dimensions beyond that scope.

These omissions are deliberate. The index is designed to measure what is measurable with reasonable consistency across national data systems. Adding dimensions that require subjective assessment or are not reported consistently would compromise the comparability of the rankings.

Querying your institution's data

Institutions that wish to query their data, flag a reporting discrepancy, or discuss the application of this methodology to their specific context are encouraged to contact the PIPE research team directly via the Contact Us page. We will respond to all substantive queries within ten working days.

The index is updated annually as new HESA HE-BCI, AUTM, and equivalent data is published. Institutions that believe their primary source data has been misread or incorrectly applied are invited to provide the corrected source reference, and we will review and update accordingly.

Contact: Contact Us  ·  Index page: University Impact Index  ·  Version: 1.5, March 2026