From NASA to Startups: How TRLs Became the Universal Language of Deep Tech | John Mankins

Podcast Episode Description:

Technology Readiness Levels aren't bureaucratic checkboxes. They're a language for building trust whilst measuring progress objectively along the lab-to-market journey.

Every Deep Tech company uses TRLs – whether explicitly through investor reporting and grant applications, or implicitly when discussing how far along their technology is. NASA uses them. The US Department of Defense mandates them. The European Space Agency, ISO standards bodies, venture capital firms, corporations and startups all communicate using this nine-level framework that runs from 'basic principles observed' (TRL 1) to 'system proven in operational environment' (TRL 9).

But where did TRLs come from? And more importantly, how should you actually use them?

Professor John Mankins is uniquely positioned to answer both questions. He co-invented the TRL framework, wrote the seminal 1995 white paper that codified it for the world and spent 15 years at NASA managing massive innovation portfolios using these tools – including the Exploration Systems Research and Technology programme with 100+ projects, 3,000+ people and $800M+ annual budget.

The origin story reveals how frameworks emerge from practical need

TRLs emerged organically from NASA's Apollo-era research centres in the late 1960s. Whilst the world watched Moon landings, NASA was simultaneously running substantial research programmes focused on what comes next – about $1 billion annually (in today's dollars) developing future capabilities.

The researchers needed to communicate with flight project teams using language they'd understand. Flight teams used 'flight readiness review' before every launch. So the technology community started using 'technology readiness review' for their work. The parallel was intuitive.

Then Apollo was cancelled. NASA's budget collapsed 80–90%. The space technology community was devastated.

In the mid-1970s, NASA, the aerospace industry and government partners held workshops trying to recover their budget and get space technology back on the agenda. They created outlooks projecting what space could be between the mid-1970s and year 2000. Those documents used a framework called 'state of the art levels' – ten levels describing technology maturity.

Enter Stan Sadin, a young researcher at NASA headquarters in the Office of Aeronautics and Space Technology. Stan combined 'technology readiness' from the earlier reviews with 'levels' from the state-of-the-art framework. He called them Technology Readiness Levels.

Ten levels were too many – too confusing, too complicated. Stan simplified to six, then seven. He used these in the massive Space Systems Technology Models documents, attempting to lay out the potential of space technology for decision makers: 'If only you give us a little more money, here's what we can achieve.'

It was fundamentally about budget recovery. But it was also about trust – communicating progress accurately and objectively to rebuild credibility after the Apollo collapse.

John Mankins entered the story circa 1986–87, working in the same space technology office. By the late 1980s, he was managing the technology investment portfolio for the Space Exploration Initiative (the Bush-era Moon-to-Mars programme).

The flight project planning teams started talking about creating their own 'flight readiness levels' that would sit above the technology readiness levels. Separate scales for separate organisations.

John's response was defensive but appropriate: 'Oh no, we're going to nine.' TRL 8 and 9 would cover testing in actual operational environments and proven operational systems. One language for everybody. No need for two separate scales.

There was resistance. But then John got appointed technology lead for the flight projects planning activity – wearing both hats simultaneously. 'So I just said, there's just one scale.'

The 1995 white paper emerged from managing the Integrated Technology Plan for the Civil Space Programme across exploration, science, Earth science and launchers. John needed to codify TRLs across NASA's full R&D spectrum.

There was no formal publication vehicle. But the internet existed. Netscape was out there. So John made his own document and published it himself – from his desktop, whilst at NASA, without asking permission.

'Nobody even knew what the internet was. Nobody could tell they weren't thinking about it at those times. So it just went from my desktop to the internet.'

How TRLs jumped to the Department of Defense

Circa 1996–97, the General Accounting Office (now the Government Accountability Office) was auditing the DoD at Congress's direction, investigating budget overruns and programmes that proceeded with immature technologies.

An auditor discovered TRLs. They invited John to GAO offices to present.

He expected maybe half a dozen people. Sixty to seventy auditors attended – every single person working on the DoD audit.

They said: 'This answers our question. This solves the problem of how to make sure we're not going forward with technologies that aren't ready and we're not gonna get budget delays, budget overruns and schedule delays.'

They adopted it. They recommended it to the Pentagon. The Pentagon said: 'This checks the box. This gives us the tool we needed. We don't have to invent something, we can just use this.'

Off to the races. And proliferation began – integration readiness levels, software readiness levels, AI readiness levels, manufacturing readiness levels. Every topic got its own version.

TRLs are a contact sport, not a calculation tool

John's most important insight: 'It's not a calculation tool. It's not something where you can make a spreadsheet and you plug in some numbers and it gives you an outcome, and then you base your decisions on it's got a TRL of 4.77. That's just stupid.'

Technology readiness assessments are a contact sport. You need constant negotiation between technology developers and the people responsible for integrating technology into applications. You're always using a ball and always on a field, but the goals at the end of the field change depending on your application.

An atomic clock for a GPS satellite has different readiness criteria than an atomic clock for a submarine. The application guides and constrains what 'ready' means.

The valley of death – where technology gets kicked forward but nobody's there to catch it – happens when teams aren't on the same page. You're hoping the flight systems team isn't wearing a different jersey, that they won't kick you back down the field as you approach the goal.

The hardest transition: TRL 1 to TRL 2

Most people assume the hardest transition is TRL 4–6 – the famous 'valley of death' where you move from lab prototype to relevant environment demonstration. Capital requirements jump. You need industrial partners. Commercial pressures mount.

John surprises: the most important transition, and often hardest, is TRL 1 to 2.

TRL 1: I observe that heat flows from hot to cold (basic principle).

TRL 2: I have a concept for how to use that phenomenon to do something useful (idea of a heat engine).

That moment of creativity – connecting a physical phenomenon to a practical application – is absolutely critical. Unless you're a practitioner steeped in the field, able to draw on intuition and years in the shop, it's a eureka moment. Sometimes easy, sometimes extraordinarily hard.

We underestimate this difficulty. But it's why we put so much status on the founding spark – because it is genuinely difficult and genuinely critical.

TRL 4–6 transitions are also hard, absolutely. You're moving from generic lab experiments to focused designs constrained by the actual application. Not just testing in the right environment, but building to fit the rest of the system design. Fidelity matters as much as environment.

Self-reported TRLs need validation

How confident should investors be in self-reported TRLs?

At low levels (TRL 3–4), published papers and peer review provide confidence. The technical community agrees you've achieved what you claim.

As you march up the scale, you must bring in people responsible for deployment decisions. If you self-report TRL 9 for your algorithm going into bank financial software, investors should be sceptical unless the software team has vetted your algorithm and is involved in installation.

There's a transition not just in doing the technology development, but in who gets to assess technology maturity and riskiness.

Due diligence. Industrial partners. Customers. Their validation becomes essential at higher TRLs.

Beyond TRLs: R&D³ and Technology Need Value

John developed two complementary frameworks whilst managing NASA's innovation portfolios:

R&D³ (Research & Development Degree of Difficulty): How hard will it be to reach the next TRL for your application?

Scale: 1 (slam dunk, single path almost certain to succeed – like folding steel into a paperclip) to 5 (nearly impossible, requires many parallel attempts).

It's essentially an inverse measure of the probability of success. High R&D³ means you need portfolio approaches – either sequential attempts (Edison testing bulb after bulb) or parallel paths (ICBM development running Atlas, Redstone and solid rockets simultaneously because time was critical).

Prizes work well for high-R&D³ problems – a single lump sum can induce many competing innovators to try different approaches. Low R&D³? Just buy the solution.

Technology Need Value (TNV): How strategically important is this innovation to your application? How badly do you need the results of this R&D effort?

Measured strictly relative to your goals and objectives. If doubling efficiency makes no difference to cost, why bother investing?

Integrating the three: Technology Readiness and Risk Assessment (TRRA)

John combined TRL, R&D³ and TNV into comprehensive Technology Readiness and Risk Assessments. Two ways to aggregate:

  1. Risk matrix: Plot technology readiness remaining versus consequences of failure. Traditional risk matrix (green/yellow/orange/red), but for technology R&D. Shows where probability of failure is highest and consequences worst – your tall poles.

  2. Integrated Technology Index (ITI): Single number collapsing all three. Multiply technology maturity × difficulty × strategic need. Enables system-of-systems decisions: Is Redstone or Atlas the better ICBM solution?
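As a rough illustration of how such an index collapses maturity, difficulty and need into a single number, here is a minimal Python sketch. The normalisation scheme and the simple product rule are assumptions for illustration only – Mankins's published TRRA formulation differs in detail:

```python
# Illustrative Integrated Technology Index (ITI) sketch.
# The combination rule (a product of normalised factors) is an
# assumption for illustration, not the published TRRA formula.

def iti(trl: int, rd3: int, tnv: int) -> float:
    """Collapse maturity, difficulty and strategic need into one number.

    trl: Technology Readiness Level, 1 (least mature) to 9 (proven)
    rd3: R&D Degree of Difficulty, 1 (slam dunk) to 5 (nearly impossible)
    tnv: Technology Need Value, 1 (nice to have) to 5 (mission critical)
    Higher ITI = more urgent R&D: immature, hard, and badly needed.
    """
    maturity_gap = (9 - trl) / 8   # 0.0 (proven) .. 1.0 (TRL 1)
    difficulty = rd3 / 5           # 0.2 .. 1.0
    need = tnv / 5                 # 0.2 .. 1.0
    return maturity_gap * difficulty * need

# Compare two candidate subsystems for the same mission:
option_a = iti(trl=4, rd3=4, tnv=5)  # less mature, harder
option_b = iti(trl=6, rd3=2, tnv=5)  # more mature, easier
print(option_a, option_b)
```

On this toy scale, option A scores higher – it is the 'tall pole' deserving the most management attention, which is exactly the kind of system-of-systems comparison the ITI is meant to support.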

John used these to manage NASA's $800M+ exploration portfolio. His operating principles: details matter (technical and budgetary); form follows function (clearly identify quantifiable goals and create a work breakdown structure that mirrors what you're accomplishing); and rely on lieutenants who control their piece competently and follow your philosophy.

With that structure: 'I could handle literally thousands of people and hundreds of projects. Because I knew who was responsible for what. I knew what they were trying to accomplish. I knew how much money they had. I knew what they were supposed to be spending this year.'

Modelling megaprojects: Space Solar Power

John's work on solar power satellites demonstrates framework application to genuinely massive undertakings. His 2023 paper on modelling megaprojects argues for physics-based parametric analysis.

Physics-based: Models must reflect real physics. Use the rocket equation. Don't cheat by assuming weightless fuel tanks.

Parametric: Allow variation of key assumptions to see what breaks and why.
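To make the 'physics-based, don't cheat' point concrete, here is a toy sketch using the Tsiolkovsky rocket equation. The tank mass fraction and all the numbers are illustrative assumptions, not figures from the episode:

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1), where
# m0 = initial mass and m1 = final (dry) mass. The 8% tank mass
# fraction below is an illustrative assumption.

def delta_v(payload_kg, propellant_kg, exhaust_velocity_ms,
            tank_fraction=0.08):
    """Achievable delta-v, charging a structural mass for the tanks."""
    tank_kg = tank_fraction * propellant_kg  # tanks are not weightless
    m0 = payload_kg + tank_kg + propellant_kg
    m1 = payload_kg + tank_kg
    return exhaust_velocity_ms * math.log(m0 / m1)

# 'Cheating' with weightless tanks overstates performance:
honest = delta_v(1000, 9000, 4500)
cheat = delta_v(1000, 9000, 4500, tank_fraction=0.0)
print(round(honest), round(cheat))  # the cheat is noticeably higher
```

Even this crude model shows why the cheat matters: the assumed tank mass cuts the achievable delta-v substantially, and a mission plan built on the weightless-tank number would simply fail.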

Example: Space solar power end-to-end efficiency runs through multiple conversion stages – photovoltaic cells (sunlight to DC), RF transmitter (DC to RF), RF receiver (RF to DC). Each stage has its own efficiency. You need to model the full chain.

Then vary parameters. What if PV efficiency improves 10%? What if transmitter efficiency goes from 70% to 80%?

Sensitivity analysis reveals where the knees in curves are. Maybe transmitter efficiency improvement cuts total cost in half due to cascade effects (less mass, lower launch costs, smaller system). Maybe PV improvement barely moves the needle.
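A hedged sketch of that kind of parametric chain model in Python – the stage efficiencies are placeholder assumptions, not real programme figures:

```python
from math import prod

# Illustrative end-to-end efficiency chain for space solar power.
# All stage efficiencies below are placeholder assumptions.
baseline = {
    "pv_cells": 0.30,        # sunlight -> DC
    "rf_transmitter": 0.70,  # DC -> RF
    "rf_receiver": 0.85,     # RF -> DC (rectenna)
}

def end_to_end(stages):
    """Serial conversion chain: total efficiency is the product."""
    return prod(stages.values())

# One-at-a-time sensitivity: add five percentage points to each stage
# and see how much the end-to-end figure moves.
base = end_to_end(baseline)
for stage in baseline:
    varied = dict(baseline, **{stage: baseline[stage] + 0.05})
    gain = end_to_end(varied) / base - 1
    print(f"{stage}: +5 points -> {gain:+.1%} end-to-end")
```

With these placeholder numbers, the same five-point improvement buys the most where the stage efficiency is lowest – exactly the kind of 'knee in the curve' the sensitivity sweep is meant to expose.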

You can't understand strategic importance (TNV) without this modelling. How badly do you need results of a particular R&D effort? The model tells you.

Critical parameters for large-scale energy/space systems: Launch cost matters, but cost of hardware matters more. Traditional space systems cost $50,000–150,000/kg. Mega-constellations manufacturing thousands of units in factories instead of building 'bus-sized Swiss watches in laboratories'? Cost drops to $900/kg.

Combine 99% reduction in launch cost ($40,000/kg to $400/kg) with 99% reduction in hardware cost ($100,000/kg to $1,000/kg) – everything else changes. All other parameters shift due to cascade effects.
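The arithmetic of combining those two reductions can be checked directly. Treating delivered cost as simply launch plus hardware per kilogram is a deliberate simplification for illustration:

```python
# Combine the launch-cost and hardware-cost reductions quoted above.
# Modelling delivered cost as launch + hardware per kg is a
# simplification for illustration.
before = 40_000 + 100_000   # $/kg: launch + hardware, traditional
after = 400 + 1_000         # $/kg: after both ~99% reductions
reduction = 1 - after / before
print(f"${before:,}/kg -> ${after:,}/kg ({reduction:.0%} lower)")
```

Because both terms drop by the same factor, the combined delivered cost falls by 99% as well – from $140,000/kg to $1,400/kg – which is why every downstream parameter shifts with it.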

That's why integrated physics-based modelling is so critical for megaprojects.

What about technologies where the supply chain doesn't exist yet?

Fusion example: High-temperature superconducting magnets. Materials may not be ready. Supply chain doesn't exist at required volumes. You're decades from realisation.

John's answer: the model can be fine, but you haven't reached TRL 1 yet. If, at the heart of your machine, you need room-temperature superconductors to achieve magnetic field strengths enabling helium-3 fusion (avoiding radioactive embrittlement), but you can't make 50,000 km of those superconductors, your modelling may help you learn a lot – but the fusion machine will have to wait.

Space elevators same thing. They've got the solution, but they need 70,000km of one-molecule fibre. That's the Achilles heel they're working on because they've recognised it gates everything.

Advice for Deep Tech founders at TRL 5

Three things to understand before raising Series A:

  1. Validate your market. Make absolutely sure what you're developing is what you actually want to develop. Test that the application has legs.

  2. Test for scalability. Will your technology be adaptable, evolvable, scalable to what you'll need tomorrow? Make sure the next stage isn't going to require miracles.

  3. Double-check your foundations. Before you scale up, before you go to market, make sure the kernel of your technology – the thing that's important – will work in the application. Not just 'it works great at TRL 5', but will it run hot in a system? Will it blow up because you need a massive radiator? Check key performance parameters on the kernel to ensure it'll scale, adapt, evolve and work in the real application.

The frameworks endure because they build trust

TRLs proliferated not because NASA mandated them, but because they solved a fundamental problem: how do you communicate progress on cutting-edge innovation objectively when stakeholders span technical specialists, programme managers, financial decision-makers and executives?

You need shared language. You need frameworks that are flexible enough to adapt across domains (aerospace, fusion, AI, biotech) but rigorous enough to build confidence.

TRLs, R&D³ and TNV provide that language. They're not perfect. They require judgment, negotiation and constant reference to the application. But they work – which is why three decades after a NASA researcher published a white paper from his desktop without asking permission, essentially every organisation doing advanced technology development uses some version of these tools.

For Deep Tech founders navigating the lab-to-market journey, understanding these frameworks deeply isn't optional. They're how you'll communicate with investors, partners, customers and your team. They're how you'll measure your own progress honestly. And they're how you'll build the trust that determines whether you're in the 45% that survives or the 55% that fails.

Subscribe to our technology leadership podcast on YouTube or wherever you get your podcasts to stay updated on the latest insights from Deep Tech innovators. 

To learn more about Lab-to-Market Leadership, view our presentation: The Lab-to-Market Journey – a Blueprint.

Additionally, learn about our work with Chief Executive Officers.
