
AI Job Displacement Data Exposes a $1 Trillion Due Diligence Blind Spot

AI job displacement has dominated boardroom conversations since ChatGPT’s launch in late 2022. Boards demand answers. Investors price in risk. Founders promise transformation. But until this week, nobody had built a rigorous, real-world measurement framework to separate signal from noise.

On March 5, 2026, Anthropic researchers Maxim Massenkoff and Peter McCrory published “Labor Market Impacts of AI: A New Measure and Early Evidence” — the most methodologically rigorous study of AI’s actual employment effects to date. Their findings challenge every popular narrative about AI and jobs.

The headline finding: no measurable increase in unemployment for highly AI-exposed workers since ChatGPT launched.

But here’s the contrarian layer most analysts will miss: the paper simultaneously reveals a structural hiring slowdown hitting young workers in AI-exposed roles — and exposes a massive gap between theoretical AI capability and actual deployment that has profound implications for M&A valuation, enterprise software investment, and workforce strategy.

Why Every Previous AI Job Displacement Measure Was Wrong

The research community has been flying blind. Previous attempts to measure AI’s labor market impact relied entirely on theoretical capability assessments — asking whether an LLM could perform a task, not whether it actually does.

The most influential prior framework, Eloundou et al. (2023) from OpenAI, scored tasks on a simple scale: 1 if an LLM can double task speed alone, 0.5 if it needs additional tools, and 0 if it cannot help. This became the standard exposure metric across dozens of subsequent studies.

The problem? Theoretical capability does not equal economic displacement. A prime example: Eloundou et al. classify “Authorize drug refills and provide prescription information to pharmacies” as fully AI-exposed (β=1). Anthropic’s data shows Claude has never performed this task at scale, despite the technical feasibility. Legal constraints, liability requirements, and verification workflows create a deployment gap that purely theoretical models cannot see.
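The distinction between the two measures can be made concrete. Below is a minimal sketch of the Eloundou et al. rubric alongside an observed-usage figure; the rubric values (1 / 0.5 / 0) come from the paper's description, while the task list, field layout, and the non-zero usage number are illustrative assumptions:

```python
# Illustrative contrast between a theoretical exposure score
# (the Eloundou et al. rubric) and observed deployment.
# The drug-refill example is the one the paper cites (beta=1,
# no Claude usage at scale); the other task and all observed
# usage shares are hypothetical.

def theoretical_beta(llm_alone: bool, llm_with_tools: bool) -> float:
    """Eloundou et al. (2023) rubric: 1.0 if an LLM alone can double
    task speed, 0.5 if it needs additional tools, 0.0 otherwise."""
    if llm_alone:
        return 1.0
    if llm_with_tools:
        return 0.5
    return 0.0

tasks = {
    # task: (llm_alone, llm_with_tools, observed_usage_share)
    "Authorize drug refills": (True, False, 0.00),  # beta=1, never observed at scale
    "Draft marketing copy":   (True, False, 0.42),  # hypothetical usage share
}

for name, (alone, tools, observed) in tasks.items():
    beta = theoretical_beta(alone, tools)
    print(f"{name}: theoretical={beta}, observed={observed}, gap={beta - observed:.2f}")
```

The point of the sketch is that the gap column, not the beta column, is what deployment-grounded analysis measures.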

This is not an academic distinction. Enterprise software investors pricing AI disruption risk into valuations — or SaaS founders claiming AI replaces entire workflow categories — are working from fundamentally flawed data.

Introducing “Observed Exposure”: What Actually Happens vs. What Could

Massenkoff and McCrory introduce a new metric they call Observed Exposure. It combines three data sources:

  • O*NET — the US Department of Labor database cataloging tasks across 800+ occupations
  • Anthropic’s own Economic Index — real-world Claude usage data from millions of professional interactions
  • Eloundou et al.’s theoretical feasibility scores as a ceiling measure

The methodology weights automated use (full weight) over augmentative use (half weight), and adjusts for how much of a role’s time is actually spent on AI-impacted tasks. It only counts a task as “covered” if it shows sufficient work-related usage in actual Claude traffic — not just if an LLM could theoretically handle it.

The result is the first exposure metric grounded in empirical deployment data rather than theoretical capability.
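The weighting scheme described above can be sketched in a few lines. This is a simplified reconstruction, not the authors' actual code: the coverage threshold value, the field names, and the example numbers are all assumptions; only the full-weight/half-weight split and the time-share adjustment come from the paper's description:

```python
# Minimal sketch of an "observed exposure" style score:
# automated use counts at full weight, augmentative use at half
# weight, each task weighted by the share of the occupation's
# time it consumes, and a task only counts as "covered" if it
# shows meaningful real-world usage. Threshold and data are assumed.

COVERAGE_THRESHOLD = 0.001  # assumed minimum usage share for a task to count

def observed_exposure(tasks):
    """tasks: list of dicts with time_share plus automated_use and
    augmentative_use shares observed in real traffic."""
    score = 0.0
    for t in tasks:
        usage = t["automated_use"] + t["augmentative_use"]
        if usage < COVERAGE_THRESHOLD:
            continue  # uncovered: no meaningful observed usage
        auto_frac = t["automated_use"] / usage
        aug_frac = t["augmentative_use"] / usage
        # automated use gets full weight, augmentative use half weight
        score += t["time_share"] * (auto_frac * 1.0 + aug_frac * 0.5)
    return score

role = [
    {"time_share": 0.5, "automated_use": 0.03, "augmentative_use": 0.01},
    {"time_share": 0.3, "automated_use": 0.00, "augmentative_use": 0.00},  # uncovered
    {"time_share": 0.2, "automated_use": 0.01, "augmentative_use": 0.03},
]
print(f"Observed exposure: {observed_exposure(role):.3f}")
```

Note how the middle task contributes nothing regardless of its theoretical feasibility: with no observed usage, it never enters the score.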

The Deployment Gap: AI Is Operating at a Fraction of Its Theoretical Capacity

Here is where the contrarian insight lives. Eloundou et al.’s theoretical framework suggests that 94% of Computer & Math occupation tasks could be handled by an LLM. Anthropic’s actual usage data shows Claude covers just 33% of tasks in that category.

Office & Administrative occupations: theoretically 90% exposed. Actually observed: well below that. Legal occupations: theoretically highly exposed. Actually used: minimal, due to liability constraints, court representation requirements, and firm-specific verification workflows.

This deployment gap is not a temporary lag. It reflects structural barriers — regulatory constraints, liability exposure, human verification requirements, legacy system dependencies, and organizational inertia — that theoretical capability models systematically ignore.

For enterprise SaaS investors conducting AI due diligence, this gap is the story. Vendors claiming to automate entire job functions based on theoretical LLM capability are making promises that current deployment data does not support. The question for any AI-enabled acquisition is not “what can the AI do?” — it’s “what is the AI actually doing, at scale, in production, today?”

See also: Our AI productivity due diligence framework for SaaS M&A for how we apply this distinction in transaction analysis.

The Ten Most Exposed Jobs Right Now — And What’s Actually Being Automated

The paper’s Figure 3 reveals which occupations face the highest actual AI exposure, not theoretical risk. The rankings cut against conventional wisdom:

  • Computer programmers — 74.5% observed exposure. Leading automated task: writing and maintaining software programs
  • Customer service representatives — 70.1% observed exposure. Leading automated task: handling customer inquiries and complaints via API-driven chatbots
  • Data entry keyers — 67.1% observed exposure. Leading automated task: reading source documents and entering data into systems
  • Medical record specialists — 66.7% observed exposure. Leading automated task: compiling and coding patient data
  • Market research analysts — 64.8% observed exposure. Leading automated task: preparing reports and translating findings into written text
  • Sales representatives — 62.8% observed exposure
  • Financial and investment analysts — 57.2% observed exposure

Notice what this list actually represents: these are not low-wage, low-skill positions. These are professional knowledge workers — the exact demographic that enterprise SaaS vendors target as buyers, and that PE-backed companies count as their primary value generators.

Meanwhile, 30% of workers have zero observed AI coverage: cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers. These are the workers the disruption narrative said were most vulnerable to automation. The data says the opposite.

What BLS Job Growth Projections Reveal About Long-Term AI Job Displacement Risk

The paper cross-references Observed Exposure scores against Bureau of Labor Statistics employment projections for 2024–2034. The correlation is modest but meaningful: for every 10 percentage point increase in AI coverage, BLS growth projections drop by 0.6 percentage points.
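As a back-of-envelope application of that slope, the adjustment is simple arithmetic; the baseline growth figure below is an assumption for illustration, only the −0.6pp-per-10pp relationship comes from the paper:

```python
# Back-of-envelope use of the paper's reported slope: roughly
# -0.6 percentage points of projected 2024-2034 employment growth
# per +10 percentage points of observed AI coverage.
# The 5.0pp baseline is a hypothetical input.

SLOPE = -0.6 / 10.0  # pp of projected growth per pp of observed coverage

def adjusted_growth(baseline_growth_pp: float, coverage_pp: float) -> float:
    """Shift a baseline growth projection by observed AI coverage."""
    return baseline_growth_pp + SLOPE * coverage_pp

# An occupation with 70% observed coverage vs a 5.0pp baseline:
print(round(adjusted_growth(5.0, 70.0), 2))  # prints 0.8
```

Even a high-coverage occupation ends up with reduced but still positive projected growth under this relationship, which is consistent with the paper's "modest but meaningful" characterization.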

Crucially, this correlation only exists with the new Observed Exposure metric — not with Eloundou et al.’s theoretical measure alone. The implication: theoretical exposure scores have no predictive value for actual job growth trajectories. Only deployment-grounded measures correlate with independent labor market analyst forecasts.

For investors modeling workforce cost reductions in AI-augmented acquisitions, this distinction directly impacts valuation. A target company claiming 40% headcount reduction because LLMs could theoretically handle those roles is making a claim unsupported by BLS trend data. A company with documented automated task deployment in the 60–70% observed exposure range is telling a different — and more credible — story.

The Demographics of AI Exposure Will Reshape Your Talent Strategy

The paper’s workforce demographics data overturns another popular assumption. Using Current Population Survey data from August–October 2022 (just before ChatGPT’s release), Massenkoff and McCrory profile who actually works in high-exposure versus no-exposure occupations.

Highly exposed workers:

  • Are 16 percentage points more likely to be female (54.4% vs. 38.8%)
  • Earn 47% higher average hourly wages ($32.69 vs. $22.23)
  • Are nearly 4x more likely to hold graduate degrees (17.4% vs. 4.5%)
  • Are more likely to be White or Asian, less likely to be Hispanic

This demographic profile has direct implications for enterprise workforce strategy and M&A people risk analysis. The workers most exposed to AI displacement are not entry-level, low-wage, or easily replaceable. They are experienced, well-compensated knowledge workers — many of them in leadership pipelines.

For SaaS founders pitching AI-enabled efficiency to enterprise buyers: your buyers’ procurement teams, legal reviewers, and financial analysts are in the high-exposure cohort. The productivity paradox cuts both ways — you’re selling efficiency tools to the people whose jobs are most theoretically at risk from those same tools.

The Unemployment Data Shows No AI Job Displacement Signal — Yet

The core finding, tracked via difference-in-differences analysis of CPS unemployment data from 2016 through 2025: no systematic increase in unemployment for highly AI-exposed workers since ChatGPT launched in late 2022.

The pooled post-ChatGPT estimate is +0.0020 (SE 0.0019) — statistically indistinguishable from zero. Unemployment trends for the most exposed workers track closely with the least exposed cohort post-2022, with no divergence visible in the raw data.
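The structure of a difference-in-differences estimate like the one reported above is easy to show on toy data. The unemployment rates below are invented for illustration; the real analysis runs on CPS microdata with full regression controls:

```python
# Toy difference-in-differences calculation of the kind behind the
# paper's pooled +0.0020 estimate: compare the change in unemployment
# for high-exposure vs low-exposure workers, before vs after
# ChatGPT's launch. All four rates here are hypothetical.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = (treated post-pre change) - (control post-pre change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did_estimate(
    treated_pre=0.035, treated_post=0.039,   # high-exposure workers
    control_pre=0.041, control_post=0.043,   # low-exposure workers
)
print(f"DiD estimate: {effect:+.4f}")
```

The design nets out economy-wide shocks that hit both cohorts, which is why a near-zero DiD estimate is informative even during a period of broader labor market churn.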

The authors stress-test this across multiple specifications — varying the exposure threshold from the median to the 95th percentile, using Department of Labor unemployment insurance claims instead of survey responses, and isolating young workers. In no specification do they find a clear unemployment impact from AI.

This does not mean AI has no labor market impact. It means that any effects so far have not manifested as increased unemployment. The paper’s authors explicitly note that AI’s diffusion may be more like the internet or trade shocks than like COVID — effects too gradual and distributed to appear in aggregate unemployment statistics yet.

For enterprise buyers assessing AI vendor claims of workforce reduction: use this data as a reality check. Documented, production-scale automation is rare enough that even the most rigorous study finds no aggregate unemployment signal. Vendor pitches projecting 30–40% headcount reductions should face proportionally rigorous scrutiny.

The Hidden Early Warning: Young Worker Hiring Is Already Slowing

Here is the finding that most coverage will overlook, and that matters most for long-term workforce planning.

Brynjolfsson et al. (2025) previously reported a 6–16% drop in employment in AI-exposed occupations among workers aged 22–25. Massenkoff and McCrory investigate this directly — and find a 14% drop in job-finding rates for young workers entering high-exposure occupations in the post-ChatGPT era.

The mechanism appears to be hiring slowdown, not separations. Companies are not laying off experienced workers in exposed roles. They are quietly not hiring entry-level replacements — a hiring freeze visible only when you look at labor market entrants, not incumbent workers.

This has significant implications for enterprise software buyers evaluating AI-native workforce models. The companies that are actually capturing AI productivity gains are doing it silently: not replacing workers, but not backfilling departures and retirements. The headcount reduction shows up years later in structural workforce composition, not in dramatic layoff announcements.

For M&A due diligence, this matters. A target company with a workforce heavily concentrated in high-exposure roles and a pattern of not backfilling entry-level positions is either successfully deploying AI (value creation) or building a brittle single-point-of-failure knowledge structure (risk). The difference requires deep workforce analytics — not just a review of AI tool subscription spend.

What This Means for Enterprise M&A, SaaS Investment, and Workforce Strategy

For PE/VC Investors

Observed Exposure scores are now publicly available at Hugging Face / Anthropic Economic Index. Before any acquisition of an enterprise SaaS company claiming AI-driven efficiency gains, map the target’s customer workforce profile to these scores. High theoretical exposure without documented deployment is a red flag — not a value driver. The gap between β (theoretical capability) and observed exposure is your due diligence line item.
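The mapping described above reduces to a headcount-weighted average. A minimal sketch, in which the occupation mix and shares are hypothetical while the two non-zero exposure figures echo the paper's Figure 3:

```python
# Sketch of the due-diligence mapping: weight each occupation in a
# target's customer or employee base by headcount share and its
# observed exposure score. The workforce composition is hypothetical.

def portfolio_exposure(workforce):
    """workforce: {occupation: (headcount_share, observed_exposure)}."""
    return sum(share * exposure for share, exposure in workforce.values())

target = {
    "Customer service reps": (0.40, 0.701),
    "Financial analysts":    (0.25, 0.572),
    "Field technicians":     (0.35, 0.000),  # zero observed coverage
}
print(f"Weighted observed exposure: {portfolio_exposure(target):.3f}")
```

Running the same calculation twice, once with theoretical beta scores and once with observed exposure, makes the deployment gap a single comparable number per target.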

For SaaS Founders

The most credible AI productivity claims are grounded in observed automation metrics, not theoretical capability. If your product is genuinely shifting tasks from human to automated workflows in the 60–70%+ observed exposure range, you have defensible data. If you’re selling based on what LLMs could theoretically do for a role, the market will eventually price in the deployment gap.

For Enterprise CTOs and CPOs

The hiring slowdown for young workers in exposed roles is your canary in the coal mine. If your organization is quietly not backfilling entry-level positions in data entry, customer service, financial analysis, or market research — without a structured knowledge transfer plan — you’re building a workforce risk profile that won’t show up in current productivity metrics but will create a succession problem in 3–5 years.

The Bottom Line: AI Job Displacement Is a 2027–2030 Story, Not 2023–2026

The most important insight from Massenkoff and McCrory’s research is methodological: the effects of AI on employment are likely to be gradual, sector-specific, and visible only to analysts who look at the right metrics. Theoretical capability scores predict nothing. Observed deployment data predicts the future — but with a 2–3 year lag.

The current data shows no mass displacement. But it shows real structural shifts in hiring patterns, confirmed by BLS projections. The companies and investors who treat this as a static “no impact” finding are misreading the study. The companies and investors who use Observed Exposure data to identify where real automation is actually happening — and where the deployment gap will close next — are positioning for the phase transition that the aggregate data will eventually confirm.

At DevelopmentCorporate, our enterprise SaaS M&A advisory practice integrates workforce analytics with deal structure. If you’re evaluating a transaction where AI productivity claims are driving the valuation thesis, we’d welcome a conversation about how deployment-grounded exposure analysis changes the model.