Skill erosion paradox
The skill erosion paradox is a phenomenon in AI-augmented professional environments in which the adoption of AI tools that enhance short-term productivity simultaneously degrades the underlying human analytical capabilities that made productive AI augmentation possible. The paradox arises because AI assistance reduces the frequency and intensity of deliberate practice through which expert skills are developed and maintained: as AI performs more of the analytical work, practitioners exercise their independent analytical capabilities less, causing those capabilities to atrophy over time even as measured performance improves. The term was introduced in the context of AI adoption in private equity and professional services by Coney (2026), who described it as "the deeper, slower-moving threat beneath AI adoption."[1]
Background
The relationship between tool use and skill development has been studied across numerous professional domains. Aviation research has documented the degradation of manual flying skills among pilots who rely heavily on autopilot systems, with safety implications for situations requiring manual override. Similar patterns appear in medical diagnosis, where computational decision support tools have been shown to reduce physicians' diagnostic accuracy when those tools are unavailable or incorrect.
In knowledge work contexts, the mechanism of skill erosion through disuse differs from the physical skill degradation documented in motor task research, but follows analogous principles. Expert judgment in fields such as financial analysis, legal reasoning, and investment evaluation is maintained through repeated exercise against challenging problems. When AI assistance reduces the cognitive demand of routine analytical tasks, practitioners encounter fewer opportunities to exercise the deeper reasoning skills that distinguish expert from novice performance.
The paradox is sharpened by the asymmetric time horizons of costs and benefits. AI productivity gains are immediate and measurable; skill erosion is gradual and difficult to detect until it manifests in high-stakes situations where AI assistance is unavailable, unreliable, or insufficient.
Description and methodology
Coney (2026) identifies three forms of skill erosion in AI-augmented professional environments, each operating at a different level of the organisation.
Individual analytical atrophy occurs when practitioners progressively invest less independent cognitive effort in problems because AI outputs provide a plausible starting point, reducing the perceived need for independent analysis. Over time, the habit of independent analysis weakens, and the practitioner's ability to detect errors in AI outputs, which requires the same analytical skills the AI is replacing, diminishes correspondingly. This creates a compounding dynamic: as practitioners place more trust in AI reliability, their verification skills atrophy; as verification skills atrophy, their ability to catch AI errors declines.
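The compounding dynamic described above can be sketched as a simple feedback model. This is purely illustrative: the parameter values (AI reliance, decay and recovery rates) and the linear functional form are assumptions chosen for demonstration, not quantities from the source.

```python
# Illustrative feedback model of individual analytical atrophy.
# All parameters (decay rate, recovery rate, AI reliance) are
# hypothetical assumptions, not empirical values from the literature.

def simulate_erosion(periods=20, ai_reliance=0.8, decay=0.05, recovery=0.03):
    """Track normalized verification skill over successive work periods.

    Skill decays in proportion to how much analysis is delegated to AI,
    and recovers in proportion to how much is still done independently.
    """
    skill = 1.0  # 1.0 = pre-AI expert baseline
    history = [skill]
    for _ in range(periods):
        skill -= decay * ai_reliance * skill                  # atrophy from disuse
        skill += recovery * (1 - ai_reliance) * (1 - skill)   # practice effect
        history.append(skill)
    return history

# Error-catch probability is assumed proportional to verification skill,
# so under heavy AI reliance it declines along the same trajectory.
trajectory = simulate_erosion()
```

Under these assumed parameters the decay term dominates, so skill falls monotonically toward a low equilibrium: the model's analogue of verification capability eroding even while day-to-day output remains steady.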
Institutional knowledge loss occurs at the organisational level when the tacit knowledge embedded in expert practitioners, developed through years of accumulated deal experience, pattern recognition, and judgment calibration, is not transferred to junior professionals because AI systems perform the tasks through which that knowledge transfer historically occurred. If AI handles financial spreading, junior analysts do not develop spreading skills; if AI drafts investment committee memos, associates do not develop the judgment about what matters to a committee.
Capability concentration risk is the organisational consequence of both individual and institutional erosion: the firm's analytical capability becomes concentrated in a small number of senior individuals whose skills predate AI adoption, creating a structural vulnerability if those individuals depart.
The research presents frameworks for preserving expertise through deliberate practice structures, workflow design, and talent development strategies calibrated for AI-augmented environments. These include mandatory non-AI analytical exercises at defined intervals, structured mentorship that preserves knowledge transfer pathways, and governance frameworks that require independent human analysis at specified decision points regardless of AI output quality.[2]
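These interventions can be expressed as simple workflow rules. The sketch below is a hypothetical illustration of the two mechanisms named above (non-AI exercises at defined intervals, mandatory independent analysis at specified decision points); the class name, interval, and decision-point labels are invented for the example and do not come from the source.

```python
# Hypothetical sketch of a skill-preservation policy. The decision-point
# names, the interval of 5, and the class design are illustrative
# assumptions, not a framework specified in the cited research.

from dataclasses import dataclass

# Decision points where independent human analysis is always required,
# regardless of AI output quality (labels are invented for illustration).
MANDATORY_HUMAN_GATES = {"investment_committee_memo", "final_valuation"}

@dataclass
class PreservationPolicy:
    non_ai_exercise_interval: int = 5  # every Nth task is done without AI
    tasks_completed: int = 0

    def requires_independent_analysis(self, decision_point: str) -> bool:
        """Governance gate: human analysis required at specified points."""
        return decision_point in MANDATORY_HUMAN_GATES

    def next_task_allows_ai(self) -> bool:
        """Rotate in a mandatory non-AI exercise at the defined interval."""
        self.tasks_completed += 1
        return self.tasks_completed % self.non_ai_exercise_interval != 0
```

In use, every fifth task would be flagged as a manual exercise, and any task touching a gated decision point would require independent human analysis before the AI output could be accepted.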
Applications
The skill erosion paradox is relevant across any professional domain where AI tools assume significant portions of the analytical workload.
In private equity specifically, the concern focuses on the analytical skills most at risk: financial modelling judgment, qualitative management assessment, pattern recognition in complex deal structures, and the ability to identify what a standard diligence framework may have missed. These skills are most at risk precisely because the tasks that exercise them are the most amenable to AI assistance.
In legal services, AI contract review and research tools reduce the volume of independent document analysis performed by junior lawyers, with implications for the development of document review judgment and issue-spotting instinct.
In medicine, clinical decision support tools have been associated with reduced diagnostic accuracy in AI-unavailable scenarios, a pattern studied extensively in radiology and emergency medicine.
In financial analysis and equity research, AI earnings analysis and report generation tools reduce the frequency with which analysts construct independent financial models, potentially degrading the modelling intuition necessary to identify when AI-generated outputs are incorrect.
Challenges
The skill erosion paradox is methodologically difficult to study because erosion occurs gradually and manifests in low-frequency, high-stakes situations that are difficult to simulate in controlled research settings. Most AI adoption research measures productivity gains under normal operating conditions, not performance degradation in edge cases or AI-failure scenarios.
Organisational incentives work against skill preservation interventions. Requiring practitioners to perform analytical tasks manually when AI could perform them faster is experienced as inefficiency by practitioners and managers alike, creating pressure to abandon preservation protocols.
The paradox creates an ethical dimension for organisations deploying AI at scale: firms may be systematically degrading the human capital they depend on, in ways that are not visible until a consequential failure occurs.
See also
- Automation complacency
- Decision velocity
- AI-assisted due diligence
- Human-AI workflow design
- AI governance in private equity
References
- Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation. Human Factors, 52(3), 381-410.
- Dell'Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier. Harvard Business School Working Paper 24-013.
- McKinsey & Company. (2023). The Economic Potential of Generative AI. McKinsey Global Institute.
- Coney, L. (2026). Combating Automation Complacency in Financial Due Diligence. SSRN. DOI: 10.2139/ssrn.6111107.
- Stanford HAI. (2024). AI Index Report 2024. Stanford University.
