Something unusual is happening in workplaces across every industry. Engineers are shipping in an afternoon what used to take a sprint. Analysts are producing research in hours that once required weeks. Support teams are resolving issues at two or three times their previous rate.
The productivity gains from AI are real. McKinsey estimates generative AI could add between $2.6 trillion and $4.4 trillion in value to the global economy each year. PwC’s 2025 Global AI Jobs Barometer, drawing on nearly a billion job postings and thousands of company financial reports, found that industries most exposed to AI saw productivity growth nearly quadruple. Wharton economists project AI will increase U.S. productivity by 1.5% by 2035, rising toward 3.7% by 2075.
These are not incremental improvements. A skilled engineer working with well-deployed agentic AI is not marginally more productive. In many domains, the multiplier is 3x, 5x, sometimes 10x. In my own work, I’ve documented an engagement where two AI-augmented engineers delivered what traditionally required a full team, achieving a roughly 3.6x productivity improvement at significantly lower cost than traditional staffing.
Here is the question almost nobody in leadership is asking clearly enough: when a worker becomes dramatically more productive by harnessing AI, who should benefit from that productivity gain?
The default answer, if nobody forces the question, is already playing out. The employer captures the surplus. The worker does the same amount of work — or more — for the same pay. And a once-in-a-generation opportunity to build a fairer compact between labor and capital quietly slips away.
The strongest argument against sharing
Before making the case that workers deserve a share, I want to engage the strongest counterargument honestly.
It goes like this: the employer paid for the AI tools. The employer invested in infrastructure, integration, governance, and training. The employer bore the risk of adoption — what MIT and NBER researchers call the “Productivity J-Curve,” where firms often see temporary performance declines before AI investments pay off. If the tools are the employer’s capital investment, the returns belong to the employer. An employee who becomes 10x more productive didn’t get there through willpower alone. They got there because the firm financed the technology and reorganized the business to make it possible.
This argument has real force. Enterprise AI licenses, compute infrastructure, compliance engineering, integration costs — these are genuine and substantial expenses. EY’s 2025 survey found 96% of organizations investing in AI are experiencing productivity gains, with the majority reinvesting those gains into new AI capabilities and R&D rather than sharing them with workers. Organizations achieving the highest ROI from AI often allocate more than 10% of their total technology budget to AI initiatives. The capital expenditure is real and the risk is not trivial.
But the argument proves too much. By this logic, every prior wave of automation justified indefinite wage suppression. Employers also paid for computers, email servers, and spreadsheet software. Nobody argues that a financial analyst skilled in Excel deserves zero credit for the efficiency they create with it. The tool is necessary. It is not sufficient. And the evidence for that distinction is overwhelming.
The human skill is the value multiplier
The BCG “Jagged Frontier” study is the clearest demonstration of where value actually originates in AI-augmented work. Researchers at Harvard Business School studied 758 consultants using GPT-4. When consultants used AI for tasks inside its capability frontier, quality rose 40%. When they used it for tasks outside the frontier — tasks that seemed similar but were not — performance dropped 19 percentage points below the no-AI control group.
Same tool. Same people. Same day. The difference between a 40% quality gain and a 19-point performance drop came down entirely to the human’s judgment about when and how to apply AI.
Garry Kasparov observed the same dynamic in centaur chess: “Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.” The human’s process — their skill in collaborating with the machine — mattered more than either raw human talent or raw machine capability alone.
The NBER paper “Economics of Bicycles for the Mind” formalized this. It distinguishes three worker skills: implementation, opportunity judgment, and payoff judgment. In experiments, lower-skilled users improved mainly in basic execution, while higher-skilled users gained in persuasion, strategic thinking, and quality of decision-making. The same tool served as a crutch for the less experienced and a force multiplier for the more capable. A separate Harvard Business School study of Kenyan entrepreneurs reinforced this starkly: AI boosted profits 10–15% for high-performing entrepreneurs but lowered results by roughly 8% for low-performing ones.
The BCG study identified two successful collaboration patterns: “Centaurs” who strategically divided tasks between human and AI, and “Cyborgs” who integrated AI into every step with constant judgment. Both required skill. Neither emerged automatically from tool access.
If the same AI tool can produce a 40% quality improvement or a 19-point quality degradation depending on the human directing it, then claiming the human contributed nothing worth compensating is not just unfair. It is economically illiterate.
Naming the skill
This human skill of directing AI effectively has a name. I call it synthesis engineering — the professional discipline of human-AI collaboration on complex work. Not automation, where AI replaces humans. Not augmentation, where AI merely assists. Synthesis, where human judgment and AI execution combine to produce something neither could create alone — like a chemical synthesis where the compound has properties neither input possesses.
As I’ve written in The part of your job AI can’t do, the AI has all the knowledge needed to recognize a pattern. It simply does not recognize that this situation calls for that pattern. That recognition — the judgment about when to apply what — is the essential human contribution. The bottleneck has shifted from “how fast can we type” to “how good is our judgment about what to type.” That shift is a move up, toward more consequential, more valuable work.
The market already validates this. PwC’s 2025 Global AI Jobs Barometer found AI-skilled workers command a 56% wage premium, more than double the 25% premium from the prior year. Lightcast’s analysis of 1.3 billion job postings found roles requiring AI skills offered 28% higher salaries, rising to 43% with two or more AI skills. Even in non-technical fields like HR and marketing, AI literacy drives salary premiums of 35–43%.
The premium exists because the human skill is the scarce input. The AI tool is increasingly commodity. If the human’s skill creates the value — if it is the difference between a 40% quality gain and a 19% quality loss from the same technology — then the human has a legitimate claim to share in that value.
The gap that’s already opened
The problem is that most workers aren’t sharing in the value they’re creating. And this isn’t new — AI is accelerating a pattern that has been building for decades.
The Economic Policy Institute has tracked the numbers since the late 1970s: from 1979 to 2019, net U.S. productivity grew 59.7% while typical worker compensation grew only 15.8%. A 43.9 percentage-point divergence. Had compensation tracked productivity over those 40 years, the typical U.S. worker would earn roughly $9 more per hour today. Between 1948 and 1979, productivity and compensation grew almost in lockstep. Then the line broke.
As of the third quarter of 2025, the U.S. labor share of income has fallen to a record low in a dataset stretching back nearly eight decades, according to PIMCO. Corporate profit share has more than doubled since 1980, rising from roughly 6% to nearly 12%. Returning to 1980 labor share levels would mean roughly $2 trillion in additional annual compensation for American workers — about $12,000 per worker per year.
The most rigorous early evidence on whether AI productivity gains specifically reach workers is sobering. Humlum and Vestergaard studied AI chatbot adoption across 11 occupations in Denmark through December 2024, linking adoption data to administrative labor market records. They found essentially zero effects on earnings and recorded hours — across intensive users, early adopters, workplaces with substantial AI investments, and workers who reported large productivity gains. Confidence intervals rule out effects larger than 2%. Workers gained productivity. The productivity did not translate to pay.
And where the gains go, the work follows. An NBER study by Jiang, Park, Xiao, and Zhang found that AI exposure is associated with longer work hours and reduced leisure time. Workers’ overall welfare fails to keep pace with productivity gains. A UC Berkeley Haas ethnographic study of roughly 200 employees at a U.S. tech company found the same pattern from the inside: AI didn’t free up time. It expanded what workers felt capable of taking on. “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less,” one engineer said. “But then really, you don’t work less.”
Fortune magazine captured the employer side of this dynamic in a March 2026 feature. Energy company AES transformed a 14-day auditing process into one hour. Dun & Bradstreet’s CTO Mike Manos described the same logic: “I got the eight hours to two hours, but now I can get 20 hours of work.” A separate Fortune feature documented how time spent emailing doubled while deep focus work fell, as AI-enabled speed raised expectations across the board.
Economists have a name for this: the Jevons Paradox applied to labor. Efficiency gains, without structural constraints on scope, become higher expectations. The tractor didn’t give farmers shorter days. It gave them more acres. Email didn’t give office workers shorter weeks. It spawned what Tim Harford called a “profusion of low-quality, low-value messages bleeding into evenings and weekends.”
Why this is structural
The failure of productivity gains to reach workers is not an accident or a market inefficiency waiting to correct itself. It is a structural and power problem that AI is intensifying.
PIMCO identifies the compounding forces: weakened labor bargaining power from decades of union decline, globalization hollowing out high-labor-share sectors, technological change substituting for labor, and market concentration in “superstar firms” that scale with minimal headcount. AI accelerates all four simultaneously.
The pattern holds across every major technological revolution. During the Industrial Revolution, Robert Allen’s data shows output per worker rose 46% between 1780 and 1840 while real wages rose just 12%. Benefits took 60–80 years to reach workers broadly. The computing revolution was worse: mean compensation growth fell from 2.6% annually in the postwar period to 0.4% from 1973 to 2003, while the college wage premium widened. Daron Acemoglu and Pascual Restrepo’s foundational research found that between 50 and 70 percent of changes in the U.S. wage structure from 1980 to 2016 can be attributed to the relative wage declines of workers in tasks affected by automation.
Technological productivity gains do not flow to workers by default. They flow to workers when institutions, bargaining power, or deliberate choices distribute them. That is as true now as it was in 1840.
What the optimists get right — and what they miss
Richard Socher, the AI researcher and entrepreneur, makes an important point about what economists call the Lump of Labor Fallacy: the mistaken belief that there’s a fixed amount of work, and if machines do some of it, less remains for humans. He’s right. When tractors automated farming, new industries emerged. In the 19th century, the overwhelming majority of workers were needed in agriculture; today, a small fraction of the workforce feeds everyone else. The workforce didn’t shrink. It transformed.
The World Economic Forum estimates AI will create roles equivalent to about 14% of current employment while displacing about 8%, for a net positive. History and economic theory both suggest new categories of work will emerge that we cannot yet envision.
But here’s where the optimistic framing, while correct on its own terms, misses the question I’m asking. The Lump of Labor Fallacy tells us something about the quantity of jobs. It tells us nothing about the distribution of gains. New jobs can emerge while workers in those jobs earn less, relative to the value they produce, than workers earned a generation ago. That is precisely what happened with the computing revolution. The jobs were there. The wages lagged. The productivity-pay gap widened through decades of net job creation.
The question I am interested in is not whether there will be work. There will be work. The question is whether the people doing that work — including new kinds of work we can’t predict yet — will share in the extraordinary value that their AI collaboration makes possible.
The strategic case for sharing
There is also a pragmatic argument that stands independent of fairness. Employers who capture all the gains from AI productivity are acting against their own long-term interests.
Burnout from AI-intensified work is real and expensive. The Berkeley Haas study found that workers who took on more because AI made more feel achievable experienced cognitive exhaustion and declining decision quality. A Boston Consulting Group study found that workers spending significant time monitoring multiple AI tools — rather than letting systems run with targeted oversight — experience 12% more mental fatigue and significant information overload. The initial productivity boost degrades as judgment deteriorates. You cannot run a knowledge economy on depleted human judgment.
Jeffrey Pfeffer documented where this trajectory leads in Dying for a Paycheck: workplace practices like long hours, economic insecurity, and lack of autonomy over pace and workload contribute to an estimated 120,000 excess deaths per year in the United States — making the workplace, by Pfeffer’s analysis, the fifth leading cause of death. Job stress alone costs U.S. employers more than $300 billion annually. AI-driven intensification — more output expected, same pay, diminishing control over scope — risks amplifying every factor Pfeffer identified. The companies demanding “more with AI” without sharing the gains are not just losing productivity to burnout. They are, in Pfeffer’s framing, making their people sick.
AI adoption requires trust. McKinsey’s research consistently finds that AI implementations fail not from technical problems but from organizational resistance. Workers who believe AI is being used to extract more from them for the same pay will resist, undermine, or minimally comply. Workers who see their compensation or working conditions improve alongside their output become advocates for adoption. The difference between a company where AI is embraced and one where it is sabotaged often comes down to whether workers feel like partners in the gains.
And there is a macroeconomic dimension that boards should take seriously. If AI-driven productivity gains flow overwhelmingly to capital while workers face stagnant wages and expanding demands, the consumer purchasing power that corporate profits depend on erodes. As one analyst noted, companies can replace workers with AI and cut costs in the short term. They cannot replace customers. JP Morgan CEO Jamie Dimon told the World Economic Forum in January 2026 that governments and businesses must step in to support displaced workers or risk significant social instability. The productivity gains are real. But so is the risk that hoarding them hollows out the demand side of the economy.
What a fairer compact looks like
Several concrete models exist for sharing AI-generated surplus. The strongest implementations combine more than one.
The most fundamental shift is from measuring work by hours spent to measuring it by value delivered. As I’ve explored in The future of engineering services, time-based billing creates perverse incentives: it rewards inefficiency and penalizes the worker who uses AI to accomplish in two hours what previously took twenty. Value-based pricing and outcome-based contracts flip this. The work is priced at the agreed value, so when the team delivers faster because of AI, the efficiency gain belongs partly to the worker: the saved hours are returned rather than silently refilled with more scope.
Employers who implement AI tools that measurably increase output should share a defined portion of the productivity dividend with the workers whose judgment amplifies the AI. This could mean bonuses tied to quality-adjusted output increases, profit-sharing pools that include AI-generated revenue, or salary structures that recognize AI collaboration skill. Traditional gain-sharing plans — measuring pre- vs. post-improvement output and splitting the gains — translate directly to the AI context. The measurement discipline already exists. What’s missing is the expectation that sharing is the norm.
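As a purely illustrative sketch of that measurement discipline (the figures, the quality-adjusted unit, and the 50/50 split below are hypothetical assumptions, not drawn from any cited plan), the gain-sharing arithmetic is simple:

```python
def gain_share(baseline_output: float, new_output: float,
               value_per_unit: float, worker_share: float = 0.5) -> dict:
    """Split the surplus from a measured productivity improvement.

    baseline_output / new_output: quality-adjusted units per period,
    measured before and after AI adoption. worker_share is the
    negotiated fraction of the surplus returned to workers
    (0.5 here is a hypothetical choice, not a recommendation).
    """
    surplus = (new_output - baseline_output) * value_per_unit
    return {
        "surplus": surplus,
        "worker_dividend": surplus * worker_share,
        "employer_retained": surplus * (1 - worker_share),
    }

# Hypothetical team: output rises from 100 to 160 units per quarter
# at $1,000 of value per unit, with the surplus split 50/50.
result = gain_share(100, 160, 1_000)
# surplus = 60,000; worker_dividend = 30,000; employer_retained = 30,000
```

The hard part is not the formula but agreeing on the baseline and the quality adjustment before adoption, which is exactly the discipline traditional gain-sharing plans already supply.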
Joe O’Connor of Work Time Revolution coined the term “time dividend” for the reward workers earn from increasing their productivity with AI tools. The four-day workweek movement provides extensive evidence that this works in practice. Trials coordinated by 4 Day Week Global found 92% of participating companies kept the policy after testing. The UK pilot of over 3,000 workers showed 71% reported reduced burnout with no deterioration in business metrics and a 35% average increase in revenue. If AI lets workers accomplish their output in fewer hours, returning some of those hours to the worker is both fair and strategically sound. Several tech CEOs have publicly predicted AI will enable three- or four-day workweeks — but no individual company will move first without policy pressure or collective bargaining.
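The time-dividend arithmetic is equally simple to sketch (the 1.25x speedup and the half-share of saved hours below are hypothetical assumptions, not figures from the trials):

```python
def time_dividend(baseline_hours: float, speedup: float,
                  return_fraction: float = 0.5) -> float:
    """Hours returned to the worker when AI speeds up the same workload.

    speedup: e.g. 1.25 means the old workload now takes 1/1.25 of the time.
    return_fraction: negotiated share of the saved hours that is returned
    to the worker (0.5 here is a hypothetical choice).
    """
    hours_saved = baseline_hours * (1 - 1 / speedup)
    return hours_saved * return_fraction

# Hypothetical: a 40-hour workload at a 1.25x speedup frees 8 hours;
# returning half of them yields roughly a 36-hour week.
returned = time_dividend(40, 1.25)  # ≈ 4.0 hours
```

The point of the sketch is that without an explicit return_fraction, the default is zero: the saved hours are absorbed as expanded scope, which is the Jevons dynamic described above.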
The most impactful distribution mechanisms so far have come through organized labor. The Writers Guild’s 148-day strike in 2023 produced the first major union agreement regulating AI in creative work — establishing that AI cannot write or rewrite literary material and that human creators retain their economic rights. The Culinary Union Local 226 won contracts for 40,000 Las Vegas hotel workers ensuring advance notice, training, and $2,000 per year of employment as severance if AI eliminates their role. At Deutsche Telekom, a negotiated AI shift-planning tool gave workers control over their own schedules while AI optimized the roster. Management reported that service quality improved precisely because workers were empowered rather than surveilled. At ZeniMax Media, a CWA-negotiated agreement commits the company to providing notice about AI implementation while ensuring AI boosts productivity without harming workers. As one worker put it: “This agreement empowers us to shape the ways we may choose to use AI in our work.”
These are early examples. They demonstrate that the distribution of AI gains is not technologically determined. It is a negotiated social outcome. The technology does not dictate who benefits. People do.
Where this leads
I have spent my career at the intersection of technology and human potential — building technology organizations at major media companies, and now working on AI-augmented engineering practices. I believe that AI genuinely amplifies human capability. As I wrote in The Iron Man suit for your brain, AI at its best functions as a cognitive exoskeleton — extending what humans can do without replacing what makes them valuable.
But amplification without recognition is extraction. If a worker harnesses agentic AI to do the work of five people, and the result is that the company quietly doubles their workload while paying the same salary, that is not a partnership. It is a transfer of value from labor to capital dressed up as technological progress.
The synthesis engineering framework I’ve described in my work offers one piece of the answer: it names the human skill that creates the value. But naming the skill is not enough if the economic structures don’t compensate it. The deeper challenge is building the institutions, incentives, and expectations that ensure the people who develop this skill — who learn to direct AI with judgment, domain expertise, and accumulated context — share in the value that collaboration produces.
As I wrote in Collective intelligence: making AI work for everyone, AI has the potential to be one of the most powerful equalizers in human history, democratizing capabilities that were previously available only to the privileged few. That promise is real. But it will only be fulfilled if the value flows broadly — to the workers doing the work, not only to the shareholders owning the tools.
The employers who understand this will build teams that innovate rather than burn out. They will turn AI adoption from a source of anxiety into a competitive advantage built on trust. The workers who understand this will recognize that the value they create — the judgment, the context, the knowing-what-to-build — is precisely what AI cannot replicate. It is worth something. It is worth insisting on.
Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.