Richard Socher nailed the future of work with AI

My friend Richard Socher posted something on LinkedIn recently that stopped me mid-scroll.

He was responding to fears about AI taking jobs, and he cut through the noise with a frame I’ve been circling for months.

“The future of work is all of us becoming managers of AI,” Richard wrote. “Similar to moving from individual contributor to people manager: learning to delegate clearly, specify requirements, build trust. That’s the skill we need now.”

Read that again.

Delegate clearly. Specify requirements. Build trust.

That’s the core of what I’ve been trying to systematize with synthesis engineering.

The fear is real

I wrote recently about the anxiety engineers feel watching AI tools generate code in seconds that would have taken hours. The fears about skills becoming worthless. The uncertainty about what we actually contribute when implementation becomes cheap.

As I said in that piece: “If you’re anxious about AI making your skills obsolete, that’s a rational response to a genuinely uncertain situation. I won’t tell you there’s nothing to worry about.”

That anxiety deserves a real answer, not dismissal.

Richard’s post offers part of that answer. He points out what economists call the Lump of Labor Fallacy: the mistaken belief that there’s a fixed amount of work, and if machines do some of it, less remains for humans.

That belief has been wrong before. When tractors automated farming, people predicted mass unemployment. Instead, new industries emerged. 150 years ago, roughly 90% of people worked in agriculture. Today, around 5% feed the world.

The workforce didn’t shrink. It transformed.

Richard’s point: AI is driving a similar transformation for knowledge work. Some tasks will get automated. But work isn’t zero-sum. The pie is growing.

I agree. The skills that matter most are learnable. They’re not job titles or seniority levels. They’re capabilities anyone can develop.

The skills, not the title

Richard uses a manager analogy, but don’t let that mislead you. He’s not saying “become a manager.” He’s describing specific skills that managers develop: setting direction, providing context, reviewing output, course-correcting when things go wrong.

Those skills aren’t reserved for people with “manager” in their title. They’re capabilities. And with AI, everyone needs them.

Using AI effectively requires developing judgment. Better prompts help, but judgment is the core skill.

Think about what “delegate clearly” means with AI. It means understanding what context the AI needs before starting. Breaking complex work into components AI can execute. Knowing when to give detailed instructions versus high-level goals.

Think about what “specify requirements” means. Defining success criteria upfront. Setting constraints and boundaries. Anticipating edge cases the AI might miss.

Think about what “build trust” means. Learning when AI output needs more scrutiny. Calibrating your verification effort to the task. Understanding each AI’s strengths and failure modes.

These aren’t prompting tips. They’re professional skills that apply whether you’re an IC, a tech lead, or a CTO.
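To make those skills concrete, here’s a minimal sketch of what writing a delegation down might look like. Everything in it is hypothetical, my own illustration rather than anything from Richard’s post or a real tool: a small task brief that forces you to state the goal, the context, the success criteria, and how closely you plan to verify the result.

```python
from dataclasses import dataclass, field

@dataclass
class TaskBrief:
    # "Delegate clearly": the outcome you want and what the AI needs to know first.
    goal: str
    context: list[str] = field(default_factory=list)
    # "Specify requirements": success criteria, constraints, edge cases.
    success_criteria: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    # "Build trust": how much scrutiny this task's output deserves.
    verification: str = "spot-check"

    def render(self) -> str:
        """Format the brief as a prompt you could hand to any AI tool."""
        parts = [f"Goal: {self.goal}"]
        if self.context:
            parts.append("Context:\n" + "\n".join(f"- {c}" for c in self.context))
        if self.success_criteria:
            parts.append("Done when:\n" + "\n".join(f"- {s}" for s in self.success_criteria))
        if self.constraints:
            parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in self.constraints))
        parts.append(f"Review plan: {self.verification}")
        return "\n\n".join(parts)

# A hypothetical delegation, end to end.
brief = TaskBrief(
    goal="Add retry logic to the payment client",
    context=["We use exponential backoff elsewhere", "Request timeout is 30 seconds"],
    success_criteria=["Retries at most 3 times", "Existing tests still pass"],
    constraints=["Don't change the public API"],
    verification="line-by-line review; payments code gets full scrutiny",
)
print(brief.render())
```

The code is trivial on purpose. The management skill lives in filling out the brief: if you can’t state the success criteria, you haven’t specified the requirements yet.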

Giving it a name

I’ve been working on this for years through my AI-assisted software development practice. In November, I started documenting what I’d learned under the name “synthesis engineering.”

The core principle: design systems for AI capabilities, not human limitations.

Traditional workflows optimize for human cognition. AI has different strengths. When you design workflows that use AI’s strengths while preserving human judgment, you get better results than either could achieve alone.

I’ve documented specific practices. Synthesis coding: human-AI collaboration for building production software. Synthesis project management: project management redesigned for AI capabilities. The Direction Dynamic: the pattern where humans direct and AI executes.
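If it helps to picture the Direction Dynamic as pseudocode, here’s a toy loop. The function names are placeholders I invented for illustration, not a real API:

```python
def execute_with_ai(direction: str) -> str:
    """Placeholder for whatever AI tool actually does the work."""
    return f"draft produced for: {direction}"

def human_approves(output: str, direction: str) -> bool:
    """Placeholder for human judgment: review the output against your intent."""
    return direction in output  # stand-in for a real review

def direction_dynamic(direction: str, max_rounds: int = 3) -> str:
    """Human directs, AI executes, human reviews and course-corrects."""
    for _ in range(max_rounds):
        output = execute_with_ai(direction)    # AI executes the current direction
        if human_approves(output, direction):  # human verifies before accepting
            return output
        direction += " (revised after review)"  # human course-corrects, delegates again
    raise RuntimeError("Stop delegating; this task needs rethinking")

print(direction_dynamic("add retry logic to the payment client"))
```

The shape is the point: the human owns the direction, the verification, and the decision to stop retrying and rethink.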

The framework keeps expanding as I learn more. But the foundation is exactly what Richard articulated: humans developing the skills to manage AI effectively.

Why this matters

Richard wrote: “Not being able to work with AI will soon be like not knowing how to use a computer or the internet.”

I think he’s right.

And the implication is that we need to teach these skills systematically.

When management emerged as a discipline, we created frameworks for it. We wrote books. We built training programs. We made it learnable.

The same needs to happen for AI collaboration. Working effectively with AI isn’t intuitive. It requires understanding AI’s capabilities and limitations, developing judgment about when to trust output, and building workflows that catch errors before they compound.

Synthesis engineering is my attempt to systematize this. It’s CC0 public domain. No permission required, no attribution needed.

The goal is adoption of shared vocabulary, not monetization.

Richard saw this early

I’ve known Richard for years. I’ve been an advisor to you.com since its founding.

Richard is the researcher who pioneered what became prompt engineering, back when most people hadn’t heard of it. When I wrote about his vision for AI earlier this year, I noted his ability to see where the technology is heading before others do.

This LinkedIn post is another example.

While others debate whether AI will take jobs, Richard is describing the skills that will define the next era of knowledge work.

Delegate clearly. Specify requirements. Build trust.

That’s what I’ve been calling synthesis engineering.


Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.