The Bot Sandwich Economy: Why AI Will Make Some Jobs Better and Others Worse
Two job designs are emerging: directing AI and being directed by it.
Earlier this month, a strange little website called Rent‑A‑Human went viral. The premise was simple and slightly dystopian: if an “AI agent” needed something done in the real world, it could hire an actual person to do it. Deliver a package. Take a photo. Show up somewhere and report back. Be the hands and eyes the software does not have.
Some people called this the “meatspace layer.” Charming.
For all its meme energy, it gave me a jolt of clarity. For months I’ve been asking people whether AI will make work more purposeful or less, and the experts I trust keep landing on opposite answers. Why?
Then it clicked: it depends on whether you’re above the AI agent in the Rent‑A‑Human stack, setting the agent’s goals, or below it, doing the running around.
The Bot Sandwich
Over the next decade, I think most jobs will split into two layers. I’ve started to call this The Bot Sandwich: bots in the middle, humans on both sides. Above, people who steer. Below, people who get steered.
The people up top set the goal, choose constraints, apply judgment, and use AI as leverage. This is Ethan Mollick’s “centaur” posture: human steering, machine sprinting. Think of a founder who can go from idea to landing page to outreach list to product prototype in a weekend, because the machine does the drafting, searching, and scaffolding, and the human does the choosing.
When you’re the one holding the reins, AI can strip away the dead work. It widens your span of control. It collapses the distance between idea and action. Work starts to feel more like craft: more decisions, more iteration, more ownership.
Down below are the people who comply with the system. They take the task, follow instructions, hit the metrics, and move on. Cory Doctorow calls this the “reverse centaur,” a machine head on a human body. Think of a call center where the system routes the next ticket, suggests the script, scores the tone, tracks handle time, and flags deviations. The worker is there to do the last-mile talking.
When AI Becomes Management
This is where AI stops looking like a tool and starts looking like a middle manager. And because monitoring is cheap and optimization is efficient, it spreads. Researchers already have a name for the pattern: algorithmic management.
Economists have boring words for this too: complements and substitutes. When AI complements you, it increases your leverage. When it substitutes for you, it makes you easier to swap. When AI makes it possible to pull judgment out of a job, what’s left gets easier to standardize, measure, and swap. The work breaks into smaller, more legible pieces. That usually means less bargaining power. And over time, it means wage pressure, and in some cases, eventual automation of the full job.
Hierarchy isn’t new. What’s new is that AI makes the middle layer scalable. A human manager can only coordinate so many people; an AI system can route work, score performance, and enforce standards for thousands at once. That makes the top layer wildly more leveraged: a small number of people can set direction and capture value while the bots coordinate execution. In other words, the org chart doesn’t just get flatter. The middle becomes software.
That’s why people keep producing opposite prophecies about AI and work. One sees abundance. One sees overlords. They are arguing about a distribution: how many people land on top of the Bot Sandwich, and how many land below it.
I’m pretty sure the split is coming. I’m less sure how many people end up on each side. That part isn’t prophecy. It’s policy, design, and a thousand decisions made across countless schools and organizations. It’s something we still have a say in.
The Bottlenecks
A fair question is whether the economy can create enough high-judgment work to absorb a larger supply of people trained to steer. I think it can, because AI lowers the cost of making almost anything, not just code. Lesson plans, marketing campaigns, legal drafts, care workflows, training programs, patient follow-ups, compliance documentation, vendor negotiations.
When production gets cheap across the economy, the bottleneck moves upstream. The scarce resource becomes judgment: choosing what’s worth doing, setting constraints, deciding what “good” looks like, owning the tradeoffs.
That’s the hopeful version. But the catch starts in school.
The risk isn’t that we’ll run out of meaningful work. It’s that we’ll build a world where there’s more leverage everywhere, and then ration the steering to the people who already know how to steer.
Because the gap isn’t “AI skills.” It’s whether you can decide what needs doing without waiting for instructions. Agency. Judgment. Motivation. And our K–12 and higher education systems still train the opposite muscle: follow the directions, master the content, hit the rubric, repeat. We ask students to persist without giving them many chances to see why it matters, or to practice the work that matters: identifying a problem, asking better questions, marshaling resources, and learning their way toward a solution.
If the Bot Sandwich is the shape of work, then school is still preparing most kids for the bottom slice.
Brookings researcher Molly Kinder makes a related point about entry-level work. If AI eats the early-career ladder, young people lose the chance to build judgment on the job, developing the pattern recognition and institutional fluency that eventually lets you steer. Her proposal is residency-style on-ramps: structured, mentored roles that preserve learning-by-doing even as tasks automate.
My version is the school-side equivalent: make more of formal education experience-based, with projects that have real stakes, adult standards, and tight feedback loops, so students build the judgment and motivation they’ll need when work becomes less about producing answers and more about choosing problems.
So yes, the Bot Sandwich is forming.
And it will sort too many people to the bottom unless we make a deliberate effort not to let it.
We get more people to the top by building technology that augments instead of controls. By designing jobs that keep judgment in the workflow, instead of extracting it for the model. And by building human capability fast enough that more people end up in the driver’s seat.
We’re about to build a lot of machines that can do a lot of things. The real test is whether we also build more humans who can decide what’s worth doing.