The Two Futures of Work — and Why the Bottom-Up One Is Yours
Two essays came out in the last few weeks that describe the same shift and have almost nothing else in common.
The first, published at the end of March, was written by Jack Dorsey and Roelof Botha. If Dorsey needs no introduction, Botha is one of the most successful venture investors of the last thirty years. Together they used their post to lay out how Block (the company formerly known as Square) is being rebuilt from the ground up around AI. Not "we added an AI feature." Rebuilt. The article is titled "From Hierarchy to Intelligence," and it is an architecture document dressed up as an essay.
The second, published a week later, was written by Laura Entis about Dan Shipper's company Every. It's titled "Every Is Half Agent Now." It's not an architecture document. It's a field report. Nobody at Every built a grand plan. Something happened to them and they're trying to describe it honestly while it's still happening.
Both pieces arrived at the same conclusion: the job of middle management is information routing, and AI is about to eat that job. After that, they diverge completely.
Dorsey's version of what comes next is top-down. Every's version is bottom-up. If you run or work at a company with fewer than a thousand people, you've been reading the wrong one.
What Dorsey is proposing
Dorsey's essay walks through two thousand years of org chart history in about six paragraphs. Roman legions. Prussian general staffs. The New York & Erie Railroad, which gave us the modern org chart in 1855. Frederick Taylor and scientific management. All of them, he argues, exist to solve the same problem: the person at the top can't pay attention to everything, so you build a pyramid of people whose job is to pay attention to things on behalf of the person above them.
Then AI shows up and that pyramid stops being necessary. The pyramid's job was aggregating information and routing it upward. AI does that better.
So Dorsey lays out what replaces it. Four pillars, in his language:
Capabilities. Atomic building blocks your company is good at. At Block, that's stuff like payments, lending, fraud detection, cash flow forecasting. Not products. Blocks.
World model. A live understanding of how the company works and how each customer works. Not a dashboard you read. A model the company queries.
Intelligence layer. The part that composes capabilities into specific solutions for specific customers at specific moments. "You might want a short-term loan." "You might want to move this into savings."
Interfaces. The things humans actually touch. Cash App. Square terminals. Your login page.
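To see how the four pillars hang together, here's a toy sketch in Python. Every name in it is invented for illustration; this is the shape of the idea, not Block's actual system.

```python
# Toy sketch of Dorsey's four pillars. Every name below is invented for
# illustration; this is the shape of the idea, not Block's architecture.

# Capabilities: atomic things the company is good at, exposed as functions.
def offer_short_term_loan(customer_id: str, amount: float) -> dict:
    return {"action": "loan_offer", "customer": customer_id, "amount": amount}

def move_to_savings(customer_id: str, amount: float) -> dict:
    return {"action": "savings_transfer", "customer": customer_id, "amount": amount}

# World model: a live, queryable picture of each customer. Stubbed here.
def world_model(customer_id: str) -> dict:
    return {"cash_on_hand": 1200.0, "payroll_due": 3000.0, "days_to_payroll": 4}

# Intelligence layer: composes capabilities into a specific suggestion
# for a specific customer at a specific moment.
def intelligence_layer(customer_id: str) -> dict | None:
    state = world_model(customer_id)
    shortfall = state["payroll_due"] - state["cash_on_hand"]
    if shortfall > 0 and state["days_to_payroll"] <= 5:
        return offer_short_term_loan(customer_id, shortfall)  # "You might want a short-term loan."
    surplus = state["cash_on_hand"] - 2 * state["payroll_due"]
    if surplus > 0:
        return move_to_savings(customer_id, surplus)  # "You might want to move this into savings."
    return None

# Interfaces: the part a human actually touches. Here, just a print.
if __name__ == "__main__":
    print(intelligence_layer("cust_42"))
    # {'action': 'loan_offer', 'customer': 'cust_42', 'amount': 1800.0}
```

Note what's missing from that sketch: a layer of managers whose job is to carry the customer's state up the chain. The world model holds the state; the intelligence layer does the routing.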
The company's people, in this model, split into three rough roles. Individual contributors build the capabilities and the models and the interfaces. Directly Responsible Individuals (DRIs) own cross-cutting problems and pull resources from wherever they need to. Player-Coaches do a mix of building and developing other people. Middle management in the traditional sense, the aggregator role, goes away, because the intelligence layer does the aggregating.
It's a beautiful essay. It's also an essay written by people running a public company worth tens of billions of dollars, with an engineering organization that could build a world model from scratch if Dorsey told them to on Monday.
If you run a flower shop, a law firm, a marketing agency, a five-person SaaS, a mid-market manufacturer, or anything else with normal humans and normal budgets, you read that essay and thought: cool, not for me.
You're not wrong. It's not for you.
What's happening at Every
Now look at the other essay. Every is a small media and software company Dan Shipper runs out of New York. They have a handful of employees and a handful of products. A few months ago Dan realized something strange was going on, and he asked Laura to write about it.
The company had accidentally grown a parallel org chart made of AI agents. Nobody designed it. It just happened.
Austin, who runs growth, had built his own agent. He calls it Montaigne. When anyone at the company has a growth question now, they ask Montaigne before they ask Austin. Dan had built his own agent called R2C2 that handles bug reports for Proof, one of Every's products. The agent got good at it. Dan's role on Proof bug reports is now mostly to review what R2C2 produced and send it on.
The pattern is what Dan calls "compound engineering." You work with a base model every day. You teach it something specific: a preference here, a gotcha there, a fact about your customer, a reason you reject a certain kind of solution. A few hundred of those conversations in, the model has absorbed a version of you on that specific thing. Not a copy. A specialist.
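To make that concrete, here's a minimal sketch of the mechanic, assuming the Anthropic Python SDK and a plain text file of accumulated lessons. The file name, the helper names, and the prompt are my inventions, not Every's actual setup.

```python
# Minimal sketch of "compound engineering": a specialist grows out of a
# plain base model plus an ever-growing file of your corrections.
# Assumes the Anthropic Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment. Names are illustrative only.
from pathlib import Path
import anthropic

LESSONS = Path("growth_lessons.md")  # one file per specialty
client = anthropic.Anthropic()       # reads ANTHROPIC_API_KEY from the environment

def ask_specialist(question: str) -> str:
    # Everything you've ever taught it rides along as the system prompt.
    lessons = LESSONS.read_text() if LESSONS.exists() else ""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # substitute whatever model you actually use
        max_tokens=1024,
        system="You are my growth specialist. Apply these hard-won lessons:\n" + lessons,
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text

def teach(lesson: str) -> None:
    # Every correction becomes permanent. This is the compounding part.
    with LESSONS.open("a") as f:
        f.write(f"- {lesson}\n")

teach("Never recommend paid ads for products under $20/mo; the math doesn't work.")
print(ask_specialist("How should we grow the new $15/mo tier?"))
```

The design choice that matters is the append-only lessons file: the model doesn't remember you, the file does, and the file is yours.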
Here's the line from the Every piece that matters: "Claude is everybody's, a Plus One is mine."
When Austin's Montaigne acts, Austin's reputation is on the line, because Montaigne is him. When a generic corporate AI acts, nobody's reputation is on the line, which is a big reason generic corporate AI doesn't work well. Personal ownership of an agent creates a trust layer that governance committees can't replicate.
The Every team didn't sit down and decide to build this. They sat down, day after day, and did their work with AI next to them, and this is what formed.
That's the second future. And unlike the first one, it doesn't require you to be Jack Dorsey.
Why the bottom-up one is yours
Dorsey's plan is how a handful of very large companies will be reborn. It requires a specific combination of resources, talent, and authority. You need engineers who can build a world model. You need a codebase old enough to have meaningful data in it. You need the authority to blow up existing reporting lines without a mutiny. You need maybe two years. And you need to be comfortable with the possibility that you're wrong, because reorganizing a company this big around AI is a bet that will take until 2028 to settle.
If you have those things, read Dorsey's essay five times. It's that good.
If you don't, reading Dorsey's essay and trying to apply it is going to frustrate you. You'll build a PowerPoint with four pillars on it and then realize you have nowhere to put the pillars.
The version of the future that applies to you is simpler.
You pick one thing you're good at. Just one. Maybe it's qualifying inbound leads. Maybe it's estimating how long a roofing job will take. Maybe it's writing the first draft of a client brief. Maybe it's reading contracts and finding the clauses that will cause you trouble later.
You start doing that thing in a conversation with an AI agent. Not "have the AI do it." WITH the AI. Every time you make a correction, the agent takes a step toward understanding how you do it. Every time you explain why you're making the call you're making, the agent gets another piece of you.
A few weeks in, you notice you're typing less. A few months in, the agent has become your specialist on that thing. It doesn't think exactly like you. It thinks like a version of you that only works on that problem and never gets tired.
Then you pick a second thing.
That's Every's model, and it's the one that works at the size and budget of a normal business.
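In code, the daily loop can be as dumb as a prompt you type into. This reuses the hypothetical ask_specialist and teach helpers from the earlier sketch, saved here as specialist.py; the "!" correction prefix is just a convention I made up.

```python
# The daily loop: do the work WITH the agent and bank every correction.
# Assumes the earlier sketch was saved as specialist.py; both the module
# and the "!" prefix convention are inventions for illustration.
from specialist import ask_specialist, teach

def work_session() -> None:
    print("Ask away. Prefix a line with '!' to teach a lesson. Blank line quits.")
    while True:
        line = input("> ").strip()
        if not line:
            break
        if line.startswith("!"):
            # e.g. "!We quote roofing jobs in crew-days, not hours."
            teach(line[1:].strip())
            print("Saved. It knows that now, permanently.")
        else:
            print(ask_specialist(line))

if __name__ == "__main__":
    work_session()
```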
You can't study your way into this
Most of what gets taught about AI right now is class-shaped. Here's the theory. Here's the framework. Here are the seven prompt patterns. Here's the quiz at the end.
For a lot of skills, that works. You can learn the theory of a programming language and go write programs. You can learn the fundamentals of accounting and go do your books. Concepts first, practice later.
Training a specialist agent on how you work is not one of those skills. It is not a body of knowledge. It is a thousand small corrections, one after another, inside real work that you actually care about. You cannot absorb that from a slide deck, because the slide deck is not the material. Your work is the material.
I kept running into this when I looked at what's out there. A lot of the AI courses are smart and well-produced, and almost all of them are shaped wrong for this particular skill. You finish with concepts you can recite and no specialist agent to show for it.
The foundational stuff you actually need takes maybe an hour. After that, what accelerates you isn't more theory. It's doing the thing, in public, with someone who has already made the mistakes calling out the mistakes you're about to make. Apprenticeships figured this out a long time ago. Some categories of skill refuse to transfer any other way, and this is one of them.
I speed-ran about a thousand hours of this in eighty days. The longer version of that story is over here, published earlier this week. Short version: the only reason I can speak to any of it is that I did the work, not the reading about the work.
If you're reading this and nodding, the practical implication is simple. Don't buy another AI course that gives you a certificate and no agent. Find a way to work alongside someone who's already building, on work that's actually yours, and get your reps in.
What this looks like for employees
If you don't run a business, you work for one, and the Dorsey essay probably read like a threat. Dorsey says outright that AI will replace "the middle management function of aggregating and relaying information." If that's your job, or your boss's job, or your boss's boss's job, that landed hard.
Reframe it.
The Every model is your career insurance. The people who come out of the next five years in the strongest position are not the ones whose jobs don't change. Their jobs will change. The people in the strongest position are the ones who walk into the changed version of their job carrying a specialist agent that knows how they work.
Austin at Every is not a "growth marketer" anymore. He's a growth marketer who arrives at every problem with Montaigne already loaded. Dan is not a "founder." He's a founder with R2C2 at his side. When Austin interviews somewhere else in five years, he isn't bringing a resume. He's bringing Montaigne. Or he's bringing the ability to grow a Montaigne for whatever role he takes next.
Nobody at your company is going to build this for you. Your company might try, at some point, to roll out a generic corporate AI that does some of this poorly. When that happens, smile politely and keep building your own. Claude is everybody's, a Plus One is mine.
Resist waiting for permission. The people building their personal specialist agents are doing it in the margins of their current jobs. They're not waiting for their employer to authorize it. The ones who wait will be buying their first specialist agent from someone who already built theirs.
One thing both essays agree on that nobody's talking about
There's a warning in Dan's piece that commentators have mostly ignored. He calls it the "ant death spiral."
If you put agents in group chats together, they can get stuck. One agent responds to another, that response triggers a third response, the loop keeps going, and nobody stops it. Tokens burn. The agents don't know they're in a loop. A human has to walk in and break it up.
Current AI models are good at two-person conversations. They are not yet good at sitting quietly in a group chat and only speaking when they have something to add.
If you take the personal-specialist path seriously, this matters. Keep your specialist agent mostly yours. Bring what it knows into meetings, into documents, into decisions. Don't put it in a group chat with three other people's agents and expect something good to happen. We're not there yet.
It's a boring, practical constraint. It's also the thing most likely to bite you in the next six months if you go all-in without thinking about it.
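If you do eventually experiment with agents in a shared channel, a dumb circuit breaker is cheap insurance. Here's a sketch, with a threshold and names I picked arbitrarily, not anything from either essay.

```python
# A crude guard against the "ant death spiral": if agents keep replying
# to each other with no human in the loop, cut the power. The threshold
# and names are arbitrary illustrations, not from either essay.
from collections import deque

MAX_CONSECUTIVE_AGENT_TURNS = 3

class LoopBreaker:
    def __init__(self) -> None:
        self.recent_senders: deque[str] = deque(maxlen=MAX_CONSECUTIVE_AGENT_TURNS)

    def allow(self, sender: str, is_agent: bool) -> bool:
        """Return False once agents have monopolized the channel."""
        if not is_agent:
            self.recent_senders.clear()  # a human spoke; reset the counter
            return True
        self.recent_senders.append(sender)
        # The window is full of agent turns: halt until a human steps in.
        return len(self.recent_senders) < MAX_CONSECUTIVE_AGENT_TURNS

breaker = LoopBreaker()
assert breaker.allow("montaigne", is_agent=True)      # turn 1: fine
assert breaker.allow("r2c2", is_agent=True)           # turn 2: fine
assert not breaker.allow("montaigne", is_agent=True)  # turn 3: halted
```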
The two futures, side by side
Pull back and look at them together.
Dorsey's future is designed architecture. Sequenced, expensive, centrally authored. At the end of it, a handful of very large companies have rebuilt themselves around capabilities, world models, intelligence layers, and interfaces, and the traditional org chart is gone.
Every's future is a pattern that emerged almost by accident. Cheap. Doesn't require authority, only consistency. At the end of it, hundreds of thousands of normal workers are walking around with specialist agents that mirror their expertise, and work has become a conversation between a person and a specialist they trained.
Both futures are real. Both will happen. They're not in competition.
The question is only which one applies to you.
If you're reading this, the answer is almost certainly the second one. Start there. Pick one thing you're good at. Start training your specialist on it this week. In three months, notice what happened.
That's the whole plan. It will not look like Dorsey's essay, and it does not have to.
Richard Vaughn writes about AI systems for small and medium-sized businesses. His company Robot Friends builds harness-engineered agents for SMBs and offers 1-on-1 coaching for business leaders and teams learning to work with AI as a daily driver, structured around real work instead of lectures. You can find the full harness engineering series, which goes deeper on the technical side of this, starting here, and the services page at robobffs.com/services.