This Isn’t AI’s Decision. It’s Ours.

Everyone has an opinion about whether AI will take over knowledge work. But that might not be the right question.

Some say AI is coming for doctors, marketers, lawyers, developers. Others say it’s not that good yet—it gets facts wrong, it makes ugly designs, it doesn’t really "think." But the deeper truth is: it doesn’t need to be brilliant. It just needs to be cheaper, faster, and good enough.

And those are decisions that aren’t being made by the AI. They’re being made by people—executives, investors, product managers—who are optimizing for margin, speed, and scale.

So the better question might be:

What happens when machines don’t need to be brilliant to start replacing the brilliant—and the systems replacing them don’t care that much about brilliance at all?

A Tale of Two Revolutions

This isn’t the first time machines changed what people are for—or what companies value.

In the Industrial Revolution, skilled artisans who built ornate furniture or carved intricate moldings were replaced by factories producing cheap, mass-market goods. Not because the products were better. Because they were cheaper.

Craft didn’t disappear overnight. It was pushed to the margins by economics.

Now we’re watching the same thing happen to knowledge work. Design, writing, coding, synthesis—tasks once reserved for highly trained professionals—are being offloaded to systems that are faster, cheaper, and scalable.

And just like before, the machines aren’t choosing who to replace. That power lies with the people shaping the system: those deciding what’s "good enough" and what’s profitable.

What’s Actually Happening

Companies aren’t waiting for AI to be brilliant. They’re reorganizing around it now.

The traditional playbook was to scale by hiring smart people and building strong teams. That model is eroding. Increasingly, companies are reallocating budget away from people and toward automation stacks—data infrastructure, prompt libraries, AI copilots, model subscriptions.

It’s not happening because the tools are perfect. It’s happening because the tools are good enough to justify the shift.

And it’s not happening in a vacuum. AI is being dropped into systems already optimized for efficiency over equity, compliance over creativity, scale over nuance.

The goal isn’t excellence. It’s leverage. And machines increasingly offer more of it than people do.

Who This Hits First

The very people who helped build the tech economy are now watching its logic turn on them.

For years, engineers, designers, and product strategists were considered the most irreplaceable people in the company. Now they're watching AI erode the edge they worked so hard to earn.

No one is saying your job is gone tomorrow. But the dynamics have changed. You’re no longer the bottleneck. You’re no longer the moat.

And the tools doing the replacing? They're trained on your work. They're serving the same systems that once celebrated your scarcity.

So this becomes the call to adventure: If the thing that made you valuable no longer protects you, what do you do next?

Panic, Adapt, or Lead?

When the ground shifts, you don’t get to stand still.

Some panic: deny, resist, fight the tools. Some adapt: integrate AI into their workflow and hope to stay ahead. But there’s a third path: you lead.

Not by "mastering prompts" faster than everyone else—but by asking the deeper questions:

  • What kind of system is this reinforcing?

  • Whose interests are encoded in the tools?

  • What kind of work, teams, and societies do we want to build?

To lead now is to resist the idea that AI is an independent actor. It’s not. It’s a mirror. A magnifier. A product of choices.

The Deeper Opportunity

If machines are handling more of the work, we have to ask: What are we freeing ourselves to do?

AI can crank out content, analysis, and code at industrial speed. That means what becomes valuable is not volume. It’s discernment.

What’s scarce now? Trust. Taste. Judgment. Vision. Integration. Leadership. Solidarity. Strategic constraint.

The risk isn’t that AI becomes too powerful. It’s that it becomes too aligned with the worst parts of our system: extractive, opaque, narrow, brittle.

Because AI isn’t inventing a new social order. It’s codifying the one we already have.

The future of work is still a design problem. The question is: Who gets to design it?

Call to Thoughtful Action

This isn’t a moment for resignation. It’s a chance to decide what comes next.

AI is moving fast. But it’s not leading. It’s following the logic of those who build, train, and deploy it.

That means the choices we make today—as workers, leaders, designers, voters, and neighbors—still matter.

We can ask better questions. Build fairer defaults. Protect what’s worth keeping. Push back when tools flatten what should be deep.

AI doesn’t set the direction. It follows the hands that shape it.

The real question is: whose hands will those be?

Postscript: I used ChatGPT Plus to help me write this. But I did it in a particular way that I think maintained human control over the output.

First, it has access to all 50 or so articles I have written, plus the manuscripts from my two books. It's been instructed to pull from my ideas, and to try to emulate my style.

Second, I gave it an overview of what I wanted to say, and asked for an outline. I iterated on this two or three times.

Third, I went through each section of the outline one at a time, and provided explicit feedback on what to change to make my point sharper and more cohesive. I rejected several of its suggestions, and reframed most of the piece to be more politically charged than it was—I thought it was playing it too safe.

Finally, I asked for a draft of the entire piece, and then fine-tuned each sentence myself.

I am not “letting the AI write for me.” I am using it as a brainstorming, drafting, and editing tool. It feels pretty much like my writing. I am happy with the final version, and I hope you are too.
