The Risk of AI Writing Is Leadership Without Judgment
Sounds Good, Changes Nothing
This is one of the most common leadership failures in modern organizations.
The CEO brings a strategy document to the meeting. It’s polished. It uses the right language. It sounds reasonable. The CEO carefully walks everyone through it. Heads nod around the table. No one pushes back very hard.
Then the meeting ends... and nothing changes.
The document doesn’t make decisions easier. It doesn’t clarify tradeoffs. It doesn’t tell anyone what to do on Monday. The CEO feels alignment, but the organization doesn’t move.
That moment isn’t about bad writing. It’s about writing that no longer carries weight.
This didn’t break recently. It was never solid to begin with. For more than a century, organizations have separated thinking from doing, dating back to Taylor, Gantt, and the rise of mass production. Executives do the thinking. Everyone else learns to wait.
Deming spent his career trying to close that gap. People mostly remember him for statistics and quality control, but that misses the point. He was arguing for leaders to take responsibility for the whole system and for thinking to live where the work actually happens. Agile briefly moved in that direction inside engineering teams. At the organizational level, the split held.
You can see the consequences everywhere. The CEO debates which metrics to trust instead of acting on them. The executive team argues about narratives instead of resolving problems. Meetings multiply. Language multiplies. Confidence shrinks.
As work moved further from craft, it also moved further from consequence. Leaders replaced judgment with dashboards. They replaced conversation with artifacts. The information age turned work into symbols on screens. Generative AI accelerates this pattern by offering the CEO a clean explanation before the hard thinking is finished, and a ready-made conclusion before responsibility fully lands.
Marshall McLuhan argued that the most important effect of a technology isn’t what it produces, but what it trains us to notice. Media shape attention before they shape outcomes. They quietly reorganize what feels obvious, what feels slow, and what feels like a decision.
Generative AI doesn’t just help the CEO write faster. It changes when the CEO stops thinking. It offers coherence early, before ambiguity has done its work. It makes conclusions feel available before judgment has fully formed. When a medium removes friction from thinking, leaders don’t just move faster—they skip the moments where responsibility normally takes shape.
Tools don’t merely speed work up. They define what counts as work in the first place.
Human writing started to sound like this long ago—safe, optimized, detached from real stakes.
AI doesn’t create this problem. It removes the last excuses for ignoring it.
We Were Already Drowning in Bad Writing
Before generative AI showed up, the internet was already drowning in bad writing.
Safe writing. Polished writing. Writing that looked professional and meant almost nothing. Text engineered to pass review, survive legal, and offend no one. Plenty of words. Very little judgment.
My colleague Boris calls it junk food. It’s filling. It’s familiar. It gives the sensation of having consumed something meaningful without any of the actual work of digestion.
We trained ourselves into this.
For decades, education and leadership development doubled down on STEM, efficiency, and optimization. Those investments paid off. They also quietly demoted interpretation, history, literature, and meaning-making to electives. Useful if you were “into that sort of thing.” Optional if you were serious.
At the same time, work kept moving further away from the thing itself.
Craft turned into machines.
Machines turned into processes.
Processes turned into dashboards.
And eventually, work turned into symbols on screens, managed by people far from the consequences.
Writing followed the same path.
It stopped being how leaders thought and started being how decisions were packaged. Narrative became a wrapper. Language became output. The goal wasn’t clarity or truth. The goal was something that sounded reasonable and could survive the meeting.
Most leaders felt this long before AI arrived.
They struggled to explain why choices mattered. Teams stopped recognizing themselves in the stories being told about their work. Strategy documents got longer. Conviction got thinner. Communication exploded. Shared understanding quietly collapsed.
The recent New York Times piece about “obvious” AI writing lands in that world. The discomfort it names isn’t new. It’s recognition. Machines are now doing openly what organizations trained humans to do for years: produce language without judgment.
AI didn’t hollow this out.
It just made it impossible to pretend it wasn’t already empty.
AI Fits Too Well Into How Organizations Already Work
Generative AI can produce competent text at scale. That fact alone isn’t the issue. The issue is how well it fits habits organizations already have.
Most companies don’t treat writing as thinking. They treat it as output: something to be filled in, polished, shipped, and forgotten. AI slides neatly into that role.
The CEO asks for a summary, someone wants a cleaner narrative, the deck needs tightening, and the strategy document needs to sound more confident. AI delivers immediately.
The language looks finished. It sounds plausible. It travels well through review and approval. Work feels complete even when no real decisions have been made.
Cheap language combined with speed bias nudges organizations toward an extractive loop. Content increases. Judgment thins out. Alignment happens faster around ideas no one has really tested or owned. Writing stops constraining thinking and starts lubricating momentum.
Over time, behavior shifts. Leaders spend less time sitting with ambiguity. Teams surface fewer weak signals. Disagreement gets smoothed over instead of worked through. Narrative arrives early; understanding arrives late, if at all.
None of this requires bad intent. It’s a system effect. AI doesn’t tell leaders to stop thinking; it simply makes it easier to move forward before thinking has done its job.
What This Does to Leaders and Teams
At some point, this stops looking like a systems problem and starts showing up in more personal ways.
Leaders begin to lose confidence in their own voice. Not because they’ve suddenly become less capable or less experienced, but because they’re surrounded by language that sounds authoritative without being anchored to judgment. When everything arrives polished and confident by default, it becomes harder to tell what you actually think, as opposed to what merely sounds plausible.
Teams pick up on this quickly. They can tell when language is doing work instead of carrying meaning. They struggle to find themselves inside the story leadership is telling about priorities, direction, and tradeoffs. The result isn’t open resistance so much as drift. Direction feels abstract. Priorities blur. Energy dissipates.
Underneath this is a quieter erosion. The human capacities that help people make sense of the world—judgment, interpretation, narrative sense, cultural literacy—haven’t disappeared, but they’ve been underused for a long time. Like any neglected skill, they weaken gradually, without anyone noticing a single moment of failure.
People end up surrounded by information, summaries, explanations, and updates, yet oddly unsatisfied. There’s a feeling of being full without being nourished, busy without being grounded. It’s what it feels like to live on informational junk food.
What Is Leadership For?
At a deeper level, this raises a more uncomfortable question.
What is writing actually for?
In most organizations, writing has been reduced to a delivery mechanism. A way to transmit decisions, justify actions, or signal alignment. Something you do after the thinking is finished, not as part of the thinking itself. The same reduction has quietly happened to leadership.
What is leadership for?
If leadership is primarily about efficiency, throughput, and coordination, then AI-generated language is an obvious upgrade. Faster drafts. Cleaner narratives. Fewer rough edges. Less friction.
But if leadership is about judgment—about deciding what matters, naming tradeoffs, and helping people orient themselves in uncertainty—then writing plays a very different role. It isn’t just communication. It’s a way leaders test their own thinking, surface what they don’t yet understand, and make commitments visible.
This is where the philosophical tension shows up.
Paul Kingsnorth uses the word “machine” to describe a way of relating to the world that prioritizes scale, control, and efficiency over presence, meaning, and responsibility. He isn’t arguing against tools. He’s pointing at a mindset—one that treats human activity as something to be optimized rather than inhabited.
That mindset has been shaping modern organizations for a long time. AI didn’t invent it. It simply accelerates it, pushing more of our thinking into systems that operate without memory, risk, or consequence.
The real question, then, isn’t whether AI can write.
It’s whether leaders are willing to decide what kind of world their organizations are helping to build—one where language deepens judgment and responsibility, or one where it smooths over uncertainty and moves work along without anyone fully owning the arc.
Humanities Matter Now More Than Ever
This is where the humanities re-enter the picture, not as decoration or nostalgia, but as practical tools for how humans actually think, decide, and act.
For most of human history, leaders didn’t rely on dashboards, models, or abstractions to make sense of the world. They relied on stories, shared history, moral frameworks, and lived examples. Narrative, myth, interpretation, and memory were not side interests. They were the primary technologies for coordination, judgment, and meaning.
That hasn’t changed as much as we pretend it has.
Across psychology, linguistics, and cognitive science, researchers disagree on the details but converge on a basic point: humans do not experience reality as raw data. We experience it as structured meaning, unfolding over time, usually with stakes. Cause and effect. Choice and consequence. Before we analyze, we orient.
Put plainly, stories are cognitive infrastructure.
When leaders ignore this, communication starts to feel mechanical. Decisions become brittle. Strategy sounds coherent but doesn’t travel. People struggle to locate themselves inside what the organization claims to be doing.
The humanities train exactly the capacities modern leadership keeps asking for and rarely cultivates: interpretation, judgment under uncertainty, moral reasoning, cultural awareness, and the ability to hold multiple perspectives without collapsing them into slogans.
This isn’t about becoming more poetic. It’s about becoming more precise about how meaning actually forms in human systems.
How Meaning Forms
To understand what’s being disrupted, it helps to be precise about how meaning actually forms for humans.
It doesn’t start with language.
It starts with raw experience: sensation, affect, impulse, somatic signal. The tightening in the chest during a meeting. The flicker of irritation at a metric that doesn’t quite make sense. The unease that something important is being smoothed over. At this stage, there is no story yet. No protagonist. No timeline. Just what is happening.
This layer is non-narrative. It has no plot and no conclusion. It’s simply contact with reality as it is encountered, moment by moment.
This is the domain mindfulness trains access to. Not as a spiritual posture, but as a practical discipline: noticing before interpreting, staying with sensation before forcing meaning, interrupting the reflex to immediately explain or justify. Mindfulness protects this layer from being overwritten too quickly. It preserves contact with reality.
Meaning emerges in the next step.
A subject takes shape. Time starts to move forward. Something becomes at stake. Obstacles appear. Choice becomes possible, or necessary. Experience stops being just sensation and starts becoming about something.
This is story—not primarily as language, but as temporal structure with consequence. Cause and effect. Before and after. What changed because of what. Story is how humans organize experience so it can be understood, shared, and acted upon.
Leadership lives in the seam between these two layers.
When that seam is handled poorly, predictable failure modes show up. Sometimes leaders rush straight to story without staying long enough with experience. Language arrives early. Narratives sound confident. Slides get built. Strategy gets declared. But the story has lost contact with reality. It becomes slogan, theater, or, increasingly, AI-generated coherence.
Other times leaders stay with experience but never shape trajectory. Everything feels overwhelming. Signals pile up. No decisions land. There’s presence without direction, insight without movement. Burnout often lives here.
Neither mode works on its own.
Mindfulness without trajectory stalls. Story without contact detaches. Leadership is the disciplined movement between the two: staying with what’s actually happening long enough to feel it, then consciously shaping what it means over time, with stakes and responsibility attached.
This seam is where judgment forms. It’s also where AI now intervenes most aggressively.
What Mindfulness Is (and Isn’t)
Mindfulness is often treated as an endpoint. A state to achieve. A kind of emotional hygiene that keeps leaders calm and regulated.
That framing misses its real function.
Mindfulness isn’t where leadership ends. It’s where leadership begins. It acts as a gatekeeper between raw experience and the stories we tell about it.
Practiced well, mindfulness creates a pause before interpretation. It allows experience to be felt before it is forced into plot. Weak signals surface that would otherwise be edited out. Reflexive explanations slow down. The urge to immediately solve or narrate loses its grip.
This matters because most leadership failures don’t come from bad intentions. They come from stories that arrive too quickly, before experience has been fully registered.
But mindfulness has limits.
On its own, it doesn’t create direction. It doesn’t choose priorities. It doesn’t commit anyone to an arc. When mindfulness becomes an end in itself, it can turn into endless presence, insight without movement, calm without responsibility.
Leadership requires more than awareness. It requires trajectory.
Mindfulness does its job when it feeds story rather than replacing it. It preserves contact with reality long enough for judgment to form, and then hands off to deliberate meaning-making: deciding what matters, what comes next, and what the organization is willing to risk in order to move forward.
That handoff is the work.
Stakes Make Leadership Real
Not everything that looks like change actually changes anything.
Real transformation happens only when a decision commits you to a future you can’t fully control, one where being wrong will cost you something you care about. Without that, organizations stay busy, generate insight, and keep moving—without ever really moving forward.
This is where the limits of AI start to matter in practice.
AI can generate insight instantly. It can summarize, explain, recommend, and reframe on demand. But it does all of that without having to live with the consequences. There is no decision it has to stand behind. No outcome it has to explain to a team six months later. No cost for being confidently wrong.
That absence changes the weight of its output.
This is why Walter Benjamin’s observation still applies, even outside art. When reproduction becomes effortless, meaning loses pressure. Not because information disappears, but because nothing forces a pause. Nothing insists on presence. Nothing in the moment reminds you that this choice will narrow future options.
Leadership lives inside those narrowing moments.
When a leader commits to a direction, they bind together a past that must be interpreted and a future that will judge the decision. They take responsibility for what gets excluded as much as what gets chosen. Stakes aren’t incidental to leadership. They’re what give judgment its authority in the first place.
AI never has to do that.
It never has to carry a decision forward in time. It never has to absorb fallout. It never has to repair trust, explain tradeoffs, or face the people affected by the call. It can help articulate a choice, but it cannot inhabit the conditions that make the choice real.
That’s not a flaw in the technology.
It’s the boundary where leadership begins.
AI Puts New Pressure on Leadership
This is the point where AI creates its most subtle pressure on leadership.
It collapses the distance between experience and explanation.
In a healthy system, there’s friction between what people sense and what they say. Confusion has time to surface. Disagreement has room to breathe. Leaders are forced to sit with incomplete information long enough for judgment to form.
AI short-circuits that gap. It offers instant coherence. A clean narrative arrives before uncertainty has done its work. Language shows up early, polished and confident, before anyone has fully reckoned with what’s actually happening.
This is the wedge.
Not because AI is malicious, but because it rewards speed over digestion. It makes it easier for leaders to move forward without fully inhabiting the decision they’re about to make.
Mindful leadership, in this environment, requires active resistance.
It means slowing the collapse. Holding experience open longer than feels efficient. Letting ambiguity linger until it sharpens rather than dissolves. Refusing to accept a finished story before the costs, risks, and tradeoffs are visible.
This isn’t about being more present for its own sake. And it isn’t about telling better stories.
It’s about sequence.
First, feel what’s actually here. Not what the dashboard says, not what the draft claims, but what people are experiencing in the work itself.
Then, decide what it means over time. Name what matters, what will be prioritized, and what will be left behind.
Finally, accept responsibility for the arc that follows. For how this decision reshapes the future, constrains options, and affects real people.
AI can help with articulation. It can assist with synthesis. But it cannot perform that sequence.
That work remains human.
Mindfulness preserves contact with raw experience.
Story organizes experience into time with stakes.
Leadership is the disciplined movement between the two.
That’s not spiritual. That’s operational.
The Choice
At this point, leaders don’t get to stay neutral.
The systems we’re building are already shaping how people think, decide, and relate to their work. Generative AI simply makes the underlying direction harder to ignore. It forces a choice that has been implicit for a long time.
One path treats language, culture, and judgment as overhead. Writing becomes something to optimize or outsource. Sensemaking gets compressed into summaries and dashboards. Decisions move faster, but responsibility thins out. Leaders stay busy managing outputs while meaning quietly erodes.
This is the extractive loop. It produces speed, scale, and short-term efficiency. It also produces burnout, brittle strategies, and organizations that struggle to act when conditions change. People feel managed but not led. The work keeps moving, but no one feels truly oriented.
The other path treats sensemaking as core work.
Here, leaders use tools like AI to remove drudgery, not judgment. They slow down where it matters. They invest time in interpretation, narrative framing, and honest conversation. Writing is used as a way to think in public, to test ideas, surface uncertainty, and make commitments visible.
This path doesn’t reject technology. It puts it in the right place.
It assumes that leadership is not about having the best explanation, but about taking responsibility for the arc an organization is on. It accepts that meaning can’t be automated, and that judgment only forms when leaders stay present to experience, risk, and consequence.
The difference between these paths isn’t ideology. It’s practice.
One outsources sensemaking and hopes coherence emerges.
The other treats sensemaking as the leader’s job.
AI will amplify whichever choice leaders make.
Writing as a Leadership Discipline
For most leaders, writing sits in an odd place. It’s everywhere, but it rarely feels central. Emails, strategy docs, memos, decks—constant output, very little sense that writing itself is part of the job of leadership.
That’s the mistake.
Writing is one of the few places where leaders are forced to slow their thinking down enough to see it. When you try to put a real decision into words, gaps show up. Tradeoffs get uncomfortable. Vagueness becomes obvious. Writing, done honestly, exposes what you don’t yet understand and what you’re not quite willing to commit to.
This is why AI is both useful and dangerous here.
Used late in the process, it can reduce friction. It can help clarify language, test alternatives, and clean up expression once judgment has already formed. Used early, it replaces the very struggle that makes writing valuable in the first place. Leaders get a finished paragraph before they’ve actually decided what they think.
Strong leaders treat writing differently.
They use it as a thinking surface, not a delivery mechanism. They write before they’re sure. They revise as their understanding changes. They let writing reveal the consequences of a choice before the organization has to live with them. And they resist the temptation to outsource that work, even when tools make it easy.
This is also the problem we’ve been working on with Humanize.
Not as a content engine or a shortcut to better language, but as a way to help leaders stay inside the discipline of sensemaking—using AI to surface patterns and questions without collapsing experience into a ready-made story too quickly.
You don’t need more words. You need places where thinking can slow down, judgment can form, and responsibility can land before decisions harden into momentum.
That’s what leadership has always required. AI just makes it harder to avoid.