
The Tool Is Eating the Task

Why the durable advantage is deciding what should be done, not doing what was requested.


If Your Only Job Is to Turn Instructions into Output, You Are in Danger

Not because those skills are suddenly worthless. Because they are becoming less scarce by the month.

Prompting is being productised. Routine, templated output is being automated. The floor keeps rising: what used to require a competent professional can now be done by a mediocre workflow wrapped around a frontier model. If your value is confined to "I know how to ask the model nicely" or "I can turn clear instructions into deliverables faster than average", you are competing with a curve that moves directly against you — and that curve does not care about your seniority, your credentials, or how good you are at what you do. It only cares whether what you do can be approximated cheaply enough.

The real risk is not mass overnight replacement. It is compression. A smaller number of people will produce more output. Organisations will need fewer people to do routine drafting, synthesis, analysis, formatting, presentation-building, basic research, obvious debugging, standard planning, and first-pass execution. The work does not disappear. The headcount that justified it does.

What Survives

What survives is judgment, problem selection, and taste. The durable advantage is deciding what should be done, not doing what was requested. Verifying outputs. Spotting failure modes before they ship. Understanding incentives well enough to know when the answer is technically correct but operationally disastrous. Talking to users. Owning outcomes rather than tasks. LLMs are strong at generating options. They are weak at knowing which option actually matters in a messy, politically loaded, under-specified real-world context — and catastrophically weak at telling you when the problem as framed is the wrong problem entirely.

So the dangerous position is not "I use AI in my work." The dangerous position is this: my contribution begins after the problem is already well defined and ends when the deliverable is produced. That layer is being commoditised, fast.

Output Is Not Value

There is a harsher version of this truth that most people miss. Output has never been the same thing as value. Plenty of highly paid work — slides, analysis, copy, plans, code — produces the illusion of progress without changing anything material. LLMs make this confusion worse, because they make output nearly free. When output is cheap, knowing what not to produce becomes more valuable than knowing how to produce everything. Restraint becomes a competitive advantage. The person who ships three things that matter is more dangerous than the person who ships thirty that do not.

The standard objection is that this is overstated. Most organisations are slow, messy, regulated, and operationally chaotic. They still need humans to navigate context, politics, legacy systems, edge cases, and social constraints that no model can read. In practice, replacement is slower than the demos suggest. This is true. It is also exactly the kind of comfort that makes people stop adapting. Slow disruption still ends in the same place. It just gives you more time to pretend it will not.

The Boundary That Will Not Hold

The harder version of the challenge — the one that deserves to be stated plainly — is this: right now, LLMs are weak at framing problems. They are strong at executing once those problems are framed. That boundary is where most knowledge workers currently live. It is not a permanent refuge. As models improve at structuring ambiguity, reasoning about incentives, and asking better questions, the layer that currently feels safe will come under the same pressure the execution layer is under today. Judgment is safer than prompting, but it is not permanent shelter. The bet is not that your current skills are immune. The bet is that you can keep moving toward the parts of work that are harder to automate faster than automation catches up.

Someone who can use the tools, understand the domain, frame problems well, verify aggressively, and tie their work to actual outcomes will be far more valuable than the old-style specialist whose strength was fast execution. The tool eats the task. The person who wields it with judgment gets more leverage, not less.

The Question That Matters

The question is not whether you use AI. Everyone will use AI. The question is whether you would still be worth hiring if the model could do everything you currently do. If that question makes you uncomfortable, that discomfort is the signal. Act on it while the window is still open.

Topics

AI Strategy · Knowledge Work · Automation · Judgment · Workforce Transformation