Automation
Jan 30, 2026
Eight Rules for Automation
How to Stop Spending Money Making Your Worst Ideas Permanent
Automation is usually sold as efficiency. Do the same thing, but faster. Which sounds marvellous—in the same way that a megaphone is ‘better communication.’
A megaphone does not improve what you say. It amplifies it. If you are saying something intelligent, wonderful. If you are saying something idiotic, you are now saying something idiotic very loudly indeed.
Automation works the same way. It is a photocopier for your processes—faithful, tireless, and entirely indifferent to whether what it’s copying is a masterpiece or a ransom note.
Bad automation does not fail. That would be merciful. Bad automation succeeds, enthusiastically, at the wrong thing, on schedule, at scale, and with a dashboard to prove it.
So here are eight rules to stop that happening. They are not particularly glamorous, but the best ones never are.
Rule 1: Never Automate a Superstition
Here is something nobody in consulting will tell you, because the billable hours are too good: most business processes were not designed. They accreted. Like limescale.
Someone once had a bad day. An approval step appeared. Someone once made a mistake. A form field became mandatory. Someone once got shouted at in a meeting by a person who has since retired, and a ‘quick sanity check’ became a permanent liturgical observance.
Fast forward three years and you have a nine-step ceremony to move fifty dollars between accounts. Nobody knows why. Everyone swears it is essential. It has the quiet, unchallengeable authority of a religious ritual—which, psychologically speaking, is exactly what it is.
If the best defence of a process step is ‘we’ve always done it this way,’ you are not looking at a process. You are looking at a superstition. And automating a superstition does not make it rational. It makes it permanent.
This is the automation equivalent of programming your smart home to lock the doors every time a black cat crosses the driveway. The cat was never the problem. But now you have an expensive, fully integrated system devoted to solving it, and a Jira board to prove it.
The instinct to preserve existing processes is a form of what psychologists call the status quo bias—the deeply human preference for ‘the way things are’ over ‘something different,’ regardless of whether ‘the way things are’ is any good. Daniel Kahneman won a Nobel Prize partly for explaining this tendency. Your operations team has been winning internal arguments with it for decades.
Before you automate anything, put every step on trial. If it cannot justify its existence without an appeal to tradition, seniority, or vague compliance anxiety, it does not need to be automated. It needs to be shot.
Rule 2: Study the Menu, Not the Restaurant’s Website
Every organisation has two versions of its processes. There is the official version—the one in the SOP document, the one described in the onboarding slides, the one senior management sincerely believes is happening.
Then there is the real version. The one that happens on a Tuesday at 4:47pm when Margaret in finance is on holiday and someone needs to process an edge case that the official flowchart pretends does not exist.
The gap between these two versions is where all the interesting information lives. And it is almost always enormous.
This is identical to the gap between a restaurant’s website and its actual menu. The website shows you an artfully lit photograph of a dish that may or may not still be available. The menu, stained and annotated, tells you what they can actually make on a Wednesday.
If you automate from the SOP, you are automating the website. The menu—the real work—will continue to be done in private spreadsheets, copied email templates, Teams messages, and favours owed between departments.
What to actually map
The happy path—which, in most organisations, is a polite fiction maintained for the benefit of process diagrams.
The exceptions—which, in practice, are the job.
The waiting—which is usually where the cost hides.
The handoffs—where things vanish into an interdepartmental Bermuda Triangle.
The shadow steps—the private workarounds nobody officially acknowledges but everyone privately relies upon.
And once you’ve mapped all of that, ask the one question that makes project sponsors visibly uncomfortable: not ‘why do we do it this way?’ but ‘why do we do it at all?’
Many process steps exist for one purpose only: the redistribution of blame. That is not process design. That is organisational insurance. And it is remarkably expensive for how little protection it actually provides.
Rule 3: Assess the Illness Before Prescribing the Cure
People adore improvement that feels productive. New tools. New dashboards. New terminology that makes the same problems sound more sophisticated. None of it proves anything.
This is the business equivalent of rearranging your living room furniture and calling it exercise. You feel like something happened. Nothing did.
You need a baseline. And you need measures that describe reality, not intention. Here are the ones that actually matter:
Throughput
How many things you actually complete per day or week. Completed—not ‘touched,’ ‘progressed,’ or ‘moved to the next stage.’ Half-finished work is not throughput. It is inventory with ambitions.
Cycle time
From the moment a request arrives to the moment the customer sees a result. Track the median and the 90th percentile. Averages are the most diplomatically dishonest statistic in management—they disguise a multitude of sins behind a single comforting number. Then split active time from waiting time. The waiting is almost always the villain. Your people are not slow. Your queues are.
Touch time
The actual human minutes spent per item, across all roles involved. Also count the number of touches. Five two-minute interruptions are cognitively worse than one ten-minute block, because every context switch carries a hidden restart tax that no timesheet will ever capture.
Error and rework rate
How often you fix what you already did. Track the loops: ‘missing information, chase customer, resubmit, recheck.’ Rework is where productivity goes to die with its shoes on, looking busy right up to the end.
Work-in-progress and queues
How much work is currently parked—in an inbox, in an approval step, in the one person who understands the legacy system. Queues are invisible car parks. They look calm on the surface. Underneath, things are quietly rusting.
And add one metric that prevents everyone gaming the system:
First-pass yield
The percentage that completes without any correction. This is the metric that tells you whether you improved quality, not merely speed. Doing the wrong thing faster is not improvement. It is escalation.
Once you can say ‘cycle time is twelve days but touch time is nine minutes,’ you stop having arguments about effort and start attacking waiting. Which is where the actual money is.
Rule 4: First, Remove the Stupid
Automation can do exactly two things. It can make the same steps faster. Or it can remove steps so there is less work to do. The second is the real prize. It is also the one nobody gets excited about, because deletion is not photogenic.
Nobody has ever been promoted for a PowerPoint slide that reads: ‘We deleted three approval steps.’ People get promoted for slides that read: ‘We launched an intelligent workflow orchestration platform with real-time analytics.’ One of these saves money. The other one raises money. They are not the same achievement, but they are frequently confused.
This is a deep problem in corporate life, and it has a name in behavioural science: addition bias. When asked to improve something, humans overwhelmingly prefer to add new elements rather than remove existing ones. It feels more creative. It feels more productive. It is usually more expensive and less effective.
Speeding up nonsense does not give you sense. It gives you nonsense at scale, with a licence fee.
So before you automate, do the unglamorous work. Delete steps that exist because of tradition. Reduce approvals to where risk genuinely warrants a human pause. Standardise inputs—stop letting people compose free-form novels into structured fields. Move decisions to the person who actually has the information, instead of routing them up through three levels of people who will rubber-stamp them anyway. Fix the data at source, instead of building an entire downstream cottage industry devoted to cleaning it.
To borrow Michael Hammer’s magnificently blunt advice: don’t automate. Obliterate.
If a step is unnecessary, the best possible automation is deletion. Deletion never breaks. It never needs a support ticket. It never requires a licence renewal. It is the only technology with a one hundred percent uptime guarantee.
Rule 5: Prototypes Are Hypotheses. Demos Are Theatre.
Automation fails in three boringly predictable ways. It cannot handle the awkward cases. It cannot explain what it did. Or it works perfectly until a single tiny assumption changes, at which point it falls over like a horse in a canal.
So act like a scientist. Which is to say, act like someone who expects to be wrong and has built a system for discovering how.
Plan
Define what ‘done’ looks like. Define the boundaries. Define the exceptions. If you cannot articulate the edge cases before you start, they will become the main cases after you launch. Edge cases are like in-laws: they are always more numerous and more disruptive than you expected.
Prototype
Use real data wherever possible. If you cannot, use something close enough to be uncomfortable. And prototype the user experience, not just the logic. Form design and default settings drive roughly eighty percent of human behaviour in any system. This is why the organ donation rate in Austria is ninety-nine percent and in Germany it is twelve percent. Same culture. Different default on the form.
Prove
Compare outcomes to your baseline. Not impressions to aspirations. Not vibes to hopes.
If your prototype cannot produce logs, show its decision trail, and explain why it did what it did, it is not a prototype. It is a demo. A demo is what you build when you want approval. A prototype is what you build when you want truth. The difference is the same as the difference between a show flat and a house you actually have to live in.
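What a decision trail looks like in practice is unglamorous: one structured log record per item, naming every branch taken. The sketch below is a minimal illustration, not anyone's real invoice logic; the rules, thresholds, and field names are invented for the example.

```python
import json

def process_invoice(invoice: dict) -> dict:
    """Hypothetical prototype step: every branch taken is written down."""
    trail = []
    if not invoice.get("po_number"):
        trail.append("no PO number -> routed to exceptions queue")
        outcome = "exception"
    elif invoice["amount"] <= 500:
        trail.append("amount <= 500 -> auto-matched under small-value rule")
        outcome = "matched"
    else:
        trail.append("amount > 500 -> held for three-way match")
        outcome = "held"
    record = {"invoice": invoice.get("id"), "outcome": outcome, "trail": trail}
    print(json.dumps(record))  # a log line a reviewer can actually read
    return record

process_invoice({"id": "INV-1", "amount": 200, "po_number": "PO-9"})
```

A demo shows the happy path on stage. A prototype emits records like this for the awkward cases too, so that when it does the wrong thing, you can see exactly which rule did it.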
Rule 6: Choose the Boring Option
There is a hierarchy of automation approaches, and it runs in order of increasing fragility:
First: delete the step entirely. Second: configure your existing system to handle it. Third: connect systems via proper APIs, with middleware where appropriate. Fourth, and only when absolutely nothing else works: bolt on UI automation or RPA.
UI automation—the kind where a software robot clicks buttons on a screen like a very diligent temp—is the organisational equivalent of solving a plumbing problem by hiring someone to stand in your kitchen and manually pour water into the pipes every fifteen minutes. It works. Technically. Until they are off sick.
RPA clicks what you tell it to click. Then someone redesigns the interface. A button moves. A field gets renamed. A pop-up appears. And your ‘robot workforce’—a phrase that should set off alarm bells in any sentient person—sits there, politely and expensively failing at scale.
The sexiest technology is almost never the right one. In automation, as in plumbing, the boring solution that works reliably is worth infinitely more than the exciting solution that works intermittently.
Use RPA when you have genuinely exhausted every alternative. Do not pretend it is infrastructure. It is a workaround with a marketing budget.
Rule 7: Let Machines Recommend. Make Humans Decide.
Artificial intelligence is genuinely excellent at several things. Classifying and routing. Extracting information from messy, unstructured inputs. Summarising and drafting. Spotting anomalies and patterns. Suggesting what to do next.
What AI is not good at, and this matters enormously, is being held responsible for anything.
There is a governance principle that has been floating around since at least the 1970s, and it is as true now as it was then: a computer can never be held accountable. Therefore, a computer must never make a management decision. You can train a model to be astonishingly accurate. You cannot train it to appear before a judge and explain its reasoning under oath.
This is not a technical limitation. It is a fundamental feature of how accountability works in human societies. We hold people responsible, not algorithms. An algorithm that makes a catastrophic error is a software bug. A person who makes a catastrophic error is a defendant. The legal and social infrastructure is built around the second, not the first.
So here is the rule: let machines recommend. Require humans to own the decision. Let AI surface the anomaly. Let a person decide what to do about it. This is Toyota’s old principle of jidoka—building in quality by making sure the system stops and alerts a human when something is abnormal, rather than cheerfully manufacturing defects all day because the sensors showed green.
If your AI cannot explain itself at the level of ‘here is why this claim was flagged for review,’ keep it away from anything with regulatory consequences. Confidence is not competence, in models or in people.
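The structure this implies is simple enough to sketch. The classifier, the threshold, and the queue names below are all invented for illustration; the only point being made is the shape, in which every path terminates with a named human, and the model's output is a recommendation with a reason attached, never a decision.

```python
REVIEW_THRESHOLD = 0.85  # below this, the recommendation is not trusted on its own

def classify_claim(claim: dict) -> tuple[str, float]:
    # Stand-in for a real model: returns a label and a confidence score.
    if claim["amount"] > 10_000:
        return "review", 0.92
    return "approve", 0.97

def route(claim: dict) -> dict:
    label, confidence = classify_claim(claim)
    decision = {
        "claim_id": claim["id"],
        "recommendation": label,
        "confidence": confidence,
        # The model explains itself in terms a reviewer can check.
        "reason": f"amount={claim['amount']}, model confidence={confidence:.2f}",
    }
    # The machine never finalises anything. It only chooses which
    # human queue the item lands in.
    if label == "approve" and confidence >= REVIEW_THRESHOLD:
        decision["queue"] = "fast-track sign-off"
    else:
        decision["queue"] = "manual review"
    decision["decided_by"] = None  # filled in by a person, never by the model
    return decision

print(route({"id": "C-101", "amount": 18_500}))
```

The `decided_by` field starting as `None` is the whole governance model in one line: if that field is ever populated by software, you have quietly crossed the line from recommendation to decision.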
Rule 8: Automations Are Lodgers, Not Appliances
Most organisations treat the launch of an automation the way they treat a product launch: there is a ceremony, a screenshot for an executive update, a mild sense of triumph, and then everyone moves on to the next project.
This is a catastrophic misunderstanding.
An automation is not an appliance. You do not plug it in and forget about it. It is closer to a new lodger in your house. It is fast. It is tireless. It has no common sense whatsoever. And it is perfectly capable of making the same mistake ten thousand times a day while smiling.
What happens after launch: permissions change. Inputs drift. Someone quietly edits a template. A vendor updates their interface. A regulation shifts. Nothing explodes. Nothing sends an alert. It just quietly, invisibly degrades—until a customer notices, at which point it has usually been wrong for weeks.
So operate automation like production software. Version control. Change logs. Observability—meaning logs you can actually read and traces you can actually follow, not a dashboard showing a green tick that means ‘the bot ran’ rather than ‘the right thing happened.’ Access discipline. Runbooks for failure. Post-deployment reviews.
If AI is involved, add extra measures: monitor for drift, schedule regular human review of samples, and build escalation rules for cases where the model is uncertain or the stakes are high.
An automation does not ‘go live.’ It moves in. And like any lodger, it requires supervision, house rules, and the occasional awkward conversation about what it’s been doing in the kitchen at three in the morning.
The Punchline
Bad automation does not fail. It succeeds at the wrong thing. Then it makes the wrong thing load-bearing.
The sequence that prevents this is not complicated. It is merely unfashionable:
Understand → Measure → Simplify → Design → Automate → Operate
Do that, and you do not merely automate a process. You remove the accumulated nonsense. You reduce risk. You build something that survives contact with reality without requiring a priest, a spreadsheet, and a small blood sacrifice every Friday afternoon.
Which, now that I think about it, is the real test of whether you have improved anything. Not ‘is it faster?’ but ‘does it still work when the one person who understood it goes on holiday?’
If the answer is yes, you have automated well. If the answer is no, you have merely built a faster version of your existing dependency on Margaret in finance.
And Margaret, frankly, deserves a break.
Start With a Process Audit
If you are considering AI in business process automation and would prefer to avoid the expensive kind of education, begin with a structured discovery. A process audit identifies where workflow automation, RPA, and AI genuinely belong—and, just as importantly, where they do not. It quantifies value in terms that finance departments respect: time, cycle time, errors, and cost-to-serve. And it defines the controls needed for a rollout that does not become a cautionary tale.
Learn more: process audit | Contact us



