Productivity
Nov 19, 2025
What is a Process Audit?
What a Doctor Does Before Prescribing, and Why Your Operations Team Should Try It
Most organisations, when they decide to automate something, skip directly to the shopping. They browse vendors. They attend demos. They sit through webinars where someone in a polo shirt uses the phrase "digital workforce" without flinching.
Then they buy something. Then they bolt it onto whatever process they already have. Then they discover, expensively, that they have automated the organisational equivalent of a scenic detour – a twelve-step journey that should have been three steps, now performed at tremendous speed and with excellent logging.
This is not unlike hiring a personal trainer before finding out why your back hurts. The enthusiasm is admirable. The sequence is catastrophic.
A process audit is the bit you do first. It is the diagnostic before the prescription. It is – and I realise this is not a fashionable thing to say in a profession that sells momentum – the act of stopping to look before spending money to move faster.
Here is what it involves, what you get, and why it matters more than the automation itself.
1. A Process Map (Of What Actually Happens, Not What the Manual Claims)
Every organisation has a process manual. It is usually a document created three years ago by someone who has since left, maintained by nobody, and believed by management with the quiet, uncritical faith normally reserved for horoscopes.
The real process – the one your team performs on a Wednesday afternoon when two people are on holiday and the system is running slowly – bears roughly the same relationship to the manual as a restaurant bears to its website. One is curated and well-lit. The other has actual food.
A process audit maps the real thing. People, systems, decisions, exceptions, handoffs, waiting, workarounds – the full ecology, not the sanitised version.
You get:
Swimlanes by role and system – so you can see who touches what, and where things cross departmental borders and quietly vanish
The happy path and the exception paths – because exceptions are not edge cases; in most operations, they are the job
Inputs, outputs, decision criteria, and handoffs – the plumbing that nobody draws on a whiteboard but that determines whether anything actually flows
Time and volume annotations – so the map tells you not just what happens, but how long and how often
A boardroom-ready PDF and an editable source file – because a process map that cannot be updated is a photograph of a moment that has already passed
The point of this map is not decoration. It is to make the invisible visible – so that when someone says "our process is fine, it just needs to be faster," you can point to the seven days of waiting disguised as two minutes of work and have a more honest conversation.
2. A Throughput Analysis (Where the Time Actually Goes)
Most people, when asked why a process is slow, will blame effort. The team is stretched. There aren't enough people. Everyone is working flat out.
This is almost always sincere. It is also almost always wrong – or rather, it is answering the wrong question.
The question is not "are people busy?" The question is "where does the time go?" And the answer, with remarkable consistency, is: waiting. Work sits in inboxes. It queues behind approvals. It parks in someone's head while they finish something else. The human beings involved are genuinely busy. The work is mostly stationary.
This is what a throughput analysis reveals. It separates the time you spend doing from the time the work spends sitting, and it identifies the actual constraint β the one step that determines the speed of everything downstream.
You get:
Volume patterns – daily, weekly, seasonal – so you know when work arrives, not just how much
Lead time vs. touch time vs. wait time – the distinction that transforms every conversation about "efficiency" from an argument about headcount into a diagnosis of flow
Bottleneck and constraint identification – the step that, if improved, improves everything; and the steps that, if improved, improve nothing at all
WIP and queue visibility – how much work is currently parked, and where; queues are invisible car parks, and they are always larger than anyone expects
Rework and error hotspots – the loops where you fix what you already did, chase what was never sent, and re-check what was checked incorrectly the first time; rework is where productivity goes to die looking busy
Once you have this, something remarkable happens: you stop arguing about whether the team needs more people and start asking why work that takes nine minutes to do takes twelve days to finish. That is a very different question, and it leads to very different – and usually much cheaper – answers.
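To make that nine-minutes-versus-twelve-days arithmetic concrete, here is a minimal sketch of the touch-time versus lead-time calculation. The numbers are illustrative, reused from the example above, not measurements from any real process:

```python
# Flow efficiency: the fraction of elapsed (lead) time that the work is
# actually being worked on. Everything else is waiting.

def flow_efficiency(touch_time_minutes: float, lead_time_minutes: float) -> float:
    """Fraction of lead time spent doing rather than sitting."""
    return touch_time_minutes / lead_time_minutes

touch = 9                # nine minutes of hands-on work
lead = 12 * 24 * 60      # twelve calendar days, expressed in minutes

print(f"Flow efficiency: {flow_efficiency(touch, lead):.4%}")
```

A result in the region of 0.05% is not unusual once wait time is counted honestly, which is exactly why the useful question is "where does the time go?" rather than "are people busy?".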
3. An Efficiency Scorecard (With an Automation Readiness View)
Numbers are essential, but they are not sufficient. You also need a way to see, at a glance, whether a process is healthy, fragile, or quietly falling apart in a way that will only become visible when a regulator asks an awkward question.
The efficiency scorecard is a structured assessment across eleven dimensions – from process clarity and rework rates to data quality, compliance posture, automation feasibility, AI suitability, and organisational readiness for change.
Each dimension is scored on evidence, not opinion. The scorecard tells you what is working, what is risky, and β crucially β what is ready to be automated versus what needs to be fixed first.
It produces three summary scores:
Process Health Score – operational stability; how well the process runs today
Automation Readiness Score – feasibility; how suitable the process is for automation in its current state
Impact Potential Score – value; what you stand to gain if you improve or automate it
Plus a short, blunt summary of the top constraints and the top levers – the "here is why this is slow, here is why this is expensive, and here is what to do about it" in plain language.
The scorecard exists for a reason that is psychological as much as analytical: it gives decision-makers a single, defensible artefact they can point to when someone asks "why are we doing this?" That matters. Transformation programmes do not die from lack of ambition. They die from lack of evidence at the moment someone senior asks a hard question.
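As a rough illustration of how dimension scores might roll up into the three summary scores, here is a sketch using a subset of the dimensions named above plus two hypothetical value dimensions. The groupings, the 1–5 scale, and every number are assumptions for illustration, not the audit's actual framework:

```python
# Hypothetical scorecard roll-up: dimension scores grouped into three
# summary scores. All names and values below are invented for the example.
from statistics import mean

# Each dimension scored 1 (weak) to 5 (strong), on evidence, not opinion.
scores = {
    "process_clarity": 4,
    "rework_rate": 2,
    "compliance_posture": 4,
    "data_quality": 3,
    "automation_feasibility": 3,
    "ai_suitability": 2,
    "organisational_readiness": 4,
    "time_saved_potential": 5,       # hypothetical value dimension
    "error_reduction_potential": 4,  # hypothetical value dimension
}

groups = {
    "process_health": ["process_clarity", "rework_rate", "compliance_posture"],
    "automation_readiness": ["data_quality", "automation_feasibility",
                             "ai_suitability", "organisational_readiness"],
    "impact_potential": ["time_saved_potential", "error_reduction_potential"],
}

summary = {name: round(mean(scores[d] for d in dims), 1)
           for name, dims in groups.items()}
print(summary)
```

The useful property is separability: a process can score high on impact potential and low on automation readiness at the same time, which is precisely the "fix it before you automate it" signal.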
4. Optimisation Recommendations (Before You Spend Money Automating the Wrong Thing)
Here is the part that saves you the most money and gets the least applause.
Automation amplifies whatever it touches. If the process is clean, automation makes it faster, cheaper, and more reliable. If the process is a mess – and most processes are, to some degree, a mess – automation makes the mess permanent, expensive, and very difficult to undo.
So before we recommend what to automate, we recommend what to fix. Or, better still, what to delete.
Typical recommendations include:
Steps to delete, merge, or standardise – the approvals that exist because someone got blamed once in 2019; the checks that duplicate other checks; the handoffs that exist because two teams do not trust each other's data
Approval reductions – not the elimination of governance, but the right-sizing of it; most approval chains are designed for the worst case and applied to every case, which is like requiring a fire inspection every time someone boils a kettle
Input and data fixes – stop the problem at source instead of building an industry downstream to clean it up; bad data is not a technology problem, it is a design problem, and it is dramatically cheaper to fix at the point of entry than at the point of use
Ownership and handoff clarification – the places where "someone should probably..." is doing the work of an actual role definition
Quick wins separated from structural improvements – so you know what to do Monday and what to plan for Q3
This is the unsexy work. Nobody has ever been promoted for a PowerPoint slide that reads "we deleted four steps." But deletion is the only technology with a one hundred percent uptime guarantee. It never breaks. It never needs a licence renewal. It is perfect.
5. An Automation Shortlist (Matched to the Right Approach, Not the Trendiest One)
Once the process is clean – or at least cleaner – we identify which steps are good automation candidates, and we match them to the right technology. Not the most exciting technology. The right one.
There is a hierarchy, and it runs in order of increasing fragility:
First, delete the step. Second, configure your existing system. Third, connect systems via APIs. Fourth, and only when nothing else works, use UI automation or RPA.
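The hierarchy above can be sketched as a decision function. The predicates and the example step are invented for illustration; a real audit applies judgement to each step, not code:

```python
# Hypothetical sketch of the "increasing fragility" hierarchy.
# Each predicate below is an assumption for the example.

def choose_approach(step: dict) -> str:
    if not step["adds_value"]:
        return "delete the step"               # first preference: less process
    if step["native_feature_exists"]:
        return "configure the existing system" # second: no new moving parts
    if step["api_available"]:
        return "connect via API"               # third: a stable contract
    return "UI automation / RPA"               # last resort: most fragile

example_step = {"adds_value": True, "native_feature_exists": False,
                "api_available": True}
print(choose_approach(example_step))
```

The ordering encodes the point of the section: each rung down adds fragility, so you only descend when the rung above genuinely does not apply.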
The shortlist gives you:
Top opportunities ranked by impact, effort, and risk – not a list of everything that could be automated, but a prioritised view of what should be
Recommended solution type – workflow automation, API integration, RPA, document automation, or AI – chosen for the work, not the sales cycle
Dependencies – data quality, system access, security requirements, and the subject-matter experts you will need on speed dial
Expected benefits – in terms finance departments actually respect: time saved, errors reduced, SLA improvement
A suggested pilot candidate – the one automation that will deliver the fastest return and the most useful lessons for everything that follows
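An impact/effort/risk ranking of the kind described above can be sketched in a few lines. The candidates, scores, and weighting scheme are all invented for illustration; any real shortlist would use the audit's own evidence:

```python
# Hypothetical prioritisation: rank candidate automations by a simple
# weighted score. One of many possible schemes, not a recommendation.

candidates = [
    # (name, impact 1-5, effort 1-5 where 5 = hardest, risk 1-5 where 5 = riskiest)
    ("invoice matching", 5, 2, 2),
    ("report collation", 3, 1, 1),
    ("customer onboarding", 4, 4, 3),
]

def priority(impact: int, effort: int, risk: int) -> float:
    # Favour impact, penalise effort and risk. The weights are assumptions.
    return impact * 2 - effort - risk

ranked = sorted(candidates, key=lambda c: priority(*c[1:]), reverse=True)
pilot = ranked[0][0]  # top-ranked candidate suggested as the pilot
print(ranked)
```

The scheme matters less than the discipline: scoring forces the "everything could be automated" list down to an ordered "this first" list, with the pilot falling out of the top of it.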
The pilot matters more than people think. A good pilot does not just save money. It builds institutional confidence – the conviction, backed by evidence, that this approach works here, in this organisation, with these systems and these people. That confidence is worth more than any business case, because it is what turns a single project into a programme.
6. An Implementation Roadmap (Sequenced, Realistic, and Designed to Survive Contact With Reality)
A plan without a sequence is a wish list. And wish lists, in corporate life, have a well-documented tendency to become shelfware – expensive documents that everyone agreed were excellent and nobody ever acted on.
The roadmap tells you what to do first, what depends on what, and how to move from pilot to production without the kind of heroic, all-hands-on-deck chaos that organisations mistake for momentum.
You get:
A phased roadmap – pilot, iterate, scale – with the humility to acknowledge that the first version will not be perfect and the discipline to plan for that
Timeline and effort estimates – ranges, not false precision; anyone who gives you an exact date for an automation rollout is either lying or has never done one
A testing and measurement plan – what you will measure, how often, and what "good" looks like – because an automation without monitoring is a lodger without house rules
Monitoring and support assumptions – who watches the automation after launch, and what they do when it drifts; because it will drift
An executive readout and next-step plan – so the people who approve the budget understand what they are approving, and the people who do the work know where to start
Why This Matters More Than the Automation Itself
The audit is not a delay. It is the thing that prevents the expensive kind of delay – the kind where you discover, six months and several hundred thousand dollars later, that you automated a twelve-step process that should have been four steps, and now the four unnecessary steps are embedded in code, connected to three other systems, and owned by a vendor who charges for change requests.
The sequence is simple: understand, measure, simplify, then automate.
Do it in that order, and you build something that works. Do it in any other order, and you build something that looks like it works β until the one person who understood the workaround goes on holiday.
Book a process audit or discovery call
If you are considering AI in business process automation and would prefer the educational variety that does not cost six figures, start with a structured discovery. A process audit identifies where workflow automation, RPA, and AI genuinely belong – and, just as importantly, where they do not.