Mark Hayes, August 10th, 2019.
The first thing to look at in improving any process is the big picture. Not simply the process in isolation, but how it fits in with the rest of the organisation, the impact it has on other processes and vice versa.
Improving individual processes is pointless if they don't interact efficiently within the organisation as a whole, with customers and with suppliers.
If the lead time for a task that only takes a few minutes, such as adding a user to a work-group, is more than a couple of hours something is seriously wrong.
Counterproductive Targets and Incentives are Mean, not Lean!
Setting targets and offering incentives by team or department will likely increase friction and reduce overall quality and efficiency. It discourages creativity and cooperation while encouraging team selfishness and short-sightedness.
Insular savings may be relatively easy to identify and deliver, but they will often be made at the cost of other parts of the organisation, or even the customer. For example, a decrease in headcount will generally lead to an increase in backlog. The longer it takes to get something done, the bigger the cost to those doing the waiting, to other teams and to customers.
Instead of setting targets based on cost cutting or headcount reductions, aim for changes that add value. In particular aim for:
- Backlog elimination
- Waiting time reduction
- Process simplification
These areas may not necessarily decrease cost in the team providing the service. They will however improve the overall efficiency, and ultimately profitability, of the organisation as a whole.
Equally beneficial, it will encourage, and raise the profile of, the more creative employees: the people who add value and bring the kinds of new ideas that help an organisation thrive; the ones who often leave because they feel their talents would be better utilised elsewhere, and whose efforts are eclipsed by the egos and politics of administration.
Most successful, quality organisations were created by brilliant people, not administrators.
Look at Apple, for example. When the board of directors fired Steve Jobs, their short-sightedness almost destroyed the company. They thought he was too obsessed with quality and unable to turn out new products fast enough to compete. When he returned, he replaced the board and made Apple what it is today: a market leader that is the very essence of both quality and innovation.
Instead of encouraging and rewarding competition and back-biting, encourage cooperation. Get employees to look for better, more productive ways to interact with other teams as well as provide a better service.
Directing incentives at the collective will produce global results as opposed to isolated ones. If a particular team needs help, get everyone that interacts with them involved instead of looking at the problem in isolation. Better still, nurture a culture that encourages input from everyone.
Focusing on individuals might make one person feel good for a day, but it doesn't do much for the team. It can even be embarrassing and nurture resentment.
In Japan, for example, praise is generally directed at the team as a whole. The culture of mutual respect means that if a single person is exceptional in some way, they understand the rest of the team appreciates it. Workers value team status above personal recognition, and recognising the team's accomplishments is enough.
Many organisations in the UK take completely the opposite approach and then pervert it with office politics. Praise is often given not to the best, but to the loudest and most politically active, or even the one that's nowhere near as bad as they once were.
How many times do we see praise given not because of a job well done, but because they got there in the end? Or, as often as not, to people that "accept" the credit due to others.
Ultimately, the need to be singled out comes from the fact that most employees in the UK don't feel secure in their jobs. They feel the need to stand out at the expense of others. This is cultural, and one of the main causes of friction in process improvement.
The fact is, there are very few people out there that are comfortable initiating changes that might see them out of a job. They will resist, not contribute and be obstructive if they can. Often holding back on great ideas until a better manager comes along, one that might just see them shine.
The sad thing is, the vast majority of "managers" in large organisations will take as much of the credit as they can get away with, regardless of whether it's deserved.
Backlogs are Toxic!
Generally, having a task carried out later does not make it any less time consuming or costly. The argument that things need to be scheduled to be efficient simply doesn't wash. Ask yourself, what are they doing right now that's preventing them from doing something straight away?
The answer will no doubt be, they are working on something that a) shouldn't take too long, and b) was requested a while ago. In fact, they've got a lot of things to do that were requested a while ago. I.e. they have a backlog.
So what advantage does having a backlog give? None whatsoever.
If a job takes five minutes, expecting someone to wait any longer than ten minutes, just means that each minute they are waiting something is not getting done. In other words, the backlog is costing. Of course there are things that don't need to be done straight away, but that's not relevant and doesn't in any way excuse delays that impact progress or efficiency.
A backlog is not something a process is born with; when a process goes live, it doesn't wait until it has acquired enough of a backlog to become a bottleneck before it starts processing. So why do so many processes maintain a constant backlog?
Apart from some of the points made in the previous section, two things spring to mind:
- Scheduling: While only partly to blame, scheduling would not be necessary if things were done straight away.
- Job Security: A good backlog helps people feel safe in their positions and gives "managers" something to do and go on about.
Controversial perhaps, but if you take away the need for scheduling, most of the problems go away and managers don't have to "manage" the backlog any more. As a result the people performing the actual work have more time to get on with it.
This might sound a bit simplistic, but in almost every large organisation lead times for the simplest tasks are one of the most significant causes of waste and friction.
For example, in every large organisation I've worked in, with the exception of only one, new starters wait between three days and a week before they have everything they need to do their job. Sometimes longer.
The cost and inconvenience of this is staggering. Yet in most cases the new starter process was initiated between a week and a month before the start date. If the time it takes to actually set up a new starter is fixed, why the wait? Is it saving us any money, making us any money or bringing us some other benefit? Of course not, it's just bad service and weak management.
Some might think it's no big deal but... In slightly less than half of those organisations, existing employees will log into a spare workstation, under their own credentials, and tell the new starter to improvise until they get a login of their own.
This is a primary security breach almost everywhere that deals with personal data, money and any kind of sensitive information. In fact, if it's not then that particular organisation is likely not GDPR compliant.
Okay... So I'm picking holes a bit. My point is that friction like this is not only costly, it's dangerous.
Dangerous because employees seek out less formal ways to get things done efficiently. This makes it harder to identify process improvements because the process being improved is no longer the process being worked. Not just that, but in situations like the example above, where policy has been badly implemented or creates unnecessary or inconvenient hurdles, it gets ignored.
After a bit of meandering then, my point is: don't accept a service that is slow and causing bottlenecks, and don't try to work around it; fight it until it changes! Always look for organisation-wide improvements, and don't think these obstacles are set in stone.
When analysing a process that needs streamlining, highlight and quantify external bottlenecks. Find out why they take so long and who is responsible, not for finger pointing, but so they can also improve their process.
Define and quantify how much time and effort is really involved, list the benefits removing the bottleneck will bring, and how much inconvenience and overhead they are currently adding.
Target these issues. Point out that they may have made sense at the time, in isolation, but that in retrospect they were clearly bad ideas that only caused problems and increased costs elsewhere. Don't validate them; they are what they are, and addressing them as such will help develop a culture more compatible with continuous process improvement.
It's Not all about Time and Money, it's about Flow
Focussing on the most time and resource hungry steps in a process is not always the most efficient way to identify savings and improvements.
In almost all processes, the path most conducive to smooth flow and flexibility will be the most efficient and cost effective. If customers are involved, it will also offer the best customer experience.
In other words, look at ways to:
- Reduce friction.
- Reduce the number of process steps.
- Reduce the number of interactions.
- Simplify and consolidate.
Sounds a bit like I'm repeating myself hey...
Reducing the number of interactions, for example between user and interface or customer and supplier, reduces complexity and cost while increasing quality and efficiency because... there's less to do. Reducing the number of steps and simplifying a process reduces possible sources of friction and defects. Reducing friction and defects reduces the amount a process needs to be managed.
And ultimately, reducing the amount a process needs to be managed reduces the amount of time workers lose to management interactions.
The easier it is for a user to provide a good service, the better the relationship with the customer.
Don't Just Look at the Existing System or Process
Look at the context!
Given that a large number of business processes involve systems that are either incomplete or fragmented, significant savings and improvements are often missed by focusing on systems without looking at how the spaces between them are filled. I.e. the less obvious and informal sub-processes.
In these instances where a task, problem or product is passed around, look at:
- How it's passed around, particularly if it's a data exchange that could be automated.
- How it's tracked.
- Service level agreements between parties or processes involved.
- Bottlenecks, sub-processes and work-arounds.
- Typical reasons for failure or delay when processing, handing off, or receiving.
- Anything being done, formal or informal, to make things quicker or to validate the subject matter.
I've seen many instances where a team maintains a spreadsheet, for example, to track work or provide user-friendly reference data that supports their function. In each case a) the spreadsheet could be automated, and b) its existence exposed exactly what was lacking in the current process.
Also, particularly when replacing something like a spreadsheet driven process with a database application, or simply automating it, don't assume that recreating the manual process is the right way to go. You may well have gained a few time savings through automation, but in all likelihood you will have created a highly inefficient process that someone else will have to replace in the near future.
Such spreadsheet driven processes, for example that enrich, aggregate and QC granular data in order to produce a report, tend to be broken down into user friendly steps. Each step in the process has its own worksheet, within the workbook, because that's the easiest way to manually manage and process the data.
This is completely unnecessary for database applications and automated spreadsheets. It over complicates the end product and completely muddies the water. So, unless requirements stipulate otherwise, just produce the end result, don't produce an output for each stage the data goes through.
I say this because:
- Doing so requires more code.
- The code will be less efficient.
- The purpose of the end process will be less clear.
- Having all that extra data not only clogs up networks, but in some cases can also raise confidentiality issues.
In other words, if it's not necessary don't do it. Existing processes often appear to be more complex, or have more stages, because of the way they are run. Automating them, or replacing them with an application, means the bulk of the work should no longer be visible.
If however, the requirements stipulate that each stage must be shown in the output, or the output should contain the detail, don't just do it. Make sure there is a good business reason first. There's a good chance that there's a downstream reconciliation process, or something, that is no longer required. Argue the fact that once a system has been tested and proven, the need to reconcile its output should, most likely, have gone away.
Finally, if such a process is properly defined and coded up, any data quality and reconciliation checks should be built in! If the data is dodgy, incomplete or doesn't reconcile in any way, the code should throw it out and produce a report that proves the issue.
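To make the idea concrete, here is a minimal sketch of what "built in" validation might look like. It assumes Python, and every field name (`account`, `cost_centre`, `amount`) is hypothetical: dodgy or incomplete rows are thrown out, and each rejection is recorded in a report that proves the issue.

```python
# Hypothetical sketch: reject bad rows up front and produce a report
# proving each issue, instead of relying on downstream reconciliation.
def validate_rows(rows, required=("account", "cost_centre", "amount")):
    """Split rows into (clean, report); report entries record the
    row number and the reason it was rejected."""
    clean, report = [], []
    for i, row in enumerate(rows, start=1):
        missing = [f for f in required if row.get(f) in (None, "")]
        if missing:
            report.append({"row": i, "issue": f"missing fields: {missing}"})
            continue
        try:
            float(row["amount"])  # amounts must be numeric
        except (TypeError, ValueError):
            report.append({"row": i, "issue": "amount is not numeric"})
            continue
        clean.append(row)
    return clean, report
```

The design point is simply that validation and reporting are one step: nothing reaches the output without passing the checks, so there is nothing left to reconcile afterwards.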
Ask Controversial Questions, and... Don't be Afraid of the Answers!
This is important! There's a lot of ground to be won by talking to the people who actually work a process outside the earshot of management. It's often the case that office politics, or a simple fear of downsizing due to process improvement, muddy the water.
The thing to remember is that the people who actually work a process, are the process. Allow them to speak freely and ask them things like:
- What takes the most time?
- What could be better?
- What's not necessary any more?
- What or who does the process depend on?
- Where are the bottlenecks?
- What are the inputs and where do they come from?
- What kind of defects are found in the inputs, how are they handled and how much effort do they require to resolve?
- What are the rules, who creates them and how are they applied?
- What are the exceptions to the rules?
- Who, or what, is responsible for the majority of wasted effort and defects?
There are lots of "right" questions to ask depending on context, my point is that you need to get the users talking.
Example: The Impact of Getting it Wrong
Not that many years ago I was building a document control and validation system for a process that ran approximately once a quarter. It was a very complex process that required a lot of manual effort and considerable business knowledge.
The team had, sometime earlier, been visited by a "process improvement" team. As a result, they'd been given a custom built in-house solution to make their lives easier. This solution had taken the better part of a year to deliver.
The solution delivered was so clumsy the team didn't actually use it. The effort to run and maintain it was greater than the time-savings it offered. Not just that, it didn't actually work.
The key problem was that the process improvement bods had failed to ask the two most important questions for this type of process, i.e:
- What do you spend most of your time doing?
- How are the rules constructed?
The final solution, based almost entirely on the answers to these two questions, cut manual effort by about two thirds and reduced turnaround times on validation issues from weeks to seconds.
Improving Restatement Validation and Processing
My first task was to find out how they were getting on with the new system, what time-savings it had brought and what it lacked. I was hoping I'd get away with simply enhancing it. It had taken a process improvement analyst and a programmer almost a year to deliver, and I didn't want to see that effort go to waste. Testing and tweaking had also consumed a huge amount of the team's resource over the previous year.
After a bit of beating around the bush I was told that they didn't actually use the new system. Development had taken so long and been so problematic that they signed it off just so they wouldn't have to keep testing it and talking about it. Despite frequent meetings, before and during development, the "Process Improvement" team had failed to grasp how the process actually worked.
Their solution was so bad that the users didn't even want me to look at it. A clean slate was required; there was nothing the application offered that brought any benefit to the team. Not just that, it didn't work properly.
Given this process only happens once a quarter, and usually completes in about a month, my next question was "What do you spend most of your time doing?".
It turned out that tracking the submissions was a big time waster. Maintaining a list of submissions, with details, status and issues, combined with managing the files and emails submitted, was consuming a week or more per cycle. That didn't include time spent validating and correcting submissions from end users.
This was all easily automated by scraping and validating submissions from the group mailbox. Filing the emails and submission workbooks, generating email responses with validation errors and loading the submission data into a SQL Server database was all handled automatically.
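The shape of that automation can be sketched as a single validate-and-load loop. This is illustrative only: the real process worked from a group mailbox and loaded into SQL Server, so here plain dicts stand in for parsed submissions, an in-memory sqlite3 database stands in for SQL Server, and all names are assumptions.

```python
import sqlite3

def process_submissions(submissions, conn):
    """Validate each submission; load the good ones into the database
    and return error responses for the rest."""
    conn.execute("CREATE TABLE IF NOT EXISTS submissions "
                 "(sender TEXT, account TEXT, amount REAL)")
    responses = []
    for sub in submissions:
        errors = []
        if not sub.get("account"):
            errors.append("account is missing")
        if not isinstance(sub.get("amount"), (int, float)):
            errors.append("amount is not numeric")
        if errors:
            # In the real process this became an email reply to the
            # submitter listing the validation errors.
            responses.append({"to": sub.get("sender"), "errors": errors})
        else:
            conn.execute("INSERT INTO submissions VALUES (?, ?, ?)",
                         (sub["sender"], sub["account"], sub["amount"]))
    conn.commit()
    return responses
```

Because the loop both files the clean data and generates the error responses, there is nothing left for a person to track by hand; the status report falls out of the database for free.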
The tracking workbook was then replaced with a simple status report requiring zero effort. In fact, given that most of the administrative processes were now automated, the tracking report ceased to be core to the process and was simply used as an overview of progress to date.
One week or more saved 4 times a year on tracking, much much more on validating... yippee!
The process improvement bods had not asked the question, they just looked at the documentation and based their delivery on that. They had also not considered the dynamic nature of the rules the process was based on.
The rules were changing each cycle. Not completely, but enough that the process improvement team's solution would have to be updated every cycle: the reference data, and as often as not the code and data structure too!
This was unacceptable as maintaining the solution was clearly going to take more time than it saved.
So then my question was "How are these rules changing?".
Well, the answer was they weren't just changing, new rules were being added as well as old ones modified or deleted. Scary stuff, but no... The rules were based on hierarchical sets of either accounts or cost centres and shared a common set of characteristics. All of them!
The mistake had been to deliver a solution based on fixed sets of accounts and cost centres, each set being a specific sub-process with its own data tables, screens and code. Surprisingly, I later found out there were several similar high-maintenance applications in the organisation doing the same thing in the same way.
The rules, or characteristics, were essentially quite simple. Some prevented postings happening at all between certain accounts or cost centres, others applied limits based on aggregates either by child or parent node, some served to kick off an authorisation process or notify the end user that additional steps or information were required.
The solution was simple, allow users to create their own rule sets and set the conditions, aggregation levels, error messages and warnings as they saw fit.
Rule sets, as data rather than structure or code, can now be created or deleted, and can include any number of cost centres and accounts, or parent nodes. Conditions and aggregations are set by simply ticking a box.
Rules can be either "Critical" or "Warning Only" and validation messages, including what was wrong, why and how it could be remedied, can be set in order to instruct the end user how to proceed.
Validation issues are now being identified and fed back to the end user by simply clicking a button. The process is self documenting and system maintenance is simply a matter of uploading the latest cost centres and accounts, and maintaining the rule sets each quarter if required.
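The "rules as data" idea can be sketched in a few lines. This is not the delivered system: the rule-set records, field names and example rules below are all invented for illustration. The point is that every rule set is a plain record naming its members, condition, severity and message, and one generic function applies them all.

```python
# Hypothetical rule sets held as data, not as per-set code modules.
RULE_SETS = [
    {"name": "blocked accounts", "accounts": {"A100", "A200"},
     "condition": "block", "severity": "Critical",
     "message": "Postings to this account are not allowed."},
    {"name": "travel cap", "accounts": {"T300", "T301"},
     "condition": "limit", "limit": 1000.0, "severity": "Warning Only",
     "message": "Aggregate postings exceed the travel limit."},
]

def validate_postings(postings, rule_sets=RULE_SETS):
    """Apply every rule set generically; return (severity, message) issues."""
    issues = []
    for rule in rule_sets:
        hits = [p for p in postings if p["account"] in rule["accounts"]]
        if rule["condition"] == "block" and hits:
            issues.append((rule["severity"], rule["message"]))
        elif rule["condition"] == "limit":
            total = sum(p["amount"] for p in hits)  # aggregate per rule set
            if total > rule["limit"]:
                issues.append((rule["severity"], rule["message"]))
    return issues
```

Adding, changing or deleting a rule is now just an edit to the data; the validation code never changes, which is exactly what made quarterly maintenance trivial.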
Submission validation and processing time reduced from weeks to seconds, another yippee!
Most importantly, the complexity became irrelevant because the system explained exactly what was wrong. And with turnaround times reduced to seconds, the entire submission process, in almost all cases, could be completed by the end user in a single sitting.
By approaching the process generically and handling all rule sets the same way through a common interface, as opposed to random sets of criteria and reference data each handled as a separate module, maintenance and training requirements were also minimised.
The point is, it's often the things people shy away from, because of complexity or habit, that should be focussed on the most. These tend to be the primary reasons a process is inefficient.
Even if resources dictate that only a quick win is possible at the time, every effort should be made to at least document areas for further improvement and then revisit them later.