The Spice of Life

Variety is the spice of life. If everything were the same it would be rather boring. Happily, there is natural variety in everything.

Let me use an example to explain:

I was thinking about this as I was walking the dog the other day. I use the same route, along the beach each day, and it takes me roughly the same time – about 30 minutes.

If I actually timed myself each and every day (and didn’t let this fact change my behaviour) then I would find that the walk might take me on average 30 minutes but it could range anywhere between, say, 26 and 37 minutes.

I think you would agree with me that it would be somewhat bizarre if it always took me, say, exactly 29 minutes and 41 seconds to walk the dog – that would just be weird!

You understand that there are all sorts of reasons as to why the time would vary slightly, such as:

  • how I am feeling (was it a late night last night?);
  • what the weather is doing, whether the tide is up or down, and even perhaps what season it is;
  • who I meet on the way and their desired level of interaction (i.e. they have some juicy gossip vs. they are in a hurry);
  • what the dog is interested in sniffing…which (I presume) depends on what other dogs have been passed recently;
  • if the dog needs to ‘down load’ or not and, if so, how long this will take today!
  • …and so on.

There are, likely, many thousands of little reasons that would cause variation. None of these have anything special about them – they are just the variables that exist within that process, the majority of which I have little or no control over.

Now, I might have timed myself as taking 30 mins. and 20 seconds yesterday, but taken only 29 mins. and 12 seconds today. Is this better? Have I improved? Against what purpose?

Here’s 3 weeks of imaginary dog walking data in a control chart:

[Control chart: three weeks of dog-walking times]

A few things to note:

  • You can now easily see the variation within the process: the walk took between 26 and 37 minutes and, on average, 30 minutes. This variation stays hidden until you visualise it;
  • The red lines are the upper and lower control limits: they are calculated mathematically from the data…you don’t need to worry about how, but they signify the range within the data. The important bit is that all of the times sit within these two red lines, which shows that my dog walking is ‘in control’ (stable) and therefore the time range that it will take tomorrow can be predicted with a high degree of confidence!*
  • If a particular walk had taken a time that sits outside of the two red lines, then I can say with a high degree of confidence that something ‘special’ happened – perhaps the dog had a limp, or I met up with a long lost friend or…..
  • Any movement within the two red lines is likely to just be noise and, as such, I shouldn’t be surprised about it at all. Anything outside of the red lines is what we would call a signal, in that it is likely that something quite different occurred.

* This is actually quite profound. It’s worth considering that I cannot predict if I just have a binary comparison (two pieces of data). Knowing that it took 30 mins 20 secs. yesterday and 29 mins 12 secs. today is what is referred to as driving by looking in the rear view mirror. It doesn’t help me look forward.
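For the curious, the “mathematically worked out” control limits are not mysterious. One common method, the XmR (individuals and moving range) chart, places the limits at the mean plus or minus 2.66 times the average moving range. Here is a minimal sketch in Python; the walk times below are made up for illustration, not taken from the chart:

```python
# Sketch: XmR (individuals) control-chart limits from a series of observations.
# The 2.66 constant scales the average moving range up to ~3-sigma limits.

walk_minutes = [30, 28, 31, 33, 27, 30, 29, 36, 26, 31,
                30, 32, 29, 28, 34, 30, 27, 31, 30, 29, 33]

mean = sum(walk_minutes) / len(walk_minutes)

# Moving range: absolute difference between each pair of consecutive walks
moving_ranges = [abs(b - a) for a, b in zip(walk_minutes, walk_minutes[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper_limit = mean + 2.66 * avg_mr   # upper control limit (red line)
lower_limit = mean - 2.66 * avg_mr   # lower control limit (red line)

# Points outside the limits are signals; movement inside them is just noise.
signals = [t for t in walk_minutes if t > upper_limit or t < lower_limit]

print(f"mean={mean:.1f}, UCL={upper_limit:.1f}, LCL={lower_limit:.1f}")
```

With these (stable) data the signals list comes back empty: every walk sits inside the red lines, which is exactly what “in control” means.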

Back to the world of work

The above example can equally be applied to all our processes at work…yet we ignore this reality. In fact, worse than ignoring it, we act like this isn’t so! We seem to love making binary comparisons (e.g. this week vs. last week), deriving a supposed reason for the difference and then:

  • congratulating people for ‘improvements’; or
  • chastising people for ‘slipping backwards’ whilst coming up with supposed solutions to do something about it (which is, in fact, merely tampering).

So, hopefully you are happy with my walking the dog scenario….here’s a work-related example:

  • Bob, Jim and Jane have each been tasked with handling incoming calls*. They have each been given a daily target of handling 80 calls a day as a motivator!

(* you can substitute any sort of activity here instead of handling calls: such as sell something, make something, perform something….)

  • In reality there is so much about a call that the ‘call agent’ cannot control. Using Professor Frances Frei’s 5 types of service demand variation, we can see the following:
    • Arrival variability: when/ whether calls come in. If no calls are coming in at a point in time, the call agent can’t handle one!
    • Request variability: what the customer is asking for. This could be simple or complex to properly handle
    • Capability variability: how much the customer understands. Are they knowledgeable about their need or do they need a great deal explaining?
    • Effort variability: how much help the customer wants. Are they happy to do things for themselves, or do they want the call agent to do it all for them?
    • Subjective preference variability: different customers have different opinions on things e.g. are they happy just to accept the price or are they price sensitive and want the call agent to break it down into all its parts and explain the rationale for each?

Now, the above could cause a huge difference in call length and hence how many calls can be handled…but there’s not a great deal about the above that Bob, Jim and Jane can do much about – and nor should they try to! It is pure chance (a lottery) as to which calls they are asked to handle.

As a result, we can expect natural variation as to the number of calls they can handle in a given day. If we were to plot it on a control chart we might see something very similar to the dog walking control chart….something like this:

[Control chart: daily calls handled per agent]

We can see that:

  • the process appears to be under control and that, assuming we don’t change the system, the predictable range of calls that a call agent can handle in a day is between 60 and 100;
  • it would be daft to congratulate, say, Bob one day for achieving 95 and then chastise him the next for ‘only’ achieving 77…yet this is what we usually do!

Targets are worse than useless

Let’s go back to that (motivational?!) target of 80 calls a day. From the diagram we can see that:

  • if I set the target at 60 or below then the call agents can almost guarantee that they will achieve it every day;
  • conversely, if I set the target at 100 or above, they will virtually never be able to achieve it;
  • finally, if I set the target anywhere between 60 and 100, it becomes a daily lottery as to whether they will achieve it or not.
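The three bullets above can be sketched with a small simulation. This is purely illustrative: it assumes (unrealistically simply) that the calls handled per day vary uniformly between 60 and 100, and the function name is mine, not from the post:

```python
# Sketch: hitting a target set inside a stable system's range is a lottery.
# Hypothetical assumption: daily calls handled vary randomly between 60 and 100.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def days_hitting_target(target, days=1000):
    """Count days on which a randomly varying call agent meets the target."""
    return sum(1 for _ in range(days)
               if random.randint(60, 100) >= target)

easy = days_hitting_target(60)     # target at/below the lower limit: always met
hard = days_hitting_target(101)    # target above the upper limit: never met
lottery = days_hitting_target(80)  # target inside the limits: roughly a coin toss

print(easy, hard, lottery)
```

The target of 60 is met on all 1,000 simulated days, the target of 101 on none, and the “motivational” target of 80 on roughly half – with the day-to-day outcome decided by chance, not by the agent.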

….but, without this knowledge, we think that targets are doing important things.

What they actually do is cause our process performers to do things which go against the purpose of the system. I’ve written about the things people understandably do in an earlier post titled The trouble with targets.

What should we actually want?

We shouldn’t be pressuring our call agents (or any of our process performers) to achieve a target for each individual unit (or for an average of a group of units). We should be considering how we can change the system itself (e.g. the process) so that we shift and/or tighten the range of what it can achieve.

So, hopefully you now have an understanding of:

  • variation: that it is a natural occurrence…which we would do well to understand;
  • binary comparisons and that these can’t help us predict;
  • targets and why they are worse than useless; and
  • system, and why we should be trying to improve its capability (i.e. for all units going through it), rather than trying to force individual units through it quicker.

Once we understand the variation within our system we now have a useful measure (NOT target) to consider what our system is capable of, why this variation exists and whether any changes we make are in fact improvements.

Going back to Purpose

You might say to me “but Steve, you could set a target for your dog walks, say 30 mins, and you could do things to make it!”

I would say that, yes, I could and it would change my behaviours…but the crucial point is this: What is the purpose of the dog walk?

  • It isn’t to get it done in a certain time
  • It’s about me and the dog getting what we need out of it!

The same comparison can be said for a customer call: Our purpose should be to properly and fully assist that particular customer, not meet a target. We should expect much failure demand and rework to be created from behaviours caused by targets.

Do you understand the variation within your processes? Do you rely on binary comparisons and judge people accordingly? Do you understand the behaviours that your targets cause?

DUMB

We are all taught at an early age in our careers (i.e. ‘Management for dummies’) that we should cascade down S.M.A.R.T objectives. You will come across it as an idea that is so deeply rooted that it has been co-opted as ‘common sense’.

Sounds so good, it must be right, right?

Let’s just remind ourselves what SMART stands for:

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Time bound

Let’s then also remind ourselves about the definition of a system (taken from my earlier ‘Harmony or cacophony’ post):

“A system is a network of interdependent components that work together to try to accomplish the aim [purpose] of the system.” (W. Edwards Deming)

The cascaded objectives technique (known as Management by Objectives, or M.B.O) is used by ‘Command-and-control’ organisations in the mistaken belief that, if we all achieve our cascaded personal objectives, these will then all roll up to achieve the overall goal (whatever that actually is).

This misunderstands:

  • the over-riding need for all the parts (components) of a system to fit together; and
  • the damage caused by attempting to optimise the components…because this will harm the whole system.

A simple illustrative example (taken from Peter Scholtes’ superb book ‘The Leader’s Handbook’):

Let’s say that we run a delivery company – our system. Fred, Amy and Dave are our drivers – our people components. If we provide them each with SMART personal objectives cascaded down (and offer performance-based pay), we might assume that they will all be ‘motivated’ to achieve them and therefore, taken together, the purpose of the whole will be achieved. Sounds great – I’ll have some of that!

…but what should we expect?

  • Each driver might compete with the others to get the best, most reliable, largest-capacity truck;
  • Each driver might compete for the easiest delivery assignments;
  • Drivers might engage in ‘creative accounting’: such as trying to get one delivery counted as two; or unloading a delivery somewhere nearby where it can be made after hours so that they can go back to the warehouse to get more jobs;
  • If we have created a competition out of it (say, the getting of a desirable award) then we can expect to see little driver co-operation, more resentment and perhaps even subtle sabotage.

The above shows that the sum of the outcomes will not add up to what we intended for the whole system…and, in fact, will have caused much unmeasured (and likely immeasurable) damage!

This is a good point to bring out Eli Goldratt’s classic quote:

“Tell me how you will measure me and I will tell [show*] you how I will behave.”

* I prefer to use the word ‘show’ since most people won’t tell you! They know their actions aren’t good for the overall system (they aren’t stupid) and so don’t like telling you what daft practices the management system has ended up creating.

A critique of S.M.A.R.T:

“SMART doesn’t tell us how to determine what to measure, and it assumes knowledge – otherwise how do we know what is ‘achievable’ and ‘realistic’? It is only likely to promote the use of arbitrary measures that will sub-optimise the system.” (John Seddon)

If an individual (or ‘team’) is given a truly SMART objective then, by definition, it would have to have been set so that they could achieve it on their own….otherwise it would be unrealistic.

Therefore any interdependencies it has with the rest of the organisational system would have to have been removed…which, clearly, given the definition of a system means one of the following:

  • if all interdependencies had been successfully removed…then meeting the resultant SMART objective will be:
    • a very insignificant (and very possibly meaningless) achievement for the system; and/or
    • sub-optimal to the system (i.e. work against the good of the whole)

OR

  • if (most likely) it was in fact not possible to truly remove the interdependencies…despite whatever delicate and time-consuming wordsmithing was arrived at…then:
    • it will be a lottery (not really under the person’s control) as to whether it can be achieved; and/or
    • it will ‘clash’ with other components (and their supposedly SMART objectives) within the system

So where did the post title ‘D.U.M.B’ come from? Here’s a thought provoking quote from John Seddon:

“We should not allow a plausible acronym to fool us into believing that there is, within it, a reliable method.”

Consider     

  • SMART: Specific, Measurable, Achievable, Realistic, Time-bound

With

  • DUMB: Distorting, Undermining, Management-inspired, Blocking improvement

Does the fact that the acronym and its components ‘match’ make it any more worthy?

Cascaded personal objectives will either be ineffective, detrimental to the whole system or a lottery (outside of the person’s control) as to whether they can be achieved.

We need to move away from cascaded personal objectives and, instead:

  • see each horizontal value stream as a system, with customers and a related purpose;
  • provide those working within these systems with visible measures of the capability of the system as against its purpose; and
  • desist from attempting to judge individuals’ ‘performance’ and thereby allow and foster collaboration and a group desire to improve the system as a whole.

The trouble with targets

The front-page article in the Press for Friday 7th November 2014 says “Patients ‘forgotten’ in wait for surgery”.

It goes on to say that research published in the NZ medical journal suggests that:

“One in three people requiring elective surgery are being turned away from waiting lists to meet Government targets.”

It should be no surprise to any of us that if a numeric target is imposed on a system then the process performers will do what they can to achieve it, even when their actions are detrimental to the actual purpose of the system. The controlling influence of the targets will be even greater if contingent financial implications are involved (carrots or sticks).

If we viewed a league table of (say) hospitals and wait times, what would this tell us? Would it tell us which:

  • has the best current method as judged against the purpose of the system; or
  • is best at managing the system against the numeric targets?

…and what about quality?

This NZ research is not an isolated or even new incident. John Seddon has been following, and challenging, the fallout from target-setters for many years, across the whole range of UK public sector services. Many of his findings are comic and scary at the same time.

No target-setter should be surprised by the resultant behaviours of process performers and their managers, such as to:

  • Avoid, or pass on, difficult work;
  • Attempt to restrict work in the process, by:
    • making it hard to get into the process; or
    • throwing work back out (‘they didn’t do it correctly’); or
    • inventing new ‘outside the target’ queues earlier in the process
  • Apply the ‘completed’ stamp as soon as possible, and often before the customer has reached the end from their point of view;
  • Earn easy points, by doing things when not strictly necessary…because they will count towards the target

The target-setter has created a ‘survival game’ of ‘how can we make the target’ which replaces ‘serve customer’.

So what to do? How about adding on layers of compliance reporting and inspections to police the process, to spot them doing ‘naughty things’ to meet target and punish bad behaviour…that should work, shouldn’t it?

Thus the battle lines are drawn, with the customer suffering in the cross fire.

Of note, the Press article goes on to explain that the Government target of 6 months is soon to be reduced to 5 and then 4…because, obviously, adding more pressure will motivate them to improve?!

What if we replaced numeric targets with capability measures (which measure the capability of the process against the purpose of the system)…and then used these measures to help us improve?

We can laugh (or cry) at the public sector comedy…but let’s not forget what we do with targets in our own organisations.

‘Management by results’…how does that work?!

The CEO of the company I work for recently shared a tweet with us summing up an insight she gained at an engagement she attended, which read as follows:

“Nick Farr-Jones dinner guest at conference key message focus on process not the scoreboard and you will get result.”

I like this tweet – the words, whilst short, are incredibly important for anyone wanting to make improvements. I thought it useful to dig into this a bit.

Dr W. Edwards Deming was very clear on this point! He set out the practice of ‘Management by results’ (e.g. focusing on a scoreboard of outcomes) as one of the diseases of a ‘modern organisation’ and, instead, proposed that we should spend our time and focus on understanding and improving the processes that produce the results, i.e. the results are the outcome of something, and you can look at the outcome till you are blue in the face…but this won’t make it change!

“The outcome of management by results is more trouble, not less….Certainly we need good results, but management by results is not the way to get good results….It is important to work on the causes of the results – i.e. on the system. Costs are not causes: costs come from causes.” (Deming, The New Economics)

Professor John Seddon (think ‘Deming on steroids’ for service organisations 🙂 ) takes this message further. He notes that the measures used in most organisations are ‘lagging’ or ‘rear-view’ measures – they tell you what you did.

Seddon has a very clear view on measurement but at this time I want to simply put forward his thinking regarding the difference between operational and financial measures. He says that we should use:

  • Operational measures (such as demand for a service, and a process’s capability to deliver against its purpose) to manage; and
  • Financial measures (revenues, costs) to keep the score.

We know that one affects the other but we can never know exactly how and it is waste to divert time and effort to try to do so.

Bringing this back to Nick Farr-Jones and rugby: a rugby coach uses process measures to manage (e.g. passes completed, line-outs won, tackles made…) and the result, quite literally, to keep the score!

So Nick Farr-Jones, Deming and Seddon quite clearly agree with each other. If you work on the capability of a system/value stream/process to deliver against its purpose (as seen from the customer’s point of view) then the results will come.

Finally, you may be thinking ‘ah yes, this is where the balanced scorecard comes in’…there’s a post in there! Watch this space.