Targets on measures of targets on measures of things

In this post I’m going to differentiate between:

  1. Measures of things
  2. Targets on (measures of things)
  3. Measures of (targets on (measures of things)); and
  4. Targets on (measures of (targets on (measures of things)))

Wow, that last one is hard to write, let alone say out loud! You might think that it’s a nonsense (which it is) but, sadly, it’s very common.

Note: I added the brackets to (hopefully) make really clear how each one builds on the last.

I’ll attempt to explain…

1. Measures of things:

Seems straightforward enough: I’m interested in better understanding a thing, so I’d like to measure it¹.

Some examples…

A couple of personal ones:

  • What’s my (systolic) blood pressure level? or
  • How quickly do I ride my regular cycle route?

A couple of (deliberately) generic work ones:

  • How long does it take us to achieve a thing? or
  • How many things did we achieve over a given period?

Here’s a graph of a measure of a thing (in chronological order):

Nice, we can clearly see what’s going on. We achieved 13 things in week 1. Each thing took us anything between 2 and 36 days to achieve…and there’s lots of variation in-between.

It doesn’t surprise me that it varies² – it would be weird if all 13 things took, say, exactly 19 days (unless this had been structurally designed into the system). There will likely be all sorts of reasons for the variation.

However, whilst I ‘get’ that there is (and always will be) variation, the graph allows us to think about the nature and degree of that variation: Does it vary more than we would expect/can explain³? Are there any repeating patterns? Unusual one-offs? (statistically relevant) Trends?

Such a review allows us to ask good questions – to investigate, and to learn from what we find.

“Every observation, numerical or otherwise, is subject to variation. Moreover, there is useful information in variation.” (Deming)
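As a concrete sketch of point 1, here’s how you might summarise a week’s raw durations before anyone mentions a target. The numbers below are purely illustrative, not the post’s actual data – they simply span the 2–36 day range described above:

```python
# Hypothetical durations (in days) for the 13 things achieved in week 1.
# Illustrative values only - chosen to span the 2-36 day range in the text.
durations = [31, 13, 28, 33, 6, 22, 18, 25, 2, 27, 15, 36, 11]

mean = sum(durations) / len(durations)
print(f"{len(durations)} things, min {min(durations)} days, "
      f"max {max(durations)} days, mean {mean:.1f} days")
```

Plotted in chronological order (a simple time-series plot), the raw values are the ‘rich analogue signal’ that the rest of the post gradually destroys.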

2. Targets on (measures of things):

Let’s say that we’ve been asked to achieve a certain (arbitrary⁴) target.

Here’s an arbitrary target of 30 days (the red line) set against our measure:

And here’s how we are doing against that target, with some visual ‘traffic lighting’ added:

Instance (X):              1   2   3   4   5   6   7   8   9   10  11  12  13
Target of 30 days met?     N   Y   Y   N   Y   Y   Y   Y   Y   Y   Y   N   Y

We’ve now turned a rich analogue signal into a dull digital ‘on/off’ switch.

If we only look at whether we met the target or not (red vs. green), then we can no longer see the detail that allowed us to ask the good questions.

  • We met ‘target’ for instances 2 and 3…but the measures for each were quite different
  • Conversely, we met ‘target’ for instances 5 all the way through to 11 and then ‘suddenly’ we didn’t…which would likely lead us to intensely question instance 12 (and yet not see, let alone ponder, the variation between 5 and 11).

The target is causing us to ask the wrong questions⁵, and miss asking the right ones.
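A minimal sketch of the dumbing-down in code (the durations are hypothetical, chosen only to reproduce the N/Y pattern in the table above):

```python
# Hypothetical durations (days) for 13 instances - invented values,
# chosen only to reproduce the N/Y pattern in the table above.
durations = [31, 13, 28, 33, 6, 22, 18, 25, 2, 27, 15, 36, 11]
TARGET = 30  # the arbitrary 30-day target

# The rich analogue signal collapses to an on/off switch:
met = ["Y" if d <= TARGET else "N" for d in durations]
print("".join(met))  # NYYNYYYYYYYNY

# Instances 2 and 3 are both 'Y', yet their durations differ a lot:
print(durations[1], durations[2])  # prints: 13 28
```

Once we only keep `met`, the difference between a 13-day instance and a 28-day instance is gone for good.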

3. Measures of (targets on (measures of things)):

But I’m a fan of measures! So, let’s show a measure over time of how we are doing against our target.

In week 1 we met our 30-day target for 10 out of our 13 instances, which is 77%. Sounds pretty good!

Here’s a table showing how many times we met target for each of the next five weeks:

Week:                           1     2     3     4     5
Things achieved:               13    15    14    11    12
Number meeting 30-day target:  10    14    12     7     8
% meeting 30-day target:      77%   93%   86%   64%   67%

Let’s graph that:

It looks like we’ve created a useful graph, just like in point 1.

But we would be fooling ourselves – we are measuring the movement of the dumbed-down ‘yes/no’ digital switch, not the actual signal. The information has been stripped out.

For example: There might have been huge turbulence in our measure of things in, say, week 3 whilst there might have been very little variation in week 4 (with lots of things only just missing our arbitrary ‘target’)…we can’t see this but (if we want to understand) it would be important to know – we are blind but we think we can see.
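The weekly percentages in the table can be reproduced as follows – note that by this point the counts are all we have; the underlying variation is already gone:

```python
# Counts taken from the table above.
achieved = [13, 15, 14, 11, 12]   # things achieved, weeks 1-5
met_target = [10, 14, 12, 7, 8]   # number meeting the 30-day target

pct = [round(100 * m / a) for m, a in zip(met_target, achieved)]
print(pct)  # [77, 93, 86, 64, 67]

# Two very different weeks (huge turbulence vs. lots of near-misses)
# could produce identical percentages - the % alone can't tell them apart.
```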

4. Targets on (measures of (targets on (measures of things))):

And so, we get to the final iteration:

How about setting an arbitrary target on the proportion of things meeting our arbitrary target…such as achieving things in 30 days for 80% of the time (the red line)…

And here’s the table showing how we are doing against that target:

Week number:                       1   2   3   4   5
80% Target on 30-day Target met?   N   Y   Y   N   N

Which is a double-dumbing down!

We’ve now got absolutely no clue as to what is actually going on!!!

But (and this is much worse) we ‘think’ we are looking at important measures and (are asked to) conclude things from this.

The table (seemingly) tells us that we didn’t do well in weeks 1, 4 and 5, but we did in weeks 2 and 3…
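The final, doubly dumbed-down layer is just one more comparison on top (the percentages come from the earlier table; the 80% threshold is the arbitrary meta-target):

```python
weekly_pct = [77, 93, 86, 64, 67]  # % meeting the 30-day target, weeks 1-5
META_TARGET = 80  # the arbitrary 80% target set on the 30-day target

met = ["Y" if p >= META_TARGET else "N" for p in weekly_pct]
print("".join(met))  # NYYNN - all that survives of 65 raw data points
```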

The base data series used for this example:

In order to write this post, I used the Microsoft Excel random number generator function. I asked it to generate a set of (65) random numbers between 1 and 40 and then I broke these down into imaginary weeks. All the analysis above was on pure randomness.

Here’s what the individual values look like when graphed over time:

(Noting that instances 1 – 13 are as per the graph at point 1, albeit squashed together)
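The post used Excel’s random number generator; an equivalent sketch in Python would be something like the following (the seed is arbitrary, and the week sizes match the earlier table):

```python
import random

random.seed(1)  # any seed will do; the post used Excel's generator

# 65 random "durations" between 1 and 40 days - pure noise.
values = [random.randint(1, 40) for _ in range(65)]

# Break them into five imaginary weeks (sizes as per the earlier table).
sizes = [13, 15, 14, 11, 12]
weeks, start = [], 0
for n in sizes:
    weeks.append(values[start:start + n])
    start += n

# Every "pattern" found downstream of this is a pattern in randomness.
```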

Some key points:

  • There is nothing special about any of the individual data points
  • The 30-day target has got nothing to do with the data
  • There is nothing special about any of the five (made up) weeks within
  • The 80% target on the 30-day target has got nothing to do with anything!

The point: Whilst I would want to throw away all the ‘targets’, ‘measures of target’ and ‘targets on measures of target’…I would like to understand the system and why it varies.

This is where our chance of improving the system is, NOT in the traditional measures.

Our reality:

You might be laughing at the above, and thinking how silly the journey is that I’ve taken you on…

…but, the ‘targets on (measures of (targets on (measures of things)))’ thing is real and all around us.

  • 80% of calls answered within 20 seconds
  • 95% of patients discharged from the Emergency department within 4 hours
  • 70% of files closed within a month
  • [look for and add your own]

Starting from a position of targets and working backwards:

If you’ve got a target and I take it away from you…

…but I still ask you “so tell me, how is [the thing] performing?” then what do you need to do to answer?

Well, you would now need to ponder how the thing has been performing – you would need to look at a valid measure of the thing over time and consider what it shows.

In a nutshell: If you’ve got a target, take it away BUT still ask yourself “how are we doing?”

A likely challenge: “But it’s hard!”

Yes… if you peel back the layers of the ‘targets on targets’ onion so that you get back to the core of what’s actually going on, then you could be faced with lots of data.

I see the (incorrect) target approach as trying to simplify what is being looked at so that it looks easy to deal with. But, in making it look ‘easy to deal with’, we mustn’t destroy the value within the data.

“Everything should be made as simple as possible, but no simpler.” (attributed to Einstein)

The right approach, when faced with a great deal of data, would be to:

  • Look at it in ways that uncover the potential ‘secrets’ within (such as a histogram or a time-series plot); and
  • understand how to disaggregate the data, such that we can split it up into meaningful sub-groups. We can then:
    • compare sub-groups to consider if and how they differ; and
    • look at what’s happening within each sub-group (i.e. comparing apples with apples)
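A minimal sketch of the sub-grouping idea (the group names and closure times below are invented purely for illustration):

```python
# Invented closure times (days), tagged with a meaningful sub-group
# (e.g. request type) - names and values are purely illustrative.
data = [
    ("simple", 4), ("simple", 6), ("simple", 5), ("simple", 7),
    ("complex", 28), ("complex", 33), ("complex", 25), ("complex", 30),
]

# Disaggregate into sub-groups so we compare apples with apples.
groups = {}
for kind, days in data:
    groups.setdefault(kind, []).append(days)

for kind, days in sorted(groups.items()):
    mean = sum(days) / len(days)
    print(f"{kind}: n={len(days)}, range={min(days)}-{max(days)}, "
          f"mean={mean:.1f} days")
```

Lumped together, the overall mean (about 17 days here) describes no actual piece of work; split into sub-groups, each is tight, comparable, and worth studying on its own terms.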

To close:

If you are involved in ‘data analysis’ for management, I don’t think your role should be about ‘providing the simple (often 1-page) picture that they’ve asked for’. I would expect your professional aim to be along the lines of ‘how can I clearly show what’s happening and what this means?’

If you are a manager looking at measures: why would you want an (overly) simple picture so that you can review it quickly and then move on to making decisions? Wouldn’t you rather understand what is happening and why … so that good decisions can be made?

Footnotes

1. Measurement of things – a caution: We should be careful not to fall into the trap of thinking that everything is measurable or, if we aren’t measuring it, then it doesn’t matter.

There’s plenty of stuff that we know is really important even though we might not be measuring it.

2. Variation: If you’d like to understand this point, then please read some of my earlier posts, such as ‘The Spice of Life’ and ‘Falling into that trap’

As a simple example: If you took a regular reading of your resting heart rate, don’t you think it would be weird if you got, say, 67 beats per minute every single time? You’d think that you’d turned into some sort of android!

3. Expect/ can explain – clarification: this is NOT the same as ‘what we would like it to be’.

4. Arbitrary: When a numeric target is set, it is arbitrary as to which number was picked. Sure, it might have been picked with reference to something (such as 10% better than average, or the highest we’ve ever achieved, or….) but it’s arbitrary as to which ‘reference’ you choose.

5. Wrong questions: These wrong questions are then likely to cause us to jump to wrong conclusions and actions (also known as tampering). Such actions are likely to focus on individuals, rather than the system that they work within.

6. ‘Trigger’: The writing of this post was ‘triggered’ the other day when I reviewed a table of traffic-lighted (i.e. against a target) measures of targets on measures of things.

Oversimplification

So it seems that many an organisation repeats a mantra that we must “simplify, simplify, simplify”…accompanying this thrice-repeated word with rhetoric implying that it is so blindingly obvious that only a fool would query it!

As such, anyone questioning this logic is likely to hold their tongue…but I’ll be that fool and question it, and here’s why:

It’s too simple!

Here’s where I mention the ‘Law of requisite variety’, which was formulated by the cyberneticist¹ W. Ross Ashby in the context of studying biological systems. Stafford Beer extended Ashby’s thinking by applying it to organisations.

Now, rather than stating Ashby’s technical definition, I’ll put forward an informal definition that I think is of use:

“In order to deal properly with the diversity of problems the world throws at you, you need to have a repertoire of responses which is (at least) as nuanced as the problems you face.” (What is requisite variety?)

[Diagram: different problem types (coloured arrows) on the left; the system’s designed responses on the right]

Using the diagram above, let’s say that the problem types on the left (shown by different coloured arrows) represent the different types of value demands from our customers.

Let’s say that the responses on the right are what our system* is designed to cope with (* where system means the whole thing – people, process, technology – it doesn’t refer merely to ‘the computer’).

We can see that our system above is not designed to cope with the red arrows, and incorrectly copes with some of the yellow arrows (with an orange response)…the customers with these value demands will be somewhat disappointed! Further, we would waste a great deal of time, effort and money trying to cope with this situation.
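A toy sketch of the diagram’s point (the colour names and responses are invented; this is Ashby’s idea in miniature, not a real implementation):

```python
# The value demands (coloured arrows) arriving at the system - invented labels.
demands = ["blue", "green", "yellow", "red", "blue", "yellow", "red"]

# The responses the system was designed with. Note: no "red" response at all,
# and the "yellow" demand only gets a partial-match "orange" response.
responses = {"blue": "blue", "green": "green", "yellow": "orange"}

unhandled = [d for d in demands if d not in responses]
mismatched = [d for d in demands if responses.get(d) not in (None, d)]

print(f"unhandled: {unhandled}")    # the red arrows - no response exists
print(f"mismatched: {mismatched}")  # yellow arrows met with orange responses
```

The system’s repertoire (`responses`) has less variety than the demands it faces, so some demands go unmet and others are met badly – which is exactly where the disappointment, rework and cost come from.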

What on earth are you on about?!

“Management always hopes to devise systems that are simple…but often ends up spending vast sums of money to inject requisite variety – which should have been designed into the system in the first place.” (Stafford Beer)

Many large organisations engage in ill-thought-out and/or overly zealous ‘complexity reduction’ initiatives (incidentally, system replacement projects* are corkers for this!) that strip out more than they should. The outcome is unusable and/or hugely harmful towards satisfying customer value demands…which ends up creating unnecessary complexity as the necessary variety is ‘put back in’ via workarounds, ugly add-ons and patch-ups.

(* Large public sector departments have been excellent at this….often scrapping multi-million $ projects before a single live transaction gets into a database.)

Note: for readers aware of the ‘Lean Start-up’ thinking, you might cry out that this appears to go against the Minimum Viable Product (MVP)/ experimentation point…but it doesn’t…in fact it supports thinking in terms of target conditions rather than merely stating ‘make it simple’ objectives and setting related arbitrary targets.

Standardisation?

You might think that, because service demand is infinitely variable², I am suggesting that we need to build infinitely complex systems that can cope with every eventuality with standardised responses. Well, no, that would be mad…and impossible.

In service, we can’t hope to know every ‘coloured arrow’ that might come at us! Instead, we need to ensure that our service system can absorb variety. This means providing a flexible environment (e.g. guidelines, not ‘straitjacket’ rules), empowering front-line staff to ‘do the right thing’ for the specific variety of the customer’s demand before them, and pulling in appropriate expertise when required.

Standardisation in service is not the answer.

Cause and Effect

Don’t confuse cause and effect. Simplification should not be the goal…but it can be a very agreeable side effect.

“To remove waste [e.g. complexity], you need to understand its causes….if the system conditions that caused the waste are not removed, any improvements will be marginal and unsustainable.” (John Seddon)

If you think “We’ve got too many products and IT applications…we need to run projects to get rid of the majority of them!” then ask yourself this: “Did anyone set out specifically to have loads of products and IT applications?” I very much doubt it…

You can say that you want fewer products, fewer technology applications, less complex processes…less xyz. But first, you need to be absolutely clear on what caused you to be (and remain) this way. Then you would be in a position to improve, which will likely result in the effect of appropriate simplification (towards customer purpose).

If you don’t understand the ‘why’ then:

  • how can you be sure that removing all those products and systems and processes will be a success? and
  • what’s to stop them from multiplying again?

The goal should be what you want, not what you don’t want

“If you get rid of something that you don’t want, you don’t necessarily get something that you do want…improvement should be directed at what you want, not at what you don’t want.” (Russell Ackoff)

The starting point should be:

  • studying your (value stream) systems and getting knowledge; and then
  • experimenting towards purpose (from the customer’s point of view), whilst monitoring your capability measures

The starting point is NOT simplification.

A classic example of the simplification mantra usurping the customer purpose is where organisations force their customers down a ‘digital’ path rather than providing them with the choice.

  • To force them will create dissatisfaction, failure demand and the complexity of dealing with it;
  • To provide them with choice will create the simplicity of delivering what they want, how they want it…with the side effect of educating them as to what is possible and likely moving them into forging new habits (accepting that this takes time).

In conclusion

So I’d like to end on the quote that I have worn out most over my working life to date:

“Make everything as simple as possible, but no simpler.” (attributed to Einstein)

The great thing about this quote is that it contrasts ‘relative’ with ‘absolute’. “As simple as possible” is relative³ – it necessitates a comparison against purpose. “Simple” is absolute and, as such, our pursuit of simplification for its own sake will destroy value.

Thus, the quote requires us to start with, and constantly test against, customer purpose…and the appropriate simplicity will find itself.

Notes:

  1. Cybernetics: the science of control and communication in animals, men and machines. Cyberneticians try to understand how systems describe themselves, control themselves, and organize themselves.
  2. Infinite variability: We are all unique and, whilst we will likely identify a range of common cause variation within service demand (i.e. predictable), we need to see each customer as an individual and aim to satisfy their specific need.
  3. There’s probably an Einstein ‘relativity’ joke in there somewhere. 

It’s NOT about the nail!

So there’s a fabulous (yet very short) YouTube skit called ‘It’s NOT about the nail’.

Many of you will have watched it…and if you haven’t then please watch it now before reading on – you won’t get this post if you don’t.

And I bet that those of you who have seen it before will want to watch it again (and again).

(though please see my ‘PC police’ note at the bottom 🙂 )

So, why am I using this clip? What’s the link?

Well it struck me that this is a brilliant systems analogy!

Let me explain:

Let’s assume that the woman is an organisation and the man is outside it, looking in.

The script might go something like this…

The organisation: “It’s just, there’s all this pressure you know. And sometimes it feels like it’s right up on me…and I can just feel it, like literally feel it, in my head and it’s relentless…and I don’t know if it’s going to stop, I mean that’s the thing that scares me the most…is that I don’t know if its ever going to stop!”

[Turns to show the outside world the reality of the situation]

Outside:     “…yeah…well…you do have…a ‘command and control’ management system.”

The organisation:     “It’s not about the management system!”

Outside:     “Are you sure? Because, I mean, I’ll bet that if we got that out of there…”

The organisation:     “Stop trying to fix it!”

Outside:     “No, I’m not trying to ‘fix it’…I’m just pointing out that maybe the management system is causing….”

The organisation:     “You always do this! You always try to fix things when all I really need is for you to listen!”

Outside:     “yeah…see…I don’t think that is what you need. I think what you need is to get the ‘command and control’ out…”

The organisation: “See! You’re not even listening now!”

Outside:     “Okay, fine! I will listen. Fine.”

[Pause]

The organisation:     “…it’s just, sometimes it’s like…there’s this achy…I don’t know what it is. I’m not sleeping very well at all…and all my workers are disempowered and disengaged. I mean all of them.”

[Pause. Searching looks between the two]

Outside:     “That sounds…erm…really…hard.”

The organisation:     “It is! Thank you 🙂”

[Pause. Reach forward to reconcile….]

The organisation:    “Ouch!”

Outside:     “Oh come on! If you would just…”

The organisation:    “DON’T!!!…”

[(usually) The end, unfortunately]

But let’s not stop there and just cope with the nail….

…to the point:

To successfully and meaningfully change a system towards its purpose, you need to look from the outside-in. You cannot achieve this looking from the inside-out.

Deming was very clear on this point: “The prevailing style of management must undergo transformation. A system cannot understand itself. The transformation requires a view from outside.”

Seddon wrote “When managers learn to take a systems view, starting outside-in (that is, from the customer’s rather than the organisation’s point of view), they can see the waste caused by the current organisation design, the opportunities for improvement and the means to realise them. Taking a systems view always provides a compelling case for change and it leads managers to see the value of designing and managing work in a different way…

…but this better way represents a challenge to current management conventions. Measures and roles need to change to make the systems solution work. You have to be prepared to change the system…”

In a similar vein, Einstein is credited with the saying “We cannot solve our problems with the same thinking we used when we created them.”

A catch:

Gosh, it sounds so simple….let’s just look from the outside-in shall we? But, unfortunately, it isn’t that simple.

Here’s Stafford Beer with why not:

“…a new idea is not only beyond the comprehension of the existing system, the existing system finds it threatening to its own status quo…the existing system does not know what will happen if the new idea is embraced.

The innovator fails to work through the systematic consequences of the new idea. The establishment cannot…and has no motivation to do so…it was not its own idea…the onus is on the innovator…[but] the establishment controls the resources that the adventurous idea needs…”

Blimey, that’s a bit depressing isn’t it!…which is an opportune moment to remind you of my earlier ‘Germ theory of management’ post.

You/I/we won’t succeed by trying to push the idea onto the system. We need to make ‘it’ curious and want to pull the idea at the rate that understanding, acceptance and desire emerges.

So it IS about the nail! …Oh never mind.

(if you watch the YouTube clip again, I expect you will find it hard not to mentally overlay the above script onto it now! I know I do.)

Comment for the ‘Political-Correctness’ police: I ‘get’ that the clip is stereotypical about the differences between men and women…I ‘get’ that men will likely find it funnier than women…but, come on, it is very funny.

Okay, okay…I am more than happy to post an equally funny clip (to address the gender balance) that sends up men…here’s a good one: ‘Man flu’

Stating the obvious!

It is really easy for any leader to say “I want…

  • Continuous Improvement;
  • Removal of waste;
  • Reduction in failure demand*.”

(* explained in my earlier marbles post here)

All are sensible, in fact obvious! But it’s a bit like a financial advisor telling you to ‘buy low and sell high’…what have you actually learned that you didn’t already know, and how does this help?

It’s much harder to understand the system conditions (structures, policies, procedures, measurement, IT), and underlying management thinking (beliefs and behaviours) that protect the status quo, create the waste and cause the failure demand….because you have to change your thinking!

“We cannot solve our problems with the same thinking we used when we created them.” (attributed to Einstein)

If you:

  • set numeric activity targets to make improvements…
  • …and offer rewards for their achievement…
  • …and rate (and rank) people’s performance against them…

…then you haven’t understood (or accepted) what we know about systems, measurement and motivation.

To quote from John Seddon:

“Treating improvement as merely process improvements is folly; if the system conditions that caused the waste are not removed, any improvements will be marginal and unsustainable.”