Evolution: 2014 – 2018


Most things evolve and this blogging lark is no different.

My blog started off as a way to get the ‘madness at work’ things off my chest….which probably explains why the first few posts could be considered a bit ‘ranty’. Ho hum.

I then got a bit more thoughtful (I think). I adopted a stance of ‘professional provocation’ – challenging the status quo but doing so with analysis and evidence…and my posts got longer. Sorry about that.

Then I realised that the blog was a rather useful extension of my work educating and coaching people.  It became a sort of service: you could pick up the phone or drop by my desk – have a conversation about your situation, receive some well-intended ‘organisational therapy’ from me and a promise that I’d try to put our conversation into useful and re-usable words. I’d usually get something out ‘within a week’…. though not always – some of the more involved posts took months!

And at some stage throughout all that, I realised that it was all rather generic anyway. It is applicable to people in organisations all around the world…which is why I decided that anyone curious could read it for themselves.

When ‘going public’ I wanted to keep myself anonymous because I don’t think that people need to know who the hell I am – my words should either stand up as being interesting, credible and useful or not.

Things have changed slightly for me over the last six months – I’ve been dabbling with ‘doing my own thing’ (i.e. moving from employment to solo consulting)…which partly explains why the blog went rather quiet. I spent a bit of time writing and piloting a one-day education course titled ‘Systems Thinking and Intervention: The Fundamentals’. The day is based around the elements of Deming’s ‘Theory of Profound Knowledge’.

If you are (or know of) a curious organisation in New Zealand (or perhaps over in Australia) and find my work interesting, then you are very welcome to contact me for a chat. I can help with initial education (such as my one-day course) and then with coaching and supporting the curious to study and improve their system.

  • You can contact me* at: Steve@Schefer.co.uk
  • You can also have a read through my 1 page (2-sided) course brochure:

Systems Thinking and Intervention – The fundamentals – course leaflet

Okay, that’s enough of that! Don’t worry – I’m not about to change this blog into an attempted sales tool 🙂 . I’m interested in talking to people who would like to pull my help. I have no desire to push it onto anyone!

* I’ve also added an ‘About me’ page to the blog menu bar and this also contains my contact details.

Thanks for reading,

Steve

 

How good is that one number?

This post is a promised follow-up to the recent ‘Not Particularly Surprising’ post on Net Promoter Score.

I’ll break it into two parts:

  • Relevance; and
  • Reliability

Part 1 – Relevance

A number of posts already written have explained the following:

Donald Wheeler, in his superb book ‘Understanding Variation’, nicely sets out Dr Walter Shewhart’s1 ‘Rule One for the Presentation of Data’:

“Data should always be presented in such a way that preserves the evidence in the data…”

Or, in Wheeler’s words “Data cannot be divorced from their context without the danger of distortion…[and if context is stripped out] are effectively rendered meaningless.”

And so to a key point: The Net Promoter Score (NPS) metric does a most excellent job of stripping out meaning from within. Here’s a reminder from my previous post that, when asking the ‘score us from 0 – 10’ question about “would you recommend us to a friend”:

  • A respondent scoring a 9 or 10 is labelled as a ‘Promoter’;
  • A scorer of 0 to 6 is labelled as a ‘Detractor’; and
  • A 7 or 8 is labelled as being ‘Passive’.

….so this means that:

  • A catastrophic response of 0 gets the same recognition as a casual 6. Wow, I bet two such polar-opposite ‘Detractors’ have got very different stories of what happened to them!

and yet

  • a concrete boundary is placed between responses of 6 and 7 (and between 8 and 9). Such an ‘on the boundary’ responder may have vaguely pondered which box to tick and metaphorically (or even literally) ‘tossed a coin’ to decide.

Now, you might say “yeah, but Reichheld’s broad-brush NPS metric will do” so I’ve mocked up three (deliberately) extreme comparison cases to illustrate the stripping out of meaning:

First, imagine that I’ve surveyed 100 subjects with my NPS question and that 50 ‘helpful’ people have provided responses. Further, instead of providing management with just a number, I’m furnishing them with a bar chart of the results.

Comparison pair 1: ‘Terrifying vs. Tardy’

Below are two quite different potential ‘NPS question’ response charts. I would describe the first set of results as terrifying, whilst the second is merely tardy.

Chart 1 Terrifying vs Tardy

Both sets of results have the same % of Detractors (below the red line) and Promoters (above the green line)…and so are assigned the same NPS score (which, in this case, would be -100). This comparison illustrates the significant dumbing down of the data caused by lumping responses of 0 – 6 into one category.

I’d want to clearly see the variation within the responses (such as in the bar charts shown), rather than have it stripped out for the sake of a ‘simple number’.
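To make that information loss concrete, here’s a minimal sketch (in Python, using made-up response lists along the lines of the two charts rather than any real survey data) showing how two very different sets of answers collapse to exactly the same score:

```python
def nps(scores):
    """Return the Net Promoter Score (-100 to +100) for a list of 0-10 responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 50 respondents each (out of the 100 surveyed) - hypothetical data
terrifying = [0] * 40 + [1] * 10   # furious customers, all scoring 0 or 1
tardy      = [5] * 20 + [6] * 30   # mildly unimpressed customers, all scoring 5 or 6

print(nps(terrifying))  # -100
print(nps(tardy))       # -100 ... the same score, from very different stories
```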

You might respond with “but we do have that data….we just provide Senior Management with the single NPS figure”….and that would be the problem! I don’t want Senior Management making blinkered decisions2, using a single number.

I’m reminded of a rather good Inspector Guilfoyle poster that fits perfectly with having the data but deliberately not using it.

Comparison pair 2: ‘Polarised vs. Contented’

Below are two more NPS response charts for comparison….and, again, they both derive the same NPS score (-12 in this case) …and yet they tell quite different stories:

Chart 2 Polarised vs Contented

The first set of data uncovers that the organisation is having a polarising effect on its customers – some absolutely love ‘em …whilst many others are really not impressed.

The second set shows quite a warm picture of contentedness.

Whilst the NPS scores may be the same, the diagnosis is unlikely to be. Another example where seeing the variation within the data is key.

Comparison pair 3: ‘No Contest vs. No Show’

And here’s my penultimate pair of comparison charts:

Chart 3 No contest vs No show

Yep, you’ve guessed it – the two sets of response data have the same NPS scores (+30).

The difference this time is that, whilst the first chart reflects 50 respondents (out of the 100 surveyed), only 10 people responded in the second chart.

You might think “what’s the problem, the NPS of +30 was retained – so we keep our KPI-inspired bonus!” …but do you think the surveys are comparable? Why might so many people not have responded? Is this likely to be a good sign? Can you honestly compare those NPS numbers? (perhaps see ‘What have the Romans ever done for us?!’)

….which leads me nicely onto the second part of this post:

Part 2 – Reliability

A 2012 article co-authored by Fred Reichheld (creator of NPS) identifies many issues that are highly relevant to compiling that one number:

  • Frequency: that NPS surveys should be frequently performed (e.g. weekly), rather than, say, a quarterly exercise.

The article doesn’t, however, refer to the essential need to always present the results over time, or whether/ how such ‘over time’ charts should (and should not) be interpreted.


  • Consistency: that the survey method should be kept constant because two different methods could produce wildly different scores.

The authors comment that “the consistency principle applies even to seemingly trivial variations in methodologies”, giving an example of the difference between a face-to-face method at the culmination of a restaurant meal (deriving an NPS of +40) and a follow-up email method (NPS of -39).


  • Response rate: that the higher the response rate, the greater the accuracy – which I think we can all understand. Just reference comparison 3 above.

But the article goes on to say that “what counts most, of course, is high response rates from your core or target customers – those who are most profitable…” In choosing these words, the authors demonstrate the goal of profitability, rather than customer purpose. If you want to understand the significance of this then please read ‘Oxygen isn’t what life is about’.

I’d suggest that there will be huge value in studying those customers who aren’t part of your current ‘core’.


  • Freedom from bias: that many types of bias can affect survey data.

The authors are clearly right to worry about the non-trivial issue of bias. They go on to talk about some key issues such as ‘confidentiality bias’, ‘responder bias’ and the whopper of employees ‘gaming the system’ (which they unhelpfully label as unethical behaviour, rather than pondering the system causes of such behaviour – see ‘Worse than useless’).


  • Granularity: that of breaking results down to regions, plants/ departments, stores/branches…enabling “individuals and small teams…to be held responsible for results”.

Ouch…and we’d be back at that risk of bias again, with employees playing survival games. There is nothing within the article that recognises what a system is, why this is of fundamental importance, and hence why supreme care would be needed when using such granular NPS feedback. You could cause a great deal of harm.

Wow, that’s a few reliability issues to consider and, as a result, there’s a whole NPS industry being created within organisational customer/ marketing teams3…which is diverting valuable resources from people working together to properly study, measure and improve the customer value stream(s) ‘in operation’, towards each and every customer’s purpose.

Reichheld’s article ends with what it calls “The key”: the advice to “validate [your derived NPS number] with behaviours”, by which he explains that “you must regularly validate the link between individual customers’ scores and those customers’ behaviours over time.”

I find this closing advice amusing, because I see it being completely the wrong way around.

Rather than getting so obsessed with the ‘science’ of compiling frequent, consistent, high response, unbiased and granular Net Promoter Scores, we should be working really hard to:

“use Operational measures to manage, and [lagging4] measures to keep the score.” [John Seddon]

…and so to my last set of comparison charts:

Chart 4 Don’t just stand there, do something

Let’s say that the first chart corresponds to last month’s NPS survey results and the second is this month. Oh sh1t, we’ve dropped by 14 whole points. Quick, don’t just stand there, do something!

But wait…before you run off with action plan in hand, has anything actually changed?

Who knows? It’s just a binary comparison – even if it is dressed up as a fancy bar chart.
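One way to answer the ‘has anything actually changed?’ question is to plot the scores over time against natural process limits – an XmR chart, in the spirit of Wheeler’s ‘Understanding Variation’ referenced at the top of this post. Here’s a minimal sketch in Python, using invented monthly scores (the last two mirroring our 30-to-16 ‘panic’):

```python
# Hypothetical monthly NPS results - the last two echo the 'dropped by 14 points' scare
monthly_nps = [30, 22, 35, 28, 41, 26, 33, 19, 30, 16]

mean = sum(monthly_nps) / len(monthly_nps)
moving_ranges = [abs(b - a) for a, b in zip(monthly_nps, monthly_nps[1:])]
average_mr = sum(moving_ranges) / len(moving_ranges)

# Wheeler's constant (2.66) converts the average moving range into
# natural process limits for individual values.
upper = mean + 2.66 * average_mr
lower = mean - 2.66 * average_mr

print(f"Average NPS: {mean:.1f}")
print(f"Natural process limits: {lower:.1f} to {upper:.1f}")

latest = monthly_nps[-1]
if lower <= latest <= upper:
    print("This month sits within the limits: routine (common cause) variation - don't tamper.")
else:
    print("This month sits outside the limits: a signal worth investigating.")
```

On this made-up data, the 14-point drop sits comfortably within the natural process limits – i.e. no detectable signal, despite the apparent plunge.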

To summarise:

  • Net Promoter Score (NPS) has been defined as a customer loyalty metric;
  • There may be interesting data within customer surveys, subject to a heavy caveat around how such data is collected, presented and interpreted;
  • NPS doesn’t explain ‘why’ and any accompanying qualitative survey data is limited, potentially distorting and easily put to bad use;
  • Far better data (for meaningful and sustainable improvement) is to be found from:
    • studying a system in operation (at the points of demand arriving into the system, and by following units of demand through to their customer satisfaction); and
    • using operational capability measures (see ‘Capability what?’) to understand and experiment;
  • If we properly study and redesign an organisational system, then we can expect a healthy leap in the NPS metric – this is the simple operation of cause and effect;

  • NPS is not a system of management.

Footnotes

1. Dr Walter Shewhart (1891 – 1967) was the ‘father’ of statistical quality control. Deming was heavily influenced by Shewhart’s work and the two collaborated.

2. Blinkered decisions, like setting KPI targets and paying out incentives for ‘hitting’ them.

3. I should add that, EVEN IF the (now rather large) NPS team succeeds in creating a ‘reliable’ NPS machine, we should still expect common cause variation within the results over time. Such variation is not a bad thing. Misunderstanding it and tampering would be.

4. Seddon’s original quote is “use operational measures to manage, and financial measures to keep the score” but his ‘keeping the score’ meaning (as demonstrated in other pieces that he has written) can be widened to cover lagging/ outcome/ results measures in general…which would include NPS.

Seddon’s quote mirrors Deming’s ‘Management by Results’ criticism (as explained in the previous post).

Not Particularly Surprising

Have you heard people telling you their NPS number? (perhaps with their chests puffed out…or maybe somewhat quietly – depending on the score). Further, have they been telling you that they must do all they can to retain or increase it?1

NPS – what’s one of those?

‘Net Promoter Score’, or NPS, is a customer loyalty metric that has become much loved by the management of many (most?) large corporations. It was introduced to the management world by Fred Reichheld2 in his 2003 HBR article titled ‘The one number you need to grow’.

So far, so what.

But, as with most things in ‘modern management’ medicine, once introduced, NPS took on a life of its own.

Reichheld designed NPS to be rather simple. You just ask a sample of subjects (usually customers3) one question and give them an 11-point scale of 0 to 10 to answer it. And that question?

‘How likely is it that you would recommend our company/product/ service to a friend or a colleague?’

You then take all your responses (of which, incidentally, there may be rather few) and boil them down into one number. Marvellous…that will be easy to (ab)use!

But, before you grab your calculators, this number isn’t just an arithmetic average of the responses. Oh no, there’s some magic to take you from your survey results to your rather exciting score…and here’s how:

  • A respondent scoring a 9 or 10 is labelled as a ‘Promoter’;
  • A scorer of 0 to 6 is labelled as a ‘Detractor’; and
  • A 7 or 8 is labelled as being ‘Passive’4.

where the sum of all Promoters, Detractors and Passives = the total number of respondents.

You then work out the % of your total respondents that are Promoters and the % that are Detractors, and subtract the latter from the former.

You’ll get a number between -100 (they are all Detractors) and +100 (all Promoters), with a zero meaning Detractors and Promoters exactly balance each other out.
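For the arithmetically inclined, here’s a minimal sketch of that calculation in Python, using an invented set of 20 responses:

```python
# Invented survey responses on the 0-10 'would you recommend us?' scale
responses = [10, 9, 9, 8, 8, 7, 7, 7, 6, 6, 5, 4, 10, 9, 3, 8, 7, 6, 2, 0]

promoters  = sum(1 for r in responses if r >= 9)   # 9s and 10s
detractors = sum(1 for r in responses if r <= 6)   # 0s through 6s
# passives (7s and 8s) count towards the total, but not towards the score

nps = 100 * promoters / len(responses) - 100 * detractors / len(responses)
print(round(nps))  # -15 here: 25% Promoters minus 40% Detractors
```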

And, guess what…a positive score is desirable…and, over the long term, a likely necessity if you want to stay in business.

Okay, so I’ve done the up-front explanatory bit and regular readers of this blog are probably now ready for me to go on and attempt to tear ‘NPS’ apart.

I’m not particularly bothered by the score – it might be of some interest…though exceedingly limited in its usefulness.

Rather, I’m bothered by:

  1. what use it is said to be; and
  2. what use it is put to.

I’ve split my thoughts into two posts. This post deals with the second ‘bother’, and my next one will go back to consider the first.

Qualitative from Quantitative – trying to ‘make a wrong thing righter’

The sane manager, when faced with an NPS score and a ‘strategic objective’ to improve it, wants to move on from the purely quantitative score and ‘get behind it’ – they want to know why a score of x was given.

Reichheld’s NPS method covers this obvious craving by encouraging a second open-ended question requesting the respondent’s reasoning behind the rating just given – a ‘please explain’ comments box of sorts. The logic being that this additional qualitative data can then be provided to operational management for analysis and follow up action(s).

Reichheld’s research might suggest that NPS provides an indicator of ‘customer loyalty’, but…and here’s the key bit…don’t believe it to be a particularly good tool to help you improve your system’s performance.

There are many limitations with attempting to study the reasons for your system’s performance through such a delayed, incomplete and second-hand ‘the horse has bolted’ method as NPS:

  • Which subjects (e.g. customers) were surveyed?
  • What caused you to survey them?
  • Which subjects chose to respond…and which didn’t?
  • What effort from the respondent is likely to go into explaining their scoring?
  • Does the respondent even know their ‘why’?
  • Can they put their (potentially hidden) feelings into words?…and do they even want to?

If you truly want to understand how your system works and why, so that you can meaningfully and sustainably improve it, wouldn’t it just be soooo much better (and simpler) to jump straight to (properly5) studying the system in operation?!

A lagging indicator vs. Operational measures

One of my very early posts on this blog covered the mad, yet conventional, idea of ‘management by results’ and subsequent posts have delved into ‘cause and effect’ in more detail (e.g. ‘Chain beats Triangle’).

My ‘cause and effect’ post ends with the key point that:

“Customer Purpose (which, by definition, means quality) comes first…which then delivers growth and profitability, and NOT the other way around!”

Now, if you read up on what Reichheld has to say about NPS, he will tell you that it is a leading measure, whereas I argue that it is a lagging one. The difference is because we are coming from opposite ends of the chain:

  • Reichheld appears to be concerned with growth and profitability, and argues that NPS predicts what is going to happen to these two financial measures (I would say in the short term);

  • I am concerned with customer purpose, and an organisation’s capability at delivering against its customers’ needs. This means that I want to know what IS happening, here and now so that I can understand and improve it …which will deliver (for our customers, for the organisation, for its stakeholders) now, and over the long term.

You might read the above and think I am playing with semantics. I think not.

I want operational measures on the actual demands coming in the door, and how my processes are actually working. I want first hand operational knowledge, rather than attempting to reverse engineer this from partial and likely misleading secondary NPS survey evidence.

“Managers learn to examine results, outcomes. This is wrong. The manager’s concern should be with processes….the concentration of a manager should be to make his processes better and better. To do so, he needs information about the performance of the process – the ‘voice of the process’. “ [‘Four Days with Dr Deming’]

Deming’s clear message was ‘focus on the process and the result will come’ and, conversely, you can look at results all you like but you’d be looking in the wrong place!

NPS thinking fits into the ‘remote control’ school of management. Don’t survey and interrogate. ‘Go to the gemba’ (the place where the work occurs).

“But what about the Lean Start-up, Steve?”

Some readers familiar with Eric Ries’ Lean Start-up movement might respond “but Eric advocates the use of customer data!” and yes, he does.

But he isn’t trying to get a score from them, he is trying to deeply engage with a small number of them, understand how they think and behave when experiencing a product or service, and learn from this…and repeat this loop again and again.

This fits with studying demand, where it comes in, and as it flows.

The Lean Startup movement is about observing and reflecting upon what is actually happening at the point of customer interaction, and not about surveying them afterwards.

To close – some wise words

After writing this post I remembered that John Seddon had written something about NPS…so I searched through my book collection to recover what he had to say…and he didn’t disappoint:

“Even though NPS is completely useless in helping service organisations improve, on our first assignment [e.g. as system improvement interventionists] we say nothing about it, because we know the result of redesigning the system will be an immediate jump in the NPS score…and because when this is reported to the board our work gets the directors’ attention.

It makes it easy to see why NPS is a waste of time and money. First, it is what we call a ‘lagging measure’ – as with all customer satisfaction measures, it assesses the result of something done in the past. Since it doesn’t help anyone understand or improve performance in the present, it fails the test of a good measure5 – it can’t help to understand or improve performance.” [Seddon, ‘The Whitehall Effect’]

Seddon goes on to illuminate a clear and pernicious ‘red herring’ triggered by the use of NPS:  the simple question of ‘would you recommend this service to a friend’ mutates to a hunt for the person who delivered the particular instance of service currently under the microscope. Management become “concerned with the behaviour of people delivering the service” as opposed to the system that makes such behaviour highly likely to occur!

I have experience of this exact management behaviour in full flow, with senior management contacting specified members of staff directly (i.e. those who handled the random transaction in question) to congratulate or interrogate/berate them, following the receipt of particularly outstanding6 NPS responses.

This is to focus on the 5% (the people) and ignore the 95% (the system that they are required to operate within). NPS “becomes an attractive device for controlling them”.

Indeed.

The title of this post follows from Seddon’s point that if you focus on studying, understanding and improving the system then, guess what, the NPS will improve – usually markedly. Not Particularly Surprising.

My next post, ‘How good is that one number?’, contains the second part of my NPS critique.

Footnotes

1. This post, as usual, comes from having a most excellent conversation with a friend (and ex-colleague) …and she bought me lunch!

I should add that the title image (the pH scale) is a light-hearted satire of the various NPS images I found i.e. smiley, neutral and angry faces arranged on a coloured and numbered scale.

2. Reichheld has written a number of books on customer loyalty, with one of his more recent ones trying to relabel ‘NPS’ from Net Promoter Score to Net Promoter System (of management) …which, to put it mildly, I am not a fan of.

It reminds me of the earlier ‘Balanced Scorecard’ attempting to morph into a system of management. See ‘Slaughtering the Sacred Cow’.

Yet another ‘management idea’ expanding beyond its initial semblance of relevance, in the hands of book sellers and consultants.

Sorry, but that’s how I feel about it.

NPS is linked to the ‘Balanced Scorecard’ in that it provides a metric for the customer ‘quadrant’ of the scorecard …but, as with financial measures, it is still an ‘outcome’ (lagging) measure of an organisation’s people and processes.

3. The original NPS focused on customers, but this has subsequently been expanded to consider other subjects, particularly employees.

4. Being British (i.e. somewhat subdued), I find the labelling of a 7 or 8 score as ‘Passive’ to be hilarious. A score of 7 from me would be positively gushing in praise! What a great example of the variety inherent within customers…and which NPS cannot reveal.

5. For the ‘tests of a good measure’, please see an earlier post titled ‘Capability what?’

6. Where ‘outstanding’ means particularly low, as well as high.

The notion of ‘Leadership’

I’ve been re-reading a book on leadership by Elliott Jaques1 and, whilst I’m not smitten with where he took his ‘Requisite Organisation’ ideas2, I respect his original thinking and really like what he had to say about the notion of leadership. I thought I’d try and set this out in a post…but before I get into any of his work:

“When I grow up I want to be a leader!“

Over the years I’ve spoken with graduate recruits/ management trainees in large organisations about their aspirations, and I often hear that ‘when they grow up’ they ‘want to lead people’.

And I think “Really? Lead who? Where? Why?”

Why is it that (many) people think that ‘to lead’ is the goal? Perhaps it is because ‘modern management’ has rammed the ‘being labelled a leader IS success’ idea down our throats.

It seems strange to me that people feel the need ‘to lead’ per se. For me, whether I would want to lead (or not) absolutely depends…on things like:

  1. Is there a set of people (whether large or small) that needs leading?
    • If they don’t, then I shouldn’t be attempting to force myself upon them.
  2. Am I passionate about the thing that ‘we’ want to move towards? (the purpose)
    • If not, then I’m going to find it rather hard to genuinely inspire people to follow. I would be faking it.
  3. Do I (really) care about those that need leading?
    • If I don’t, then this is likely to become obvious through my words and deeds
    • ‘really’ caring means constantly putting myself in their shoes – to understand them – and acting on what I find.
  4. Do I think I have the means to lead in this scenario? (e.g. the necessary cognitive capacity/ knowledge/ skills/ experience)
    • If someone else in the group (or close by) is better placed to lead in this scenario (for all the reasons above), then I should welcome this, and even seek them out3 – and not ‘fight them for it’.

I think we need to move on from the simplistic ‘I’m a leader!’ paradigm.

So, turning to what Jaques had to say…

Defining leadership

Jaques noted that “the concept of leadership is rarely defined with any precision”. He wrote that:

“Good leadership is one of the most valued of all human activities. To be known as a good leader is a great accolade… It signifies the talent to bring people together…to work effectively together to meet a common goal, to co-operate with each other, to rely upon each other, to trust each other.”

I’d ask you to pause here, and have a think about that phrase “to be known as a good leader…”

How many ‘good leaders’ have you seen?

I’d suggest that, given the number of people we come across in (what have been labelled as) ‘leadership’ positions, it is rare for us to mentally award the ‘good leader’ moniker.

We don’t give out such badges easily – we are rather discerning.

Why? Because being well led really matters to us. It has a huge impact upon our lives.

The ‘personality’ obsession

We (humans) seem to have spent much time over the last few decades trying to create a list of the key personality characteristics that are said to determine a good leader.

There have been two methods used to create such lists, which Jaques explains as follows:

“Most of the descriptions of leadership have focused on superiority or shortcomings in personal qualities in people and their behaviour. Thus, much has been written:

  • about surveys that describe what executives do who are said to be good at leadership; or 
  • about the lives of well-known individuals who had reputations as ‘good leaders’ as though somehow emulating such people can help.”

Jaques believed that ‘modern management’ places far too much4 emphasis on personality make-up.

If you google ‘the characteristics of a good leader’ you will be bombarded with list upon list of what a leader should supposedly look like – with many claiming legitimacy from ‘academic exercises’ that sought out a set of people who appear to have ‘done well’, collected a myriad of attributes about them, and searched for commonality (perhaps even using some nice statistics) …and, voila, that’s ‘a leader’ right there!

If you are an organisation desiring ‘leaders’, then all you then need do is find people like this. Perhaps, in time, you could pluck ‘a leader’ off a supermarket shelf.

If you ‘want to be a leader’, then all you need do is imitate the list of characteristics. After all, don’t you just ‘fake it to make it’ nowadays?

Mmm, if only leadership were so simple.

In reality, there is a huge range of personalities that will be able to lead successfully and, conversely, there will be circumstances where someone with (supposedly) the most amazing ‘leadership’ personality fit won’t succeed5. This will come back to leading the ‘who’, to ‘where’ and ‘why’.

Further, some of those ‘what makes a good leader’ lists contain some very opaque ‘characteristics’…such as that you must be ‘enthusiastic’, ‘confident’, ‘purposeful’, ‘passionate’ and ‘caring’. These are all outcomes (effects) from those earlier ‘it depends’ four questions (causes), not things that you can simply be!

Personally, I’ll be enthusiastic and purposeful about, say, reducing plastic waste in our environment but I won’t be enthusiastic and purposeful about manufacturing weapons! I suppose that Donald Trump and Kim Jong-un might be different.

Jaques wrote that:

“It is the current focus upon psychological characteristics and style that leads to the unfortunate attempts within companies to change the personalities of individuals or to maintaining procedures aimed at ‘getting a correct balance’ of personalities in working groups…

…our analysis and experience would suggest that such practises are at best likely to be counterproductive in the medium and long term…

…attempts to improve leadership by psychologically changing our ‘leaders’ serve mainly as placebos or band-aids which, however well-intentioned they may be, nevertheless obscure the grossly undermining effects of the widespread organisational shortcomings and destructive defects”

It really won’t matter what personality you (attempt to) adopt if you continue to preside over a system that:

  • lives a false purpose; and
  • attempts to:
    • command through budgets, detailed implementation plans, targets and cascaded objectives; and
    • control through rules, judgements and contingent rewards.

Conversely, if you help lead your organisation through meaningfully and sustainably changing the system, towards better meeting its (customer) purpose, then you will have achieved a great thing! And the people around you (employees, customers, suppliers… society) will be truly grateful – and hold you in high regard – even if they can’t list a set of ‘desirable’ traits that you displayed along the way.

Peter Senge, in his systems thinking book ‘The Fifth Discipline’ writes that:

“Most of the outstanding leaders I have had the privilege to know possess neither striking appearance nor forceful personality. What distinguishes them is the clarity and persuasiveness of their ideas, the depth of their commitment, and the extent of their openness to continually learning more.

They do not ‘have the answer’, but they seem to instil confidence in those around them that, together, ‘we can learn whatever we need to learn in order to achieve the results we truly desire’.”

To close the ‘personality’ point – Jaques believed that:

“The ability to exercise leadership is not some great ‘charismystery’ but is, rather, an ordinary quality to be found in Everyman and Everywoman so long as the essential conditions exist

…Charisma is a quality relevant only to cult leadership”.

We should stop the simplistic labelling of “this one here is a leader, and that one over there is not”.

Manager? Leader? Or are we confusing the two?!

So, back to that ‘Manager or Leader’ debate.

It feels to me that many an HR department hit upon the ‘leader’ word, say 10 years ago, and, considering it highly desirable, decided that it would be a good idea to do a ‘find and replace’ throughout all of their organisation’s lexicon: i.e. find wherever the word ‘Manager’ is used and replace it (in their eyes, ‘upgrade’ it) with the word ‘Leader’.

And so we got ‘Team Leaders’ instead of ‘Team Managers’ and ‘Senior Leadership’ instead of ‘Senior Management’….and on and on.

And this changed everything, and nothing.

Jaques explained that:

“Leadership is not a free-standing activity: it is one function, among many, that occurs in some but not all roles.”

“Part of the work of the role [of a manager] is the exercise of leadership, but it is not a ‘leadership role’ any more than it would be called a telephoning role because telephoning is also a part of the work required.”

Peter Senge writes that:

“we encode a broader message when we refer to such people as the leaders. That message is that the only people with power to bring about change are those at the top of the hierarchy, not those further down. This represents a profound and tragic confusion.”

And so to three important leadership concepts: Accountability, Authority and Responsibility:

Accountability

Put simply, the occupant of a role is accountable:

  • for achieving what has been defined as requirements of the role; and
  • to the person or persons who have established that role.

Jaques writes that “management without leadership accountability is lifeless…leadership accountability should automatically be an ordinary part of any managerial role.”

Such leadership isn’t bigger than, or instead of, management – it is just a necessary part within. As such, it doesn’t make sense to say that “he/she is a good manager, but not a good leader”.

Authority

Authority is that which enables someone to carry out the role that they are accountable for.

“In order to discharge accountability, a person in a role must have appropriate authority; that is to say, authority with respect to the use of materials or financial resources or with respect to other people making it reasonably possible to do what needs to be done.”

Jaques goes on to split authority into ‘authority vested’ and ‘authority earned’.

“Role-vested authority by itself, properly used, should be enough to produce a minimal satisfactory result, by means of [people] doing what they are role bound to do. What it cannot do is to release the full and enthusiastic co-operation of others…

personally earned authority is needed if people are to go along with us, using their full competence in a really willing and enthusiastic way; it carries the difference between a just-good-enough result and an outstanding or even scintillating one.”

In short, managers have to (continually) earn the trust and respect of their people.

(You might like to revisit an earlier post, ‘People and Relationships’, which explained Scholtes’ excellent diagram on trust.)

Responsibility

Let’s suppose that you are at the scene of a traffic accident. If you are on your own you will likely take on the social responsibility of doing the best you can in the circumstances. If others are there (say there is a crowd), you will likely assess whether you have special knowledge that is not already present:

  • Is anyone attending to the injured? If not, what can you do?
  • If first aid is underway, do (you believe that) you know more than they appear to? Can you be of assistance to what they are already doing?
  • If the police are not yet there, what can you do to secure the safety of others, such as warning other traffic?

In such circumstances, nobody carries the authority to call you to account (unless you knowingly do something illegal).

Jaques explains this as the general leadership responsibility and does so to:

“show how deeply leadership notions are embedded in the most general issues of social conscience, social morality, and the general social good.”

“General leadership responsibility must apply even where a person’s role does not carry leadership accountability…[employees] must strive to carry leadership responsibility, even towards their managers, whenever they consider it to be for everyone’s good for them to do so.”

The understanding of the difference between leadership accountability and general leadership responsibility (for the good of society, or a sub-set within) makes clear that it is never a case of “I’m the leader and you’re not.”

Jaques went on to write that:

“The effective and sensible discharge of general leadership responsibility is one sign of a healthy collaborative organisation.”

…and finally, to react to a likely critique:

“You’re so naïve, Steve!”

Many of you reading this post may think me naïve.  You may reply that there are, and will always be, people out there who want to feel the power and ego (self-importance) of being labelled as ‘a leader’…and yet (regardless of their words) don’t actually care about the ‘who, where and why’ of leading. You might cite a large swathe of politicians and senior corporate executives as evidence.

Yep, I’d agree that there will be people out there like this who will ‘play the game’ and work their way into (supposed) ‘leadership’ positions…but I don’t believe that such “I’m a leader!” people are likely to make ‘good leaders’ (in the sense of what Jaques defines as leadership). Sure, they can play the ‘leader’ game, but what really counts is whether a system (such as an organisation, or a community) meaningfully and sustainably moves towards its true purpose, for the good of society.

Senge writes that:

“the term ‘leader’ is generally an assessment made by others. People who are truly leading seem rarely to think of themselves in that way. Their focus is invariably on what needs to be done, the larger system in which they are operating, and the people with whom they are creating – not on themselves as ‘leaders’. Indeed, if it is otherwise, this is probably a problem. For there is always the danger, especially for those [installed into] leadership positions, of becoming ‘heroes in their own minds’.”

In summary:

If there is a need, and a person really cares about the purpose and the people, and they have the means, then they will likely lead well – regardless of their personality type whilst doing so.

Conversely, it doesn’t matter what ‘an amazing person’ someone might (appear to) be if the conditions for ‘leading’ aren’t there.

‘Winning’ at becoming ‘the leader’ shouldn’t be the goal.

Footnotes

1. Elliott Jaques (1917 – 2003) was a Canadian psychoanalyst and organizational psychologist. 

2. Requisite Organisation: Jaques wrote a book called the Requisite Organisation, which puts many of his ideas together. Personally, I find the ideas interesting but ‘of a time’ and/or of a particular ‘hierarchical’ mindset.

3. Seeking out the person best placed to lead: This would be a sign that you cared more about the purpose and the people than leading.

4. Regarding ‘far too much emphasis on personality’: Notice that Jaques says ‘too much’ but he doesn’t say that personality is irrelevant. But, rather than come up with what qualities ‘we’ should have, he turns it the other way around. A managerial leader should have:

“The absence of abnormal temperamental or emotional characteristics that disrupt the ability to work with others.”

This is nice. It presumes that, so long as we aren’t ‘abnormal’, then any of us can lead, given the necessary conditions.

5. Winston Churchill is often used as an example of a great leader – and he was, under certain circumstances…but many historians have written about how this didn’t carry through to every situation (such as running a country in peacetime).

 

Double Trouble

There’s a lovely idea which I’ve known about for some time but which I haven’t yet written about.

The reason for my sluggishness is that the idea sounds so simple…but (as is often the case) there’s a lot more to it. It’s going to ‘mess with my head’ trying to explain – but here goes:

[‘Heads up’: This is one of my long posts]

Learning through feedback

We learn when we (properly) test out a theory, and (appropriately) reflect on what the application of the theory is telling us – i.e. we need to test our beliefs against data.

“Theory by itself teaches nothing. Application by itself teaches nothing. Learning is the result of dynamic interplay between the two.” (Scholtes)

Great. So far, so good.

Single-loop learning vs. Double-loop learning

Chris Argyris (1923 – 2013) clarified that there are two levels to this learning, which he explained through the phrases ‘single-loop’ and ‘double-loop’1.

Here are his definitions to start with:

Single-loop learning: learning that changes strategies of action (i.e. the how) …in ways that leave the values of a theory of action unchanged (i.e. the why)

Double-loop learning: learning that results in a change in the values of theory-in-use (i.e. the why), as well as in its strategies and assumptions (i.e. the how).

That’s a bit of a mouthful – and (with no disrespect meant) not much easier to comprehend when you read his book!2

If you look up ‘double loop learning’ on the wonders of Google Images, you will find dozens of (very similar) diagrams3, showing a visualisation of what Argyris was getting at.

Here’s my version4 of such a diagram:

Double loop 1

You can think about this diagram as it relates either to an individual (e.g. yourself) or at an organisational level (how you all work together).

Start at the box on the left. Whether we like it or not, we (at a given point in time) think in a certain way. This thinking comes about from our current beliefs and assumptions about the world (and, for some, what might lie beyond).

Our thinking guides our actions (what we do), and these actions heavily influence5 our performance (what we get).

And so to the ‘error’ bit:

“Organisations [are] continually engaged in transactions with their environments [and, as such] regularly carry out inquiry that takes the form of detection and correction of error.” (Argyris & Schon)

We are continually observing, and inquiring into, our current outcomes – asking ourselves whether we are ‘on track’, or everything is ‘as we would expect’ or perhaps whether we could do better. Such inquiry might range from:

  • subconscious and unstructured (e.g. just part of daily work); right through to
  • deliberate and formal (such as a major review producing a big fat report).

Argyris labels this constant inquiry as the ‘detection of error’. The error is that we aren’t where we would want to be, and the correction is to do something about this.

Okay, so we’ve detected an error and we want to make a corrective change. The easiest thing to do is to revisit our actions (and the strategies that they are derived from), and assess and develop new action strategies whilst keeping our underlying thinking (our beliefs and assumptions) steadfastly constant. This is ‘single-loop’ learning i.e. new actions, borne from the same thinking.


I reflect that the phrase ‘the more things change, the more they stay the same’ fits nicely here:

If the reason for the ‘error’ is within your thinking, then your single-loop learning, and the resultant change, won’t work. Worse, you will re-observe that error as it ‘comes round again’, and probably quicker this time…and so you make another ‘action’ change….and that error keeps on coming around. You have merely been making changes within the system, rather than changing the system.

A previous post called ‘making a wrong thing righter’ demonstrates this loop through the example of short term incentive schemes, and their constant revision “to make them even better”.


So, the final piece of the diagram…that green line. Many ‘errors’ will only be corrected through inquiry into, and modification of, our thinking…and, if this meaningfully occurs, then this would result in ‘double-loop’ learning – you would have changed the system itself.

Right, so that’s me finished explaining the difference between single-loop and double-loop learning…which I hope is clear and makes sense.

You may now be thinking “great, let’s do double-loop learning from now on!”

…because this is how most (if not all) those Google Image diagrams make it look. I mean, now you know about it, why wouldn’t you?

But you can’t!

The bit that’s missing…

Unfortunately, there’s a wall. Worse still, this wall is (currently) invisible. Here’s the diagram again, but altered accordingly:

Double loop 2

Right, I’d better try and explain that wall. Argyris & Schon wrote that:

“People learn collectively to maintain patterns of thoughts and action that inhibit productive learning.”

What are they on about?

Imagine that, through some form of inquiry, an error (as explained above) has been detected and a team of relevant people commence a conversation to talk about it:

  • The hierarchically senior person begins with a ‘take charge’ attitude (assuming responsibility, being persuasive, appealing to larger pre-existing goals);
    • it is typical within organisations that, once goals have been decided, changing them is seen as a sign of weakness.

  • He/she requests a ‘constructive dialogue’, thereby stifling the expression of negative (yet real) feelings by themselves, and by everyone else involved…and yet acts as if this is not happening;
    • “each person in the group is therefore being asked to suppress their feelings – to experience them privately, censor them from the group, and act as if they are not doing so.”

  • He/she takes a rational approach and asks the group to develop a ‘credible plan’ (which becomes the objective) to respond to the error…and so has skipped the necessary organisational self-reflection for double-loop learning to occur.
    • Coming up with a plan is ‘jumping into solution mode’ before you’ve properly studied the current condition and asked ‘why’.

So how does this affect the group dynamics?

“The participants experience an interest in solving the business problem, but their ways of crafting their conversation, combined with their self-censorship, [will lead] to a dialogue that [is] defensive and self-reinforcing.” (Argyris & Schon)

Given that this approach will hide so much, we can expect lots of private conversations (pre-meetings to prepare for meetings, post-meetings about what was/wasn’t said in meetings, meetings about what meetings aren’t happening…). Does this describe what you sometimes see in your organisation? I think that it is often labelled as ‘politics’…. which would be evidence of that wall.

Taken together, Argyris and Schon label the above as primary inhibitory loops.

Argyris sets out a (non-exhaustive) list of conditions that trigger and, in turn, reinforce, such defensive and dysfunctional behaviour. Here’s the list of conditions, together with how they should be combated:

  • Vagueness → Specify
  • Ambiguity → Clarify
  • Un-test-ability → Make testable
  • Scattered information → Concert (arrange, co-ordinate)
  • Information withheld → Reveal
  • Un-discuss-ability → Make discussable
  • Uncertainty → Inquire
  • Inconsistency/ incompatibility → Resolve

“[such] conditions…trigger defensive reactions…these reactions, in turn, reduce the likelihood that individuals will engage in the kind of organisational inquiry that leads to productive learning outcomes.” (Argyris & Schon)

i.e. If you’ve got defensive behaviour, look for these conditions… and work on correcting them. Otherwise you will remain stuck.

Unfortunately, primary loops lead to secondary inhibitory loops. That is, they lead to second-order consequences, and these become self-reinforcing.

  • Managers begin to (privately) judge their staff poorly, whilst the staff, ahem, ‘return the compliment’, with “both views becoming embedded in the organisational norms that govern relationships between line and staff”;

  • Sensitive issues of inter-group conflict become undiscussable. “Each group sees the other as unmovable, and both see the problem as un-correctable.”
    • A classic example of this is the constant conflict in many organisations between ‘IT’ and ‘The business’.

  • The organisation creates defensive routines “intended to protect individuals from experiencing embarrassment or threat”, with the unintended side effect that this then prevents “the identification of the causes of the embarrassment or threat in order to correct the relevant problems.”

From this we get organisational messages that:

  • are inconsistent (in themselves and/or with other messages);
  • act as if there is no inconsistency; and
  • make the inconsistency undiscussable.

“The message is made undiscussable by the very naturalness with which it is delivered and by the absence of any invitation or disposition to inquire about it.”

Do you receive regular messages from, say, those ‘above you’ in the hierarchy? Perhaps a weekly or monthly Senior Manager communication?

  • How often are you amazed (in an incredulous way) about what they have written or said?
  • Do you feel welcome to point this inconsistency out? Probably not.

We end up with people giving others advice to reinforce the status quo: ‘Be careful what you say’, ‘You’ll get yourself into trouble’, ‘I wouldn’t say that if I were you’, ‘Remember what happened last time’…etc.

In short, there are powerful forces* at work in most organisations that are preventing (or at least seriously impeding) productive learning from taking place, despite the ability and intrinsic desire of those within the organisation to do so.

(* Note: Budgets – as in fixed performance contracts – are a classic ‘single-loop reinforcing’ management instrument. Conversely, Rolling forecasts can be a ‘double-loop’ enabler.)

So what to do instead?

Right, here’s my third (and last) diagram:

 

Double loop 3

It looks very similar to the last diagram, but this time there’s a ladder! But where do we get one of those from?

“For double-loop learning to occur and persist at any level in the organisation, the self-fuelling processes must be interrupted. In order to interrupt these processes, individual theories-in-use [how we think] must be altered.” (Argyris & Schon)

Oooh, exciting stuff! They go on to write:

“An organisation with a [defensive] learning system is highly unlikely to learn to alter its governing variables, norms and assumptions [i.e. thinking] because this would require organisational inquiry into double-loop issues, and [defensive] systems militate against this…we will have to create a new learning system as a rare event.”

There are two places to go from here:

  • What would a productive learning system look like? and
  • How might we jolt the system to see the wall, and then attempt to climb the ladder?

If I can begin to tease these two out, then BINGO, this blog post is ready for print. Right, nearly there…

A Productive learning system

Argyris and Schon identify three values necessary for a productive learning system:

  • Valid information;
  • Free and informed choice; and
  • Internal commitment to the choice, including constant monitoring of its implementation.

Sounds lovely…but such a learning system requires the fundamental altering of conventional social virtues that have been taught to us since early in our lives. The following ‘compares and contrasts’ the conventional with the productive, virtue by virtue:

  • Help and support – instead of giving approval and praise to others, and protecting their feelings, work towards increasing others’ capacity to confront their own ideas, and to face what they might find.

  • Respect for others – instead of deferring to others, and avoiding confronting their actions and reasoning, work towards attributing to others the capacity for self-reflection and self-examination.

  • Strength – instead of advocating your position in order to ‘win’, and holding firm in the face of advocacy, work towards advocating your position whilst encouraging inquiry of it and self-reflection.

  • Honesty – instead of not telling lies, or (the opposite) telling others all you think and feel, work towards encouraging yourself and others to reveal what they know yet fear to say, minimising distortion and cover-up.

  • Integrity – instead of sticking to your principles, values and beliefs, work towards advocating them in a way that invites enquiry into them, and encouraging others to do likewise.

There’s a HUGE difference between the two.

The consequences will be an enhancement of the conditions necessary for double-loop learning – with current thinking being surfaced, publicly confronted, tested and restructured – and therefore increasing long-term effectiveness.

You’d likely liberate6 a bunch of great people, and create a purpose-seeking organisation.

Intervention

The first task is for you to see yourself – you have to become aware of the wall…and Argyris & Schon are suggesting that you may (likely) require an intervention (a shake) to do this. Your current defensive learning system is getting in the way.

Let’s be clear on what would make a successful intervention possible, and what would not.

An interventionist would locate themselves in your system and help you (properly) see yourselves…and coach you through contemplating what you see and the new questions that you are now asking…and facilitate you through experimenting with your new thinking and making this the ‘new normal’. This is ‘action learning’.

This ‘new normal’ isn’t version 2 of your current system. It would be a different type of system – one that thinks differently.

Conversely, you will not change the nature of your system if you attempt to ‘get someone in to do it to you’.

Why not?

“Kurt Lewin pointed out many years ago that people are more likely to accept and act on research findings if they helped to design the research, and participate in the gathering and analysis of data.

The method he evolved was that of involving his subjects as active, inquiring participants in the conduct of social experiments about themselves.” (Argyris & Schon)

In short: It can’t be done to you.

That ladder? That would be a skilled interventionist, helping you see and change yourselves through ‘action learning’.

To Close

Next time someone shows you that lovely (as in ‘simple’) double-loop learning diagram, I hope you can tell them about the wall…and the ladder.

Footnotes

1. Chris Argyris is known as one of the co-founders of ‘Organisation Development’ (OD) – the study of successful organizational change and performance. Argyris notes that he borrowed the distinction between single-loop and double-loop from the work of W. Ross Ashby. For blog readers, we met Ashby in an earlier post on requisite variety.

2. Book: ‘Organisational Learning II: Theory, Method, and Practice’ (1996) by Chris Argyris and Donald A. Schon.

3. Diagrams: Many of the diagrams stay true to what Argyris wrote about. Some attempt to build upon it. Others (in my view) bastardise it completely!

4. Language: I should note that Argyris used different language to my diagram. Here’s a table that compares:

My diagram → Argyris and Schon:

  • Thinking (our beliefs and assumptions) → Values, norms and assumptions
  • Action → Action strategies
  • Performance → Performance, effectiveness
  • Defensive learning system → Model O – I
  • Productive learning system → Model O – II

5. Influence: I haven’t used the bolder ‘cause’ word because there’s a lot going on that is outside the system (e.g. the external environment).

6. Liberate: You don’t need to bring in ‘new’ people; most of the people you need are already with you – they just need liberating from the system that they work within.

7. Kurt Lewin: often referred to as ‘the founder of social psychology’. Much of my writing in this blog is based around Lewin’s equation, B = ƒ(P, E), or, in plain English, that behaviour is a function of the person in their environment.

The Seeker

To seek: search for, attempt to find something.

Seeker: as in ‘a tenacious seeker of the truth’ or ‘a tireless seeker of justice’

Seeking is very different to conventional management.

Conventional Management

Conventional management constantly defines up front the ‘what’ and the ‘how’…and then toils to achieve the espoused ‘strategy/ plan/ target operating model…blah, blah, blah’ through pulling levers of (supposed) control.

It is about:

  • being (seen to be) certain of yourself;
  • having ‘an opinion’ and knowing ‘the answer’;
  • retaining confidence (at least outwardly); and
  • forcing through the barriers in your way (rather than contemplating why they are there).

Their drive is to be able to assert (whether through fear or power…or a combination of both) that they have conquered what they defined up front….and then repeat the (single-feedback) loop….and on and on.

Seeking

A seeker has a deep-rooted resolve, but doesn’t know where they will be going – and is okay with this. Their journey will be ever changing (dynamic). Note the use of the words tenacious and tireless in the definitions above.

Their drive is to explore what is before them, whilst always seeking their ‘true north’. This will lead them on many collaborative adventures, much learning and growth and a constantly regenerating desire. They will continually question, and change, themselves (double-loop).

A purpose-seeking organisation

I love this phrase. For me, it says so much.

Breaking it down into its parts:

  • Purpose: a clear, meaningful, ongoing endeavour – the fundamental reason why our system should exist (which will never be ‘to make money’);

  • Seeking: as above. Not a destination, but an ongoing quest;

  • Organisation: everyone, joined together. About how we all interact, not how we act taken separately.

A purpose-seeking organisation can do amazing things (that others wouldn’t dare put into a plan) and can sustain and reinvent itself. It will possess that most treasured of desirable system properties – self-organisation.

So what?

There is a gulf between conventional and purpose-seeking organisations…and much to do to bridge the gap.

But the first step is for those ‘in positions of power’ to see the gap. You can then question why it exists. If you rush into changing something before you have properly seen and questioned, then you will remain stuck in the same conventional loop.

Are you a seeker?

Footnote:

This short post comes about from re-reading my notes on ‘Organisational Learning’ (Chris Argyris).

I recognise that it might be a bit philosophical (bullshit?) for some. I have a couple of follow-up posts in mind that are perhaps a bit more practical 🙂

Memo to ‘Top Management’ – Subject: Engine Technology

I’ve just been searching for a post that is hugely relevant to a recent conversation, and have found that it was an old piece that didn’t get published onto this blog…so here it is:

“Management thinking affects business performance just as an engine affects the performance of an aircraft. Internal combustion and jet propulsion are two technologies for converting fuel into power to drive an aircraft.

New recipes for internal combustion can improve the performance of a propeller-driven airplane, but jet propulsion technology raises total performance to levels that internal combustion cannot achieve. So it is with management thinking.

Competitive businesses require jet (even rocket!) management principles. Unfortunately, internal combustion principles still power almost all management thinking.” (H. Thomas Johnson)

And so Johnson nicely compares and contrasts the decades old ‘command and control’ management system with a new ‘systems thinking’ way.

Let’s take incentives as an important example:

You report to a manager, who reports to a manager, who…etc. You have ‘negotiated’ some cascaded objectives and you will be rated and then rewarded on your ‘performance’ in meeting them. Sound familiar?

Here are the fundamental problems with this arrangement:

  • You will tell your manager what you think he/she wants to hear, and provide tailored evidence that supports this, whilst suppressing that which does not;

  • If you are ‘brave’ and tell your manager something that they might not like, you will do so very very carefully, like ‘walking on eggshells’…and, in so doing, likely de-power (i.e. remove the necessary clout from) the message;

  • You realise that it’s virtually suicidal to ‘go above them’ and tell your manager’s manager the ‘brave’ thing that they should hear…because you fear (with good reason) that this will most likely ‘come back to bite you’ at your judgement time (when the carrots are being handed out);

  • You are locked into a hierarchy that is reliant on a game of ‘Chinese whispers’ up the chain of command, with each whisperer finessing (or blocking) the message to assist in the rating of their own individual performance;

  • Each layer of management is shielded (by their own mechanism) from hearing the raw truth and, as such, they engineer that they ‘hear what they like, and like what they hear’.

…and therefore this system, whilst fully functioning, is perpetually impotent! It has disabled itself from finding out what it really needs to know.

“Hierarchies don’t like bad news…. bad news does not travel easily up organisations” (John Seddon)

If you’ve been in such a system and HAVE broken one of the rules above through your passion to make a real difference for the good of the organisation you work for (or perhaps worked!), then you’ve probably got some scars to show for it.

If you’ve always played it safe, then this is probably because you’ve seen what happens to the others!

The ‘Bottom line’ for ‘Top Management’:

If you want to transform your organisation, change ‘engine technology’! Tinkering with your existing one is simply not going to work.

  • Managers should not be rating the performance of individuals. Rather, they should understand what the system is preventing the individual from achieving…and then work with them to change that system to release their untapped potential;

  • Managers should not be incentivising individuals to comply. Rather, they should be sharing the success of the organisation with them. (These are very different things!)

Neither of these fundamental changes is in the gift of ‘middle management’ – they belong to those that determine the management system.

… and so, if (and this is a big ‘if’) ‘top management’ want to know the raw truth (‘warts and all’) they must constantly remove, and guard against, system conditions (e.g. incentives, performance ratings) that would prevent the truth from surfacing easily and quickly.

Afterthought, to counter a likely retort from ‘Top management’:

I have often (professionally) provided well-intended feedback to ‘management’ as to what’s actually ‘happening out there’, particularly when I believe that they may not be aware of it. Many an Executive has derived great worth from this feedback (and thanked me accordingly).

This isn’t saying that I’m always right, or that I know everything. Obviously I’m not, and I don’t. But I do know what I see and hear.

However, there has been a subset of deeply command-and-control executives who confidently respond with “no Steve, you are wrong – that’s not the case at all. My people tell me exactly what’s happening…and there’s no problem here”.

I find this interesting (sometimes amusing, but mostly disappointing).

A manager can never be sure that people are being totally open and honest with them…but they can constantly look for, and understand, what mechanisms and practices would put this desired feedback in doubt or at risk….and then tirelessly work to remove these system conditions, for the good of all.

Footnote: I wrote this post before I wrote ‘Your Money or your Life!’…which considers the question as to whether ‘Top management’ in large corporates CAN change.

“Citizens face many front doors…”

Governments all over the world want to get the most out of the money they spend on public services – for the benefit of the citizens requiring the services, and the taxpayers footing the bill.

Government officials regularly devise initiatives, and even new departments, aimed at getting their myriad of agencies to work better together.

However, looking at this from the outside, the media regularly uncover seemingly daft (and sometimes tragic) instances where government agencies have failed to effectively act, connect and co-operate with each other. In such instances, each agency appears to ‘the person on the street’ to have been wearing blinkers with their ‘common sense’ radars turned to ‘exceedingly low’.

But is it right to lay blame on the agencies or, worse, the people acting within them? In the majority of cases, I’d suggest that the answer would clearly be ‘no’. We should be looking at the bigger ‘whole of public service’ system that they are designed to operate within.

A phrase was coined some years back: ‘joined-up government’. The Oxford dictionary defines it as:

“A method of government characterized by effective communication between different departments and co-ordination of policies.”

When a dictionary defines a word, it usually provides the reader with an example sentence showing its proper usage. In this instance, the first example sentence given is a negative one, as in:

“There is an obvious lack of joined-up government here” (Oxford Dictionary)

i.e. Governments openly recognise that there is a big problem (a lack of togetherness)…and that they would love to ‘solve’ it…but it’s regularly in the ‘too hard basket’!

The purpose of this post is to share (what is to me) an important (and very well presented) 30 min. video by Jeremy Cox1: Budget Management and People Centred Services that nicely explains, by way of reference to a real case study, the ‘multi agency’ problem and how to go about changing it.

If you are interested (particularly if you work within the public sector) then I’d expect that watching it should be a worthwhile (and thought provoking) use of your time.


Right…if you’ve got to here then I’ll assume that you’ve watched the video…the rest of this post pulls out (what I believe to be) key things said by Jeremy Cox in his presentation (blue italics below) and my ‘wrap-around’ narrative.

Note: What follows is incomplete and not a substitute for watching the video. It’s just an aide-memoire so that I (and you) don’t have to watch the video every time to pull out the key points or discuss it with our colleagues.


Jeremy Cox starts at a summary level by walking us through “four critical steps”:

1. The first thing to do is to study your system…and, just to be crystal clear, YOU (those responsible for the system) have to study it, and do so WITH those who operate it. A consultant cannot do this for (i.e. to) you2.

“You have to go and study because if you see it with your own eyes, you can’t deny it. If someone ‘tells you’, then you can ‘rationalise’ it away quite easily.”

2. From studying your system, you can then see and understand the effects of (supposed) ‘controls’ on its performance.

3. Only when we understand (at a root cause) WHY the system operates as it does, should we redesign…because then, and only then, is such a redesign based on meaningful evidence…as opposed to the usual ‘conventional wisdom’ or ‘current in-vogue ideology’;

and finally:

4. Devise new measures, and move to a new model of leadership.

Cox then goes into each step in some detail.

Going back to Step 1: Cox talks about studying demand.

He takes us through a case study of a real person in need, and their interactions with multiple organisations (many ‘front doors’) and how the traditional way of thinking seriously fails them and, as an aside, costs the full system a fortune.

“Understand demand in context…don’t understand people from the point of view of your organisation, understand the person and what matters to them about living a better life.”

The case study is sad…and yet not really a surprise – we all kind of know that it’s true. It shows the huge power of following some cases around the full system.

In explaining Step 2, Cox opens up the madness within siloed (i.e. single department) thinking, which is driven by their ‘budgetary controls’.

He identifies three survival principles in play, and the anti-systemic controls that result:

a) “We must prioritise [our] services for the most in need” which leads to attempts to stop entry into the service, and then the requirement to break through escalating thresholds of eligibility.

Such ‘screening out’ logic creates the following madness: “Your case isn’t serious enough yet…go away until things get worse!”

b) “We must stick to what we do” which leads to “I can see that you need A and B for you to get better…but, here, we only do A.”

Cox gives a real example of an alcoholic with depression being turned away by mental health practitioners because “we don’t work on alcoholism – you need to solve that first and then come back with your depression”. We can predict that such unhelpfulness will lead the needy citizen towards a rather large drink!

c) “We must limit service delivery” which leads to attempts at closing cases, doing things on the cheap, and setting time limits…all of which are about pushing things through at the expense of the needy citizen…which will lead to failure demand (probably popping up unexpectedly in another department…and therefore not seen as linked).

The redesign at Step 3 requires different principles.

Cox makes the obvious point that the actual redesign can’t be explained up front because, well…how can it be? You haven’t studied your system yet!

…but, generally, it is likely that “genuinely integrated, local-by-default problem solving teams will emerge from [following the steps]”.

A clarification: ‘Genuinely integrated’ doesn’t mean a multi-disciplined shared building where people regularly come together for, say, case review meetings…and then go back to their ‘corners’ and work to their existing (i.e. competing) policies and procedures.

A nice test from Cox:

“How do you know a team is genuinely integrated rather than co-located?…All you have to do is look in the fridge – nobody’s written their department’s name on the milk!”

And so to Step 4: New measures and new leadership

[Once you’ve successfully redesigned the system] “The primary focus is on having really good citizen-focused measures: ‘are you improving’, ‘are you getting better’, ‘is the demand that you’re placing reducing over time’.”

Notice that these measures are about the purpose of the system (i.e. for the citizen), and NOT about the activities performed within the system. It’s not about the volumes of calls taken or visits performed or payments made or cases closed or…[carry on naming activities].

“You have to shift leaders from managing the budget top-down to adding value to the process of studying, and improving outcomes for individuals.”

The point here is that you are never done. The outcomes from a redesign can radically shift performance, but you’ll quickly be ‘back at square one’ if you haven’t grasped the WHY and don’t ‘kick on’ to yet more learning, and yet more improvement – becoming better every day – for the good of citizens, and (importantly) for the pride of your employees.
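
To make the distinction concrete, here’s a rough sketch in Python (the data and field names are entirely made up): counting activity is easy within one silo, whereas the citizen-focused measure – is the demand each person places on the whole system reducing over time? – only exists if you look across all the ‘front doors’ at once.

    from collections import defaultdict
    from datetime import date

    # One record per contact a citizen makes with any public-service 'front door'
    contacts = [
        {"citizen": "A", "date": date(2017, 1, 5)},
        {"citizen": "A", "date": date(2017, 2, 20)},
        {"citizen": "A", "date": date(2017, 6, 1)},
        {"citizen": "B", "date": date(2017, 3, 14)},
    ]

    # Activity measure (what a silo typically reports): volume of contacts handled
    print("Contacts handled:", len(contacts))

    # Purpose measure (about the citizen): is each person's demand reducing over time?
    by_citizen = defaultdict(list)
    for c in contacts:
        by_citizen[c["citizen"]].append(c["date"])

    for citizen, dates in sorted(by_citizen.items()):
        dates.sort()
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        trend = ("gaps lengthening - demand reducing"
                 if len(gaps) > 1 and gaps == sorted(gaps)
                 else "no clear reduction yet")
        print(f"Citizen {citizen}: days between contacts {gaps} -> {trend}")

The code isn’t the point – the point is that the second measure is about the citizen and the whole system, whilst the first is merely about activity inside one department.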

To close

What’s most interesting to me from the video is the graphic explanation of one unit of demand – a needy citizen in a really shitty situation – being bounced around, presenting at public service ‘front doors’ in multiple and seemingly unrelated ‘cases’, with each agency doing what they can but not what is required…and the needy citizen slipping ever further into their personal quagmire.

“We limit what we do to ‘what we do’, not to what the person needs.”

Cox makes the hugely important point that, once you open your mind, then the study and redesign of the work is relatively easy. The hard bit is re-conceiving the ‘system of management’. This takes real leadership and (perhaps most importantly) self-development.

Cox closes with the following comment:

“Some of the most rewarding work that I have ever done is just working with these integrated teams who are out…on the ground, with good leadership, learning how to solve problems for citizens. You actually see people’s lives turned around and people who otherwise would have been dead who are now still alive.”

This is powerful stuff! There can’t be much more meaning to anyone’s working life than that.

Footnotes:

1. The video covers one session within a ‘Beyond Budgeting’ event run by Vanguard Consulting over in the UK. The first 3 mins. is an introduction from John Seddon, and then Jeremy Cox (a Vanguard consultant) presents the rest.

Note: Cox refers to names of UK government departments (e.g. The DWP). If you live elsewhere in the world then you are likely to have similar agencies, just with different names.

2. A consultant cannot do it for you: I should clarify that an experienced ‘systems thinking’ coach CAN facilitate you through studying your system and its redesign….BUT they aren’t ‘doing it’ – you are!

I have a post with the ink half dry that explains and expands this point called ‘Smoke and Mirrors’. I guess I should get on and finish it now.

3. The NZ government is setting up a Social Investment Agency. Its focus is “fundamentally about changing the lives of the most vulnerable New Zealanders by focusing on individuals and families, understanding their needs better, and doing more of what is most likely to give the best results”. I like the intent. I hope that those involved watch (or have already watched) the Jeremy Cox video, and consider the messages within.

Roar!

For those rugby fans among you – and virtually every New Zealander – the British and Irish Lions touched down in Auckland this afternoon.

They are here to play ten (daunting) games, including matches against all five NZ Super Rugby franchises and three tests against the All Blacks. I can hardly wait!

A Lions tour to NZ is special. It now only happens every twelve years….and the Lions have only ever won one series, way back in 1971. It’s going to be a tough gig.

I’ve recently been getting into the mood by listening to interviews with various Lions from past tours. Much of the material on offer understandably focuses on the last NZ tour, back in 2005 (when the Lions got well and truly thumped) and what went wrong….and how on earth can they win this time round.

One interview stood out to me – Matt Dawson with Sir Ian McGeechan1.

(I should explain, for those that don’t know, that ‘Geech’ is perhaps the most successful Lions Head Coach there has ever been).

Dawson was asking Geech about an incredibly tricky task – the process of selection (i.e. which players from the ‘squad of four nations’2 would get to play in a test).

Sir Ian explained that he would sit down with his team of coaches (perhaps five people) and work through all the analysis and then discuss, often for hours deep into the night. He provided this wonderful insight:

“I’ve never voted in picking a test team, [I’ve] always talked it through until we get to what we want to see and are comfortable with.”

He doesn’t even mention that, as Head Coach, he had the power to force his views through (i.e. not even go to a vote)…because that’s not how he thinks.

I love the fact that (when he was the Head Coach) they never voted!

This fits really well with a few of my earlier posts:

Talk-back radio which has a dig at people using their opinions;

“What I think is…” which talks about moving from opinions to knowledge; and

Catch-ball which talks about moving from the (predictably) divisive process of ‘consultation’, to the inclusive process of ‘catch-ball’.

If you’re reading the above and you are a ‘tough’, ‘command and control’, ‘conventional wisdom’ type of person, then:

  • you may judge me (and Geech) to be weak; and
  • you may argue that talking it through would take forever to make any decisions.

Yes, it takes a great deal of effort to reach a consensus…but that’s the point – it requires you to actually invest in those around you, to listen to them, to test your own thinking, to draw out theirs, to connect, to understand, to appreciate, to grow…and to make monumentally better decisions, for the longer term, together, towards your shared purpose.

Footnotes

1. Sir Ian McGeechan (‘Geech’) is perhaps the most respected/revered/loved Lion ever. He played for the Lions in 1974 and 1977 and then coached them in 1989, 1993, 1997 and then again in 2009.

He also coached the ‘mid-week massive’ during the 2005 tour of New Zealand whilst Sir Clive Woodward was Head Coach. Woodward (in my view) is a very different man to Geech.  Sir Clive ‘decided’ things, and often wouldn’t budge in spite of the advice being offered to him….which didn’t turn out too well.

2. The Lions are made up of the very best players from each of England, Scotland, Wales and Ireland.

3. For long-term blog readers, you may recall from an earlier post that I would be torn as to which team I would be supporting. I have the good fortune to be going to the 3rd test in Auckland on 8th July with my oldest son, and with some great mates (thanks Jonesy!)

Let’s just say that I will be wearing red, and my son will be wearing black – which I think fits rather nicely with our past and our future.

4. As a bonus for reading this far 🙂 , here’s another nice ‘Geech’ quote to ponder regarding selecting the right people:

“It’s what’s happening off the ball that you watch….I spent as much time watching players off the ball as I did on the ball…Who’s putting themselves into the game? What’s happening off the ball? Who’s stepping up trying to make a difference when the team are under the cosh?”

Polishing a Turd

When I was growing up, I remember my dad (a Physicist) telling me that it was pointless, and in fact meaningless, to be overly precise with an estimate: if you’ve worked out a calculation using a number of assumptions, there’s no point in writing the answer to 3 decimal places! He would say that my ‘precise’ answer would be wrong because it is misleading. The reader needs to know about the possible range of answers – i.e. about the uncertainty – so that they don’t run off thinking that it is exact.
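
For illustration only (these numbers are mine, not his), here’s the idea in a few lines of Python – carry the range of each assumption through the calculation, rather than quoting one suspiciously precise figure:

    # Two assumptions, each known only as a range
    volume_low, volume_high = 900, 1_100   # units sold
    price_low, price_high = 45.0, 55.0     # price per unit ($)

    revenue_low = volume_low * price_low
    revenue_high = volume_high * price_high

    print(f"Revenue estimate: ${revenue_low:,.0f} to ${revenue_high:,.0f}")
    # 'Somewhere between $40,500 and $60,500' is honest;
    # '$50,187.314' would be precise...and misleading.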

And so, with that introduction (and flashback to my school days) this post is about the regular comedy surrounding business cases, and detailed up-front planning…and what to do instead.

A seriously important concept to start with:

The Planning fallacy

Human beings who desire something to be ‘a success’ (e.g. many an Executive/ Senior Manager) tend to:

“make decisions based on delusional optimism rather than on a rational weighting of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or deliver the expected returns – or even to be completed.” (Daniel Kahneman)

This isn’t calling such individuals ‘bad people’, or even to suggest that their actions are in some way deliberate – it is simply to call out a well-known human irrationality: the planning fallacy.

We all ‘suffer from’, and would be wise to understand and guard against, it.

I’ve worked (or is that wasted time) on many a ‘detailed business case’ over the years. There is an all-too-common pattern….

“Can you just tweak that figure till it looks good…”

Let’s say that someone in senior management (we’ll call her Theresa) wants to carry out a major organisational change that (the salesman said) will change the world as we know it!

Theresa needs permission (e.g. from the board) to make a rather large investment decision. The board want certainty as to what will happen if they sign the cheque – there’s the first problem1.

Theresa looks around for someone who can write a great story, including convincing calculations…and finds YOU.

Yep, you are now the lucky ‘spreadsheet jockey’ on this proposed (ahem) ‘transformation programme’.

You gather all sorts of data, but mainly around the following:

  • a ‘base case’ (i.e. where we are now, and what might happen if we took the ‘do nothing’ option);
  • a list of ‘improvements’ that will (supposedly) occur if the board says ‘Yes’;
  • assumptions relating to the potential costs and benefits (including their size and how/when the cash will flow); and
  • some ‘financial extras’ used to wrap up the above (interest rates, currency rates, taxes, the cost of capital…and so on)

You create an initial broad-brush model and then, after gaining feedback from ‘key’ people, you work through a number of drafts – adding in new features and much detail that they insist is essential.

And voila! We have a beautifully crafted financial model that has a box at the end with ‘the answer’ in it2.

You show the model to Theresa.

Wow, she’s impressed with the work you’ve put in (over many weeks) and how sophisticated the model is…but she doesn’t like this initial answer. She’s disappointed – it’s not what she was looking for.

You go through all of the assumptions together. Theresa has some suggestions:

  • “I reckon the ‘base case’ comparison will be worse than that…let’s tweak it a bit”
  • “Our turnover should go up by more than that…let’s tweak it a bit”
  • “Nah, there won’t be such a negative productivity hit during implementation – the ‘learning curve’ will be much steeper!…let’s tweak it a bit”
  • “We’ll save more money than that…and avoid paying that…let’s tweak it a bit”
  • “Those savings should kick in much earlier than that…let’s tweak it a bit”
  • “We’ll be able to delay those costs a bit more than that…let’s tweak it a bit”

…and, one of my favourites:

“Mmm, the ‘time value of money’3 makes those upfront costs large compared to the benefits coming later…why don’t we extend the model out for another 5 years?”

And, because you designed a nice flexible model, all of the above ‘suggestions’ are relatively easily tweaked to flow through to the magic ‘answer’ cell…

“now THAT looks more healthy! The board is going to LOVE this. Gosh, this is going to be such a success”.

Some reflections

Some (and perhaps all) of the tweaks might have logic to them…but for every assumption being made (supposedly) tighter:

  • one, or many, of the basic assumptions might be spectacularly wrong;
  • plenty of the assumptions are being (conveniently4) ignored for tweaking…and could equally be ‘tightened’ in the other direction (i.e. making the business case look far worse); and
  • there are many assumptions that are completely missing…because you simply don’t know about them….yet…or don’t want to know about them.

With any and every tweak made, nothing has actually changed: Nothing has been learned about what can and will actually occur. You have been ‘polishing a turd’…but, sadly, that’s not how those around you see it. Your model presents a highly convincing and desirable story.
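
If you’d like to see just how little ‘tweaking’ it takes, here’s a toy version of such a model in Python – every number, and the 10% discount rate, is invented purely for illustration:

    # Net present value of yearly cashflows (year 0 first)
    def npv(rate, cashflows):
        return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

    rate = 0.10  # cost of capital (WACC) handed down by the finance department

    # First honest draft: a big upfront cost, modest benefits over 5 years
    draft = [-5_000_000] + [1_000_000] * 5
    print(f"Draft NPV:   ${npv(rate, draft):,.0f}")    # negative...'disappointing'

    # After the 'tweaks': bigger benefits, and the model extended out another 5 years
    tweaked = [-5_000_000] + [1_400_000] * 10
    print(f"Tweaked NPV: ${npv(rate, tweaked):,.0f}")  # positive...'the board will LOVE this'

The ‘answer’ cell moved a long way between those two runs; reality didn’t move at all.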

Going back, your first high-level draft model was probably more useful! It left many ‘as-yet-unknowns’, it contained ranges of outcomes, it provided food-for-thought rather than delusional certainty.

We should reflect that “adding more upfront planning…tends to make the eventual outcome worse, not better” (Lean Enterprise). The more detailed you get, the more reliant you become on those assumptions.

The repercussions

Theresa gains approval from the board for her grand plan and now cascades the (ahem) ‘realisation of benefits’ down to her direct reports…who protest that the desired outcomes are optimistic at best, and sheer madness at worst (though they hold their tongues on this last bit).

Some of the assumptions have already proven to be incorrect – as should be expected – but it’s too late: the board approved it.

The plan is baked into cascaded KPIs…and everyone retreats into their silos, to force their part through regardless of the harm being caused.

But here’s the thing:

“Whether the project ‘succeeds’ according to [the original plan] is irrelevant and insignificant when compared to whether we actually created value for customers and for our organisations.” (Lean Enterprise)

The wider point…and what to do instead

It’s not just financial models within business cases – it is ‘detailed up-front’ planning in general: the idea that we should create a highly detailed plan before making a decision (usually by hierarchical committee) as to whether to proceed on a major investment.

The Lean Start-up movement, led by Eric Ries, makes a great case for a totally different way of thinking:

  • assumptions aren’t true! (it seems daft to be writing that…but the existence of the planning fallacy requires me to do so);
  • we should test big assumptions as quickly as possible;
  • such testing can be done through small scale experimentation (which doesn’t require huge investment) and subsequent (open-minded) reflection;
  • we will learn important things…which we did not (and probably could not) predict through detailed up-front planning. This is a seriously good thing – we can save much time, money and pain, and create real customer value;
  • we may (and often will) find a huge flaw in our original thinking…which will enable us to ‘pivot’5 to some new hypothesis, and re-orientate us towards our customer purpose.

The big idea to get across is what has been termed ‘validated learning’.

Learning comes from actually trying things out on, and gaining direct feedback from, the end customers (or patients, citizens, employees etc.), rather than relying on our opinions about them.

Validated is about demonstrating what the customer (or patient, citizen, employee etc.) actually does (or doesn’t do), not what they say they would do when asked (i.e. from external market research or internal survey). It is to observe and measure real behaviours, rather than analyse responses to hypothetical questions.

…and to do the above rapidly by experimenting with ‘minimum viable products’ (MVPs).

Delay (whilst writing a beautiful document, getting it approved, and then building a seemingly golden ‘solution’) prevents the necessary feedback from getting through.
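
Here’s a rough sketch of what testing one big assumption quickly might look like (the assumption, the cohort size and the 60% threshold are all invented): offer a minimum viable version to a small group, measure what they actually do, and let that decide whether you persevere or pivot.

    assumption = "at least 60% of customers offered self-service will actually use it"
    print("Testing:", assumption)

    offered = 50        # small cohort given the minimum viable version
    actually_used = 14  # observed behaviour - not survey answers

    observed_rate = actually_used / offered
    print(f"Assumed >= 60%; observed {observed_rate:.0%} of {offered} customers")

    if observed_rate >= 0.60:
        print("Assumption holds so far -> persevere and widen the experiment")
    else:
        print("Assumption doesn't hold -> pivot: rethink the hypothesis before investing further")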

Caveat: Many an organisation has read ‘The Lean Startup’ book (or employed a consultant who has) and is using the above logic merely at the start of their legacy ‘investment decision’ process…but, through grafting new labels (such as Lean) onto old methods and retaining central hierarchical approval committees, their process remains essentially the same.

You don’t do validated learning merely at the start of an investment process – you re-imagine what ‘making investments’ means!

“It’s moving leaders from playing Caesar with their thumbs up and down on every idea to – instead – putting in the culture and the systems so that teams can move and innovate at the speed of the experimentation system.”

“The focus of each team is iterating with customers as rapidly as possible, running experiments, and then using validated learning to make real-time investment decisions about what to work on.” (Eric Ries)

Notice that it is the team that is making the investment decisions as they go along. They are not deferring to some higher body for ‘permission’. This is made possible when:

  • the purpose of the team is clear and meaningful (i.e. based around a service or value stream);
  • they have meaningful capability measures to work with (i.e. truly knowing how they are doing against their purpose); and
  • all extrinsic motivators have been removed…so that they can focus, collaborate and gain a real sense of worth in their collective work.

Nothing new here

You might read the above and shout out:

  • “but this is just the scientific method”; or
  • “it’s yet another re-writing of the ‘Plan – Do – Study – Act’6 way of working”

…and you’d be right.

Eric Ries’ thinking came about directly from studying Deming, Toyota, etc., and then applying the learning to his world of entrepreneurship – becoming effective when investing time and money.

His book, ‘The Lean Startup’, and the ‘validated learning’ concept are an excellent addition to the existing body of work on experimentation towards purpose.

Footnotes

1. We should never present a seemingly certain picture to a board (or merely hide the caveats in footnotes)…and we should coach them to be suspicious if they see one.

2. For the financially aware: this will likely be a net present value (NPV) figure using a cost of capital (WACC) provided by the finance department, or some financial governance body.

3. The ‘time value of money’ reflects the fact that $1 now is worth more to you than $1 in a year’s time.
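
For illustration (my numbers): at a 10% discount rate, $1 due in a year is worth $1/1.10 ≈ $0.91 today, and $1 due in ten years only $1/1.10^10 ≈ $0.39 – which is exactly why pulling benefits earlier, pushing costs later, or extending the model out flatters the answer.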

4. Conveniently doesn’t mean intentionally or maliciously – it can just be that lovely planning fallacy at work.

5. Pivot: This word has become trendy in many a management conversation but I think that its original (i.e. intended) meaning is excellent (as used by Eric Ries, and his mentor Steve Blank).

Eric Ries defines a pivot as “a structured course correction designed to test a new fundamental hypothesis….”

6. PDSA: Popularised by Deming, who learned it from his mentor, Walter Shewhart. A method of iterative experimentation towards your purpose, where the path is discovered as you go, rather than attempted to be planned at the start. Note that, whilst the first step is ‘Plan’, this DOESN’T mean detailed up-front planning of an answer – it simply means properly planning the next experiment (e.g. what you are going to do, how you are going to conduct it, and how you are going to meaningfully measure it).
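
A rough sketch of a single cycle (every name and number below is mine, just to show the shape):

    # Plan: the next experiment - what we'll do, how we'll conduct it, how we'll measure it
    plan = {
        "change":  "offer a call-back instead of a hold queue",
        "how":     "one team, two weeks, real callers",
        "measure": "proportion of callers who abandon",
        "predict": 0.10,
    }

    observed = 0.07  # Do: run it, and record what actually happened

    # Study: compare observation with prediction, in context
    print(f"Predicted {plan['predict']:.0%} abandonment; observed {observed:.0%}")

    # Act: adopt, adapt or abandon...and let the learning shape the next Plan
    print("Act:", "adopt and extend" if observed <= plan["predict"] else "adapt the design and run again")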