“If scientific reasoning were limited to the logical processes of arithmetic, we should not get very far in our understanding of the physical world. One might as well attempt to grasp the game of poker entirely by the use of the mathematics of probability.” Vannevar Bush, Founder of the US National Science Foundation.
We ended our February blog with the question “So, what can we do if we must make decisions regarding wicked problems but cannot use deduction?” Wicked problems are intractable, interconnected problems with unclear cause and effect connections – see January’s blog for details of how wicked problems are defined. In this blog I will suggest that not being able to use deduction does not mean we cannot use reason and deliberation.
We can measure factors or variables as the basis for decision making because complex adaptive systems have ‘basins of stability’ – as introduced in our August 2014 blog – which are steady state systems maintained by the feeding in of external energy. For corporations and innovation ecosystems this equilibrium is a kind of order. In the language of complexity these steady states, regions of quasi-stability or system-level order, are known as ‘attractors.’ Empirical research has shown that in large complex systems such as communities and corporations, these attractors maintain conditions required for emergent self-organization, adaptive capability – and measurement.
The Rainforest Scorecard: A Practical Framework for Growing Innovation Potential process and scoring model introduced in our January bonus-blog Measuring Culture, Performance, and Innovation seeks to describe an ideal organizational ‘system state’—the aggregate set of conditions or features of systems that are generally present in innovative organizations. This idealized model is in turn used as a gauge against which organizations can measure and evaluate their own state of innovation.
Buridan’s donkey, named after the 14th century French philosopher Jean Buridan, had a problem in trying to make a rational decision using deduction. This hungry donkey standing midway between two equally nourishing looking piles of hay is unable to make a rational decision to choose one pile over the other and consequently dies of hunger.
Another philosopher, David Milligan, in his book Reasoning and the Explanation of Actions, written in 1980 but still fresh, explained that a good deliberative reasoner is “not someone who simply obeys the rules of logic,” but someone who is also a sound judge and can defend his or her decisions about how to act by pointing to reasons which support them.
“The deductivist [a person using deduction] tries to reduce the elements of sound judgment and correct evaluation either to the application of logic or to a kind of subjective response.” Milligan does not talk about complex systems or linearity as such, although everything he discusses applies (another example, as we have seen before and will see again, of the significance of philosophy in understanding complexity). Rather than downgrading the importance of logic, the author is trying to show that “reason is far wider and has a far more important role in action than might appear from the deductivist account.” Milligan’s work launches us into the necessary search for non-deductive ways of reasoning and decision making where there is an abundance of wicked problems – which is almost everywhere.
Another feature to take into account is that decision making will always be ‘bounded’ – that is, we cannot know all the factors which possibly should be taken into account when making a decision, and thus we cannot reach an optimal solution. This concept of ‘bounded rationality’ was first proposed by the economist Herbert Simon as an alternative basis for the mathematical modeling of decision making: we will have to be satisfied with a less than optimal solution. The decision-maker is thus sometimes referred to as a ‘satisficer’ – someone who is satisfied with a good enough solution. For readers of this series the advantages of sub-optimum solutions will sound familiar (e.g. Imperfect Works, the Feb 2013 blog in this series). More about satisficers and maximizers in future blogs.
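The difference between Simon’s satisficer and a maximizer can be sketched in a few lines of code. This is a toy model, not anything from Simon’s own work: the option values and the aspiration threshold are invented purely to show that a satisficer can stop searching early while a maximizer must inspect everything.

```python
import random

def satisfice(options, aspiration):
    """Bounded search: accept the first option that is 'good enough'."""
    for opt in options:
        if opt >= aspiration:
            return opt
    return max(options)  # nothing met the aspiration level: take the best seen

def maximize(options):
    """Exhaustive search: inspect every option to find the single best one."""
    return max(options)

random.seed(1)
options = [random.random() for _ in range(1000)]

good_enough = satisfice(options, aspiration=0.9)  # stops at the first hit
best = maximize(options)                          # examines all 1000 options
```

The satisficer trades a slightly worse answer for a far cheaper search, which is exactly the point when the full option space cannot be known anyway.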
We should point out that being satisfied with boundedness and sub-optimality does not imply accepting insufficient depth of knowledge of those factors we do know about.
All this may be sounding a bit abstract, so in April’s blog we will apply these ideas of non-deductive reasoning to trying to choose one of two options for the solution to a wicked problem.
“Many development partner tools and business processes deal with static, simple or linear problems. There is considerable demand for new methods and principles that can help development partners better navigate the complex, dynamic realities they face on a day-to-day basis.”
From best practice to best fit: Understanding and navigating wicked problems in international development. Ben Ramalingam, Miguel Laric and John Primrose, UK Department for International Development (DfID).
Don’t try to tame wicked problems: Part 1 introduced ‘wicked problems’ through six typical characteristics of these problems.
In the last few years international development organizations seem to be discovering complex systems and wicked problems. This series of blogs is not intended to provide literature reviews, but two examples out of many are:
One question being raised is whether the method of Logical Framework Analysis, also referred to as ‘logframes,’ can be relied upon when dealing with complex systems generally and with wicked problems in particular.
Logical Framework Analysis was tested by USAID in the 1970s for evaluation of technical assistance projects, and has been used extensively by governments, consultants, and international aid and development organizations for project planning and evaluation ever since. A Logical Framework design is not an evaluation in itself; it provides a plan of the project against which project progress can be assessed by evaluators. It was also intended to make evaluation less threatening. Furthermore, where there are clear and logical relationships between inputs and outputs this can lead to efficient task delegation.
As noted in our January blog, the behaviors of complex innovation ecosystems don’t fit well into logframes which deal with inputs and outputs and the tasks which produce the latter from the former. To illustrate what we are talking about a rather simplified logframe (it typically is a 4×4 matrix) might look something like this:
|Goal||Improve creation of spin-off companies from universities.|
|Purpose/Outcome||An effective improved company creation system is operating.|
|Output||New spin-off companies developed. New incentives created. Increased number of role models and mentors. New methods in place.|
|Input||Analyze problems with current methods to create spin-off companies. Provide more early-stage, start-up funding. Find more brokers available to help match R&D needs to sources. Provide more incentives to researchers. Identify role models and mentors. Create an entrepreneurship culture. Inventory physical and people assets.|
This table suggests we can produce a certain set of outputs from a certain set of inputs to achieve the required outcome. These concepts can help us think through a project in an orderly, logical fashion assuming there is a definite cause and effect relationship between any level and the level immediately above it; in wicked problems this is not the case. Cause and effect logic is also the basis for strategy maps and best-practice balanced scorecards. Finally, an emphasis on cause and effect suggests a rational expectations hypothesis, which does not take into consideration extra-rational motives which influence behavior.
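The logframe’s core assumption can be made explicit as a data structure. The sketch below simply encodes the table above as a linear chain; the deterministic “each level produces the level above it” function is exactly the assumption the text criticizes, and none of this is from any official logframe tooling.

```python
# A logframe assumes a strictly linear causal chain:
#   inputs -> outputs -> outcome -> goal
# Encoding it this way makes the assumption visible: each level is treated
# as a guaranteed consequence of the level below it.

logframe = {
    "goal": "Improve creation of spin-off companies from universities",
    "outcome": "An effective improved company creation system is operating",
    "outputs": ["New spin-off companies", "New incentives",
                "More role models and mentors", "New methods in place"],
    "inputs": ["Analyze current methods", "Early-stage funding",
               "R&D brokers", "Researcher incentives",
               "Role models and mentors", "Entrepreneurship culture",
               "Asset inventory"],
}

def assumed_causal_chain(frame):
    """The logframe worldview: if every input task is done, the outputs
    follow; if the outputs exist, the outcome follows; and so on upward."""
    return bool(frame["inputs"] and frame["outputs"]
                and frame["outcome"] and frame["goal"])
```

In a wicked problem this chain breaks: the same inputs may or may not yield the outputs, and effects feed back into causes, so no such deterministic function exists.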
At T2 Venture Creation we just published a short book about measuring variables in innovation ecosystems, The Rainforest Scorecard: A Practical Framework for Growing Innovation Potential. The work is a measurement methodology based on complexity characteristics and does not assume linear cause and effect relationships, but does recognize that ‘emergence’ is a critical feature of complex adaptive systems. Measurement however is only the first step; decisions and actions must follow. In the decision making process this traditionally implies deduction – reasoning which links a set of premises with a logical, and necessarily true, conclusion. Probably the best known example of such reasoning is:
- All men are mortal (premise)
- Socrates is a man (premise)
- Therefore, Socrates is mortal (conclusion)
So, what can we do if we must make decisions regarding wicked problems but cannot use deduction? And, furthermore, we will have to make decisions in spaces where indicators of success may be fallible – as discussed in the April 2013 blog in this series, Fallibility and the Making of Good Decisions: Solving the right problem Part 2. We shall turn our attention to this question next time.
Next time: Practical Reasoning: Decision making in Rainforest innovation ecosystems.
Culture drives performance and innovation. But can we describe, quantify, measure, and manage all the variables in an ecosystem which determine culture?
My colleague Henry H. Doss and I answer this question in the just published The Rainforest Scorecard: A Practical Framework for Growing Innovation Potential. The book is short (35 pages) to enable quick application of its content. The Scorecard provides a systematic, comprehensive, detailed strategy for assessing and quantifying all elements of an organizational culture with respect to its capacity for innovation. The framework serves as tactical scaffolding upon which innovation culture can be built at scale, in any organization, public or private.
The Scorecard contains sets of questions providing a comprehensive evaluation of an organization’s innovation potential. The scoring methodology is based on the notion that objective scoring is a necessary, initial step to begin the process of cultural change. The scale, scope and level of detail which may be reached in completing this process will vary from organization to organization, and is a function of available time and resources. However, irrespective of scale, all assessment efforts will follow certain guidelines, in order to optimize their return on the effort required to complete the scorecard.
The Scorecard brings together the rainforest metaphor (see December 2014 blog) as described by T2VC’s founders Greg Horowitt and Victor Hwang in The Rainforest: The Secret to Building the Next Silicon Valley together with much of what has been discussed in this series of blogs.
The concept of system state variables was introduced in our December 2013 blog. The Scorecard quantifies critical state variables of innovation as: Leadership; Frameworks, Infrastructure and Policies; Organizational Resources; Activities and Engagement; Role Models; and Culture. The book guides the user through a detailed question and answer process, which in turn creates both an innovation profile and a clear, direct process for building innovation into an organization.
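To make the question-and-answer process concrete, here is a hypothetical sketch of how per-category answers could roll up into an innovation profile. The six category names come from the Scorecard itself; the individual question scores, the 0–5 scale, and the simple averaging are invented for illustration and are not the book’s actual scoring methodology.

```python
# Hypothetical roll-up of Scorecard answers into an innovation profile.
# Category names are from the book; the scores and 0-5 scale are invented.
answers = {
    "Leadership":                              [4, 3, 5],
    "Frameworks, Infrastructure and Policies": [2, 3, 3],
    "Organizational Resources":                [3, 4],
    "Activities and Engagement":               [5, 4, 4],
    "Role Models":                             [2, 2, 3],
    "Culture":                                 [4, 5, 3],
}

# Average each category's question scores into a single state-variable score.
profile = {cat: sum(s) / len(s) for cat, s in answers.items()}
overall = sum(profile.values()) / len(profile)

# Listing the weakest categories first points to where to intervene.
for cat, score in sorted(profile.items(), key=lambda kv: kv[1]):
    print(f"{cat:42s} {score:.2f}")
print(f"{'Overall':42s} {overall:.2f}")
```

Even this crude roll-up illustrates the Scorecard’s logic: the profile, not the single overall number, is what creates “a clear, direct process” – it shows which state variables most need attention.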
But, can we really claim to be able to measure such variables in a complex adaptive ecosystem? After all, in previous blogs in this series we have constantly banged on about lack of predictability and uncertain relationships between cause and effect in these far-from-equilibrium systems.
We can, because complex adaptive systems have ‘basins of stability’ – as introduced in our August 2014 blog – which are steady state systems maintained by the feeding in of external energy. For corporations and innovation ecosystems, falling out of such a steady state into static equilibrium would be a kind of self-satisfied stasis. However, if this maintaining energy vanishes the system may flip from one steady state, or ‘basin of stability,’ to another, which will require new maintaining energy. In the language of complex adaptive systems these steady states – regions of quasi-stability or system-level order – are known as ‘attractors.’ Empirical research has shown that in large complex systems such as communities and corporations, these attractors maintain conditions required for emergent self-organization, adaptive capability – and measurement.
The Scorecard process and scoring model seek to describe an ideal organizational ‘system state’—the aggregate set of conditions or features of systems that are generally present in innovative organizations. This idealized model is in turn used as a gauge against which organizations can measure and evaluate their own state of innovation.
The Rainforest Scorecard is part of an overall implementation process being developed by T2VC inspired by the impact of the Malcolm Baldrige National Quality Award, which the U.S. created in 1988 to foster and recognize organizational excellence. In ways similar to the Baldrige Award, this process guides organizations through structured, outcome-oriented conversations about innovation states, focusing on several key areas of assessment; these organizational conversations in turn support the creation of innovation ecosystems that make communities, organizations and businesses more resilient and sustainable.
The Rainforest Scorecard: A Practical Framework for Growing Innovation Potential is released under a Creative Commons license, so that the world can “remix, tweak, and build upon” this work non-commercially. We just ask that you credit the original book and license your work under the same terms.
Wicked: Adjective (slang) meaning very good, excellent; “cool”; “awesome” from 13th Century Middle English wikked, wikke, an alteration of wicke, adjectival use of Old English wicca (“wizard, sorcerer”). “Going beyond reasonable or predictable limits.” Or, the Merriam-Webster dictionary’s (nicely understated) “very bad or unpleasant.”
“A problem with many layers of nested and intractable predicaments,… complex inter-linkages between elements… small perturbations can quickly transform into catastrophic events…” This was how Nepalese citizens viewed the impact of climate change on their country in a 2009 survey of local views.
In previous blogs in this series we have discussed innovation ecosystems as complex systems – with all of their inherent intriguing properties – as we attempt to develop the ‘science’ of Rainforest ecosystems (The Rainforest: The Secret to Building the Next Silicon Valley http://www.therainforestbook.com/ by Victor H Hwang and Greg Horowitt). Innovation ecosystems, as well as climate, have their share of nested and intractable predicaments where inter-linkages are hidden like the layers of an onion. New business creation is linked with leadership; leadership linked with culture; resources are linked with frameworks and policies.
In economic development, especially in developing countries, poverty is linked with education, nutrition with poverty, the economy with nutrition, and so on, as described in the 2013 book Aid at the Edge of Chaos by Ben Ramalingam. Partly as a result of Ramalingam’s book the global aid community is starting to understand that countries and regions are complex systems, in turn made up of sub-complexes, rather than linear modules. In linear systems cause and effect are determinable and typically modeled using Logical Framework Analysis, or ‘logframe’ methods (ubiquitous in the global aid community). The behaviors of complex systems don’t fit into logframes, which deal with inputs and outputs and the tasks which produce the latter from the former. A Balanced Scorecard strategy map outlining an organization’s plans to accomplish defined objectives is another example of heavy reliance on cause-and-effect logic as best practice. For more on causality see the April 2014 blog in this series.
The discussion above leads us to introduce a new wrinkle on complexity this month, namely ‘wicked problems.’ A wicked problem is a social or cultural problem that is the opposite of a ‘tame problem’ as set out below. Tame problems are susceptible to logical analysis. Wicked problems are not. A wicked problem is an extreme case of a complex problem.
|Characteristic||Tame problems||Wicked problems|
|Problem formulation||The problem can be clearly written down. The problem can be stated as a gap between what is and what ought to be. There is easy agreement about the problem definition.||The problem is difficult to define. Many possible explanations may exist. Individuals perceive the issue differently. Depending on the explanation, the solution takes on a different form.|
|Testability||Potential solutions can be tested as either correct or false.||There is no single set of criteria for whether solutions are right or wrong; they can only be more or less acceptable relative to each other.|
|Finality||Problems have a clear solution and end point.||There is always room for more improvement and potential consequences may continue indefinitely.|
|Level of analysis||It is possible to bound the problem and identify its root cause and subsequent effects; the problem’s parts can be easily separated from the whole.||Every problem can be considered a symptom of another problem. There is no identifiable root cause and it is not possible to be sure of the appropriate level at which to intervene; parts cannot always be easily separated from the whole.|
|Replicability||The problem may repeat itself many times because it is linear; applying formulaic responses will produce predictable results.||Every problem is essentially unique; formulae are of limited value because the problem is non-linear.|
|Reproducibility||Solutions can be tested and excluded until the correct solution is found.||Each problem is a one-shot operation. Once a solution is attempted, you cannot undo what you have already done.|
Adapted from: From best practice to best fit; Understanding and navigating wicked problems in international development, Ben Ramalingam, Miguel Laric and John Primrose, UK Department for International Development (DFID), September 2014. http://www.odi.org/publications/8571-complexity-wiked-problems-tools-ramalingam-dfid
In the March, April, and May 2013 blogs in this series we speculated about, not just complexity, but solving problems in complex systems. This is what we are called upon to do. It’s of little use understanding the complex nature of innovation ecosystems unless we can understand and resolve issues with which we are confronted, such as how to improve the flow of innovation, how to predict disruptions, how to optimize leadership, and many others.
Inspecting the right side column in the table above shows that many, possibly most, challenges in innovation ecosystems are indeed ‘wicked.’
So what should we do? Throw up our hands and admit defeat, or try to make these wicked problems, if not tame, then at least a little less wicked? This we shall turn to next month – and find that the slang definition of ‘wicked’ is a better fit than the traditional one.
Next time: Don’t try to tame wicked problems: Part 2
All previous blogs in this series are at: http://innovationrainforest.com/author/alistair2013/
In Molière’s play Le Bourgeois Gentilhomme, produced in 1670, Monsieur Jourdain asks something to be written in neither verse nor prose. He is told, “Sir, there is no other way to express oneself than with prose or verse”. Jourdain replies, “By my faith! For more than forty years I have been speaking prose without knowing it.”
The Rainforest concept introduced in the book by Victor H Hwang and Greg Horowitt, The Rainforest: The Secret to Building the Next Silicon Valley http://www.therainforestbook.com/ opened up the idea of a Rainforest as a metaphor for expressing the innovation ecosystem concept. In this series of blogs we have rather taken this metaphor as understood – but have not placed it on the solid foundation it deserves in the science of innovation ecosystems.
Without going into too much detail, and giving a quick version of the meaning of metaphor which might horrify a linguistic specialist, we can probably agree that language, prose or poetry, is used to communicate with others and therefore must be meaningful to others. Much of language is metaphor. It has been said that metaphor is the root of all transfer of meanings in speech. In metaphor or analogy a word is detachable from its original meaning and transferred so that the meaning no longer adheres to the original object. By using words in new contexts, new meanings and aspects of the word may be revealed. However, we should keep in mind George Orwell’s warning not to use metaphors without knowing their original meaning.
Metaphors use symbols (words or signs) which have intuitive meanings and are used within “a universe of discourse.” A universe of discourse is a context where the symbol has an understood meaning. Just as we would not describe a painting using the terminology of chemistry, when using stories to communicate understanding we must not stray into universes of discourse having other accepted symbols. When I started working in international development I was confused by colleagues using the term “actor” – a meaning familiar to sociologists, but one which left this physicist wondering how movies came into the picture.
It seems to me that this universe of discourse is really the same as the “phase space” introduced in our December 2013 blog. (For reference, our November 2013 blog first introduced complex systems concepts).
Metaphors are liberating; analogies can constrain.
If we use a rainforest analogy we would have to say the trees are like this and the weeds are like that, and so forth, and the poetic symbolism would be lost. If I reminisce about my youth and inexperience using the metaphor of being “apple green” (an implied metaphor from Dylan Thomas’s poem Fern Hill), this metaphor has more poetic power than the analogy that I was like a green apple.
Metaphor opens up our imaginations.
The linguistic philosopher Wilbur Urban in analyzing metaphor wrote “it is the nature of the symbol to take the primary and natural meaning of both objects and words and modify them in certain ways so that they acquire a meaning relation of a different kind.” Thus, according to Urban a symbol has (1) reference to the original object – a rainforest in our case – and (2) reference to the object for which the symbol now stands – a complex adaptive innovation ecosystem in our case.
The rainforest metaphor as described by Hwang and Horowitt connects rainforests (the original object) to companies (the object for which the symbol now stands): “A company that seeks to manufacture cheaper, better, more profitable products would run operations like an agricultural farm. However, the community that seeks to generate high levels of innovation throughout the whole system would do the opposite …. not controlling the specific processes but instead helping to set the right environmental variables that foster the unpredictable creation of new weeds.” The metaphor is also a comparison of properties or traits. The trait concept will be revisited in future blogs when we say more about a neglected topic so far, namely, how innovation ecosystems change over time.
As noted in our June blog in this series readers should at least be beginning to see how the rainforest metaphor expands our thinking and leads us to understand that rainforest ecosystems not only have much in common with complex adaptive systems but that rainforest innovation ecosystems are complex adaptive systems. The rainforest symbol has acquired a new and different interpretation as a complex adaptive system. This realization opens up the large volume of research on complex adaptive systems to be used not only to understand but to analyze and predict the behavior of innovation ecosystems. Having grabbed our attention the metaphor remains as a comfort blanket as we enter the sometimes insecure world of complexity.
To parallel Monsieur Jourdain, we may be surprised we’ve been talking about complex adaptive systems without knowing it.
“Every body has their taste in noises as well as other matters; and sounds are quite innoxious, or most distressing, by their sort rather than their quantity.”
Persuasion, Jane Austen, 1817.
Jane Austen was describing the feelings of one of her characters on entering the town of Bath “driving through the long course of streets …. amidst the dash of other carriages, the heavy rumble of carts and drays, the bawling of newsmen, muffin-men and milkmen, and the ceaseless clink of pattens* …. these were noises which belonged to the winter pleasures; her spirits rose under their influence.”
This month we pick up again the issues of network resiliency, perturbations, and noise, introduced in our September and October Blogs in this series. In September’s Blog investigations were cited indicating that the speed at which an innovation moves through a network increases when there are a “greater number of errors, experimentation, or unobserved payoff shocks in the system” (also called noise or variability).
How does a network see noise? As a series of perturbations changing the network’s state. Picture kicking a network and watching the resulting impact rippling through it.
Instinctively we think of noise as something to be eliminated but as you may have already realized this is not necessarily so. Some people find listening to music to be an aid to learning (we don’t have space here to get into why music, and not just the kind we hate, may be referred to as noise). As I write this blog I feel comforted by sounds of the city coming through my open window; I find it difficult to work and learn in a completely silent environment. Likewise, for an innovation ecosystem no noise means isolation from its external environment. A completely static, isolated, network will become dysfunctional. We can probably all cite examples.
For an innovation ecosystem, good noise keeps the system, and its people, alert by being connected to the larger environment and responsive to needed change. Not-so-good noise is, for example, a perturbation which may disrupt a key link and cause a serious malfunction not by virtue of the magnitude of the perturbation but its type. Some apparently minor event could trigger a breakdown in trust between two critical organizations which in turn creates a damaging disruption.
Another way of understanding the role of noise is that some form of energy is needed to prevent self-organizing complex innovation ecosystems – which, as we know from past blogs in this series, are in non-equilibrium states – from dropping into the dysfunctional, static, equilibrium state mentioned earlier. A non-equilibrium state maintained in this way is called a steady state.
Before relating all this to innovation ecosystems it should be noted that a steady state system is not the same as a system in equilibrium. Picture two containers, A and B, holding water at the same level. In A the level is maintained in a steady state: water flowing out is balanced by water coming in. In B the water is in equilibrium – nothing interesting is happening.
Complex adaptive systems have “basins of stability” – as introduced in our August Blog – which are steady state systems maintained by the feeding in of external energy. In non-equilibrium thermodynamics this heat energy goes under the quaint name of “housekeeping heat.” This housekeeping heat prevents the system from falling into a non-productive, static, equilibrium state. For corporations and innovation ecosystems this equilibrium would be a kind of self-satisfied stasis. However, if this maintaining heat vanishes the system may flip into another steady state which will require new maintaining/housekeeping energy. In the language of complex adaptive systems these steady states are known as “attractors.” The need for permanent noise to continuously restructure networks resembles housekeeping heat in steady-state thermodynamics.
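The flip between basins of stability described above can be sketched with the simplest textbook toy model: overdamped motion in a double-well potential V(x) = (x² − 1)², whose two minima at x = ±1 play the role of the two attractors. The model, parameters, and the interpretation of the external “kick” as housekeeping energy are illustrative choices, not taken from the research the blog cites.

```python
import random

def step(x, kick=0.0, dt=0.01):
    """Overdamped dynamics dx/dt = -V'(x) + kick for V(x) = (x**2 - 1)**2.
    The minima at x = -1 and x = +1 act as two attractors
    (basins of stability); 'kick' models externally fed-in energy."""
    dV = 4 * x * (x**2 - 1)          # V'(x)
    return x + dt * (-dV + kick)

random.seed(0)
x = -1.0                              # start settled in the left basin
for _ in range(5000):                 # small noise: stays near its attractor
    x = step(x, kick=random.gauss(0, 1))
in_left_basin = x < 0

for _ in range(5000):                 # a large sustained push flips it
    x = step(x, kick=8.0)
in_right_basin = x > 0
```

Small perturbations are absorbed by the basin – the system jitters around its attractor – while a sufficiently large, sustained input of energy tips it over the barrier into the other steady state, which then needs its own maintaining energy.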
Features of a steady state:
- Conditions are stable within the system
- Energy is continuously put into the system (housekeeping heat)
- Over time, the system is maintained in a higher state of order than its surroundings
Features of an equilibrium:
- Conditions are stable within the system
- No net free energy enters or escapes the system
- Over time, any difference in entropy (state of disorder) between the system and the external environment tends to disappear
Thus, equilibrium is a special case of a steady state.
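The two water containers can be simulated directly, as a minimal sketch: a constant inflow balanced by a level-dependent outflow gives a maintained steady state, while cutting the inflow lets the level relax to a static equilibrium. The flow constants are arbitrary.

```python
def simulate(level, inflow, steps=2000, k=0.1, dt=0.1):
    """Water tank with constant inflow and outflow proportional to level:
    d(level)/dt = inflow - k * level."""
    for _ in range(steps):
        level += dt * (inflow - k * level)
    return level

# Container A: inflow balances outflow -> steady state held at inflow/k = 10.
steady = simulate(level=0.0, inflow=1.0)

# Container B: no energy fed in -> relaxes to static equilibrium at zero.
equilibrium = simulate(level=10.0, inflow=0.0)
```

Setting `inflow=0.0` recovers the equilibrium case from the same equations, which is exactly the sense in which equilibrium is the special, zero-throughput case of a steady state.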
To sum up: noise can be a friend or an enemy to innovation ecosystems depending on whether it keeps the system alert or damages critical parts of the network. Jane Austen was right; it is the sort of noise that matters.
* A patten is a wooden-soled overshoe raised on an iron ring, formerly worn to lift the wearer’s shoes above mud and wet streets – hence the ceaseless clink on the pavements of Bath.
Next time: End of the year recap on what these blogs have told us about the practicalities of Rainforest Innovation Ecosystems.
All blogs in this series can be found at http://innovationrainforest.com/author/alistair2013/
“All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.” How Complex Systems Fail: Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety. Richard I. Cook, MD (2000).
“Inside of Utopia, all the seeds of ambition, of faction, are rooted out with all the other vices…. The union of the citizens being thus highly consolidated within, excellence and energy institutions defend the republic against the dangers from without.” Utopia, Sir Thomas More (1516).
In the August blog of this series, Agile Type 1 and Agile Type 2 innovation ecosystems were postulated. Agile Type 1 were imagined as ideal ecosystems in which a rapid flow (or ‘diffusion’ to use a more traditional term) of ideas, solutions, knowledge, and so forth occurs through a system and its networks. Agile Type 1 ecosystems will be capable of rapid self-organization, be highly responsive to system environment changes, and respond efficiently to errors and external shocks. It was also suggested that Agile Type 2 innovation ecosystems can be defined as being more vulnerable than the ideal Agile Type 1, but much closer to reality.
Dr. Richard Cook is a physician at the University of Chicago’s Cognitive Technologies Laboratory http://www.ctlab.org/ who has analyzed and written extensively about the failure of complex systems. Let’s look into Dr. Cook’s research from How Complex Systems Fail, cited on the Cognitive Technologies Laboratory website, to see what it tells us about Agile Type 2 innovation ecosystems and about how we should build innovation ecosystems which will withstand all the ills that complex adaptive systems are heir to.
First, we have to accept that complex systems are intrinsically hazardous systems and, as noted in the quote above, “It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems” – that is the emergence in complex systems of defenses against failure. In innovation ecosystems such defenses might be culture, knowledge, trust, diversity and openness, and various forms of physical and intellectual resources and capacity.
This brings to mind Ashby’s Law, also known as the Law of Requisite Variety, which states “the variety in the (network) control system must be equal to or larger than the variety of the perturbations in the system in order to achieve control.” In other words, if you are being attacked, having many options is an effective strategy to manage the threat – as US President Kennedy proposed in his 1961 ‘flexible response’ defense policy. Conversely, tightly controlled (not so agile) systems designed to operate efficiently under prevailing conditions, with too many strong links and too few weak ones, reduce communications and become unresponsive to external shocks, leading to instability or even collapse – again showing the value of perturbations. However, it’s worth remembering that it is also a feature of complex systems that small changes may give rise to disproportionately large consequences.
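Ashby’s Law has a simple quantitative form worth sketching: a regulator with R distinct responses facing D distinct disturbance types cannot force the number of distinct outcomes below ⌈D/R⌉ – only variety can absorb variety. The disturbance and response counts below are invented for illustration.

```python
import math

def min_outcome_variety(v_disturbance, v_response):
    """Law of Requisite Variety: a regulator with v_response distinct
    responses facing v_disturbance distinct disturbances cannot reduce
    the variety of outcomes below ceil(v_disturbance / v_response)."""
    return math.ceil(v_disturbance / v_response)

# A controller with as many options as there are perturbation types
# can hold the system to a single (goal) outcome...
full_variety = min_outcome_variety(6, 6)      # -> 1 outcome

# ...but a tightly controlled system with few options cannot: some
# perturbations must leak through as uncontrolled outcomes.
low_variety = min_outcome_variety(6, 2)       # -> at least 3 outcomes
```

This is the arithmetic behind the text’s point: pruning a system down to a few efficient responses necessarily leaves it unable to absorb the full variety of shocks it will meet.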
In fact, perturbations are necessary for ecosystem networks to survive. We may think of this as an innovative ecosystem needing a constant flow of energy throughout its networks. Networks with many weak links allow perturbations to be dissipated and the system remains intact. Incidentally, in our September Blog investigations were cited indicating that the speed at which an innovation moves through a network increases when there are a “greater number of errors, experimentation, or unobserved payoff shocks in the system” (also called noise or variability). More about this next month.
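The claim that many weak links let perturbations dissipate can be sketched on a toy network: the same unit shock injected at one node is damped faster when every node spreads its load across many weak ties than when it travels along a single chain of strong ones. The topology, coupling rule, and parameters are all invented for illustration.

```python
def spread(adj, p, alpha=0.5, steps=10):
    """Each step, every node keeps (1 - alpha) of its perturbation load
    and passes the rest evenly across its links. Total load is conserved."""
    n = len(p)
    for _ in range(steps):
        nxt = [x * (1 - alpha) for x in p]
        for i in range(n):
            share = alpha * p[i] / len(adj[i])
            for j in adj[i]:
                nxt[j] += share
        p = nxt
    return p

n = 8
# A chain of strong links: each node coupled only to immediate neighbours.
chain = [[j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)]
# Many weak links: every node weakly coupled to every other node.
dense = [[j for j in range(n) if j != i] for i in range(n)]

shock = [1.0] + [0.0] * (n - 1)        # kick node 0
peak_chain = max(spread(chain, shock[:]))
peak_dense = max(spread(dense, shock[:]))
```

After the same number of steps, the densely (weakly) linked network leaves a much lower peak load on any single node – the shock has been dissipated across the system instead of lingering where it struck.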
Dr. Cook also suggests that “Human practitioners are the adaptable element of complex systems” in optimizing the system’s productive capacity and reducing vulnerability to failure. We know that a feature of complex systems is adaptability. Adaptation may be catalyzed by early detection of changes in system performance and the provision of new paths to recover from perturbations and shocks; as we have seen, the presence of weak links helps here. Adaptation allows systems to be more resilient (the ability to bounce back) from internal confusion or external disturbances, subject to the always present constraints of finite time and resources.
We will end this month with another finding from Dr. Cook’s investigations into accidents varying from aircraft crashes to errors in hospital patient care, namely “Hindsight biases post-accident assessments of human performance.” This means that when the outcome of some event, or more likely a series of events, leading to an accident or, for ecosystems, a collapse due to shocks, is known, then an after-the-event analysis is frequently inaccurate or misleading. Knowledge of the outcome reduces our ability to re-create stories from the viewpoint of those involved. For example, we might say of some event “surely they should have known that such and such a policy would lead to problems.” Several of the Blogs in this series have promoted the learning benefits of extracting re-usable knowledge components from descriptive cases, i.e. stories. So how could hindsight bias, in constructing an ex post facto narrative, affect the learning value of these re-usable knowledge facets? I’m not sure. It’s worth thinking about, perhaps in the context of previous discussions in these Blogs of causality.
We can all think of many system examples of hazards and resilience ranging from the disintegration of Communism in Europe to companies which were ill prepared for technological change, such as Kodak’s slow response to digital photography. Cities and regions – clearly complex systems – have experienced the consequences of Ashby’s Law where a major local employer or even an entire industry has declined, reduced employment due to improved production technologies, or moved elsewhere. Even Thomas More’s Utopia might have eventually collapsed from a lack of weak links and consequent poor resiliency – if not from boredom.
Next time: Is noise good for us?
All blogs in this series can be found at http://innovationrainforest.com/author/alistair2013/