Is noise good for us?

“Every body has their taste in noises as well as other matters; and sounds are quite innoxious, or most distressing, by their sort rather than their quantity.”
Persuasion, Jane Austen, 1817.

Jane Austen was describing the feelings of one of her characters on entering the town of Bath “driving through the long course of streets …. amidst the dash of other carriages, the heavy rumble of carts and drays, the bawling of newsmen, muffin-men and milkmen, and the ceaseless clink of pattens* …. these were noises which belonged to the winter pleasures; her spirits rose under their influence.”

This month we pick up again the issues of network resiliency, perturbations, and noise introduced in our September and October Blogs in this series. In September’s Blog, investigations were cited indicating that the speed at which an innovation moves through a network increases when there are a “greater number of errors, experimentation, or unobserved payoff shocks in the system” (also called noise or variability).

How does a network see noise? As a series of perturbations changing the network’s state. Picture kicking a network and watching the resulting impact rippling through it.
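
The “kick” described above can be sketched in a few lines of code. The sketch below is my own illustration, not from the blog: a toy network of hypothetical ecosystem players, a disturbance injected at one node, and a damping factor (an arbitrary assumption) that attenuates the ripple at each hop.

```python
# Illustrative sketch (not from the blog): "kicking" one node of a small
# network and watching the perturbation ripple outward, attenuating at
# each hop. Node names and the damping factor are invented assumptions.

def ripple(adjacency, kicked_node, kick=1.0, damping=0.5, hops=3):
    """Return each node's accumulated disturbance after the kick spreads."""
    impact = {node: 0.0 for node in adjacency}
    frontier = {kicked_node: kick}
    for _ in range(hops):
        next_frontier = {}
        for node, energy in frontier.items():
            impact[node] += energy
            for neighbour in adjacency[node]:
                next_frontier[neighbour] = next_frontier.get(neighbour, 0.0) + energy * damping
        frontier = next_frontier
    return impact

network = {  # a toy ecosystem: who is linked to whom
    "university": ["startup", "funder"],
    "startup": ["university", "funder", "customer"],
    "funder": ["university", "startup"],
    "customer": ["startup"],
}
print(ripple(network, "startup"))
```

With these invented numbers the most connected node absorbs the most disturbance, while a peripheral node such as the customer feels the least: the network’s topology shapes how the kick propagates.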

Instinctively, we think of noise as something to be eliminated, but as you may have already realized, this is not necessarily so. Some people find listening to music to be an aid to learning (we don’t have space here to get into why music, and not just the kind we hate, may be referred to as noise). As I write this blog I feel comforted by sounds of the city coming through my open window; I find it difficult to work and learn in a completely silent environment. Likewise, for an innovation ecosystem, no noise means isolation from its external environment. A completely static, isolated network will become dysfunctional. We can probably all cite examples.

For an innovation ecosystem, good noise keeps the system, and its people, alert by being connected to the larger environment and responsive to needed change. Not-so-good noise is, for example, a perturbation which may disrupt a key link and cause a serious malfunction, not by virtue of the magnitude of the perturbation but its type. Some apparently minor event could trigger a breakdown in trust between two critical organizations, which in turn creates a damaging disruption.

Another way of understanding the role of noise is that some form of energy is needed to prevent self-organizing complex innovation ecosystems, which, as we know from past blogs in this series, are in non-equilibrium states, from dropping into the dysfunctional, static, equilibrium state mentioned earlier. Such a non-equilibrium state is called a steady state.

Before relating all this to innovation ecosystems it should be noted that a steady state system is not the same as a system in equilibrium. In both A and B the level of water in the container is the same. However, in A the level is maintained in a steady state: water flowing out is balanced by water coming in. In B the water is in equilibrium – nothing interesting is happening.

Complex adaptive systems have “basins of stability” – as introduced in our August Blog – which are steady state systems maintained by the feeding in of external energy. In non-equilibrium thermodynamics this heat energy goes under the quaint name of “housekeeping heat.” This housekeeping heat prevents the system from falling into a non-productive, static, equilibrium state. For corporations and innovation ecosystems this equilibrium would be a kind of self-satisfied stasis. However, if this maintaining heat vanishes the system may flip into another steady state which will require new maintaining/housekeeping energy. In the language of complex adaptive systems these steady states are known as “attractors.” The need for permanent noise to continuously restructure networks resembles housekeeping heat in steady-state thermodynamics.

Features of a steady state:

  • Conditions are stable within the system
  • Energy is continuously put into the system (housekeeping heat)
  • Over time, the system is maintained in a higher state of order than its surroundings

Features of an equilibrium:

  • Conditions are stable within the system
  • No net free energy enters or escapes the system
  • Over time, any difference in entropy (state of disorder) between the system and the external environment tends to disappear

Thus, equilibrium is a special case of a steady state.
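
The water-container analogy can be made concrete with a small numerical sketch. This is my own construction with invented rates: in the “steady state” tank a constant inflow balances a level-proportional outflow, so the level recovers after a disturbance; in the “equilibrium” tank nothing flows at all.

```python
# Numerical sketch (assumed rates, not from the blog) of the water-container
# analogy: the steady-state tank holds its level because inflow balances
# outflow; the equilibrium tank holds its level because nothing happens.

def simulate_tank(level, inflow, outflow_rate, steps):
    """Outflow is proportional to the current level; inflow is constant."""
    history = [level]
    for _ in range(steps):
        level = level + inflow - outflow_rate * level
        history.append(level)
    return history

# Steady state: start below the resting level; flows push it back to 10.
steady = simulate_tank(level=6.0, inflow=2.0, outflow_rate=0.2, steps=50)
# Equilibrium: same final level, but with zero flux - nothing interesting.
still = simulate_tank(level=10.0, inflow=0.0, outflow_rate=0.0, steps=50)
```

Both tanks end up at the same level, yet only the steady-state tank is doing work to stay there: the constant inflow plays the role of the “housekeeping heat” discussed above.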

To sum up: noise can be a friend or an enemy to innovation ecosystems depending on whether it keeps the system alert or damages critical parts of the network. Jane Austen was right; it is the sort of noise that matters.

* A patten is a protective overshoe with a wooden sole raised on an iron ring, worn to lift the wearer’s shoes above the mud and wet of the streets.

Next time: End of the year recap on what these blogs have told us about the practicalities of Rainforest Innovation Ecosystems.

All blogs in this series can be found at

Sowing the Seeds of Resilience

“All of the interesting systems (e.g. transportation, healthcare, power generation) are inherently and unavoidably hazardous by their own nature. The frequency of hazard exposure can sometimes be changed but the processes involved in the system are themselves intrinsically and irreducibly hazardous. It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems.” How Complex Systems Fail: Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety. Richard I. Cook, MD (2000).

“Inside of Utopia, all the seeds of ambition, of faction, are rooted out with all the other vices…. The union of the citizens being thus highly consolidated within, excellent and energetic institutions defend the republic against the dangers from without.” Utopia, Sir Thomas More (1516).

In the August blog of this series, Agile Type 1 and Agile Type 2 innovation ecosystems were postulated. Agile Type 1 were imagined as ideal ecosystems in which a rapid flow (or ‘diffusion’ to use a more traditional term) of ideas, solutions, knowledge, and so forth occurs through a system and its networks. Agile Type 1 ecosystems will be capable of rapid self-organization, be highly responsive to system environment changes, and respond efficiently to errors and external shocks. It was also suggested that Agile Type 2 innovation ecosystems can be defined as being more vulnerable than the ideal Agile Type 1, but much closer to reality.

Dr. Richard Cook is a physician at the University of Chicago’s Cognitive Technologies Laboratory who has analyzed and written extensively about the failure of complex systems. Let’s look into Dr. Cook’s research from How Complex Systems Fail, cited on the Cognitive Technologies Laboratory website, to see what this tells us about Agile Type 2 innovation ecosystems and about how we should build innovation ecosystems which will withstand all the ills that complex adaptive systems are heir to.

First, we have to accept that complex systems are intrinsically hazardous systems and, as noted in the quote above, “It is the presence of these hazards that drives the creation of defenses against hazard that characterize these systems” – that is the emergence in complex systems of defenses against failure. In innovation ecosystems such defenses might be culture, knowledge, trust, diversity and openness, and various forms of physical and intellectual resources and capacity.

This brings to mind Ashby’s Law, also known as the Law of Requisite Variety, which states “the variety in the (network) control system must be equal to or larger than the variety of the perturbations in the system in order to achieve control.” In other words, if you are being attacked, having many options is an effective strategy to manage the threat – as US President Kennedy proposed in his 1961 policy of flexible response. Conversely, tightly controlled (not so agile) systems designed to operate efficiently under prevailing conditions, with too many strong links and too few weak ones, reduce communications and become unresponsive to external shocks, leading to instability or even collapse – again showing the value of perturbations. However, it’s worth remembering that it is also a feature of complex systems that small changes may give rise to disproportionally large consequences.
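
Ashby’s Law can be rendered as a toy rule: a controller can hold a system steady only if it has at least as many distinct responses as there are distinct perturbation types. The sketch below is my own construction; the shock and response names are invented purely for illustration.

```python
# Toy rendering (my own construction) of Ashby's Law of Requisite Variety:
# control is possible only when the controller's variety of responses is at
# least as large as the variety of perturbations it must absorb.

def can_control(perturbations, responses):
    """Requisite variety: every perturbation type needs its own counter-move."""
    return len(set(responses)) >= len(set(perturbations))

shocks = {"funding_cut", "key_hire_leaves", "new_competitor"}  # invented examples
agile_options = {"bridge_loan", "succession_plan", "pivot_product"}
rigid_options = {"cost_cutting"}  # one response for every shock

assert can_control(shocks, agile_options)      # enough variety: controllable
assert not can_control(shocks, rigid_options)  # too little variety: uncontrollable
```

The rigid system is not wrong in any single response; it simply lacks the repertoire to match the variety of shocks, which is exactly the failure mode the paragraph above describes.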

In fact, perturbations are necessary for ecosystem networks to survive. We may think of this as an innovative ecosystem needing a constant flow of energy throughout its networks. Networks with many weak links allow perturbations to be dissipated and the system remains intact. Incidentally, in our September Blog investigations were cited indicating that the speed at which an innovation moves through a network increases when there are a “greater number of errors, experimentation, or unobserved payoff shocks in the system” (also called noise or variability). More about this next month.

Dr. Cook also suggests that “Human practitioners are the adaptable element of complex systems” in optimizing the system’s productive capacity and reducing vulnerability to failure. We know that a feature of complex systems is adaptability. Adaptation may be catalyzed by early detection of changes in system performance and the provision of new paths to recover from perturbations and shocks; as we have seen, the presence of weak links helps here. Adaptation allows systems to be more resilient (the ability to bounce back) from internal confusion or external disturbances, subject to the always present constraints of finite time and resources.

We will end this month with another finding from Dr. Cook’s investigations into accidents varying from aircraft crashes to errors in hospital patient care, namely “Hindsight biases post-accident assessments of human performance.” This means that when the outcome of some event, or more likely a series of events, leading to an accident or, for ecosystems, a collapse due to shocks, is known, then an after-the-event analysis is frequently inaccurate or misleading. Knowledge of the outcome reduces our ability to re-create stories from the viewpoint of those involved. For example, we might say of some event “surely they should have known that such and such a policy would lead to problems.” Several of the Blogs in this series have promoted the learning benefits of extracting re-usable knowledge components from descriptive cases, i.e. stories. So how could hindsight bias, in constructing an ex post facto narrative, affect the learning value of these re-usable knowledge facets? I’m not sure. It’s worth thinking about, perhaps in the context of previous discussions in these Blogs of causality.

We can all think of many system examples of hazards and resilience ranging from the disintegration of Communism in Europe to companies which were ill prepared for technological change, such as Kodak’s slow response to digital photography. Cities and regions – clearly complex systems – have experienced the consequences of Ashby’s Law where a major local employer or even an entire industry has declined, reduced employment due to improved production technologies, or moved elsewhere. Even Thomas More’s Utopia might have eventually collapsed from a lack of weak links and consequent poor resiliency – if not from boredom.

Next time: Is noise good for us?


Stop and smell the roses

You’ve got to stop and smell the roses
You’ve got to count your many blessings everyday
You’re gonna find your way to heaven is a rough and rocky road
If you don’t stop and smell the roses along the way

From a song written by Carl Severinsen and Mac Davis

It affects nearly all of us: whether we are drumming our fingers in front of the microwave oven telling it to hurry up, wanting ever faster internet connections, or finding our attention spans getting shorter, we have a need for speed.

One purpose of this blog series is to search out research findings and relate them to innovation ecosystems and particularly to the Rainforest framework. A framework that balances the science of innovation with the science of business is, we suggest, useful for economic development across a great diversity of mindsets, motivations, and worldviews.

July’s and August’s blogs about agile innovation ecosystems suggest that there is a need for rapid diffusion, spread, or flow, of information (knowledge, learning, innovations) if such networks are to be responsive. It is to this feature we shall turn our attention in this blog – with two caveats.

First, the results presented here are from several different contexts and there is no certainty that they will be directly relevant to innovation ecosystems. However, they should at least catalyze our thinking.

Second, all these results are based on modeling information flow along links between nodes connected in networks. There are ongoing investigations among researchers as to just how the structure, or topology, of a network of nodes and links influences information flow. Past research has also investigated the type of network structure, such as clustering, which enables rapid diffusion and social learning – and what features can block social learning. An alternative, less researched, model is that of the flow of fluids through pipes, which we will consider in a future blog.

In their 2014 paper Rapid innovation diffusion in social networks, Gabriel E. Kreindler at MIT and H. Peyton Young at the University of Oxford derive results that are independent of a network’s structure and size. We will get to their results in a moment.

In other recent work, a team of researchers at Facebook and the University of Michigan have also been looking into information diffusion among over 200 million Facebook users and published their findings in Role of Social Networks in Information Diffusion.

Let’s look at some of the conclusions from these two investigations about factors influencing information flow; some appear to be common sense, others possibly less so.

Both groups note that innovations often spread through social networks as we respond to what our ‘friends’ are doing. However, in looking at how diffusion of information occurs there is a difficulty: did my behavior influence yours or do you and I behave similarly because we have common characteristics or interests (similar peer behavior)?

It would seem to make sense that if I interact only infrequently with others – that is, my links are weak and there is not much similarity between myself and these weakly linked individuals – then not much information is likely to flow through these weak links. On the other hand, information flow should be strong between me and those with whom I interact frequently – my strong links or strongly clustered ones. Strong and weak links, or ties, and their role in stabilizing networks were discussed in our June 2013 blog. The Facebook studies have surfaced results demonstrating the function of strong and weak links in diffusion of information.

To quote the Facebook report: “Weak ties are collectively more influential than strong ties. Although the probability of influence is significantly higher for those that interact frequently, most contagion occurs along weak ties, which are more abundant.” Used in this context, contagion means the spread of information or ideas from person to person.
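
The arithmetic behind “collectively more influential” is simple, and a back-of-envelope sketch makes it vivid. The counts and probabilities below are invented for illustration, not Facebook’s data: each weak tie carries less influence, but weak ties are far more numerous.

```python
# Back-of-envelope sketch (invented numbers, not Facebook's data) of why weak
# ties can dominate contagion: low per-tie influence times many ties beats
# high per-tie influence times few ties.

strong_ties, p_strong = 10, 0.30   # few ties, high chance each passes it on
weak_ties, p_weak = 200, 0.05      # many ties, low chance each passes it on

expected_via_strong = strong_ties * p_strong   # expected spread via strong ties
expected_via_weak = weak_ties * p_weak         # expected spread via weak ties
```

With these assumed numbers the weak ties deliver roughly three times the expected spread, even though each individual weak tie is six times less likely to transmit.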

These results extend the classic studies of Mark Granovetter described in our June 2013 blog.

The MIT/Oxford studies discovered that diffusion is fast whenever the payoff gain from the innovation is sufficiently high; greater payoffs produce greater speed of diffusion. For example, a technology may be adopted more quickly if the benefit payoffs are substantial. This seems intuitive. Less obvious is another finding: the speed at which an innovation moves through a network increases when there are a “greater number of errors, experimentation, or unobserved payoff shocks in the system” (also called noise or variability). This may explain the remarkable results sometimes achieved by people working under unexpected crisis conditions.
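
A simplified sketch, in the spirit of (but not reproducing) the Kreindler–Young noisy best-response setup, shows why noise can speed diffusion. All numbers here are invented: an agent’s chance of adopting rises with the payoff gain and with how many neighbours have adopted, and the noise parameter lets agents deviate from strict best response.

```python
import math

# Simplified logit (noisy best-response) choice rule, in the spirit of the
# Kreindler-Young model. Payoff gain, switching cost, and noise levels are
# all invented assumptions for illustration.

def adopt_probability(neighbours_adopted, payoff_gain, noise):
    """Probability an agent adopts, given how many neighbours already have."""
    utility = payoff_gain * neighbours_adopted - 1.0   # 1.0 = switching cost
    return 1.0 / (1.0 + math.exp(-utility / noise))

# With no adopting neighbours, a low-noise agent almost never experiments,
# while a noisy agent sometimes adopts "by mistake"; those mistakes are what
# kick-start diffusion through the rest of the network.
quiet = adopt_probability(0, payoff_gain=2.0, noise=0.1)
noisy = adopt_probability(0, payoff_gain=2.0, noise=1.0)
```

With these numbers the quiet agent experiments with probability around 0.00005 while the noisy one does so about 27% of the time; once a neighbour has adopted, both adopt readily. The errors and experimentation are the spark, not the fire.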

Noise may be interpreted as the weeds in a Rainforest system, born out of uncontrolled environments and necessary for growing innovative companies. Shocks can be good for testing system resiliency, as long as the system as a whole does not tip into a chaos state. We will introduce Ashby’s Law of Requisite Variety next month in discussing resilience. Finally, we know it is the connections between the individual innovation ecosystem components which are critical; non-existent or non-functioning links can destroy communication and knowledge flow without adequate redundancy.

How seriously should we take all these findings? How do they relate to the Rainforest model and Type 2 complex adaptive innovation ecosystems? These questions will be discussed in next month’s blog but I wonder if a paradox is emerging. Might designing lean and agile ecosystems in fact discourage adequate experimentation and learning from mistakes, thus defeating their very purpose? Could rapid diffusion in innovation ecosystem networks be increased if we, as the song says, stop and smell the roses along the way?

Next time: An innovation flow paradox? Shocks and innovation ecosystem resiliency.

Lean and Agile Innovation Ecosystems: Part 2

Brown and agile child, the sun which forms the fruit
And ripens the grain and twists the seaweed
Has made your happy body and your luminous eyes
And given your mouth the smile of water.

Pablo Neruda, “Brown and Agile Child”  

There are three themes to pursue this month in our continuing quest to understand the science of innovation ecosystems. First is agility. In July’s blog we introduced the notion of agility in innovation ecosystems and looked at some principles of agile manufacturing systems and lessons to be learned from them.

Second is knowledge reuse. In our October 2013 blog, reusability of knowledge was discussed at some length. Studies on knowledge reuse for innovation from NASA’s Jet Propulsion Lab at Caltech were summarized, which found that users were motivated to reuse others’ ideas if: work processes optimize exposure to diverse knowledge sources; there exists a culture within the project which encourages malleable knowledge reuse; and there are efficient ways to locate and assess knowledge for credibility, with the flexibility to allow knowledge reuse.

The third theme is more difficult to name. Let’s call it ‘familiarity’ until we can come up with a better term. It relates to a thread running through several blogs in this series, namely that there are common, or at least similar, features amongst seemingly dissimilar innovation and technology commercialization ecosystems. These elements are building blocks which must be correctly connected for innovation to bloom.

A few recurring examples of difficulties with these elements I have seen in countries as diverse as the UK and Colombia, or the USA and Russia, include: poor relationships between educational organizations and industry (it is commonly believed that developed nations such as the USA have this problem completely solved – but it is not so); help for small and medium sized enterprises (SMEs) during early stage growth which proved in fact to be unattractive to SMEs; new business incubators; proof of concept, prototype development, and scale-up centers which are underutilized or lack needed services; and of course technology transfer offices at universities and research centers which may be inadequately staffed or supported – or have unclear missions.

Weaving these themes together suggests that greater reuse of knowledge and ‘how-to’ experience – ‘familiarity’ – should lead to greater ecosystem agility.

Let’s call these ecosystems Agile Type 1, and postulate the testable hypotheses:

H1: If an innovation ecosystem is Agile Type 1 then there will be a rapid flow (or ‘diffusion’ to use a more traditional term) of ideas, solutions, knowledge, and so forth through the system and its networks.

H2: If an innovation ecosystem is Agile Type 1 then the ecosystem will be capable of rapid self-organization, be highly responsive to system environment changes, and respond efficiently to errors and external shocks.

Ecosystem agility Type 1 also indicates a dynamic innovation ecosystem which exhibits self-organization and which may have leaders within or outside the self-organized groups. Some degree of direction may be needed, for example by those who have knowledge of constraining conditions such as available resources or the need to protect intellectual property.

Such capacity produces ‘areas of stability’ in complex adaptive systems – a frequently observed effect – represented by the phase space projection (right).

In the Rainforest model (The Rainforest: The Secret to Building the Next Silicon Valley) these are farms within the rainforest.

Both these hypotheses are of significance when rapid diffusion through social networks is investigated. Results from these investigations are both intuitive and curious. They will help us to speculate on innovation ecosystems of Agile Type 2. These will be postulated as systems which are more vulnerable than Agile Type 1. Agile Type 1 can be thought of as an ideal case, whereas Agile Type 2 innovation ecosystems are closer to reality.

Next time: some recent research on the diffusion of innovation in social networks and more on Type 2 ecosystems.


Lean and Agile Innovation Ecosystems: Part 1

Yond Cassius has a lean and hungry look,
He thinks too much; such men are dangerous.

William Shakespeare, Julius Caesar Act 1, scene 2

Before there were lean startups there was lean manufacturing. Lean manufacturing, which seeks to eliminate all expenditures which do not support value for the customer, was developed by Toyota in the 1950s and was in part responsible for the Japanese auto industry becoming the US auto industry’s fierce competitor two or so decades later. Agile software development, introduced in the 1990s, was influenced by ideas and methods from lean manufacturing. Its purpose is to make software usable, adapt to changes, and allow people to excel according to their strengths, rather than according to the system. More recently, lean startup methodology has become popular, intended to shorten product development cycles by iteratively creating products and integrating user feedback.

As noted in last month’s blog, A Tale of Two Quotes, Rick Dove, in his book on agile enterprises (Response Ability: The Language, Structure, and Culture of the Agile Enterprise. John Wiley and Sons, Inc., 2001), introduced the concept of “Response Ability.” He notes that “The agile enterprise can respond to opportunities and threats with the immediacy and grace of a cat prowling its territory” and goes on to explain that “response-able” components can be designed into enterprise ecosystems. These ideas are closely related to those of re-usable components within a framework (see my October 2013 blog: Create early, use often: Lego™ blocks, learning objects, and ecosystems. Part 2).

While much of the focus of agility has been in manufacturing and software development, let’s see if any of the “response-able” component concepts illuminate how innovation ecosystems may become agile – able to adapt rapidly to system environment changes. After all, we have already introduced the idea of self-organization in a complex adaptive system, which implies agility. How can analyzing agile manufacturing systems help us in building agile innovation ecosystems able to self-organize and respond effectively to external shocks?

Why should we make comparisons between systems? What new understanding might emerge? Comparisons only make sense if we can learn more about system B by comparing it with system A, and then only if any similarities are more than just coincidence. A cloud in the sky may look like a face, but I doubt we will learn anything enlightening about how faces grow from studying how clouds form.

History shows benefits of comparisons; our understanding of economic systems has been improved, some would argue, by the study of thermodynamics, and innovation flow may be helpfully compared with biological flow.

Manufacturing cell

The results of Rick Dove’s extensive research on systems such as the manufacturing cell illustrated above indicate that “response–able” systems are built from components with certain characteristics, including the following (I’m simplifying considerably as this is only an introduction):

  1. Components of response–able systems are distinct, separable, self-sufficient units cooperating towards a shared common purpose.

In innovation ecosystems the function and activities of each stakeholder and the strength of their cultural alignment should be clear to other stakeholders, as should all cross-functional and collaborative activities and existing supportive and incentive policies. This also applies to stakeholders outside the community. Without alignment towards common purposes “friction” between components can be destructive.

  2. Components of response–able systems share defined interaction and interface standards; and they are easily inserted or removed.
  3. Components within a response–able system communicate directly on a peer-to-peer relationship; and parallel rather than sequential relationships are favored.

For innovation ecosystems this means efficient communications to keep transaction costs low. The application of parallel rather than sequential relationships will be discussed in Part 2 of this blog.

  4. Component relationships in a response–able system are transient when possible; decisions and fixed bindings are postponed until immediately necessary; and relationships are scheduled and bound in real time.

This is not a recommendation for procrastination, but rather for avoiding decisions made with insufficient information, which may fix in place an ecosystem component that later turns out to be a mistake (e.g. building a new business incubator before a reliable deal flow is apparent).

  5. Components in response–able systems are directed by objective rather than method; decisions are made at a point of maximum knowledge; information is associated locally, accessible globally, and freely disseminated.
  6. Component populations in response–able systems may be increased and decreased widely within the existing framework.
  7. Duplicate components are employed in response–able systems to provide capacity right-sizing options and fail-soft tolerance; and diversity among similar components employing different methods is exploited.
  8. Component relationships in response–able systems are self-determined; and component interaction is self-adjusting or negotiated.

In previous blogs we discussed the phenomenon of emergence in complex adaptive ecosystems. Emergence is an outcome of self-organization, without centralized control (#5, #8) in the form of a new level of order in the system that comes into being as novel structures and patterns which maintain themselves over some period of time. Innovation springs from emergence. Emergence may create a new entity with qualities that are not reflected in the interactions of each agent within the system. Emergent organizations are typically very robust and able to survive and self-repair substantial damage or perturbations.

  9. Components of response–able systems are reusable/replicable; and responsibility for ready reuse/replication and for management, maintenance, and upgrade of component inventory is specifically designated.
  10. Frameworks of response–able systems standardize inter-component communication and interaction; define component compatibility; and are monitored/updated to accommodate old, current, and new components.
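
The ideas of shared interface standards and easy insertion/removal can be sketched as a tiny plug-in framework. This is my own illustration, not Dove’s code; the component names (“incubator”, “tech_transfer”) and the event strings are invented for the example.

```python
# Illustrative sketch (my construction, not Dove's) of shared interface
# standards: components plug into one small common interface, so they can be
# inserted, removed, or scaled without rewiring the rest of the framework.

class Framework:
    """Holds interchangeable components behind a common interface."""
    def __init__(self):
        self.components = {}

    def insert(self, name, handler):
        self.components[name] = handler    # easily inserted...

    def remove(self, name):
        self.components.pop(name, None)    # ...and easily removed

    def respond(self, event):
        # every component sees the event; peer-like, not a fixed sequence
        return {name: handler(event) for name, handler in self.components.items()}

ecosystem = Framework()
ecosystem.insert("incubator", lambda event: f"incubator handles {event}")
ecosystem.insert("tech_transfer", lambda event: f"tech_transfer handles {event}")
responses = ecosystem.respond("new_startup")
ecosystem.remove("incubator")              # removal does not break the rest
```

The point of the sketch is structural: because every component honours the same small interface, adding or dropping one (a new incubator, a restructured tech transfer office) never forces the framework itself to change.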

Reusability was discussed at some length in the October 2013 blog referenced at the top of this post. However, this topic will be further explored in Part 2 of this blog.

Shakespeare might be surprised to learn that his opinion of thinking men (sic) was wrong; one way the US auto industry responded to the competitive challenge of higher quality Japanese imports in the 1980s, which led to agile manufacturing concepts among other changes, was to enable more thinking among assembly line workers.

Next time: Lean and Agile Innovation Ecosystems: Part 2

A Tale of Two Quotes

“I don’t like using words like ecology to explain in shorthand a rich and useful organizational concept for business. For one, these soft edged metaphors turn off a lot of hard edged business people who occupy a large portion of the organizational power structures, especially in operations and manufacturing. For another, nature has the patience and resilience to absorb a lot of failed or marginal experiments that would terminate a business enterprise…. Simply referencing the metaphorical links and then postulating a new business paradigm doesn’t appear successful in communicating with most people who have operational concerns.” Rick Dove, Response Ability: The Language, Structure, and Culture of the Agile Enterprise. John Wiley and Sons, Inc., 2001, p 134.

A Harvard business school alumnus responding to the intra-Harvard debate between Jill Lepore, an historian, and Clay Christensen, a business school professor, about theories of disruptive technologies is quoted as saying “We don’t learn laws of business. We learned stories.” John McDermott, Career Advice from Marina Keegan, Financial Times (US), June 26, 2014.

Rick Dove’s book is about agile manufacturing but also much more. In the next blog in this series I shall introduce a few of Rick’s concepts and discuss whether they can throw more light onto how innovation ecosystems may become agile – able to adapt rapidly to changes and shocks.

Meanwhile, let’s (1) gently dissect these two quotes, and (2) suggest what practical results the complex adaptive systems theory of innovation ecosystems predicts which will be of value to the most skeptical operations person. We only have space to begin here, and will continue next time.
In the above, “throw more light onto” is itself a metaphor; we are not literally going to use a flashlight. Francis Thompson (1859–1907), in his poem Contemplation, uses a metaphor which nudges us into a sense of contemplation.

“This morning saw I, fled the shower,
The earth reclining in a lull of power”

Much has been written by philosophers about how the hearer decides to seek a nonliteral meaning in a metaphor. A metaphor makes us attend to some likeness between two things, conveying an idea that opens different frames of mind beyond the more straitjacketed analogy (A is like B: freshness after a rain shower is like the earth resting).

Thus, we are saying that a metaphor can help express a theory; but first we should be sure that we have some common ground as to what a theory is. Thomas Kuhn, a philosopher of science, set out criteria (although not necessarily precise ones) to help choose a theory or choose from competing theories. He stated that a theory should be:

1. Accurate, in that it is empirically adequate, agreeing with experimentation and observation.
2. Consistent, namely internally consistent, but also externally consistent with other theories.
3. Broad in scope, with consequences extending beyond the phenomena it was initially designed to explain.
4. Simple, using the principle that the simplest explanation is usually the better one.
5. Fruitful, in that any theory should predict new phenomena or new relationships among phenomena.

Others might add one more requirement, that of “falsifiability” or proving a theory to be wrong by making an observation or conceiving an argument which proves a theory statement to be false.

Or, put more succinctly, a theory must explain and predict. Without prediction a theory is worthless. I suggest we should hold stories and other narrative forms to Kuhn’s five-test scrutiny. Narratives have become a popular (as the Harvard graduate stated) and effective metaphorical explanation of events – for example, in complex adaptive systems, where a mathematical description is not possible. Can narrative predict as well as explain? Let’s begin to investigate by applying Kuhn’s tests to complex adaptive systems concepts introduced in recent blogs in this series. For example:

Emergence is an outcome of self-organization in the form of a new level of order in the system that comes into being as novel structures and patterns which maintain themselves over some period of time. Innovation springs from emergence. Emergence may create a new entity with qualities that are not reflected in the interactions of each agent within the system. Emergent organizations are typically very robust and able to survive and self-repair substantial damage or perturbations.
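The mechanism described above – local interactions producing a new level of system-wide order – can be illustrated with a toy simulation. This is a minimal sketch (a majority-rule cellular automaton of my own construction, not a model of any real innovation ecosystem): each “agent” simply copies the majority state of its neighborhood, yet stable large-scale domains emerge that no individual rule mentions.

```python
import random

def step(grid, n):
    """One update: each cell adopts the majority state of its 3x3 neighborhood."""
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            votes = sum(grid[(i + di) % n][(j + dj) % n]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1))
            new[i][j] = 1 if votes > 4 else 0  # majority of 9 cells, self included
    return new

def local_order(grid, n):
    """Fraction of adjacent cell pairs in the same state (about 0.5 if random)."""
    same, total = 0, 0
    for i in range(n):
        for j in range(n):
            for di, dj in ((0, 1), (1, 0)):
                same += grid[i][j] == grid[(i + di) % n][(j + dj) % n]
                total += 1
    return same / total

random.seed(1)
n = 30
grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
before = local_order(grid, n)   # near 0.5: no structure yet
for _ in range(20):
    grid = step(grid, n)
after = local_order(grid, n)    # markedly higher: stable domains have emerged
print(before, after)
```

No agent is told to build domains; the order is a system-level property of the interactions, which is the essence of emergence – and, as the paragraph above notes, the resulting patterns are robust, persisting once formed.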

Kuhn’s Tests
Kuhn’s tests 1 through 3 are easily satisfied, whereas #4 might be more problematic – depending on how we define “simple.” Complex adaptive systems theory has been especially fruitful (test 5), as we described in April’s blog Games of chance? Cause and effect in innovation ecosystems Part 2, which reported the work of Sharon Zivkovik on social entrepreneurship. Prof. Zivkovik reports on how complex adaptive systems theory predicts that, “under certain conditions, interactions between independent agents produce system-level order as agents interact and learn from each other, change their behavior, and adapt and evolve to increase their robustness. Empirical research has shown large complex systems such as communities require enabling conditions to be created in order to maintain the coordination required for emergence, self-organization and adaptive capability.”

In T2VC’s recent innovation ecosystems work with Medellín, Colombia, similar behavioral changes and adaptations occurred by adjusting certain conditions (this case example will appear in a future blog).

Another concept is:
Stabilizing feedback
If new emergent order is creating value it will stabilize or legitimize itself, finding parameters that best increase its overall sustainability in the ecosystem. Stability results from slowing the non-linear processes that led to the amplification of emergence in the first place.

Kuhn’s Tests
We don’t have space this month to go into detail, but discussions of empirical studies on networks scattered among several previous blogs, such as the stabilizing effects of weak links, could be shown to meet all five requirements.

I hope readers of this blog series will now at least be beginning to understand that Rainforest innovation ecosystems are complex adaptive systems, and that the Rainforest metaphor expands our thinking. Philosophers have postulated that “even a quite definite speaker intention does not finally determine the meaning of a metaphor” and that “the interpretation of the light the metaphor sheds on its subject may outrun anything the speaker is thought explicitly to have in mind” (An Irenic Idea about Metaphor, Philosophy, Vol. 88, No. 343, p. 25). In the Rainforest case the metaphor in fact preceded the more detailed analysis of the complex adaptive systems model. The metaphor worked.

Next time, more on agile innovation ecosystems and more tests of theory and predictions.

Games of chance? Cause and effect in innovation ecosystems Part 2

Notes on the practice of innovation and technology commercialization

“That is why, according to Viktor Mayer-Schönberger and Kenneth Cukier’s book, Big Data, ‘causality won’t be discarded, but it is being knocked off its pedestal as the primary fountain of meaning’. But a theory-free analysis of mere correlations is inevitably fragile.”
Big data: are we making a big mistake? Financial Times, March 28, 2014

“You have on each table cardboard, drinking straws, glue, string, balloons, paper cups, and other bits and pieces. Use these materials to build a model of your local innovation ecosystem.” These were the instructions given to multiple groups of five or six from among those of us who participated in the recent Global Innovation Summit in San Jose, California. An undisclosed prize, based on unexplained criteria (a slice of innovation humor?), was to be given to the winner. This modeling game was a lot of fun, provided insight to some, and also raised the question of what kinds of models might represent innovation ecosystems.

We usually build system models to simplify the world around us in order to better understand it – and hope that in such simplified models we have included the important features of the actual system. For example, it is not possible to include the potentially large number of causes producing the observed outcomes.


A model may be a physical structure, as in the picture above of one team’s product from the Global Innovation Summit, or in the form of mathematical equations (in some cases this may have been the way an actual system was designed), computer simulations, or even a set of stories. Using narrative to understand the dynamics of innovation ecosystems will be explored in a future blog in this series.

One difficulty is that the more we expand and generalize models to take into account wider circumstances, the more unmanageable they become, and usually we have to make additional simplifying assumptions or model small subsets of a system.

At this point it’s important to make a distinction between complex and complicated systems. Complex (adaptive) systems are what we have been discussing in the past few blogs. We noted, for example, that in such systems the same inputs may not always yield the same outputs and the whole is more than the sum of its parts. Complicated systems may be broken down into smaller and smaller constituent parts (superposition principle); the whole is the sum of its parts and behavior is completely predictable. An economy is complex. A modern passenger aircraft is complicated. Both systems are composed of system elements connected in a system structure. Both kinds of system perform specific system functions in their system environment. Both systems may have a permeable system boundary allowing inputs from, and outputs to, the external environment.

The difference is that complicated systems can be fully modeled whereas complex systems are inherently resistant to modeling.
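The superposition point can be made concrete in a few lines. This is an illustrative sketch of my own, not drawn from the blog: in a linear (“complicated”) system the response to combined inputs is the sum of the responses to each input, while a nonlinear system breaks that rule – and repeated nonlinear interaction amplifies tiny differences between starting states, which is why near-identical inputs need not yield the same outputs in practice.

```python
def linear(x):
    return 3 * x             # a 'complicated' (linear) response

def nonlinear(x):
    return 3 * x * (1 - x)   # a logistic-map style nonlinear response

# Superposition holds for the linear system but fails for the nonlinear one.
a, b = 2, 3
print(linear(a + b) == linear(a) + linear(b))            # True
print(nonlinear(a + b) == nonlinear(a) + nonlinear(b))   # False

# Sensitivity: two nearly identical states diverge under repeated
# nonlinear interaction, defeating long-run prediction.
x, y, diff = 0.400000, 0.400001, 0.0
for _ in range(50):
    x = 3.9 * x * (1 - x)
    y = 3.9 * y * (1 - y)
    diff = max(diff, abs(x - y))
print(diff > 0.1)   # True: a millionth of a difference has been amplified
```

This is the mathematical root of the claim above: a complicated system can be decomposed, analyzed part by part, and reassembled; a complex one cannot.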

My colleague Henry Doss, in his series of Forbes blogs on leadership, put the issue well: “We live and work in a world that wants specificity and predictability, but we live and work in systems that defy predictability…  A strong leader of complex systems knows this truth about systems, and understands that oftentimes judgment, intuition and commitment are more important than measurements, projections and predictions.  Knowing that systems are resistant to predictive models, and are rich in unforeseen, often positive, outcomes is a powerful foundation for effective leadership.  And it’s an awareness that will make for more informed and nuanced decision-making.” Does Synergy Really Mean Anything?

Do difficulties relating effects to causes mean that we cannot model complex adaptive ecosystems? In fact, no: even without full predictive capability, progress can be achieved. Sharon Zivkovik at the University of Adelaide, Australia, in her article Addressing Society’s Most Pressing Problems by Combining the Heroic and Collective Forms of Social Entrepreneurship notes, “According to complex adaptive systems theory, under certain conditions interactions between independent agents produce system-level order as agents interact and learn from each other, change their behavior, and adapt and evolve to increase their robustness. Empirical research has shown large complex systems such as communities require enabling conditions to be created in order to maintain the coordination required for emergence, self-organization and adaptive capability.”

These communities may be said to be engaged in ‘collective entrepreneurship’ by integrating knowledge and resources from different, and sometimes diverse, parts of the ecosystem, capitalizing on properties of far-from-equilibrium complex adaptive systems such as self-organization, and using the resulting resources to address difficult problems.

Dr. Zivkovik further notes, “The aim of interventions at the point of self-organization is to enable community system members and their resources to be recombined into new patterns of interaction and working arrangements that improve the functioning and performance of the community system and displace the old way of thinking.”

It is this recombining into new patterns of interaction, or moving from one ‘basin of stability’ to another in a Rainforest innovation ecosystem, that allows us to build, if not complete ecosystem models, then at least some predictability around these stable regions. I use the word “moving,” and this points to what’s missing in our discussion so far: a model must be dynamic and include flows, such as those of knowledge and capital, in the innovation ecosystem. These are relatively new ideas in the context of innovation ecosystems and present considerable challenges to the modeler. However, it does seem that we should be able to model such systems beyond string and Styrofoam™.
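The notion of basins of stability can itself be sketched in code. What follows is a generic double-well toy of my own, not an ecosystem model, with all values chosen purely for illustration: within a basin the system’s fate is predictable (it settles to the same stable point), while a large enough perturbation – a “kick” – moves it to the other basin.

```python
def settle(x, steps=200, rate=0.1):
    """Let the state relax downhill in the double-well potential (x^2 - 1)^2.
    The two minima, at x = -1 and x = +1, act as basins of stability."""
    for _ in range(steps):
        x -= rate * 4 * x * (x * x - 1)   # gradient descent on the potential
    return x

print(round(settle(0.3), 3))         # settles in the +1 basin
print(round(settle(-0.4), 3))        # settles in the -1 basin
print(round(settle(0.3 - 1.5), 3))   # a large perturbation: now the -1 basin
```

Within each basin the long-run outcome is the same regardless of the exact starting point – which is precisely the kind of local predictability the paragraph above suggests we can build around stable regions, even when the full system resists modeling.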

The first part of this Blog is at: Games of chance? Cause and effect in innovation ecosystems Part 1

Next month: A review of the 14 blogs so far in this series, their connections, and what I hope we have learned.

