Some Emerging Principles for Managing in Complex Adaptive Systems

Working paper by Paul Plsek (PEP&A, Inc.), Version: November 25, 1997

Whether we recognize it or not, science matters to organizational
leaders. While names like Galileo, Newton, and Descartes do not routinely
appear on lists of management gurus, scientists such as these have had a profound effect
on management thinking, and thinking in general. Science shapes the way we view the world,
providing metaphors that help us make sense of events, and thereby giving us a framework
for acting to influence the future course of those events. Since the time of the Renaissance, the
predominant metaphor of science has been that of the machine. Scientists of the time
described the universe as a grand clockwork. The planets spun about the Sun in predictable
orbits and physical bodies moved in trajectories that could be described with the
precision of mathematics. The goal of science was to reduce the world to its piece parts,
understand those parts, and then put them back together in new ways to make new things. This thinking pervades our view of
leadership and management. Organization charts, job descriptions, corporate policies,
detailed strategic and operational plans, and countless other artifacts of modern
organizational life, are deeply rooted in the machine metaphor.1 These are our attempts to
specify, in increasing detail, the piece parts of organizational systems so that the
overall clockwork of the organization can better produce the outcomes we desire. Despite our attempts to control the
machine of the modern organization, and despite the numerous, undeniable successes from
the use of these machine-control techniques, it remains our common experience of the world
that "stuff happens." For example, Coca-Cola reduced consumer judgment to its
piece parts, conducted scientifically sound taste tests, developed a detailed product
launch plan, and found the "New Coke" surprisingly rejected by the marketplace.
Countless merger and acquisition deals have been thoroughly analyzed and declared
"sure winners," only to have the whole thing come unraveled as the merged
organization never quite learns to work synergistically as one. Reengineering, TQM, and
numerous other improvement approaches that have worked with great success in one
organization fail miserably when installed in another. True to the machine metaphor, our usual
reaction to such "stuff" is to retrace the analysis, pinpoint where we went
wrong, extract lessons learned, and fix things up for the next round of analysis. The
organizational world is a machine-we think, not unlike the scientists of the
Renaissance-and it is only a matter of time and technology before we will understand its
parts in enough detail to be able to describe it completely and harness it totally. The
usual result: different "stuff" happens the next time around. While there are undoubtedly routine
aspects of organizational life where the machine metaphor fits, there are, just as
undoubtedly, aspects where it does not. We need new metaphors to help us understand the
emerging stuff of the modern, complex organization. Fortunately, science has again
preceded us.
New thinking from the relatively young
science of complexity is radically altering our views on the management of organizations
and other human social systems. For the past two years, we have been working with 30
leaders from VHA health care organizations in applying the lessons from this new science
to the practice of management. Our efforts parallel those of similar groups of leaders
outside healthcare in forums such as the Santa Fe Institute's Business Network. Our ongoing, practical application work
leads us to describe an emerging set of management principles for viewing the workings of
complex organizations. These emerging principles suggest new directions for management
action: directions that often run counter to our learned instincts based on the machine
metaphor.
What is a Complex Adaptive System (CAS)?

The new thinking to which we refer comes from the study of complex
adaptive systems. Over the past 20 years, this field has attracted leading
thinkers-including several Nobel laureates such as Murray Gell-Mann, Philip Anderson,
Kenneth Arrow, and Ilya Prigogine-from such diverse fields as physics, biology, chemistry,
economics, mathematics, engineering, and computer science. Key work in the field has taken
place at several academic and research centers around the world; most notably the Santa Fe
Institute in New Mexico. In this section, we will briefly describe some of the key
concepts from this work.2 In subsequent sections we will illustrate these concepts more
fully with examples from our work with organizations.
In a CAS, agents operate according to
their own internal rules or mental models (the technical term is "schemata"). In
other words, each agent can have its own rules for how it responds to things in its
environment; each agent can have its own interpretations of events. These rules and
interpretations need not be explicit. They do not even need to be logical when viewed by
another agent. These are clearly characteristics of humans in just about any social
system. Agents within a CAS can share mental
models, or be totally individualistic. Further, agents can change their mental models.
Because agents can both change and share mental models, a CAS can learn; its behavior can
adapt over time, for better or for worse. Adaptation means that the agents and the systems
in which they are embedded co-evolve. Again, we clearly know that human organizations
change and adapt over time; again, sometimes for better sometimes for worse. The behavior of a CAS emerges-and this is
a key point-from the interaction among the agents. It is more than merely the sum
of its parts. Further, each agent and each CAS is embedded, or nested, within other CAS,
providing further interactions. For example, a person is a CAS... they are also a member
of a team... the team is embedded in a department... which is nested in an organization...
which is part of an industry... and so on; there are interactions all up and down the
line. A CAS can, and usually does, exhibit novel
behaviors that stem from these interactions. Because of the interaction, the behavior of
the system is also non-linear; seemingly small changes can result in major swings in
system behavior, while seemingly large changes might have no effect at all. For example, a
change effort in one organization might involve management retreats, employee meetings,
memos and much fanfare, and yet have no discernible effect only a month later. In another
organization, a rumor about a chance comment made by a senior leader in the washroom can
touch off a major union organizing effort that forever changes the landscape of the
company. We are usually surprised when such things happen. However, when we learn to view
systems through the lens of CAS, such unpredictable outcomes are not so surprising. Because of this novelty and non-linearity,
the detailed behavior of a CAS is fundamentally unpredictable. It is not a question of
better understanding of the agents, better models, or faster computing, as we have come
to believe erroneously based on the machine metaphor. We simply cannot reliably predict
the detailed behavior of a CAS through analysis. We must let the system run to see what
happens. The implications of this are that we can never hope to predict the detailed
behavior of a human system. While this seems obvious to say, note how often managers and
leaders act as if we can be sure about how others should act in response to our actions;
for example, when we install a program that worked in another company and then wring our
hands and point our fingers when the predicted success fails to materialize in our own
organization. Still, despite this lack of detailed
predictability, it is often possible to make generally true, practically useful statements
about the behavior of a CAS. For example, while we cannot predict the exact closing
reading of the Dow Jones Industrial Average tomorrow, we can describe the overall stock
market trend as bullish or bearish and take appropriate investment action. This gives us
some hope in understanding complex human systems; we just need to be careful not to
over-estimate our ability to predict what will happen. Over-estimation is the usual
mistake that we all make; if you have ever been surprised by how something has turned out,
you may have fallen into the trap of over-estimating your ability to predict. Ilya Prigogine3, Stuart Kauffman4, and
others have shown that a CAS is inherently self-organizing. Order, creativity, and
progress can emerge naturally from the interactions within a CAS; it does not need to be
imposed from outside. Further, in a CAS, control is dispersed throughout the interactions
among agents; a central controller is not needed. Consider, for example, the CAS of the
lowly termite. Termite mounds are engineering marvels; the highest structures on the
planet, when compared to the size of their builders. Yet there is no CEO termite, no
architect termite, no blueprint, no termite on a far away hill viewing the structure in
perspective and radioing orders for adjustments as the building proceeds. Each individual
termite acts locally, within a context of other termites who are also acting locally. The
termite mound emerges from a process of self-organization. In contrast, most of our
traditional management theory is about how to establish order and control through the
actions of a few people at the top of an organizational hierarchy. This management
instinct, one that we have all learned, may be the biggest factor holding back innovation
and progress in our organizations. Christopher Langton5 calls the set of
circumstances under which this creative emergence arises "the edge of chaos."
This is a place where there is not enough agreement and certainty to make the choice of
the next step trivial and obvious, but neither is there so much disagreement and
uncertainty that the system is thrown into complete disorder. We have all been there many
times in our lives within organizations. It is that anxious point in time when the plan
has not quite come together yet; when it feels like we are on to something but no one is
quite sure just what that something is. Our learned instinct in such moments is to try to
achieve concreteness, troubleshoot the issues, and take action to fix things; in essence
to break down the ambiguity into piece parts so that we can go on assembling our plans in
a logical manner. The study of complex adaptive systems suggests that we might often be
better off maintaining the anxiety, sustaining the diversity, letting the thing simmer for
a while longer to see what will happen on its own. This is indeed uncomfortable for
leaders schooled in machine thinking.
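The non-linearity described in this section can be made concrete with a small simulation. The sketch below is our own illustration (a Granovetter-style threshold model, not something drawn from the working paper itself): each of 100 agents follows its own private rule, its schema, about when to join a collective action. Changing a single agent's rule flips the whole system from a full cascade to almost no activity at all.

```python
def cascade(thresholds):
    """Granovetter-style threshold model run to a fixed point.

    Each agent has a private rule (its 'schema'): it joins once the
    number of agents already joined meets its personal threshold.
    """
    joined = set()
    changed = True
    while changed:
        changed = False
        for agent, threshold in enumerate(thresholds):
            if agent not in joined and len(joined) >= threshold:
                joined.add(agent)
                changed = True
    return len(joined)

# 100 agents with thresholds 0, 1, 2, ..., 99: a full cascade unfolds.
print(cascade(list(range(100))))             # 100 -- everyone joins

# Change ONE agent's rule (threshold 1 -> 2): the cascade dies out.
print(cascade([0, 2] + list(range(2, 100))))  # 1 -- almost no one joins
```

A seemingly trivial change in one agent's internal rule produces a major swing in system behavior; nothing in any single rule "contains" the aggregate outcome.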
The Stock Market: An Example of a Complex Adaptive System

The stock market is a good illustration of these
properties of a CAS. Buyers, sellers, companies, and regulators each have their own mental
models and are free to take many different actions. The specific actions of each agent are
somewhat unpredictable, and can often be construed as illogical by other agents observing
the action. Logical or not, each action changes the environment that others within the
system face. These others may take their own actions, which in turn further changes the
environment. The detailed movements of the system (whether the market is up or down today
and by how much) are fundamentally unpredictable. Furthermore, relatively small things,
like the off-hand remarks of the Federal Reserve Chairman, can have a large impact on the
market; there is non-linearity in the system. However, despite what seems to be total
chaos, there is an underlying order that allows us to make generally true statements about
the system (this is the basis of both the fundamental and technical analysis approaches
to the stock market). Finally, no one "controls" the stock market. Rather, the
stock market "happens;" it creates its own unique behavior every day. Most organizational systems are complex adaptive systems.
Substituting terms such as employees, co-workers, bosses, outcomes, performance, and so on
into the stock market illustration above yields a pretty good description of what goes on
every day in most organizations. Try this substitution yourself and see if it doesn't
resonate with your experience in organizations. This is referred to as
"sense-making": the emerging understanding of complex adaptive systems
helps people make sense of what in the past has seemed a sometimes chaotic and nonsensical
world.6
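The stock market illustration above can be sketched in a few lines of code. The toy model below is purely our illustration (the agent counts, rules, and price-impact coefficient are all invented for the example): two kinds of agents each trade by their own simple rule, and the aggregate price path rises and falls around a "fundamental" level even though no agent's rule, and no central controller, dictates that behavior.

```python
def simulate_market(steps=200, n_momentum=30, n_value=31,
                    fundamental=100.0, p0=105.0):
    """Toy market of interacting agents, each trading by its own rule.

    Momentum agents buy when the last move was up and sell otherwise;
    value agents sell when the price sits above a 'fundamental' level
    and buy below it. Price moves with net demand; no agent controls
    the outcome.
    """
    prices = [p0, p0 * 1.01]  # seed the series with an initial up-move
    for _ in range(steps):
        last, prev = prices[-1], prices[-2]
        momentum = n_momentum * (1 if last > prev else -1)
        value = n_value * (-1 if last > fundamental else 1)
        net = momentum + value
        prices.append(last * (1 + 0.001 * net))  # simple price-impact rule
    return prices

prices = simulate_market()
ups = sum(b > a for a, b in zip(prices, prices[1:]))
downs = sum(b < a for a, b in zip(prices, prices[1:]))
# The path oscillates around the fundamental: emergent behavior that is
# written into no single agent's rule.
```

Substituting employees, bosses, and performance for buyers, sellers, and price, as suggested above, the same structure describes everyday organizational life.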
Some Emerging Principles of Complexity Management

Our study of the
science of complex adaptive systems and our work with organizations have led us to propose
some principles of management that are consistent with an understanding of organizations
as CAS (see figure 1). In the spirit of the subject matter, there is nothing sacred or
permanent about this list. However, these principles do begin to give us a new way of
thinking about and approaching our roles as leaders in organizations. We are not the first to propose such a
list.7 Our intent here is to capture practical principles that emerge from the science of
complexity in language that resonates with management issues. Furthermore, astute readers
will also observe that our list of principles, and CAS theory itself, has much in common
with general systems thinking, the learning organization, total quality, empowerment,
gestalt theory, organizational development and other approaches. It has much in common
with these, but it is not any of these. CAS theory clarifies and pulls together lots of
good thinking from the past. An understanding of CAS is an understanding of how things
work in the real world. That others in the past have also understood these things and put
them into various contextual frames should not be surprising. An understanding of CAS
simply provides a broader, more fundamental, potentially unifying framework for these
ideas.
Figure 1:
Nine Emerging and Connected Organizational and Leadership Principles from the Study of
Complex Adaptive Systems
1. View your system through the lens of complexity (rather than the metaphor of a machine or a military organization).

As we have pointed out, the predominant metaphor in use in organizations today is that of a machine. Almost equally popular is the metaphor of a military operation. If an organization is a machine, then we just need to specify the parts well, and make sure that each part does its part. If an organization is a military operation, then command, control, and communication need to be hierarchical; survival is key; and sacrificial heroes are desired (although no one really wants to be one themselves). Most of today's organizational artifacts-job descriptions, "rank and file" employees, turf battles, strategic plans and so on-emerge from these largely unexpressed and undiscussed metaphors. If you buy into these metaphors, then the traditional actions of management make sense and should work.

The basic problem with these metaphors
when applied to a CAS is that they ignore the individuality of agents and the interaction
effects among agents. Or worse, they simply assume that all this can be tightly controlled
through better (read: more) specification. While there are many situations where the
machine and military metaphors might be useful-for example, routine surgical processes in
the health care organizations we worked with-there are also many situations where these
metaphors are grossly inadequate. When we "view our system through the lens of
complexity" we are taking on a new metaphor-that of a CAS-and, therefore, are using a
different model to determine what makes sense to do as leaders. Viewing the world through the complexity
lens has been a marvelously stress-reducing experience for the health care leaders that we
have worked with over the past few years. Many have come to see that the massive
sea-changes that they have experienced and agonized over recently-for example, the failed
Clinton health care reform plan, the rise of managed care, the AIDS epidemic-are natural
phenomena in a complex adaptive system. Such things will happen again, each will leave its
mark on the health care system, predicting when and where the next one will come is
futile, learning to be flexible and adaptable is the only sustainable leadership strategy. The view through the complexity lens need
not only be of very large scale systems. For example, Muhlenberg Regional Medical Center
knew that its biggest community relations problem was in its Emergency Room (ER). Hospital
CEO John Kopicki and VP Mary Anne Keyes knew that the traditional approach was to develop
a plan (they also toyed with the idea of launching a reengineering effort), and then use
their organizational weight to see to it that everyone followed the plan. The complexity
lens suggested, however, that a "good enough vision" and "minimum
specifications" (described in principle two below); along with interaction among the
ER staff and the willingness to hold the creative anxiety of not being able to say exactly
what ought to be done (described in principle four below) might lead to better results
than the traditional management or reengineering approaches. "The idea that the ER
staff could determine for themselves what they would do generated a burst of
enthusiasm," notes Kopicki. Starting without a master plan, "...they tried a
variety of innovations, kept what worked, and threw out what didn't. Within six months,
they had improved customer satisfaction scores by 67 percent. That's unheard of. No one
ever created that level of improvement in only six months. With that kind of success under
our belts, I've been leading the hospital towards a culture where this kind of
self-organization is the way we do things. We see more examples of it working all the
time."
2. Build a good enough vision and provide minimum specifications (rather than trying to plan out every little detail).

Since the behavior of a CAS emerges from the interaction among the agents, and since the detailed behavior of the system is fundamentally unpredictable, it does little good to spend all the time that most organizations spend in detailed planning. Most organizational leaders have had the experience of participating in very detailed planning, only to find that assumptions and inputs must be changed almost immediately after the plan is finalized. Complexity science suggests that we would be better off with minimum specifications and a general sense of direction, allowing appropriate autonomy for individuals to self-organize and adapt as time goes by.

The science behind this principle traces
its roots back to the "Boids" computer simulation, developed in 1987 by Craig
Reynolds (and available on many Internet software bulletin boards).8 The simulation
consists of a collection of autonomous agents-the boids-placed in an environment with
obstacles. In addition to the basic laws of physics, each agent follows three simple
rules: (1) try to maintain a minimum distance from all other boids and objects, (2) try to
match speed with neighboring boids, and (3) try to move toward the center of mass of the
boids in your neighborhood. Remarkably, when the simulation is run, the boids exhibit the
very life-like behavior of flying in flocks around the objects on the screen. They
"flock," a complex behavior pattern, even though there is no rule explicitly
telling them to do so.9 While this does not prove that real birds use these simple rules,
it does show that simple rules-minimum specifications-can lead to complex behaviors. These
complex behaviors emerge from the interactions among agents, rather than being imposed
upon the CAS by an outside agent or an explicit, detailed description. In contrast, we often over-specify things
when designing or planning new activities in our organizations. This follows from the
paradigm of "organization as a machine." If you are designing a machine, you had
better think of everything, because the machine cannot think for itself. Of course, in
some cases, organizations do act enough like machines to justify selected use of this
metaphor. For example, if I am having my gall bladder removed, I would like the surgical
team to operate like a precision machine; save that emerging, creative behavior for
another time! Maximum specifications and the elimination of variation might be appropriate
in such situations. Most of the time, however, organizations
are not machines; they are complex adaptive systems. The key learning from the simulations
is that in the case of a CAS, minimum specifications and purposeful variation are the way
to go. This principle would suggest, for example,
that intricate strategic plans be replaced by simple documents that describe the general
direction that the organization is pursuing and a few basic principles for how the
organization should get there. The rest is left to the flexibility, adaptability, and
creativity of the system as the context continually changes. This, of course, is a
frightening thought for leaders classically trained in the machine and military metaphors.
But the key questions are: Are these traditional metaphors working for us today? Are we
able today to lay out detailed plans and then 'just do it' with a guaranteed outcome? If
not, do we really think that planning harder will be any better? The quintessential organizational example
of this principle of good enough vision and minimum specifications is the credit-card
company, VISA International. Despite its $1 trillion annual sales volume and roughly half
a billion clients, few people could tell you where it is headquartered or how it is
governed. Its founding CEO, Dee Hock, describes it as a nonstock, for-profit membership
corporation in which members (typically, banks that issue the VISA cards) cooperate
intensely "in a narrow band of activity essential to the success of the whole"
(for example, the graphic layout of the card and common clearinghouse operations), while
competing fiercely and innovatively in all else (including going after each other's
customers!).10 This blend of minimum specifications in the essential areas of cooperation,
and complete freedom for creative energy in all else, has allowed VISA to grow 10,000%
since 1970 despite the incredibly complex worldwide system of different currencies,
customs, legal systems and the like. "It was beyond the power of reason to design an
organization to deal with such complexity," Hock explains. "The organization had
to be based on biological concepts to evolve, in effect, to invent and organize
itself." Health care organizations are
traditionally quite rule bound. Because there are many legitimate industry regulations
that govern who can do what and how, many staff members in health care organizations
assume that everything must be done the way it has been done in order to satisfy legal
requirements. So the concepts of good enough vision and minimum specifications are both
freeing and scary to the health care leaders we worked with. The results for the risk
takers, however, have been good. For example, Mary Anne Keyes (the Muhlenberg Medical
Center VP we met in an earlier example) assembled a "little group of doctors and
nurses" to simplify the hospital's admission process and gave them just one simple
specification: "all admission work must be done within an hour of the patient coming
to the hospital." All other previously sacred cows were open to the group's
creativity. The group created the Express Admission process that is such a hit with
patients and doctors that 400 hospitals from around the country have asked to come to
learn about it. In a similar vein, Linda Rusch, a VP at
Hunterdon Medical Center, asked two nurse managers to work with the staff nurses to
transform their units into "humanistic healing environments." "That's
all," Rusch tells us, "I'm convinced that they will create two units that are
both very very customer-service oriented and good places to heal." In another aspect
of the hospital's mission, community health, Rusch explains that after she laid out a few
minimum specifications regarding partnerships and the community, "the next thing I
know, I hear about these nursing units that are collaborating in all these different
projects with the outside public." In most health care institutions, true to the
classic military organization metaphor, it is someone's job to coordinate community
affairs. In many cases that person spends a great deal of time trying to
"motivate" staff to get involved. This does not seem to be a problem anymore at
Hunterdon Medical Center; nor at the other organizations in the VHA group who have made
similar progress.11 Good enough vision and minimum
specifications are also powerful concepts in regard to strategic planning. For example,
the Institute for Healthcare Improvement, a non-profit organization in Boston, bypassed
the classic MBA approach in its efforts to build its international activities. Instead,
the organization's board adopted 8 simple principles such as: "we should only work in
countries where there is a clear aim to improve" and "our international
collaborations must always be a two-way street of learning." These minimum
specifications, along with diverse efforts at building information flow (see principle
number four in a later section), comprise the organization's ever-emerging
"plan" for international activities. Because of this flexibility, the
organization was able to respond quickly to requests from local leaders in Sweden to begin
a series of improvement efforts spurred by the recent Dagmar Agreement in that country's
parliament that mandates reductions in waiting lists in the health service. Such a
development might never have been predicted had the organization used a more traditional
approach to strategic planning; the opportunity would have been missed.

3. When life is far from certain, lead from the edge, with clockware and swarmware in tandem (that is, balance data and intuition, planning and acting, safety and risk, giving due honor to each).

"Clockware" is a term that describes the management processes we all know that involve operating the core production processes of the organization in a manner that is rational, planned, standardized, repeatable, controlled, and measured. In contrast, "swarmware" refers to management processes that explore new possibilities through experimentation, trials, autonomy, freedom, intuition, and working at the edge of knowledge and experience. Good enough vision, minimum specifications, and metaphor are examples of swarmware that we have already seen. The idea is to say just enough to paint a picture or describe the absolute boundaries, and then let the people in the CAS become active in trying whatever they think might work.

In an informed approach to complexity, it
is not a question of saying that one is good and the other is bad. The issue is about
finding an appropriate mix for a given situation. Where the world is certain and there is
a high level of agreement among agents (for example, the need for consistent variable
names and programming language syntax in a large software system, or the activities in the
operating room during a routine surgery) clockware is appropriate. In a clockware
situation, agents give up some of their freedom and mental models in order to accomplish
something they have agreed upon collectively. The CAS displays less emergent, creative
behavior, and begins to act more like a machine. There is nothing wrong with this. However, where the world is far from
certainty and agreement ("near the edge of chaos") swarmware is needed with its
adaptability, openness to new learning, and flexibility. Swarmware is also needed in
situations where the old clockware processes are no longer adequate for accomplishing the
purpose, or in situations where the purpose has changed, or in situations where creativity
is desirable for its own sake. Linda Rusch at the Hunterdon Medical
Center is working with her staff to move fluidly between clockware routines and swarmware
activities as the level of agreement and certainty varies in the situation. She laughs,
"My staff go around saying, 'we're swarming now!'" James Taylor, the new CEO at the
University of Louisville Hospital, convinced his board to save the $500,000 they were
going to spend on consultants and various analyses to develop a strategic plan. Instead,
he argued, let's "just get on with addressing the strategic issues themselves."
He astutely points out that there is a strong tendency in most organizations to "get
some experts, plan it, and avoid talking about what the real issues are." Taylor sums
up the essence of the swarming we have seen in the organizations we work with, "It's
a more pragmatic, action orientation that says here are the strategic issues so let's
address them the best we can. Let's keep our ideas open... Let's create an organizational
environment where we can learn from our actions." While this might sound like an
abdication of leadership to those steeped in the organization-as-machine metaphor, the new
science suggests that it is the very essence of leadership in complex adaptive systems.
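The certainty/agreement rule behind this principle can be caricatured as a toy decision function. To be clear, this is only an illustration: the 0-to-1 scales, the 0.7/0.3 cutoffs, and the function name are invented for the sketch, not taken from the paper.

```python
def choose_approach(certainty: float, agreement: float) -> str:
    """Toy restatement of the rule in the text: close to certainty and
    agreement calls for clockware; far from both calls for swarmware;
    in between, run the two in tandem. Scales and thresholds are
    illustrative assumptions only."""
    if certainty >= 0.7 and agreement >= 0.7:
        return "clockware"   # rational, planned, standardized, controlled
    if certainty <= 0.3 and agreement <= 0.3:
        return "swarmware"   # experiments, trials, intuition, autonomy
    return "tandem"          # balance data and intuition, planning and acting

# Routine surgery, or syntax conventions in a large software system:
print(choose_approach(0.9, 0.9))   # clockware
# A brand-new strategic issue near the "edge of chaos":
print(choose_approach(0.2, 0.2))   # swarmware
```

The point of the sketch is only that the choice is contextual; in practice the scoring is itself a judgment call made by the agents, which is why leading "from the edge" is a skill rather than a formula.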
4. Tune your place to the edge by fostering the right degree of: information flow, diversity and difference, connections inside and outside the organization, power differential, and anxiety (instead of controlling information, forcing agreement, dealing separately with contentious groups, working systematically down all the layers of the hierarchy in sequence, and seeking comfort).

Theoretical studies of complex adaptive systems suggest that creative self-organization occurs when there is just enough information flow, diversity, connectivity, power differential, and anxiety among the agents. Too much of any of these can lead to chaotic system behavior; too little and the system remains stuck in a pattern of behavior.12

Complexity researcher Stuart Kauffman
provides a simple, visual illustration that gives some insight into the science here.13
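His illustration, described next, can also be replayed in code: tying random pairs of buttons is adding random edges to a graph, and the "string of buttons you can pick up" is the graph's largest connected cluster. A minimal simulation sketch (Python; the button and thread counts below are arbitrary choices for illustration, not Kauffman's):

```python
import random

def largest_cluster(n_buttons: int, n_threads: int, seed: int = 0) -> int:
    """Tie n_threads random pairs of buttons together and return the
    size of the biggest connected cluster (union-find bookkeeping)."""
    rng = random.Random(seed)
    parent = list(range(n_buttons))

    def find(i: int) -> int:
        # Walk up to the cluster's representative, compressing as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for _ in range(n_threads):
        a, b = rng.randrange(n_buttons), rng.randrange(n_buttons)
        parent[find(a)] = find(b)   # tying a thread merges two clusters

    sizes = {}
    for i in range(n_buttons):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

# A few threads: lifting the thread picks up only tiny strings of buttons.
print(largest_cluster(200, 20))
# Well past the transition: one cluster spans most of the buttons.
print(largest_cluster(200, 600))
```

Sweeping `n_threads` upward shows the sudden jump the text describes: near one thread per two buttons, the largest cluster abruptly grows from a handful of buttons to a sizeable fraction of them.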
Consider a collection of a hundred or more buttons spread out on a table surface. Now,
select two buttons at random and tie them together. Continue this selection and tying
process, each time lifting the thread after you have made the tie to see how long a string
of buttons you can pick up. For a while, additional connectivity does not lead to creative
self-organization; each time you pick up the newly tied thread there are only two or three
buttons attached. At some point of additional connectivity, however, something seemingly
magical happens. You pick up the thread and 5, 8, or 10 buttons are attached in an
intricate pattern among the sea of buttons on the table.14 This creative self-organization
phenomenon continues for a while until we pass the edge of chaos and get into chaos itself.
With too much connectivity among the buttons, the thread gets all tangled up. You can no
longer see a pattern among the sea of buttons; all you see is a mess of thread. Of course, the trick in a human CAS lies
in gauging the "right" amount of information flow, diversity, connectivity,
power differential, and anxiety among the agents. Since the predominant metaphors of
organizational life are those of a machine and military operation, most organizations
today have too little information flow and diversity, and too much power differential. The
degree of connectivity and anxiety can go either way. This is a general observation which,
of course, could be different in any specific context. If you are in a CAS, you will have
your own mental model about such things, as will the other agents in the system. Richard Weinberg, VP of Network
Development at Atlantic Health Systems, has used this concept of "tuning to the
edge," along with good enough vision and minimum specifications, in his work with
physicians who can often become embroiled in turf battles. For example, recent
technological advances have made it possible for radiologists and cardiologists to reshape
damaged arteries, something that used to require the skills of a vascular surgeon. In most
places, a senior hospital administrator would be put in the unenviable position of
representing the hospital's interests, while serving as negotiator and referee among these
powerful constituencies. Weinberg's approach instead involves "convening a group with
representatives of all three specialties" (increasing diversity, connections among
agents, and anxiety); giving them honest information about the hospital's resources and
requirements (increasing information flow and tuning the power differential); "asking
them to develop a plan" (in the end, decreasing diversity and power differential);
and "telling them that the hospital won't invest in the procedure until they have
come up with such a plan" (increasing power differential and anxiety). Weinberg cites
this approach as leading to many creative, successful, collaborative relationships with
physician groups at a time when many health care organizations report nothing but
contention. The international strategic plan for the
Institute for Healthcare Improvement that we mentioned earlier is another example of the
use of this "tuning" principle. The Institute has firm financial goals but no
firm operational plans for its international efforts (increasing anxiety). Because of this
anxiety, it constantly solicits information inputs from its contacts in healthcare
organizations around the world, using innovative approaches involving the Internet
(increasing connections, information flow, and diversity). At the same time, it makes
known the various methods for improvement that it has available to offer (tuning the power
differential; where here, knowledge is power). However, the Institute does not push its
way onto the international health care scene, preferring instead to wait to be invited to
help by local leaders in a given country (tuning the power differential; where here,
control is power). A third example of "tuning"
involves forming what Lane and Maxfield call "generative relationships."15 A
generative relationship is one that generates outcomes that are greater than the simple
sum of the individual efforts of the parties working alone. Lane and Maxfield suggest that
generative relationships are necessary to deal with a world characterized by
"cascades of rapid change, perpetual novelty, and ambiguity." Jim Dwyer, VP for Medical Affairs at Memorial Hospital of
Burlington County, provides an illustration of this. "In the past," Dwyer says,
"if I were trying to develop a partnership with another physician group, I'd try to
bring people around to the right way-that is, my way-of seeing things. With generative
relationships, on the other hand, I begin by showing them what we could be doing together.
Then we define what we are both comfortable with and let the relationship grow from there.
Our relationship doesn't have to appear all at once. It's a lot more comfortable for
everyone if we let it emerge, let it generate itself." In a health care environment
where size and cash position seem to be temporarily dominating the scene, Dwyer's approach
is to "serve the community by creating relationships that allow partnering
organizations to benefit mutually, yet retain their identities." Since the detailed behavior of a CAS is
fundamentally unpredictable, there is no way to analyze your way to an answer about the
proper amount of information flow, diversity, connections inside and outside the
organization, power differential, and anxiety to sustain among the agents. You can have
more or less correct intuitions, and some sense of general direction, but that's
inherently the best you can do. You'll just have to try tuning up or down the various
factors and reflect on what happens. Reflection is, therefore, a key skill for
anyone in a CAS. Good "leaders" in a CAS lead not by telling people what to do;
rather they lead by being open to experimentation with the above factors, followed-up by
thoughtful and honest reflection on what happens. For example, James Taylor, the
University of Louisville Hospital CEO in our learning group, is practicing reflection when
he advises acting on strategic issues, and creating an organizational environment where we
can learn from those actions.

5. Uncover and work paradox and tension (rather than shying away from them as if they were unnatural).

Because the behavior of a CAS emerges from the interaction among agents and because of non-linear effects, "weird" stuff seems to happen. Of course, it is only weird because we do not yet have a way to understand it. In a CAS, creativity and innovation have the best chance
to emerge precisely at the point of greatest tension and apparent irreconcilable
differences. Rather than smoothing over these differences-the typical leadership intuition
from the machine and military metaphors-we should focus on them and seek a new way
forward. So, for example, one group wants to hold on to the status quo while another wants
radical change. Mix them into a single group and take on the challenge of finding a
"radical way to hold on to the status quo." This is a statement of a paradox; it
makes no sense according to the prevailing mental models. However, working on it sincerely
places the group at the "edge of chaos" where creativity is a heightened
possibility. Zimmerman,16 Goldstein,17 and Morgan18 are
three leading complexity management theorists who each provide specific techniques and
metaphors for getting at these points of paradox and tension in organizations. For
example, Zimmerman describes "wicked questions" at the Canadian metals
distributor Fedmet. At a strategy planning retreat, the senior management team spent most
of the day openly discussing questions of paradox such as, "Are we really ready to
put responsibility for the work on the shoulders of the people who do the work?" and
"Do our body language and our everyday actions reflect what we write in our vision
and values statements?" We have all been there before and we all know what the
"right" public answer is to such questions: "Well, of course, don't be
silly." But we also all know that these questions and others like them carry embedded
in them the seeds of paradox that often bring organizational progress to a grinding and
surprising halt (only surprising to those who hold the machine and military metaphors).

6. Go for multiple actions at the fringes, let direction arise (rather than believing that you must be "sure" before you proceed with anything).

As we have already noted, in a CAS it does little good to plan the details. You can never know exactly what will happen until you do it. So, allowing the flexibility of multiple approaches is a very reasonable thing to do. Of course, such a flexible approach is unreasonable when we view the situation through the metaphor of a machine or military organization. A machine can only work one way, and an old-style military organization must follow procedures and regulations.

The science that supports this principle
of CAS behavior comes primarily from the study of gene pools in evolutionary biology.
Ackley points out that "Researchers have shown clearly and unequivocally how
populations of organisms that are learning (that is, exploring their fitness possibilities
by changing behavior) evolve faster than populations that are not learning."19 We do
not think it strains the metaphor here to suggest that our managerial instincts to drive
for organizational consensus around a single option might be equivalent to inbreeding in a
gene pool. And we all know the kinds of dysfunction that inbreeding in nature can spawn.
We are personally struck by the fact that even though the words "organization"
and "organism" have a common root, we have learned to think about them in such
remarkably different ways. The "fringes" that we are
referring to here are those issues that are far from the zone of certainty and agreement.
Recall that we pointed out that it was not a question of the machine metaphor being wrong
and the CAS metaphor being right, nor is it about throwing out clockware and replacing it
with swarmware. Neither approach is inherently right or wrong; but either approach can be
inappropriate and ineffective in a given context. The leadership skill lies in the
intuition to know which approach is needed in the context one is in. The degree of
certainty and agreement is a good guide. However, when we do find ourselves in
situations far from certainty and agreement, the management advice contained in this
principle is to quit agonizing over it, quit trying to analyze it to certainty. Try
several small experiments, reflect carefully on what happens, and gradually shift time and
attention toward those things that seem to be working the best (that is, "let
direction arise"). These multiple actions at the fringes also serve the purpose of
providing us with additional insights about the larger systems that every system is
inevitably buried within. A concrete example of this principle is
the healthcare organization that is trying to come up with a new financial incentive plan
for associated physicians. There are many options and there are success and failure
stories in the industry for each one. Therefore, we are far from certainty and agreement.
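The advice in this principle can be sketched in code: run several small pilots in parallel, score them, and gradually shift effort toward whatever works, letting a direction arise rather than picking a winner up front. Everything in the sketch is invented for illustration; the pilot names, the scoring, and the reallocation factors are assumptions, not details from the organizations described.

```python
def let_direction_arise(pilots, measure, rounds=5):
    """Start every pilot with equal, small effort; after each round of
    reflection, grow the best performer's share and shrink the rest.
    `pilots` maps a name to whatever `measure` scores (higher is better).
    The 1.5/0.75 reallocation factors are arbitrary illustrative choices."""
    effort = {name: 1.0 for name in pilots}
    for _ in range(rounds):
        scores = {name: measure(p) for name, p in pilots.items()}
        best = max(scores, key=scores.get)
        for name in effort:
            effort[name] *= 1.5 if name == best else 0.75  # gradual shift, no sudden bets
    return max(effort, key=effort.get)

# Three hypothetical physician-incentive pilots with illustrative payoffs.
pilots = {"capitation": 0.5, "fee_for_service": 0.4, "quality_bonus": 0.7}
print(let_direction_arise(pilots, measure=lambda p: p))   # quality_bonus
```

In a real setting `measure` would be noisy and contested, which is exactly why the gradual shift of time and attention, rather than a one-shot decision, matters.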
Rather than meeting endlessly over it, trying to pick the "right" approach,
experiment with several approaches. See what happens, see what seems to work and in what
context. Over time, you may find a "right" way for you, or you may find several
"right" ways.

7. Listen to the shadow system (that is, realize that informal relationships, gossip, rumor, and hallway conversations contribute significantly to agents' mental models and subsequent actions).

Complexity theorist Ralph Stacey points out that every organization actually consists of two organizations: the legitimate system and the shadow system; everyone in the organization is part of both.20 The legitimate system consists of the formal hierarchy, rules, and communications patterns in the organization. The shadow organization lies behind the scenes. It consists of hallway conversation, the "grapevine," the "rumor mill," and the informal procedures for getting things done. Most traditional management theory either ignores the shadow system or speaks of it as something leaders must battle against (as in, "overcome resistance to change;" it's that military metaphor again).

Stacey further points out that because the
shadow system harbors such diversity of thought and approach, it is often the place where
much of the creativity resides within the organization. While the legitimate system is
often focused on procedures, routines, and the like, the shadow system has few rules and
constraints. The diversity, tension, and paradox of these two organizations that co-exist
within one can be a great source of innovation if leaders could just learn to listen to
rather than battle against the shadow. When we see our organizations as CAS, we
realize that the shadow system is just a natural part of the larger system. It is simply
more interconnections among agents; often stronger interconnections than those in the
legitimate system. Leaders who lead from an understanding of CAS, will not have a need to
discredit, agonize over, or combat the shadow systems in their organizations. Rather, they
will recognize and listen to the shadow organization, using the interconnections it
represents as another avenue for tuning information flow, diversity of opinion, anxiety,
and power differential (see principle four). Jim Dwyer at Memorial Hospital of
Burlington County learned from the shadow system associated with his organization's formal
quality improvement efforts. "[In order to screen out projects of low benefit] We had
a formal mechanism for approving quality improvement projects," Dwyer notes, but
"the process became so difficult that people were losing enthusiasm over worthwhile
projects." Dwyer goes on to tell how he became involved in an ad-hoc improvement
project on the process of delivering anti-coagulants; a project that was cooked up by a
group of doctors and nurses talking in the cafeteria one day. As a result of the success
of this effort outside the formal improvement structure, Dwyer and other senior leaders
"basically decided to turn the structure upside-down. We created lots of
opportunities for people to generate projects," Dwyer explains, "and
restructured our quality program to support them." He concludes, "We expect
we'll see a lot more important projects because we have found a way to tap the shadow
system." We believe that Dwyer's experience is
typical of experiences associated with formal improvement structures in many
industries. Recognizing that the shadow system exists, giving up some control, and
learning to tap the energy in the shadow are key recommendations we would make to leaders
in any industry who believe that their organization's improvement efforts are floundering.
8. Grow complex systems by chunking (that is, allow complex systems to emerge out of the links among simple systems that work well and are capable of operating independently).

Complex systems are... well, complex. They are not easily understood nor built in detail from the ground up. "Chunking" simply means that a good approach to building complex systems is to start small. Experiment to get pieces that work, and then link the pieces together. Of course, when you make the links, be aware that new interconnections may bring about unpredicted, emergent behaviors.

This principle is the basis upon which
genetic evolution proceeds.21 Building blocks of organism functionality (for example,
webbed feet on a bird) develop and are combined through cross-over of genetic material
with other bits of functionality (for example, an oversized bill suitable for easily
scooping fish out of the water) to form increasingly complex organisms (a pelican). The
"good enough" genetic combinations may survive and are then available as
building blocks for future combinations. The UNIX computer operating system is another
good example of an ever-evolving complex system that was built up from chunks. The
basic-and at the time it was introduced, revolutionary-principle behind the UNIX system is
that software functions should be small, simple, stand-alone bits of code that do only one
thing well, embedded in an environment that makes it very easy for each such function to
pass its output on to another function for further processing. Applying this principle to team-building
in a mid-sized organization, for example, would suggest that leaders should look for and
support small natural teams. We might provide coaching and training for these teams. Then,
when these teams are functioning well, look for ways to get the teams to work together and
involve others. These new links may result in weird behavior; with a CAS, this is to be
expected. The leaders should be open to doing some adapting of their own. Rather than
insisting on pressing forward with the training, ground rules, or procedures that worked so
well in the first teams, the leaders should understand that the interconnections among
teams have resulted in a fundamentally new system that may need new approaches.

Continual reflection and learning are key
in building complex systems. You cannot reflect on anything until you do something. So
start small, but do start. We have already seen several examples of
this principle. James Taylor is using chunking at the University of Louisville Hospital
when he focuses the organization on getting started working on strategic issues as they
come up, rather than trying to figure out the whole system in a grand strategic plan.
Hunterdon Medical Center and Chilton Memorial Hospital are also using the concept of
chunking in their community health efforts. Instead of developing an overall community
health program, they provide opportunities for small groups of hospital staff and
community members to come together where mutual interest lies (that is, in generative
relationships). The senior leaders then actively nurture these small efforts, and link
them flexibly in with other such efforts. The Institute for Healthcare Improvement has
similarly chosen a chunking approach in its international work. After starting up a
successful effort in Sweden, it now appears that it may be possible to start related
efforts in other Scandinavian countries. Each of these efforts will necessarily have
unique features, but as these new efforts come on line, establishing links across
countries may lead to further possibilities (increasing the information flow and
diversity, while decreasing the power differential).

9. Nice, forgiving, tough, and clear people finish first (so, balance cooperation and competition via the tit-for-tat strategy).

Throughout this list of principles we have seen the theme of balance as a key to successful outcomes in a CAS. Here, we are talking about the balance between cooperation and competition among agents.

The basis for this principle comes
primarily from the work of political scientist Robert Axelrod in his studies of the famous
"prisoner's dilemma" in a branch of mathematics called game theory.22 The
dilemma involves two prisoners being held separately for interrogation by police for a
crime they jointly committed. Each prisoner is offered a choice: he can turn on his
partner and become an informant, or remain silent. If both remain silent (that is, they
cooperate with one another), they can both go free because the police do not have enough
evidence to get a conviction without a confession. The police, however, cleverly offer an
incentive. If one of them becomes an informant (that is, he competes with his partner),
that prisoner will be granted immunity from prosecution and will be given a very nice
reward to live out his days in comfort. The partner will get the maximum sentence and be
assessed a fine. Of course, if both prisoners turn informant (that is, both choose to
compete), then both will get the maximum sentence and neither gets a reward. The dilemma
is a classic struggle between the virtues of cooperation and competition in an environment
of imperfect information. This "game" is played out for real in organizations in
various forms that we call: negotiation, partnering, collaborating, forming strategic
alliances, and so on. In the 1970s, Axelrod had the idea to
study various strategies for approaching the Prisoner's Dilemma through a computerized
tournament. Strategies would be paired up in many different combinations and would play
out the game, not once, but 200 times. This is a more realistic simulation of what goes on
in real relationships as the programs would have the chance to react to each other's
strategies, and to learn as they went along. Fourteen programs were submitted, but
to the astonishment of Axelrod and his colleagues, the simplest strategy of all took the prize
in this complex contest. University of Toronto psychologist Anatol Rapoport's "Tit
for Tat" program started out by cooperating on the first move, and then simply did
exactly what the other program had done on the move before. The program was
"nice" in the sense that it would never defect first. It was "tough"
in the sense that it would punish uncooperative behavior by competing on the next move. It
was "forgiving" in that it returned to cooperation once the other party
demonstrated cooperation. And it was "clear" in the sense that it was very easy
for the opposing programs to figure out exactly what it would do next. The moral: Nice,
tough, forgiving, and clear people can finish first in cooperation-competition trade-off
situations. In his 1984 book, The Evolution of
Cooperation, Axelrod showed the profound nature of this simple strategy in its application
to all sorts of complex adaptive systems-trench warfare in WW1, politics, and fungus
growth on rocks.23 Commenting on this strategy, Waldrop (1992) says "Consider the
magical fact that competition can produce a very strong incentive for cooperation, as
certain players forge alliances and symbiotic relationships with each other for mutual
support. It happens at every level of and in every kind of complex adaptive system, from
biology, to economics, to politics."24 From the complexity perspective then, a
good leader would be one who knows how to, and prefers to, cooperate; but is also a very
skillful competitor when provoked to competition (that is, a nice, forgiving, tough, and
clear person). Note that this strategy rejects both extremes as a singular strategy. While
much is said these days about the importance of being cooperative and positive-thinking in
business dealings, the always-cooperative leader may find his or her proverbial lunch is
being eaten by others. Similarly, while sports and warrior metaphors are also popular in
some leadership circles, the always-competitive leader may find himself or herself on the
outside looking in as alliances are formed.

Conclusion

Our existing principles of leadership and
management in organizations are largely based on metaphors from science that are hundreds
of years old. It is time that we realized that science itself has largely replaced these
metaphors with more accurate descriptions of what really happens in the world. Science is
replacing its old metaphors not so much because they were wrong, but because they only
described simplistic situations that progress has now moved us well beyond. Similarly, our
organizations today are not the simple machines that they were envisioned to be in the
Industrial Revolution that saw the birth of scientific management. Further, people today
are no longer the compliant "cogs in the machine" that we once thought them to
be. We have intuitively known these things for many years. Management innovations such as
learning organizations, total quality, empowerment, and so on were introduced to overcome
the increasingly visible failures of the simple organization-as-machine metaphor. Still,
as we have pointed out, the metaphor remains and is strong. The emerging study of complex adaptive
systems gives us a new lens through which we can now begin to see a new type of
"scientific management." This new scientific management resonates well with more
modern, intuitive notions about what we must do to manage increasingly complex
organizations today. More importantly, the new thinking in science provides a consistent
framework to pull together these heretofore intuitive notions. Now, for example, advocates
of open communications and empowerment can claim the same firmness of ground that
advocates of structure and control have been claiming exclusively. Science can now say
rather clearly that structure and control are great for simple, machine-like situations;
but things like open communication, diversity, and so on are needed in complex adaptive
systems-like those in modern organizations. The new scientific management will, no doubt,
revolutionize organizations in the coming decades much as the old scientific management
changed the world in the early decades of this century.

______________________________
1. For a comprehensive review of the organization-as-machine metaphor, see: Morgan, G. (1997) Images of Organization, Second Edition. Thousand Oaks, CA: Sage Publications.
Copyright © 2001, Plexus Institute