The Wardley Graph

A Non-Euclidean Wardley Map Embedding In Semantic Spacetime

Every Wardley mapper quickly learns the difference between graphs and maps: the space on a map has meaning, whereas space on a graph has none. Another way of framing this is that a Wardley Map is a Euclidean embedding (information is projected onto a Euclidean space, a vertical and horizontal axis). A graph is a non-Euclidean embedding (information content is in the vertices and edges and attributes of the graph). 

Euclidean embedding is an excellent interface for people as it is optimized for human visual processing of information. For Wardley Map tool builders, the Euclidean embedding is typically represented in a machine as x and y coordinates. I intend to demonstrate below that a non-Euclidean Wardley Map embedding in a semantic spacetime (a Wardley Graph) seems to be a better machine representation for Wardley Maps tooling.

Semantic Spacetime

Mark Burgess explains the nature of semantic spacetime in his book: “Smart Spacetime: How Information Challenges our Ideas About Space, Time, and Process”, but the key inspiration for this post is his Universal Data Analytics as Semantic Spacetime series on Medium. Below, I very quickly summarize the portion relevant to understanding what follows.

Within semantic spacetime, there exist four meta-types of semantic links, which describe process causal structure. That is, there are, fundamentally, only four kinds of relationships:

  1. CONTAINS (where A contains B) – a container/space-like relationship; to be inside or outside (part of); math analogy is a polar vector
  2. FOLLOWS (where A follows B) – a sequential/time-like relationship; to follow or precede; math analogy is a translation vector
  3. EXPRESSES (where A expresses B) – a local property being expressed; to express information; math analogy is a scalar property
  4. NEAR (where A is near B) – a similarity/nearness relationship; to be next to another location; math analogy is a primitive vector

Each one of the above types can be expressed in four directions:

  1. Forward – e.g.: “Contains”
  2. Backward – e.g.: “Is Contained By”
  3. Negative-Forward – e.g.: “Does Not Contain”
  4. Negative-Backward – e.g.: “Is Not Contained By”

A table description may be helpful:

| Meta-Type | Forward | Backward | Negative Forward | Negative Backward |
| --- | --- | --- | --- | --- |
| CONTAINS | Contains | Constitutes | Does not Contain | Does not Constitute |
| FOLLOWS | Follows | Precedes | Does not Follow | Does not Precede |
| EXPRESSES | Expresses | Describes | Does not Express | Does not Describe |
| NEAR | Near | Near | Not Near | Not Near |
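For tool builders, the meta-types and their directional readings are easy to encode as data. A minimal sketch in Python (the names are mine, not from any particular library):

```python
# A minimal sketch (hypothetical names) of the four semantic link meta-types
# and the four directional readings of each.
META_TYPES = {
    # meta-type: (forward, backward, negative forward, negative backward)
    "CONTAINS":  ("Contains", "Constitutes", "Does not Contain", "Does not Constitute"),
    "FOLLOWS":   ("Follows", "Precedes", "Does not Follow", "Does not Precede"),
    "EXPRESSES": ("Expresses", "Describes", "Does not Express", "Does not Describe"),
    "NEAR":      ("Near", "Near", "Not Near", "Not Near"),  # NEAR reads the same both ways
}

def reverse(direction: int) -> int:
    """Flip forward <-> backward (0 <-> 1) and their negatives (2 <-> 3)."""
    return direction ^ 1

# Reading "A Contains B" from B's point of view yields "B Constitutes A":
assert META_TYPES["CONTAINS"][reverse(0)] == "Constitutes"
```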

And in an illustration form:

The four elementary spacetime relationship types

For specific instances of a link, we would add a specific type to the meta-type. For example: “Is Like” (NEAR), “After” (FOLLOWS), “Depends On” (FOLLOWS), etc. 

Burgess provides further examples and intuition regarding specific types.1

With this understanding of semantic spacetime, how do we embed a Wardley Map in it?

The Vertical Axis

The information contained within a Wardley Map vertical axis traditionally represents the value chain. Relative position is the primary information content and is typically modeled by a dependency graph. The semantic link type of the dependencies between components in a value chain is “depends on” (FOLLOWS).

The semantics appear to change near the customer, as the customer expresses a user need (such as “thirst”). The semantic link type of the value chain dependency between a customer and the user need is “expresses” (EXPRESSES).

What, then, is the elementary type of link between the user need and the component that fulfills that need? It is another type of FOLLOWS, specifically “fulfilled by” (FOLLOWS).


Within the value stream context, it is of note that the customer/user need/offering is only modeled at the edge of the map near the customer anchor (expresses (EXPRESSES) + fulfilled by (FOLLOWS)) and not modeled in the dependency chain of the value stream (all of the “depends on” (FOLLOWS) links). If we consider what needs to happen for the cup of tea component to instantiate, we see that a cup of tea EXPRESSES the need for tea and hot water, and those have to be “fulfilled by” (FOLLOWS) the tea and hot water components. 

From this, we can see that a “depends on” (FOLLOWS) link is an aggregate of a component “expressing” (EXPRESSES) a user need which is “fulfilled by” (FOLLOWS) an offering fulfilling the need. That is, if we “zoom in” on a “depends on” (FOLLOWS) link we observe a user need appear surrounded by “expressing” (EXPRESSES) and “fulfilled by” (FOLLOWS) links. For example, “zooming in” into the link between cup of tea and tea:
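This equivalence can be made concrete in code. A minimal sketch in Python, assuming a hypothetical (source, link type, meta-type, target) edge format:

```python
# Sketch: "zooming in" on a "depends on" (FOLLOWS) link, replacing it with a
# user need surrounded by EXPRESSES and "fulfilled by" (FOLLOWS) links.
# Edge format (hypothetical): (source, link_type, meta_type, target).

def zoom_in(edges, consumer, provider, need):
    """Expand consumer -[depends on]-> provider through an explicit need."""
    expanded = [e for e in edges
                if e != (consumer, "depends on", "FOLLOWS", provider)]
    expanded.append((consumer, "expresses", "EXPRESSES", need))
    expanded.append((need, "fulfilled by", "FOLLOWS", provider))
    return expanded

edges = [("cup of tea", "depends on", "FOLLOWS", "tea")]
edges = zoom_in(edges, "cup of tea", "tea", "need for tea")
# The aggregate link is now the EXPRESSES + FOLLOWS pair through the need.
```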

Again, we typically don’t “zoom in” like this on a map, but my intent here is to demonstrate a certain equivalence present between semantic links in the value chain once they are projected into semantic spacetime.

Just Scaffolding

Simon Wardley often points out that the vertical position on a Wardley Map is “just scaffolding”. Seen through the semantic spacetime framing, this may be because all of the information is contained in the graph links, that is, in the relative non-Euclidean position with respect to the other nodes. The Wardley Graph directly represents the Wardley Map value chain as nodes and edges.

Horizontal Axis

The horizontal axis of a Wardley Map traditionally maps to the stages of evolution2. In order to determine the horizontal coordinate, the stages of evolution cheat sheet offers qualitative guidance:

The cheat sheet

For a machine, rather than storing an x coordinate between 0 and 1,  it is more effective to retain the specific semantic links to each of the qualitative properties.

I previously explored machine encoding of the horizontal axis by collecting human (Euclidean projected) input in response to prompts for selecting how NEAR a component under consideration is to each characteristic of each evolutionary stage. This input was then interpreted as a four-dimensional vector with a weight assigned to each stage of evolution. The resultant four-dimensional vector was a summary of an underlying graph of relationships and allowed for aggregation across multiple summaries.

In order to retain the semantic spacetime graph, the approach here is to create a graph between the component and each characteristic in each evolutionary stage using EXPRESSES links. Visually, it looks something like this:

We can retain the meaning of a component being in the Product stage by having most of its links EXPRESS a characteristic that is CONTAINED by the Product stage. Similarly, a component with all links EXPRESSING characteristics CONTAINED by the Commodity stage would correspond to being a Commodity.

This graph representation retains the ability to aggregate multiple summaries by assigning weight to each EXPRESS link based on the count of its occurrences in the individual samples being aggregated.
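As a sketch of this aggregation, assuming hypothetical characteristic names and a simple count-based weighting:

```python
from collections import Counter

# Sketch (hypothetical characteristic names): each sample records which
# characteristics a component is observed to EXPRESS; each characteristic
# is CONTAINED by one stage of evolution.
STAGE_OF = {
    "experimental investment": "Genesis",
    "searching for profit": "Product",
    "mass manufactured": "Product",
    "ubiquitous": "Commodity",
}

def aggregate(samples):
    """Weight each EXPRESSES link by its occurrence count across samples."""
    weights = Counter()
    for sample in samples:
        weights.update(sample)
    return weights

def summary_stage(weights):
    """The stage CONTAINING the most weighted EXPRESSES links wins."""
    stage_weight = Counter()
    for characteristic, w in weights.items():
        stage_weight[STAGE_OF[characteristic]] += w
    return stage_weight.most_common(1)[0][0]

samples = [
    {"searching for profit", "mass manufactured"},  # one mapper's view
    {"mass manufactured", "ubiquitous"},            # another mapper's view
]
stage = summary_stage(aggregate(samples))  # "Product" (weight 3 vs 1)
```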

An additional benefit is that there is no forcing of position in Euclidean space. If an EXPRESS link exists, it contributes to how much Product-ness is being EXPRESSED by the component. If the link does not exist, it has no effect.

This graph representation also enables a graph algorithm for summarizing an entire Wardley Map into a single component (described later).

Characteristics and General Properties Themselves

The characteristics and general properties of the stages of evolution can themselves be expressed as a graph, instead of as a table like the one in the illustration above. For example, the general property of Failure at each stage (tolerated, disappointed, not tolerated, surprised) has bidirectional NEAR links between the stages:

tolerated is NEAR disappointed
disappointed is NEAR not tolerated
not tolerated is NEAR surprised

Notice that these are not transitive in the sense that Failure being tolerated is not NEAR being surprised by Failure. In general, Genesis is NEAR Custom, which is NEAR Product, which is NEAR Commodity.

Aside from the bidirectional NEAR links, there also exist unidirectional (“evolved from”) FOLLOWS links:

disappointed FOLLOWS tolerated
not tolerated FOLLOWS disappointed
surprised FOLLOWS not tolerated
(and in the case of Obsolescence Climatic Pattern3)
not tolerated FOLLOWS surprised
disappointed FOLLOWS not tolerated

In general, Genesis is FOLLOWED by Custom, which is FOLLOWED by Product, which is FOLLOWED by Commodity, (and in the case of Obsolescence Climatic Pattern) which is FOLLOWED by Product, which is FOLLOWED by Custom. 
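The NEAR and FOLLOWS structure of these stage chains can be sketched in a few lines of Python (illustrative only):

```python
# Sketch: the Failure general property across the four stages, with
# bidirectional NEAR links between adjacent states and unidirectional
# "evolved from" FOLLOWS links.
states = ["tolerated", "disappointed", "not tolerated", "surprised"]

near, follows = set(), set()
for a, b in zip(states, states[1:]):
    near |= {(a, b), (b, a)}   # NEAR is bidirectional
    follows.add((b, a))        # b FOLLOWS ("evolved from") a

# NEAR links only adjacent states and is not transitive:
assert ("tolerated", "disappointed") in near
assert ("tolerated", "surprised") not in near
# FOLLOWS is one-way (absent the Obsolescence Climatic Pattern):
assert ("disappointed", "tolerated") in follows
assert ("tolerated", "disappointed") not in follows
```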

In the context of Obsolescence Climatic Pattern, there are interesting differences in characteristics expressed by a component in Custom at the beginning of its existence when compared to being in Custom at the end of its existence.

Recovering the Horizontal Axis

The idea of the Wardley Graph is to maintain a graph machine representation for a Wardley Map. This means we need to be able to recover a Wardley Map from a Wardley Graph. For a demonstration, let’s consider the classic Cup of Tea map:

With the Cup of Tea map encoded as a Wardley Graph, we can project it onto Euclidean space, and we see something like this:

First, this is clearly not a Wardley Map. The intent here is to demonstrate that the essential information we typically extract from the horizontal axis position is present in the graph links. The projection above is done using force-directed graph drawing. Notice the presence of stages of evolution I, II, III, and IV, and imagine the horizontal axis aligned along that sequence of nodes. Where each component lands corresponds to where it appears on the original Wardley Map. In other words, the graph alone contains enough information to reconstruct the horizontal axis coordinate.
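To illustrate how link structure alone can recover an ordering, here is a toy one-dimensional force-directed relaxation in plain Python. The mini-map, the untyped links, and the pinning of the stage nodes are all simplifying assumptions of mine:

```python
# Toy force-directed relaxation in one dimension: every link acts as a
# spring pulling its endpoints together; stage nodes I..IV are pinned.
edges = [("I", "II"), ("II", "III"), ("III", "IV"),      # stage backbone
         ("kettle", "III"), ("power", "IV"),             # simplified EXPRESSES links
         ("cup of tea", "kettle"), ("kettle", "power")]  # simplified value chain

pos = {"I": 0.0, "II": 1.0, "III": 2.0, "IV": 3.0,
       "kettle": 1.5, "power": 1.5, "cup of tea": 1.5}
pinned = {"I", "II", "III", "IV"}

for _ in range(500):
    force = {n: 0.0 for n in pos}
    for a, b in edges:                 # spring force proportional to distance
        d = pos[b] - pos[a]
        force[a] += 0.1 * d
        force[b] -= 0.1 * d
    for n in pos:
        if n not in pinned:
            pos[n] += force[n]

# kettle settles nearer stage III and power nearer stage IV: an evolution
# coordinate recovered from the links alone.
assert abs(pos["kettle"] - pos["III"]) < abs(pos["kettle"] - pos["IV"])
```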

Another thing of note is the location of the customers expressing user needs (Public and Thirst for Tea are highlighted in the picture). I often experience a lot of anxiety about where to place the customer on the evolutionary axis. Within the Wardley Mapping community, there are also methods that depict the customer using different horizontal labels, or even a completely different coordinate system like the user journey. The Wardley Graph encoding demonstrates that customers and user needs are a boundary condition and do not need to be associated with stages of evolution to be useful.


Components Move

Throughout the stages of a component’s evolution, the component updates the EXPRESSES links between itself and the various stages of each evolutionary characteristic. The aggregation of these links viewed at different times changes as the component EXPRESSES more characteristics CONTAINED by one stage of evolution vs another (Genesis, Custom, Product, or Commodity). When these snapshots are viewed over time, an effect corresponding to what we label as “movement” occurs.

This same movement occurs on the Wardley Graph:

If you’re having trouble seeing the similarity, consider this visualization showing the same movement as the videos above, but this time on a Wardley Graph with nodes arranged in the familiar grid pattern:

Moving Components

One of the uses of Wardley Maps is to determine what actions to take where on the map. One of those actions may be to accelerate a component along its evolutionary trajectory. In order to move a component in semantic spacetime towards a more desired stage of evolution, we can evaluate each characteristic and focus on the component EXPRESSING a different characteristic through our actions in the world. For example, increasing a component’s ubiquity by switching to mass manufacturing methods.

This framing highlights that some characteristics may be actionable/controllable/leading (switch from experimental investment to searching for profit), while others may be unactionable/lagging (user perception, market perception).

Jabe Bloom (@cyetain) points out that “movement is not free”, therefore, affecting component characteristics requires action that expends time and treasure. Chris Daniel (@wardleymaps) points out that some moves are too expensive to make: “toxic legacy”, therefore, some changes in characteristics may be completely out of reach.


So, we have a Wardley Graph, what is the benefit?

One of the problems that machines need to deal with, and that humans handle automatically, is the summary and aggregation of components on Wardley Maps. On the Cup of Tea map there is a Power component. This is fairly abstract, but if needed, we could replace the Power component with wall sockets, breakers, panels, wires, power stations, etc. In machine representation, we need to be able to do something similar, and this is where the Wardley Graph machine representation is really useful.

Instead of saying this is possible in the abstract, we can instead define a graph algorithm for Wardley Map component summary based on Mark Burgess’ algorithm for grouping nodes into supernodes4. The algorithm below is written in terms of summarizing all map components (excluding user needs and customers), but works just the same for summarizing only a subset of map components.

  1. Map components are all marked under a single “hub” component (the summary component for the map). All the map components are linked to the hub with a CONTAINS link, specifically “Is contained by” (CONTAINS).
  2. Information about each map component is aggregated into the hub component. This information is:
    1. Total count of map components (aggregated from existing counts if the map already consists of hub components).
    2. Any other scalar-like property of the nodes, e.g., EXPRESSES links summed up and tallied.
    3. Consider other graph machine learning node features. For example, a Graphlet Degree Vector (GDV) could express the particular way that components express evolutionary characteristics. The specific graphlet count could indicate how many characteristics are expressed by the component.
  3. Links from summarized components to things that are outside of the hub are copied to the hub component. Repeated links are summed and represented as link weight.
    1. For information encoded in the vertical axis (value chain), the hub component does not include user needs and customers, those remain outside of the hub.
    2. For information encoded in the horizontal axis (evolution), all of the component EXPRESS links designating their stage of evolution are aggregated and summed as link weights at the hub level, thus determining the hub’s summary stage of evolution.

As a result of the above algorithm, the hub becomes a single component, ready to answer queries, and can be used as a component of another map, thus expressing the fractal nature of Wardley Map components. For example, the above algorithm can generate a summary Tea Shop component for the entire Cup of Tea map. Note in the illustration below that the Tea Shop ends up projecting into space where you’d expect it to. Also note that the summary of Tea Shop (without Kettle) is also where you’d expect it to be.

With the Tea Shop, we have an example of summarizing many components into a single summary component. Another use for the summary algorithm is to integrate multiple points of view of the same component, a version of crowdsourcing. Multiple people (and machines) can determine characteristics expressed by some component. The summary algorithm can then integrate all those points of view and provide a crowdsourced version of the component. This is another powerful collaboration feature enabled by the Wardley Graph.

Another benefit of automated summary is automatic component updates. When a constituent component is updated, the summary and anything else connected to it can be updated automatically as well.

What is a Map?

How should maps be represented on the Wardley Graph?

I think it is worth making the distinction between a map and a component that summarizes other components. That is, a map is not a summary component. A map does not contain other maps. A map simply contains the components, customers, and user needs that are depicted on the map:

Here is another example of a smaller map containing fewer components:

As mentioned before, a map is not the same as a summary component. Here is an example of a map that contains the Tea Shop summary component as part of the map:

With maps and components distinct from each other, we end up with a representation where a map can include a summary component and, at the same time, highlight some of that component’s constituents. For example, the kettle situation in the Tea Shop:

What’s Next?

The above is what I’ve been able to put together so far in my exploration of embedding Wardley Maps in non-Euclidean semantic spacetime. I hope you enjoyed the journey and see Wardley Maps from a new perspective. I very much recognize the irony of promoting a graph to represent a map, but I hope I made the context clear for when to use a graph (machine representation) and when to use a map (human interface).

As a tool builder, I find this graph representation very compelling and sympathetic to the problems I encountered when attempting to create a useful machine representation. I hope that by sharing it with you we can improve the capabilities of our tools and perhaps this could become a useful common foundation for a common interface/representation between our various systems.

To learn more about semantic spacetime and the underlying foundation for the whole graph thing, I recommend Mark Burgess’ series: Universal Data Analytics as Semantic Spacetime. After going through Mark’s material, I adapted his SST library into a format more familiar to me, resulting in my version of the sst library. These establish the semantic spacetime foundation on which the Wardley Graph is built.

For the Wardley Graph implementation itself, I maintain an experimental library where you can see semantic spacetime concepts adapted specifically for the Wardley Graph use case. Additionally, that’s where you’ll find a reference implementation of the summary algorithm and examples of how to encode the information contained in a Wardley Map into a graph.

There is plenty more to do and learn. Graph machine learning comes to mind. What would scenario planning look like on a Wardley Graph? Can I subscribe to components maintained by experts? Does a graph make it easier? The current implementation uses ArangoDB as the graph database, I intend to explore using Amazon Neptune next. I hope you find the graph representation as compelling as I do, and I look forward to seeing what the Wardley Mapping community can do with it.

1 Burgess, Mark. Smart Spacetime: How information challenges our ideas about space, time, and process (Kindle Location 6668). Kindle Edition.

2 Wardley, Simon. Wardley Mapping Book, Chapter 2: Finding a Path, accessed on 4 Jan 2022.

3 Slominski, Tristan. Obsolescence Climactic[sic] Pattern: When Things Move to the Left, accessed on 4 Jan 2022.

4 Burgess, Mark. Respecting the graph directly, accessed on 4 Jan 2022.

NFTs Through a Hohfeldian Lens

A Simplistic Summary

Photo by Shubham Dhage on Unsplash

In a 1913 essay “Some Fundamental Legal Conceptions as Applied in Judicial Reasoning”, Wesley Newcomb Hohfeld outlined what seems very much like a Promise Theoretic approach to judicial reasoning regarding interest in property. 

Hohfeld distinguishes the jural concepts of Right, Duty, Privilege, No-Right, Power, Liability, Disability, and Immunity.

As depicted, there exist promise-like relationships between these concepts. Hohfeld highlights jural correlatives: Right and Duty, No-Right and Privilege, Power and Liability, Disability and Immunity. Hohfeld also highlights jural opposites: Right and No-Right, Duty and Privilege, Power and Disability, and Liability and Immunity.

If A has a Right against B to exclude B from using A’s property, then B has a Duty to not use A’s property. This is an example of what is usually meant by property. Through the Promise Theory lens, A promising the exclusion of B (Right) works only if B promises to be excluded (Duty).  Additionally, A may also wield the Power to change or create a legal relationship regarding the property. A’s Power is correlated with B’s Liability to the changes created by A. Through the Promise Theory lens, A promising transfer of ownership (Power) only works if B promises to recognize the transfer (Liability).

NFTs seem to claim Right and Power without having the corresponding Duties and Liabilities in place. The NFT industry certainly wishes that the Duties and Liabilities were in place. But, that is not the case today. An NFT is advertised as property, and an NFT purchaser feels like they are buying the Right to exclude others from their property. The problem arises that everyone else has no Duty to be excluded. In the Promise Theory sense, an NFT purchaser promises the exclusion of others (Right), but everyone else does not promise to be excluded (Duty). Additionally, an NFT purchaser believes they assume Power to create legal relationships regarding their purchase. However, everyone else does not have the Liability to be subject to those legal relationships. In the Promise Theory sense, an NFT purchaser promises transfer of ownership (Power), but everyone else does not promise to recognize the transfer (Liability).

I’ll end by highlighting that the situation seems to me much worse for NFTs. Not only does everyone lack the Duty and Liability corresponding to NFTs’ Right and Power, it also seems to me that everyone has Privilege and Immunity from that Right and Power via all the property mechanisms preceding NFTs.

Finite and Infinite Games

A Useful Reference Frame

Photo by NASA on Unsplash

There are at least two kinds of games: finite and infinite. A finite game is played for the purpose of winning, an infinite game for the purpose of continuing the play. Finite games are those instrumental activities – from sports to politics to wars – in which the participants obey rules, recognize boundaries and announce winners and losers. The infinite game – there is only one – includes any authentic interaction, from touching to culture, that changes rules, plays with boundaries and exists solely for the purpose of continuing the game. A finite player seeks power; the infinite one displays self-sufficient strength. Finite games are theatrical, necessitating an audience; infinite ones are dramatic, involving participants…

Carse, James P. (1986). “Finite and Infinite Games”. Composite quote, accessed on 29 Mar 2021.

Complex Domain

I no longer recall where I first came across finite and infinite games. I’m pretty sure I did not read the book. I’m not sure the concept needs a book. The introduction of the concept of infinite game, where the purpose is to continue to play until one no longer can, reminds me of the Complex domain. In the Complex domain, constraints change due to agent actions, enabling new actions, changing constraints, enabling new actions, and so on…

Sports Metaphor

The framing of finite and infinite games points out that the use of a sports metaphor in business is inappropriate. The purpose of a business is not to win. The purpose of a business is to continue to play. Someone looking at business through a sports lens is playing the wrong game.

Military Metaphor

A military metaphor for business is better than a sports one. Despite Carse somehow classifying military conflict as a finite game, it is anything but. The purpose of the military is not to win a war. The end of hostilities is but a step in setting conditions for the peace that hopefully follows (that’s the reason why wars have rules).

Having said that, I am not a fan of using the military metaphor for business. As I pointed out in Metaphors We Live By, we ought to be sensitive to everything that the metaphor brings along with it. The military deals with death and killing. Typical business stakes are not that high. I will happily trade a “war room” and “fog of war” for an “operations center” and “uncertainty”.

What Do You Think?

Did you notice some metaphors that didn’t seem to fit in a particular context? Did the finite/infinite game distinction clarify why that is the case? Let me know in the comments.

Next Up

Without sports or military metaphors, what should we use? I did not have a great answer for this until I came across Wardley Maps, a reference frame for business that originated within a business context, coming soon.

Metaphors We Live By

This Is Water

Photo by Ahmed Zayan on Unsplash

Metaphor is “a figure of speech in which a word or phrase is applied to an object or action to which it is not literally applicable.”1 George Lakoff and Mark Johnson in their 2003 book “Metaphors We Live By” propose that human thought processes are largely metaphorical. Once I was exposed to this concept, I could not unsee it. I paraphrase and quote liberally below.

Argument Is War

Your claims are indefensible.
They attacked every weak point in my argument. Their criticisms were right on target.
I demolished their argument.
I’ve never won an argument with them.
You disagree? Okay, shoot!
If you use that strategy, they’ll wipe you out. They shot down all of my arguments.

Time Is Money

You’re wasting my time.
This gadget will save you hours.
I don’t have the time to give you.
How do you spend your time these days? That flat tire costs me an hour.
I’ve invested a lot of time in them.
I don’t have enough time to spare for that. You’re running out of time.
You need to budget your time.
Put aside some time for ping pong. Is that worth your while?
Do you have much time left?
They’re living on borrowed time.
You don’t use your time profitably. I lost a lot of time when I got sick. Thank you for your time.

When we use the war metaphor to talk about arguments, when we use the money metaphor to talk about time, it is intended to highlight specific abstract concepts and anchor them in something more real, more grounded. That is the benefit of metaphor. However, along with the concepts we want to highlight, the metaphor brings along with it all the other aspects of the metaphor into our communication, whether we intend it or not. Those aspects may or may not be true. The metaphor may not apply where we don’t intend it to.

Words Matter More Than You Think

Lakoff and Johnson ultimately point out that all of our language grounds out in our human embodiment.

I’m feeling up. That boosted my spirits. My spirits rose. You’re in high spirits. Thinking about them always gives me a lift. I’m feeling down. I’m depressed. They’re really low these days. I fell into a depression. My spirits sank. The physical basis is a person’s posture droops along with sadness and depression, whereas an erect posture goes with a positive emotional state.

Get up. Wake up. I’m up already. They rise early in the morning. They fell asleep. They dropped off to sleep. They’re under hypnosis. They sank into a coma. The physical basis is humans sleep lying down and stand up when awake.

They’re at the peak of health. Lazarus rose from the dead. They’re in top shape. As to their health, they’re way up there. They fell ill. They’re sinking fast. They came down with the flu. Their health is declining. They dropped dead. The physical basis is being ill forces us to lie down and being dead forces us down.

So, this is all a neat trick, and perhaps mind blowing. The reason I want to share this and bring it to your attention is so that you can recognize the metaphor in your choice of words. Because once you see your metaphor, you may become capable of choosing a different one.

Imagine a culture where an argument is viewed as a dance, the participants are seen as performers, and the goal is to perform in a balanced and aesthetically pleasing way. In such a culture, people would view arguments differently, experience them differently, carry them out differently, and talk about them differently. But we would probably not view them as arguing at all: they would simply be doing something different. It would seem strange even to call what they were doing “arguing.” Perhaps the most neutral way of describing this difference between their culture and ours would be to say that we have a discourse form structured in terms of battle and they have one structured in terms of dance. 

Some rhetorical questions to consider: Are you really fighting a fire when something bad happens at work? Are you really fighting a war when you’re in a war room? How much baggage are you bringing along with one convenient phrase? Do you want to feel at risk of death because a website is down and a computer can’t respond to a request? Is the metaphor worth it?

What Do You Think?

Since I read “Metaphors We Live By”, I cannot stop seeing the metaphor everywhere. Do you see it? Do you use some unusual metaphors purposefully? How would you describe “happy” without embodied metaphor? Is it even possible? Let me know in the comments.

Next Up

Next, I will write about Finite and Infinite Games.

How Complex Systems Fail

Complacency and the Clear/Chaotic boundary

We live in A Complex World. We are surrounded by complex systems. In this post I will take a moment to highlight how complex systems fail. 

My colleague Jason Koppe shared a great summary in a tweet anticipating this post, linking to Richard Cook’s How Complex Systems Fail.

The website above is a great reference and goes into a lot… A LOT… more detail than I will go into here. I want to focus on a particular aspect of how complex systems fail that I find particularly useful in the context of Onboarding to a Software Team.

Clear/Chaotic Boundary

I highlighted various aspects of the Cynefin framework in this series so far. I want to return to it once again and discuss the boundary between the Clear and Chaotic domains. Notice, in the figure below, that the boundary is drawn with a squiggle on the bottom. This is intentional, in that it intends to be a visual representation of a cliff between Clear and Chaotic. Transition from Clear to Chaotic is a one way transition without a direct return path. Why would Clear become Chaotic?

In the framing of Economy of Thought, I pointed out that while the world is complex, it is expensive to treat everything as such and I can choose to summarize some of the complexity as Clear. 

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

Mark Twain … maybe.

The problem occurs when I treat something as Clear and it turns out not to be the case.

Rasmussen’s Model

So, what happens that shifts a Clear system into the Chaotic domain? It’s not like it’s waiting for me to turn around and then beelines straight for the cliff into Chaos screaming “YOLOOOOOO!…” on the way down.

Photo by Jakob Owens on Unsplash

Jens Rasmussen came up with a model that I think captures the intuition behind what is happening.

Rasmussen, Jens (1997). “Risk management in a dynamic society: A modelling problem”. Safety Science, 27(2-3), 183-213. Accessed on 21 Mar 2021.

This is a busy figure. Take note of the boundaries on the right hand side. There is a Boundary to Economic Failure; if the system moves beyond there, you go bankrupt. There is a Boundary to Unacceptable Work Load; if the system moves beyond there, everyone is spent and work stops. As a result of these boundaries, there exists a pressure “from management” toward efficiency which applies a force to the left, away from the Boundary to Economic Failure. Additionally, there is a gradient towards ease of work, which applies a force to the left, away from the Boundary to Unacceptable Work Load. Together, these forces create a movement in the system toward the third boundary on the left-hand side, the Boundary of functionally acceptable performance, i.e. failure.

So, it seems rather straightforward, we should simply not let the system drift beyond the failure boundary. Well, there are a few constraints that make this difficult.

Normalization of Deviance1

While Rasmussen’s model is valuable, here is a depiction of what it looks like for complex systems in practice.

Note the loss of information on the left hand side. We still know that the Boundary of functionally acceptable performance is somewhere on the left, but we don’t know exactly where it is. All we have is our perceived boundary of acceptable performance. This disparity causes an interesting dynamic. Let’s zoom in on the perceived boundary of acceptable performance and consider what happens to a system near such a boundary.

  1. The system, due to normal operation, crosses our perceived failure boundary.
  2. We notice that things are in a configuration where we expect them to fail. They are not failing yet, so we quickly apply remediation and shift the system back within acceptable performance. This cycle of crossing the boundary, no actual failure, and quick remediation continues until we reach step 3.
  3. We have now discovered that our perceived failure boundary is not the actual failure boundary. We acclimatize. We move our perceived failure boundary to match.
  4. The pattern of crossing the boundary, the system not failing, and remediating back into (now shifted) acceptable performance continues. We drift further.
  5. We ultimately locate the actual failure boundary, the system fails, and we have inadvertently entered Chaos.
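The drift dynamic above can be sketched as a toy simulation (my own illustration, not Rasmussen’s model; all parameters are made up): leftward pressure moves the system, crossing the perceived boundary triggers quick remediation, and each uneventful crossing nudges the perceived boundary further left.

```python
import random

def simulate_drift(perceived_boundary=0.5, actual_boundary=0.0,
                   pressure=0.05, remediation=0.04, steps=1000, seed=42):
    """Toy model of drift toward an unknown failure boundary.

    Efficiency and ease-of-work pressures push the system left.
    Crossing the *perceived* boundary triggers quick remediation,
    but each uneventful crossing also shifts the perceived boundary
    left (normalization of deviance). Returns the step at which the
    *actual* boundary is finally crossed."""
    rng = random.Random(seed)
    position = 1.0  # start safely to the right of both boundaries
    for step in range(steps):
        position -= pressure * rng.random()  # pressure toward failure
        if position < actual_boundary:
            return step                      # real failure: Chaos
        if position < perceived_boundary:
            position += remediation          # shift back to "acceptable"
            perceived_boundary -= 0.01       # acclimatize to deviance
    return None                              # no failure observed
```

Under these made-up parameters the system always finds the actual boundary eventually; the remediations only delay the failure.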

The actual failure boundary is often unknown until our complex system fails, and this invisible, unknown boundary continually shifts (in the Complex domain, constraints change due to actions and the environment). As a result, our systems continuously flirt with the failure boundary. Do not forget the forces pressuring the system from right to left: the pressure of efficiency and the gradient toward ease of work. This continuous pressure, combined with not really knowing where the failure boundary lies, continuously shifts the system toward failure.

What Do You Think?

How do you think about system failures? Let me know in the comments.

Next Up

Next, I will discuss how the words we use shape our thinking: Metaphors We Live By.



Black Swans and Turkey Problems

The concept of the Black Swan was popularized by Nassim Nicholas Taleb in his 2007 book of the same name. Over many introductions of this concept to friends and colleagues, I have observed that it is an easy concept to describe but a difficult one to grasp.

Unknown Unknown

Perhaps you’ve heard the following framing before. There are things we know we know: known knowns. There are things we know we don’t know: known unknowns. Then, there are things we don’t know we don’t know: unknown unknowns. A Black Swan is an unknown unknown: an event that we did not see coming and had no way of knowing was coming. A Black Swan surprises us in a way we didn’t know we could be surprised. A Black Swan happens because we didn’t know what we didn’t know. That’s the simple description. It is straightforward, but its implications are not.

You cannot anticipate a Black Swan.

Again, you cannot anticipate a Black Swan.

If you think you can, then it is not a Black Swan you are anticipating. This is not an easy concept to wrap one’s head around. Consider that if you could anticipate it, then either you know something or you know that you don’t know something. But that is either a known known or a known unknown. A Black Swan is an unknown unknown.

It gets worse.

A Black Swan will happen when you are maximally certain that it will not.

Fundamental Surprise

In his book, Taleb describes the Turkey Problem. Imagine a turkey scientist. As a turkey scientist, the turkey decides to better understand farmer-turkey relations. As part of the study, it attempts to predict farmer behavior and makes the following observation: “Day 1, farmer feeds me.” The turkey continues to make observations day by day: “Day 2, farmer feeds me”, “Day 3, farmer feeds me”, … and so on for 111 days. On day 111, what is the turkey’s model for farmer behavior? It probably looks something like this:

Unbeknownst to the turkey, Day 111 is the day before the U.S. Thanksgiving holiday, and the farmer, instead of feeding the turkey, slaughters it.

You cannot anticipate a Black Swan.

A Black Swan will happen when you are maximally certain that it will not. 

Day 111 is the day when the turkey scientist has the highest confidence that it will be fed. Everything in its entire existence has taught it that food is coming on Day 111.
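One illustrative way to formalize the turkey’s growing confidence (my choice of formalism, not Taleb’s) is Laplace’s rule of succession: after n consecutive days of being fed, a naive inductive estimate of being fed tomorrow is (n + 1) / (n + 2).

```python
from fractions import Fraction

def confidence_fed_tomorrow(days_fed):
    """Laplace's rule of succession: after n consecutive 'fed' days,
    estimate P(fed tomorrow) = (n + 1) / (n + 2)."""
    return Fraction(days_fed + 1, days_fed + 2)

# Confidence only ever grows with each uneventful day...
day_2 = confidence_fed_tomorrow(1)      # 2/3
day_111 = confidence_fed_tomorrow(110)  # 111/112, about 0.99
# ...and is at its maximum on the very day the model fails.
```

The model is most confident exactly when it is most wrong, which is the whole point of the Turkey Problem.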

Outside Context Problem

The usual example given to illustrate an Outside Context Problem was imagining you were a tribe on a largish, fertile island; you’d tamed the land, invented the wheel or writing or whatever, the neighbors were cooperative or enslaved but at any rate peaceful and you were busy raising temples to yourself with all the excess productive capacity you had, you were in a position of near-absolute power and control which your hallowed ancestors could hardly have dreamed of and the whole situation was just running along nicely like a canoe on wet grass… when suddenly this bristling lump of iron appears sailless and trailing steam in the bay and these guys carrying long funny-looking sticks come ashore and announce you’ve just been discovered, you’re all subjects of the Emperor now, he’s keen on presents called tax and these bright-eyed holy men would like a word with your priests.

Banks, Iain M (1996). “Excession”.

The Outside Context Problem, the Turkey Problem, Fundamental Surprise, Unknown Unknown, the Black Swan, all are attempting to pinpoint something very straightforward to state but very difficult to accept.

In a complex world, you cannot predict the future.

I will leave you with a final reflection. Consider the events in your past that changed the course of your life. How many of them did you plan and predict? How many just happened to you?

What Do You Think?

Do you feel the essence of what a Black Swan is? On the other hand, do you think you can predict the future? Perhaps you know how to reduce the frequency of Black Swans? Let me know in the comments.

Next Up

I will write about How Complex Systems Fail.


What to Do When in Doubt

Photo by Bryan Goff on Unsplash

In a previous post on complexity, I highlighted Liz Keogh’s shortcut for complexity estimation in order to quickly figure out the complexity of the system. In this post I will share Dave Snowden’s Cynefin approach for dealing with aporia and figuring out what to do when in doubt about what to do.1

An expression of doubt

So far, in my introduction of Dave Snowden’s Cynefin framework, I focused on the four domains: Clear, Complicated, Complex, and Chaotic. However, there are additional aspects of the framework I haven’t highlighted until now. For example, in the space in the center, between the domains, we see “AC”, which is the Confused domain.

The distinction between A and C in the Confused domain is intended to convey the distinction between being active and being passive. In being passive, one remains Confused and tends to drift toward one’s favorite action heuristic: Clear, Complicated, Complex, or Chaotic, whether it is appropriate or not. The danger is accidentally slipping into Chaos. In being active, one expresses Aporia (the expression of doubt): one is actively confused and attempting to become unconfused.

“The instance of a decision is a madness”2

Snowden articulates six pathways out of Aporia from low to high risk, suggesting an order of elimination in how to proceed out of doubt. I paraphrase below.3

  1. If expert advice is questionable, shift provisionally into Complicated and set up a rapid debate between experts and people with different backgrounds to see whether expert analysis will be sufficient.
  2. Shift into Complex, identify multiple coherent contradictory hypotheses with champions, and conduct safe-to-fail experiments to discover more about the context.
  3. Shift into Complicated if there is a clear and obvious body of knowledge that you’ve been ignoring or didn’t know about so far. Perhaps the experts were right and it’s time to listen to them.
  4. If you’re concerned that you may not be seeing something essential, shift into the Chaotic-Complex liminal zone and identify multiple perspectives, especially minority perspectives. Outliers are very valuable as they provide diverse points of view and may offer hypotheses to be tested. Diversity here is key.
  5. If you can no longer tolerate the Aporetic state, you may choose to enter the Chaotic domain, where you begin crisis management and exert control, taking direct action to attempt to resolve the situation without necessarily having control over how it resolves.
  6. Lastly, the riskiest approach is to shift into Clear and assume that you need to apply standard processes and procedures harder.

Linguistic, Aesthetic, Physical

Now, it may seem that being in doubt is not desirable. This is not always the case. Snowden points out that we may want to enter Aporia deliberately if we begin to observe our approaches are failing and perhaps we need to reconsider our assumptions. To deliberately induce Aporia, consider linguistic, aesthetic, and physical techniques. 

The linguistic approach is to use language: neologisms, foreign words, paradox, metaphor, counterfactuals, poetry, or quotations.4

The aesthetic approach is to use art: cartoons, illustrations, fine art, photographs, satire, improv, music, etc.5

The physical approach is to use the body: physical exercise, practice of a craft, fasting, and other embodied approaches.6 I can personally vouch for this approach, having ridden 2,783.6 miles of the Continental Divide on a bicycle. This may be an extreme example, but it took me a few months to care about my professional career again after I got back.

What Do You Think?

Confusion is not something I naturally seek out. Do you deliberately enter the state of Aporia? What do you get out of it? Let me know in the comments.

Next Up

I will set aside the Cynefin framework and continue to explore complexity, this time through the framing of Black Swans and Turkey Problems: Excession.

1 Snowden, David J (2020). “Cynefin St David’s Day (5 of 5)”. Accessed on 17 Mar 2021.

2 Derrida quoting Kierkegaard in Derrida’s “Dialanguages” in “Points: Interviews 1974–1994”(Stanford, Stanford University Press, 1995, 147–8.).

3 See Snowden, David J (2020). “Cynefin St David’s Day (5 of 5)”. for more precise descriptions.

4 Snowden, David J (2021). “Linguistic aporia”. Accessed on 17 Mar 2021.

5 Snowden, David J (2021). “Aesthetic aporia”. Accessed on 17 Mar 2021.

6 Snowden, David J (2021). “Physical aporia”. Accessed on 17 Mar 2021.

Economy of Thought

Complexity is Expensive

Photo by Kevin Ku on Unsplash

This is the third post on defining complexity in the onboarding series. I highlighted before that, in a complex system, the relationship between cause and effect is knowable only in hindsight. Additionally, our constraints will change on the timescale under consideration. A sense-making heuristic is to probe-sense-respond using exaptive practices. In this post, I’ll highlight that this is expensive.

Finite Capacity

Our ability to think is, on the one hand, vast. On the other hand, we can only think so much and think only so fast. Ultimately, we have a finite amount of time to think. When dealing with finite resources, we can frame things in terms of an economy. There are two complementary definitions that I have in mind:

“efficient and concise use of nonmaterial resources (such as effort, language, or motion)”

“the arrangement or mode of operation of something” 

“Economy.” Merriam-Webster.com Dictionary, Merriam-Webster. Accessed on 10 Mar 2021.

When applied to thinking, we can imagine an Economy of Thought, that is, the arrangement of our thinking in order to make efficient and concise use of the finite amount of time available.

The Expense

From the frame of Economy of Thought, it turns out that thinking in the Complex Cynefin domain is the most expensive.

In the Ordered domains (Clear and Complicated), constraints do not change. I can come to know something and then rely on that knowledge not changing. “Things fall” is unchanging knowledge. How to log into my email account is unchanging knowledge (on the relevant timescale). The traffic laws guiding how I drive a car are unchanging knowledge (on the relevant timescale). Through the framing of Economy of Thought, I can learn Clear and Complicated things once, and rely on them from then on.

The Chaotic domain is demanding, but it is transient and tends to resolve into other domains.

In the Complex domain, well… here be dragons. Constraints change, and they can change due to our actions. I can come to know something, only for that knowledge to change once I look away (imagine playing a visually demanding sport with your eyes closed). To remain relevant in the Complex domain, I need to be continuously engaged, always looking for the latest changes. I have to be in a constant feedback loop with the domain to keep my knowledge up to date, because the knowledge changes when I don’t look. This is expensive.
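The expense contrast can be caricatured with a caching analogy (my analogy, not part of Cynefin): Ordered-domain knowledge can be learned once and memoized, while Complex-domain knowledge goes stale the moment you look away and must be re-observed on every use.

```python
import functools

calls = 0  # count of expensive observations of the world

def expensive_lookup(question):
    global calls
    calls += 1
    return f"answer to {question!r}"

@functools.lru_cache(maxsize=None)
def clear_knowledge(question):
    """Ordered domains: learn once, rely on it from then on."""
    return expensive_lookup(question)

def complex_knowledge(question):
    """Complex domain: constraints change while we look away,
    so every use requires a fresh, expensive observation."""
    return expensive_lookup(question)

for _ in range(100):
    clear_knowledge("do things fall?")    # 1 expensive call in total
for _ in range(100):
    complex_knowledge("what changed?")    # 100 expensive calls
```

One hundred questions about a Clear thing cost one observation; one hundred questions about a Complex thing cost one hundred.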

A Choice

Considering the world through the frame of Cynefin, I see the world in the Complex domain. The world is complex, therefore I must constantly probe-sense-respond to make sense of it. However, the frame of Economy of Thought offers me a choice. While I can choose to expend all of myself on all of the complexity of the world, I can also choose not to. I can approximate parts of the world as Complicated or Clear. In fact, all of us do this constantly. Our brains evolved to predict the patterns in time and space that allow for these Complicated and Clear shortcuts. Having a Cynefin frame of reference highlights that I can make a deliberate choice in how to engage with the world. I can choose to engage with complexity and maintain a tight feedback loop where required. At the same time, I can summarize other complexity and treat it as Clear or Complicated where I don’t expect constraints to change. I can be economical. I can arrange my thinking to make efficient and concise use of the finite amount of time available.

What Do You Think?

What’s your usual approach towards the world? Is it Clear, Complicated, Complex? Let me know in the comments.

Next Up

Next, I’ll write about how to decide when to take a Clear or Complicated shortcut: Aporia.

Cynefin Complexity

The Cynefin Framework

Denys Nevozhai on Unsplash

Last time in the onboarding series, I wrote about complexity through the frame of the relationship between cause and effect in the world. Today, I want to introduce Dave Snowden’s Cynefin framework,1 which underpins what I mean by complexity.

Ordered Systems

So far, I have defined an Ordered System as a system where the relationship between cause and effect can be determined. The relationship may be clear, or discoverable through analysis. When the relationship is clear, that is a Clear System.


For a Clear System, the sense-making heuristic is sense-categorize-respond. We sense the situation, we categorize it (because cause and effect are clear), and we respond using the Best practice available for the category we selected. The constraints are Fixed, do not change, and will probably never change on the timescale under consideration (whether we act or not).

When the relationship between cause and effect can be discovered through analysis, that is a Complicated System.


For a Complicated System, the sense-making heuristic is sense-analyze-respond. We sense the situation, we analyze it (because cause and effect can be determined through analysis), and we respond using one of the Good practices available.2 The constraints are Governing constraints, “…[they] provide limits to what can be done. In terms of our policies and processes, these are hard-and-fast rules. They are context-free, which means they apply to everything, regardless of context.”3 Because we enforce the constraints, the constraints do not change (similarly to Fixed constraints), and will probably never change on the timescale under consideration (whether we act or not).

Chaotic Systems

When the relationship between cause and effect cannot be determined, that is a Chaotic System.


For a Chaotic System, the heuristic is to act-sense-respond. We act to establish order, we sense where stability lies, and we respond using Novel methods attempting to turn chaos into complexity.4 There are no constraints. “Chaos is caused by a lack of constraints; meeting constraints will cause it to dissipate. Think of fire burning until it runs out of fuel or oxygen. This is what makes Chaos transient and short-lived; it will rapidly grow until it meets constraints, at which point the situation resolves (but not necessarily in your favour).”5

Complex Systems

When the relationship between cause and effect can only be determined in hindsight, that is a Complex System. 

For a Complex System, the heuristic for sense-making is probe-sense-respond. We probe via multiple parallel and independent safe-to-fail experiments. We sense whether our probes are working, and we respond using Exaptive6 practices. If a probe is working, we reinforce it. If a probe is failing, we should dampen it. We should not conduct the probes in the first place unless we’ve identified amplification/reinforcement and dampening strategies ahead of time. The constraints are Enabling in the sense that they constrain what probes we can conduct (as opposed to any probe imaginable if there were no constraints). The constraints will change on the timescale under consideration due to our own actions (probes) and external factors.
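A minimal sketch of the probe-sense-respond loop (the dictionary shape and names are my own; only the heuristic itself comes from the text): every probe must arrive with its amplification and dampening strategies already chosen.

```python
def probe_sense_respond(probes):
    """Run parallel safe-to-fail probes.

    Each probe carries a 'sense' callable plus 'amplify' and 'dampen'
    strategies which, per the text, must be identified before any
    probe is conducted at all."""
    for probe in probes:
        assert "amplify" in probe and "dampen" in probe, \
            "no probe without pre-agreed amplify/dampen strategies"
    outcomes = []
    for probe in probes:
        working = probe["sense"]()  # sense: is the probe working?
        respond = probe["amplify"] if working else probe["dampen"]
        outcomes.append((probe["name"], respond()))
    return outcomes

# Two parallel, independent safe-to-fail experiments:
results = probe_sense_respond([
    {"name": "pilot A", "sense": lambda: True,
     "amplify": lambda: "reinforce", "dampen": lambda: "wind down"},
    {"name": "pilot B", "sense": lambda: False,
     "amplify": lambda: "reinforce", "dampen": lambda: "wind down"},
])
```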

A Cheat Sheet

Liz Keogh has a very useful shortcut for estimating complexity to get you started7:

  5. Nobody has ever done it before.
  4. Someone outside the organization has done it before (probably a competitor).
  3. Someone in the company has done it before.
  2. Someone in the team has done it before.
  1. We all know how to do it.
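Keogh’s shortcut is simple enough to write down as a lookup table (the function and key names are my own framing):

```python
def estimate_complexity(who_has_done_it_before):
    """Liz Keogh's complexity shortcut: 1 reads as clear,
    5 as deeply complex."""
    scale = {
        "nobody, ever": 5,
        "someone outside the organization": 4,
        "someone in the company": 3,
        "someone in the team": 2,
        "all of us": 1,
    }
    return scale[who_has_done_it_before]
```

Asking “who has done this before?” is often the fastest first probe of a new piece of work.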

What Do You Think?

This was quite a lot of dense exposition. If you feel something could use more clarification, let me know in the comments.

Next Up

I’ll continue sharpening the definition of complex through the framing I call the Economy of Thought.

1 Accessed on 9 Mar 2021.

2 It is worth noting that given you have the appropriate expertise, there are likely multiple good approaches to take. Pick one.

3 Keogh, Liz (2019). “Constraints and Cynefin”. Accessed on 9 Mar 2021.

4 Snowden, David J.; Boone, Mary E. (2007). “A Leader’s Framework for Decision Making”. Accessed on 9 Mar 2021.

5 Keogh (2019).

6 Exaptive in the sense that we are using our existing capabilities and exapting them for novel purposes that they were perhaps not originally intended for. Think of using a piece of paper to keep a chair from rocking back and forth.

7 Keogh, Liz (2013). “Estimating Complexity”. Accessed on 10 Mar 2021. Accompanying illustration is based on one of Keogh’s presentations.

Wardley Maps and a Thousand Brains

Why Maps Are Effective Tools

Photo by N. on Unsplash

I made a claim on Twitter recently that Jeff Hawkins’ new book: “A Thousand Brains: A New Theory of Intelligence”, explains why a map is an effective tool for the human brain.

Reference Frames

The key insight of the Thousand Brains theory is that the primary purpose of the neocortex is to process reference frames.

Each column in the neocortex—whether it represents visual input, tactile input, auditory input, language, or high-level thought—must have neurons that represent reference frames and locations. 

Up to that point, most neuroscientists, including me, thought that the neocortex primarily processed sensory input. What I realized that day is that we need to think of the neocortex as primarily processing reference frames. Most of the circuitry is there to create reference frames and track locations. Sensory input is of course essential. As I will explain in coming chapters, the brain builds models of the world by associating sensory input with locations in reference frames.

Hawkins, Jeff. A Thousand Brains (p. 50). Basic Books. Kindle Edition.

Recall that a reference frame is like the grid of a map.

Hawkins, (p. 59).

This has implications for what thinking actually is. All knowledge is stored in reference frames and thinking is a form of moving.

The hypothesis I explore in this chapter is that the brain arranges all knowledge using reference frames, and that thinking is a form of moving. Thinking occurs when we activate successive locations in reference frames.

Hawkins, (p. 71)

If everything we know is stored in reference frames, then to recall stored knowledge we have to activate the appropriate locations in the appropriate reference frames. Thinking occurs when the neurons invoke location after location in a reference frame, bringing to mind what was stored in each location. The succession of thoughts that we experience when thinking is analogous to the (…) succession of things we see when we walk about a town.

Hawkins, (p. 73)

The process of thinking is described as movement on a map!
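Hawkins’ claim can be caricatured in a few lines (a toy model of mine, not from the book): store knowledge at locations in a reference frame, and let “thinking” be a walk that activates successive locations.

```python
# A reference frame as locations with knowledge stored at each one.
town = {
    (0, 0): "market square",
    (0, 1): "library",
    (1, 1): "harbor",
}

def think(frame, path):
    """Thinking as movement: activate successive locations in a
    reference frame, recalling what is stored at each."""
    return [frame[location] for location in path]

# A succession of thoughts, like walking about a town.
train_of_thought = think(town, [(0, 0), (0, 1), (1, 1)])
```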


Simon Wardley articulates exactly what is needed for an abstract map. He captures the map’s essence.

Maps are visual and context specific. The position of components has meaning based on the anchor. There is movement.1

To be an expert in any domain requires having a good reference frame, a good map.

Hawkins, (p. 87)

I believe that the reason why Wardley Maps are effective is that they are a reference frame for business. They are effective because they are in a form which allows our brains to think most naturally. They also happen to be a useful reference frame.

Beyond Wardley Maps

The reason for my excited tweet was the realization that Simon Wardley’s criteria for what makes something a map can be translated into criteria for what makes something a good tool for the human brain.

We have a clear set of constraints for creating really useful tools for the mind!

What Do You Think?

Did you already know how to make useful mind tools? What was your approach? Let me know in the comments.

1 Simon Wardley. Chapter 2: Finding a path. Accessed on 5 Mar 2021.