Integrated training in how we work

A request for comments…

This morning, my mind drifted towards an idea of how to offer training that other businesses pay for without becoming a consultancy. Becoming a consultancy is a non-goal. Declaring this a non-goal is my reaction to the inevitable drift from practicing a thing into selling a product based on lessons learned from organizing that practice. The problem with that approach is that the shift to selling a (consultancy) product removes the participant from the practice of doing the thing and places them into the practice of selling consulting about doing the thing. I’m not staking out a moral high ground; I just don’t want to do the things that consultancy comes with.

The idea that I thought of this morning is not a new idea. It was new to me in this particular context. Naturally, as soon as I was able to formulate the idea in my head, I started seeing examples of this structure all around me.

I’m thinking about offering other companies the option of paying their employees to work for our company/organization. In exchange, the employees are indoctrinated into how we do work. We don’t go into a company as consultants, train within the context of that company, and then leave. One reason not to do that is that there’s no way we’d get enough context about the company within the time bounds of a consultancy engagement. This is not necessarily due to limited time, but due to the psychological positioning as a consultant, which always remains. In order to fully integrate ideas about how we do work, the employees need to be integrated into the context and purpose of our work, and then practice it for a number of months so that the tacit knowledge becomes part of their understanding of work. Once this integration takes place, they return to their original companies and integrate that understanding into their new/old work environment.

I think that a period of one year would be ideal from the perspective of our company. A period of three months would probably be ideal from the perspective of a customer company. So, perhaps a period of six months might work?

Regarding this concept not being new, companies send their employees on sabbaticals. Companies send their employees to grad schools. Any sort of liaison program is very similar to this. It’s like an internship paid for by the customer company. I think there just aren’t many commercial companies that explicitly accept sabbatical employees with the intent to train them and then have them return to work for some other company, but I did … oh, about zero research on this (anyone got keywords I should use?).

You may well have no idea who I am, how we work, or what our company does. I can go into more detail in the future if this concept is worth pursuing. I would like to know what you think. Would your company want to do something like this with someone somewhere? Would you feel that this is antithetical to developing your core competencies? What constraints would you want to be in place in order for your company to be comfortable doing this?


Model of Communication for Information Sharing

In How To Do Things, I mentioned that in order to thrive in the world, one of the useful things to do is to focus on communication. Here is a model of communication for information sharing.1

figure_one

When I speak of focus on communication, I mean that both the information sender and the information receiver have a shared mental model for communication that resembles something like the above. I also expect them to understand that the purpose of communication in this context is to share information. Most importantly, I expect the sender and receiver to understand all the ways in which the communication process can result in failure to transmit the intended information.

In this model, there are three sources of error in communication. The first source of error can occur when information is encoded into a signal, for example, when we translate an idea in our minds into words, the many meanings of the words may or may not correspond to what we mean to communicate. The second source of error can occur when noise alters the signal, for example, having a conversation in a loud room can result in the recipient not hearing a word correctly, or perhaps not hearing a word at all. The third source of error can occur when information is decoded from signal back into information, for example, for words with multiple meanings, the recipient may choose to select a meaning that the sender did not intend.

What is important to notice is that even in the total absence of noise, we still have two sources of error, the sender encoding the information, and the receiver decoding it.

Before we get into how to correct encoding and decoding errors, let’s consider how we can detect errors in communication in the first place.

One effective way of detecting errors is to ask for a confirmation brief from the receiver. In other words, ask someone to repeat back to you what you just told them. Using this method, the recipient encodes the information received into a signal, the signal (plus noise) reaches the original sender, who then decodes the signal into information. The original sender then compares the original intended information with what they decoded from the confirmation brief and can decide whether or not the information is close enough to what they intended or whether further clarification is needed.

figure_two

The confirmation brief method of error detection can be initiated by the sender of the information, or by the recipient. Also note that saying “I understand” is insufficient to be considered a confirmation brief as it allows for no comparison by the sender.
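To make the loop concrete, here is a minimal sketch in JavaScript. None of the names come from the text above; encode functions, decode, addNoise, and closeEnough are stand-ins for the human activities described, and the confirmation brief is reduced to a simple comparison by the sender.

function communicate(intendedInformation, encodings, decode, closeEnough) {
    for (var i = 0; i < encodings.length; i++) {
        var signal = encodings[i](intendedInformation); // sender encodes information into a signal
        var received = addNoise(signal);                // noise may alter the signal in transit
        var brief = decode(received);                   // receiver decodes and briefs back what they understood
        if (closeEnough(intendedInformation, brief)) {  // sender compares the brief to the intent
            return brief;                               // understanding confirmed
        }
        // otherwise: try the next encoding (different words, different media)
    }
    return null; // the encodings available were not enough to get the information across
}

function addNoise(signal) {
    return signal; // placeholder: a loud room would corrupt or drop parts of the signal
}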

So, how do we correct encoding and decoding errors when two people are attempting to communicate with each other? Encoding and decoding are done by separate people, so error correction can happen (or fail to happen) independently on the sending side and on the receiving side. The mechanism of encoding and decoding typically involves assumptions about what the other person means to communicate and how they mean to communicate it.

The sender chooses words2 and other media to encode information into a signal and, using the confirmation brief, observes how effectively the recipient understood the intended information. When further clarification is needed, the sender can provide more detail, or the sender can choose to encode the information using different words or different media in order to help the recipient understand. A good sender is a person who can modify their encoding to make continual progress in reducing the gap between what they intended to communicate and what they receive in the confirmation brief. Similarly, on the receiving side, a good receiver is a person who can modify their decoding to make continual progress in reducing the gap between what they provide in the confirmation brief and what the sender intends to communicate. The sender confirms understanding once the sender decides that the confirmation brief is close enough to what they intended to communicate.

In brief, “focus on communication” requires participants to have a mental model for communication and awareness of all sources of error that can impede communication. Participants should continually strive to improve the repertoire of encodings and decodings available to them. Effective communicators should continually strive to improve the methods for constructing and adapting information encoding and decoding processes.

Endnotes

1 What I present here is a human-centric simplification of Claude E. Shannon’s ideas that gave rise to the field of Information Theory. For his seminal paper and a much more detailed treatment of communication see C. E. Shannon (1948). A Mathematical Theory of Communication. The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October, 1948.
2 Notice that emergence of a context-specific jargon is a mechanism for reducing encoding and decoding errors by defining a dictionary of words and their context-specific meanings ahead of communication. However, it only works if the interpretation of the jargon meanings actually matches between sender and receiver. Furthermore, in order to arrive at the common interpretation of jargon, sender and receiver still need to go through the communication process to establish the common interpretation in the first place.

How To Do Things

screen-shot-2016-12-03-at-7-37-51-pm

Marvel Entertainment (2016). Marvel’s Guardians of the Galaxy – Trailer 2 (OFFICIAL). Retrieved 5 Dec 2016

For as long as I can remember, I have been interested in the question of how I should do things. I assume that some form of this question tugs at all of us and that each of us explores it to various depths depending on what is going on in our individual lives. While I don’t think I’m done figuring things out, it seems to me, based on conversations with others, that I have assembled enough information that it is worth sharing.

We’ll start at the conclusion. Here it is, my answer to how I should do things.

I want to thrive in a complex world.

There is a lot to unpack, so let’s get to it.

First, the answer does not really seem like an answer at all. We have a how question that is answered by I want; what should we make of that? While I would like to claim some deep insight, I can’t. I sort of stumbled onto it. In retrospect, the reason I want works as an answer has to do with the idea of obliquity, as defined by John Kay in his essay:

Strange as it may seem, overcoming geographic obstacles, winning decisive battles or meeting global business targets are the type of goals often best achieved when pursued indirectly. This is the idea of Obliquity. (…) Obliquity is characteristic of systems that are complex, imperfectly understood, and change their nature as we engage with them.1

So, my answer is oblique. However, while John Kay names the idea of obliquity, he does not provide a mechanism to determine what specific oblique approach to take for doing things. Also, notice that obliquity is a “characteristic of systems that are complex” (emphasis mine). There’s that word again, complex. It appears both in my answer to how I should do things and in my very loose association with obliquity that I claim makes it ok for a how question to be answered with an I want answer. Complexity is indeed the key. In order to really understand what “I want to thrive in a complex world” means and why it is appropriate, it is important to understand complexity. What follows is an exposition of the idea of complexity through the lens of my personal journey to understand it.

A Non-Complex System

For contrast, before we begin to define complexity, let’s first examine a simple, non-complex system. Consider the following sequence of numbers:

4, 5, 6, 7, 8, 9, 10, …

The “…” symbol at the end indicates that the sequence continues indefinitely. We observe that the sequence starts with “4”. To learn the next item in the sequence, we observe that we can add “1” to the previous number “4 + 1 = 5”, and we get “5”. Once we are at “5”, we observe that to learn the next item in the sequence, we can add “1” to it “5 + 1 = 6”, and we get “6”. Once we are at “6”, we observe that to learn the next item in the sequence, we can add “1” to it “6 + 1 = 7”, and we get “7”, and so on.

Now consider the following question:

What will be the number that is 50 places after “10” in the sequence?

We could, beginning with “10”, add “1”, and repeat that 50 times. However, because this is a non-complex system, we can analyze it and arrive at a shortcut of “10 + (1 * 50) = 60”. The number in the 50th place after “10” in the sequence is “60”. This is a crucial point. We did not have to calculate every step in the sequence between “10” and “60” in order to arrive at “60”. Because this is a non-complex system, we can analyze it and predict any item in the sequence without going through the entire sequence step by step.
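As a small illustration (a sketch of the arithmetic above, with function names of my choosing), both the step-by-step approach and the shortcut land on the same answer:

// Stepping through the sequence one item at a time.
function stepForward(start, steps) {
    var value = start;
    for (var i = 0; i < steps; i++) {
        value += 1;
    }
    return value;
}

// The analytical shortcut: no need to visit every step.
function shortcut(start, steps) {
    return start + 1 * steps;
}

console.log(stepForward(10, 50)); // 60
console.log(shortcut(10, 50));    // 60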

Measurement Error In A Non-Complex System

Let’s imagine that the sequence we’ve been working with was measured from some real system and our measurements have error associated with them. For this example, we’ll assume our measurements are accurate to within 0.1. The sequence now is something like:

4.1, 5.1, 5.9, 7.0, 8.1, 8.9, 10.1, …

With this measurement error, our analysis would not end up with a rule to add “1” to the previous number, but instead we would use the mean (average) of the differences between the numbers in the sequence.

5.1 – 4.1 = 1.0
5.9 – 5.1 = 0.8
7.0 – 5.9 = 1.1
8.1 – 7.0 = 1.1
8.9 – 8.1 = 0.8
10.1 – 8.9 = 1.2

In this case, let’s assume that the mean turns out to be “1.0” with a standard error of “+/- 0.1”.

Notice that despite the measurement error, we can still figure out the number that is 50 places after “10” in the sequence using our shortcut of “10.1 + (1.0 * 50) = 60.1”. The standard error is “0.1 * 50 = 5”. The number that is 50 places after “10” in the sequence is “60 +/- 5”. The margin of error contains our ideal “60”. What about the number that is 100,000 places after “10” in the sequence? “10.1 + (1.0 * 100,000) = 100,010.1”. The standard error is “0.1 * 100,000 = 10,000”. We get “100,000 +/- 10,000”, and the margin of error contains our ideal “100,010”. The original measurement error does not hinder our ability to predict items in the sequence without going through the entire sequence step by step. We still get meaningful results, and we can meaningfully quantify how much error is in our prediction.

Furthermore, suppose we were to measure our system again, assuming once more that our measurements are accurate to within 0.1. Another sequence of measurements could be something like:

3.9, 4.9, 5.9, 7.1, 8.1, 8.9, 9.9, …

Using the measurements above, let’s assume our analysis again ends up with a rule to add “1.0” with a standard error of “+/- 0.1”.

When we want to figure out the number that is 50 places after “10” in this new sequence, based on our previous analysis, we can calculate “9.9 + (1.0 * 50) = 59.9”. The standard error is “0.1 * 50 = 5”. We get “60 +/- 5”, with the margin of error containing our ideal “60”. What about the number 100,000 places after “10”? We calculate “9.9 + (1.0 * 100,000) = 100,009.9”. The standard error is “0.1 * 100,000 = 10,000”. We get “100,000 +/- 10,000”, and the margin of error contains our ideal “100,010”. So, not only do we continue to get meaningful results, our predictions continue to lead us to similar results (within the margin of error) despite our initial error in measuring “10” in the sequence.

In summary, a non-complex system is predictable. It is predictable in a way that we can determine (within the margin of error) what the system will look like any number of steps in the future, without going through every step between our starting point and our prediction. Additionally, a non-complex system is predictable in a way that multiple predictions of the system any number of steps into the future will be close to each other (within the margin of error) despite starting with slightly different initial values due to measurement error.

A Complex2 System

In order to show an example of a complex system, we’ll have to get a little more… well, complex. I will present a set of equations, but then, to illustrate the complexity of the system we will look at a visual simulation. While the underlying details will be mathematically sound, I will avoid mathematics as much as possible. Instead, I will explain this complex system in terms of visuals.

First, the equations describing a deterministic complex system:

dx/dt = 10(y – x)
dy/dt = x(28 – z) – y
dz/dt = xy – 2.66z

The above equations describe a Lorenz system3, developed in 1963 as a simplified mathematical model for atmospheric convection (how air moves in the atmosphere). Here is what the system looks like when graphed over time (the animation starts over after twenty seconds):

lorenz-system

This and similar animations below were captured using https://highfellow.github.io/lorenz-attractor/attractor.html. Retrieved 5 Dec 2016.

What we are seeing is the values of x and y graphed over time. The center of the animation corresponds to x = 0, y = 0, and z = 0. To the left of center are negative x values. To the right of center are positive x values. Below the center are negative y values. Above the center are positive y values. This animation only shows x values (horizontal axis) and y values (vertical axis). The z values are not depicted. The animation starts with values x = 0.1, y = 0.1, and z = 0.1. The line drawn gives us some intuition about what shape the system takes over time.

Before we go on, let’s highlight the first important aspect of a complex system. Given this Lorenz system example, starting with x = 0.1, y = 0.1, and z = 0.1, consider the question:

What will be the x, y, and z values 50 time steps after start?

If you recall, in our non-complex system example, we were able to analyze the system and determine that all we were doing was adding “1” at each step. This analysis gave us a shortcut to compute the state of the system 50 steps ahead without having to go through every step. Instead, we used our shortcut of “start + (1 * 50) = answer”. The first important aspect of a complex system is that such a shortcut does not exist.

What this means is that in order to figure out what the x, y, and z values are 50 steps after start, we have to go through (that is, calculate) each one of the 50 steps to get the answer. Similarly, to figure out what the values are 100,000 steps after start, we have to go through all 100,000 steps to get the answer. The only way to see what the system will do (in this example, what the x, y, and z values will be at some point in the future) is to either observe the system, or to simulate it, and see where it ends up.
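To make the “no shortcut” point concrete, here is a rough sketch of simulating the Lorenz system. The simple Euler integration and the step size dt are choices I am making for illustration; the animations in this post use their own integration settings.

// The Lorenz equations from above, advanced by one small time step (Euler method).
function lorenzStep(state, dt) {
    var dx = 10 * (state.y - state.x);
    var dy = state.x * (28 - state.z) - state.y;
    var dz = state.x * state.y - 2.66 * state.z;
    return {
        x: state.x + dx * dt,
        y: state.y + dy * dt,
        z: state.z + dz * dt
    };
}

// The only way to learn the state after `steps` steps is to compute every one of them.
function simulate(start, steps, dt) {
    var state = start;
    for (var i = 0; i < steps; i++) {
        state = lorenzStep(state, dt);
    }
    return state;
}

console.log(simulate({ x: 0.1, y: 0.1, z: 0.1 }, 50, 0.01));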

Measurement Error In A Complex System

The next animation shows the exact same Lorenz system I described previously, drawing the exact same values, but it keeps only the most recent points instead of leaving the entire line drawn. This is so we can more easily see what I’m about to demonstrate next.

lorenz-system-motion

In order to illustrate what happens with an initial measurement error in a complex system, we will draw 20 systems together at the same time. The previous animation showed one Lorenz system of equations. This next animation shows 20 Lorenz systems of equations, each system of equations starting with exactly the same x, y, and z values (I will add measurement error in later animations). This animation is meant to demonstrate that if we start with exactly the same values, each step we calculate will be exactly the same. That is, all 20 systems will end up with the same x, y, and z values 50 steps from the start, 100,000 steps from the start, and so on. You’ll notice that the drawn points look thicker. This is meant to illustrate the 20 systems drawn all having the same value at the same point in time.

lorenz-system-motion-20-series.gif

What happens if we introduce measurement error? In this example, the way I’ll demonstrate measurement error is to perturb the starting positions of x, y, and z of each of the 20 systems by a little bit. We will call this perturbation “spread”. We will use the following formula to adjust the starting point of x, y, and z:

x = 0.1 + (random(0, 1) * 2 * spread) – spread
y = 0.1 + (random(0, 1) * 2 * spread) – spread
z = 0.1 + (random(0, 1) * 2 * spread) – spread

random(0, 1) means a random number between 0 and 1. When we set “spread = 0.1”, one example of how much we’ll perturb the initial x value is 0.035241377710758706. This means that x, instead of starting with 0.1, would in this case start with 0.1 + 0.035241377710758706 = 0.135241377710758706. We then similarly perturb the starting value of y using a new random number between 0 and 1. We then perturb the starting value of z. After we have perturbed the values of x, y, and z, we use those as the starting point for the first system (instead of x = 0.1, y = 0.1, and z = 0.1). We follow the same procedure for the remaining 19 systems.
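Here is the same perturbation written out as a sketch (reusing lorenzStep and simulate from the earlier sketch; the number of steps and dt are again illustrative choices):

// Perturb a starting value by up to +/- spread, exactly as in the formula above.
function perturb(value, spread) {
    return value + (Math.random() * 2 * spread) - spread;
}

var spread = 0.1;
var exact = { x: 0.1, y: 0.1, z: 0.1 };
var perturbed = {
    x: perturb(0.1, spread),
    y: perturb(0.1, spread),
    z: perturb(0.1, spread)
};

// The two trajectories start close together, but after enough steps they can
// end up arbitrarily far apart.
console.log(simulate(exact, 1000, 0.01));
console.log(simulate(perturbed, 1000, 0.01));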

With spread = 0.1, here is what the 20 systems look like:

lorenz-system-20-series-0_1-spread.gif

spread = 0.1

Notice that the systems begin together, but after a while, we can see them drift away from each other. While the systems started really close together, pretty soon, each system ends up arbitrarily far away from all the other systems. This illustrates the other crucial aspect of complex systems. Where the system ends up in the future is highly sensitive to where the system starts. Tiny differences in start conditions can lead to arbitrarily large differences in where the systems end up.

To illustrate this point further, let’s see what happens if instead of perturbing initial systems with spread = 0.1, we make the perturbations smaller. In other words, what happens if the 20 systems begin closer together?

Below is an animation with “spread = 0.01” (ten times less spread than before):

lorenz-system-20-series-0_01-spread.gif

spread = 0.01

Below is an animation with “spread = 0.001” (ten times less spread than before):

lorenz-system-20-series-0_001-spread.gif

spread = 0.001

Below is an animation with “spread = 0.0001” (ten times less spread than before):

lorenz-system-20-series-0_0001-spread.gif

spread = 0.0001

Below is an animation with “spread = 0.00001” (ten times less spread than before):

lorenz-system-20-series-0_00001-spread.gif

spread = 0.00001

Below is an animation with “spread = 0.000001” (ten times less spread than before):

lorenz-system-20-series-0_000001-spread.gif

spread = 0.000001

Next is an animation with “spread = 0.0000001” (ten times less spread than before, one million times less spread than our first perturbation). This means that the system starting positions are only changed by a tiny amount, somewhere around 0.0000001. For example, instead of x = 0.1 at start, we would have x = 0.10000003385421758612446 at start. This is also a good time to consider: what measurements are you capable of making with this level of accuracy? Here’s what 20 systems look like with this tiny level of initial starting position difference:

lorenz-system-20-series-0_0000001-spread.gif

spread = 0.0000001

In fact, no matter how small the difference in the initial start position, if there is any difference at all in the initial start position, our 20 simulated Lorenz systems will end up arbitrarily far away from each other. This is profoundly important. Imagine that one of these 20 Lorenz systems is the real system, and the other 19 systems are our simulations of the real system (so that we can try to predict what happens in a real system). This sensitivity to differences in the initial start position means that if there is any error at all in our measurement of the real system, our simulations will diverge arbitrarily far away from what the real system will do.

Properties Of A Complex System

I hope at this point, I have demonstrated two important properties of complex systems:

  1. There are no shortcuts to figure out where a complex system will end up in the future. We have to either observe the system itself, or simulate every step of its evolution.
  2. If there is any measurement error between a real complex system and our simulation of it, our simulation of it will diverge arbitrarily far away from the real system.

So far, I have demonstrated systems in the form of abstract mathematical examples. The intent behind this is to demonstrate how complexity arises even in an ideal system where we know (because we define) everything about the system. The world has many more components, features, and types of interactions than the systems I described thus far. The world is much more complex. How can we, as humans, make sense of the complexity in the world around us?

A Human Lens On Complexity

The mathematical examples above are too abstract to inform my daily decisions as a human being. To actually make sense of the world and to make decisions in the world, I needed a different model. This is where I found Dave Snowden’s sense-making4 Cynefin5 framework to be useful.

The Cynefin framework offers decision models depending on the nature of the system under consideration from the observer’s point of view. The nature of systems is considered in terms of domains: the Ordered domain, the Complex domain, the Chaotic domain, and the domain of Disorder. Furthermore, the Ordered domain is divided into the Simple and Complicated domains. See the figure below (the center unlabeled area is the domain of Disorder).

screen-shot-2016-12-05-at-10-46-14-am

Snowden, Dave (2010). The Cynefin Framework. Retrieved 5 Dec 2016

As before, prior to considering the Complex domain, let’s discuss the simpler domains of the Cynefin framework for context.

In systems that appear to the observer to belong to the Simple domain, cause and effect relationships exist, are predictable and are repeatable. The observer perceives this cause and effect relationship as self-evident. An applicable decision model here is Sense-Categorize-Respond. This is the domain where application of Best Practice is valid. Best Practice implies that there is the best approach to respond with.

In systems that appear to the observer to belong to the Complicated domain, cause and effect relationships exist but are not self-evident, and therefore require expertise. An applicable decision model here is Sense-Analyze-Respond. The domain requires analysis in order to decide a course of action. This is the domain where application of Good Practice is valid. The difference between Good Practice and Best Practice is that Good Practice consists of multiple approaches, each valid given some level of expertise. This is in contrast with Best Practice, where there exists the best approach.

The Ordered domain (Simple and Complicated) corresponds to the mathematical example of a simple system discussed previously where cause and effect are predictable. What corresponds to the mathematical example of a complex system is the Complex domain, where cause and effect are not predictable.

In systems that appear to the observer to belong to the Complex domain, cause and effect relationships are only obvious in hindsight, with unpredictable, emergent outcomes. An applicable decision model in the Complex domain is Probe-Sense-Respond. Probing is conducted through safe-to-fail experiments. If we sense that an experiment is pushing the system towards a desired outcome (succeeding), our response is to amplify it. If we sense that an experiment is pushing the system towards an undesired outcome (failing), our response is to dampen it. An experiment shouldn’t be conducted without identification of amplification and dampening strategies in advance; otherwise, we will not be able to exploit the desirable outcomes or dampen the undesirable ones. This amplification and dampening of safe-to-fail experiments leads towards an Emergent Practice, novel and unique in some way.

In systems that appear to the observer to belong to the Chaotic domain, no cause and effect relationships can be determined. An applicable decision model in the Chaotic domain is Act-Sense-Respond. The goal is to stabilize the situation as quickly as possible.

The domain of Disorder is the perspective of not knowing which domain the system being observed is in.

Another important aspect of the Cynefin framework is the transition between the Simple domain and the Chaotic domain. Unlike transitions between other domains, it is useful to think of a transition from Simple to Chaotic domain as falling from a cliff.

screen-shot-2016-12-05-at-11-38-40-am

Snowden, Dave (2010). The Cynefin Framework. Retrieved 5 Dec 2016.

This indicates the danger of the Simple domain perspective, which can lead to overconfidence in the belief that systems are simple, that systems are predictable, that past success guarantees future success, and so on. In such a case, the system drifts towards the transition and can enter the Chaotic domain in the form of an unforeseeable crisis, accident, or failure, where recovery is expensive. This transition area can be thought of as the “complacent zone.”

Notice that in discussing Cynefin domains, I have framed everything from the point of view of an observer. That is, the perception of the domain is dependent on who is doing the perceiving. Unfamiliar systems can appear Complex or Chaotic to one observer while being merely Complicated or Simple to another. While it may be possible that the systems themselves have intrinsic complexity, as in the mathematical complex system example discussed before, in order to make a decision, the perspective of the decision-maker is the only one available to the decision-maker. Hence, the way the decision-maker perceives the system domain dictates the decision model used.

With an intuition for complex systems and the context of the Cynefin framework, we can now meaningfully determine if we live in a complex world.

Do We Live In A Complex World?

One approach to determine whether we live in a complex world would be to consider the world from the perspective of a human-technology-nature system of systems. That is, humans exist in the world, and there exist human systems. Technology exists in the world, and there exist technology systems. Nature (not human and not technology) exists in the world, and there exist natural systems. Humans, technology, and nature interact, and there exist human-technology-nature systems. There are many of these human-technology-nature systems that interact together. Hence, the world can be thought of as a human-technology-nature system of systems. We can therefore attempt to determine what Cynefin domain describes the world.

Can the world be thought of as belonging to the Simple domain? While there may exist examples of human-technology-nature systems in the world where cause and effect are obvious (although I have a hard time coming up with any), every system in the world would have to belong to the Simple domain for us to consider the world in its entirety to belong to the Simple domain. This is not the case. We already demonstrated a system (the Lorenz equations) that does not belong to the Simple domain; therefore, in order to thrive, thinking of the world as belonging to the Simple domain is insufficient.

Can the world be thought of as belonging to the Chaotic domain? Recall that the Chaotic domain was defined as the domain where there is no relationship between cause and effect. In general, we assume there exist causes and effects, and we do actually perceive relationships between cause and effect in the world. Therefore, in order to thrive, thinking of the world as belonging to the Chaotic domain is also insufficient.

So, in order to thrive, thinking of the world system of systems as belonging to the Simple domain, or the Chaotic domain, is insufficient.

Can the world be thought of as belonging to the Complicated domain? Recall that in a complex system there are no shortcuts to figure out where the system will end up in the future, and that given any measurement error, our simulation of a real complex system will diverge arbitrarily far from the real system itself. We already demonstrated a system with these properties. Therefore, in order to thrive, it is insufficient to think of the world as belonging to the Complicated domain.

In order to thrive, it is necessary to perceive the world as belonging to the Complex domain. While it is true that some systems in the world may present themselves as belonging to the Simple, Complicated, or Chaotic domains, the world system of systems does demonstrate causes leading to effects (even if only in retrospect) and the types of interactions that exist are complex enough to need an explanation other than Simple or Complicated.

The Cost Of Being Wrong

What if the world does not belong to the Complex domain? What would be the cost of our mistake as compared to the alternatives? Let’s consider the question of whether we live in a complex world from the perspective that we actually want to use the answer to pick a decision-making approach.

What is the utility of perceiving the world as belonging to the Simple domain? The Simple domain implies the Sense-Categorize-Respond mode of decision-making. While this is a very simple and quick decision-making model, it will lead us astray if the system we are interacting with belongs to one of the other domains (Complicated, Complex, or Chaotic). I can’t think of any examples of self-evident human-technology-nature systems. Additionally, the Simple domain perspective carries the danger of an unforeseeable transition to the Chaotic domain, resulting in an expensive crisis.

If we perceive the world as belonging to the Chaotic domain, that implies the Act-Sense-Respond mode of decision-making. This is a very immediate mode of operation concerned only with the present and the desire to exit the Chaotic domain as soon as possible. Recall that the Chaotic domain was defined as the domain where there is no relationship between cause and effect. We just do, until the situation stabilizes. While this is a valid way to model the world, we typically want more from our approach to deciding in the world than just doing things without expecting them to have some desirable effect.

What is interesting about the Simple and Chaotic domains is that they tend to reflect the observer’s lack of understanding of the system. I mentioned already that I can’t think of real examples of Simple systems. Placing a system in the Simple domain seems to be a simplifying assumption to enable us to consider interactions of more systems than we would be able to otherwise. However, as mentioned before, there is a risk associated with the Simple domain resulting in an expensive crisis. On the other hand, placing a system in the Chaotic domain seems to be done from a position of ignorance. That is, we do not understand the cause and effect. We inherently do believe that there exist causes and effects in the world. A system placed in the Chaotic domain, therefore, seems to tell us more about the perspective of the observer than about the system itself. From this point of view, choosing to approach the world from the perspective of the Chaotic domain (as opposed to being forced into it) seems to be a choice of willful ignorance. The cost of willful ignorance seems higher than that of other approaches, but more importantly, it is insufficient in order to thrive in the world.

Perceiving the world to be in the Complicated domain implies that given enough analysis, we can predict what will happen in the future. The applicable decision-making mode is Sense-Analyze-Respond. This is where, after analysis, we have a Good Practice which we can reuse to exploit the system we are interacting with. This is a less costly approach than the Complex domain’s Probe-Sense-Respond (which results in an Emergent Practice), but only if our analysis is correct and the system ends up in the state we predicted it should. Herein lies the problem of using the Complicated domain approach. It works until it doesn’t. It implies that we understand everything there is to understand about the system and that we know what will happen to the system in the future. Much like the Simple domain simplification can lead to an expensive crisis, so can the Complicated domain’s not-as-much-of-a-simplification approach. This happens when the system is influenced by factors that we failed to analyze. For example, consider stock market crashes, car accidents, missed deadlines, unintended consequences of any kind, etc. The existence of insurance indicates that the Complicated domain assumption is insufficient.

If we perceive the world as belonging to the Complex domain, it implies the Probe-Sense-Respond mode of decision-making. We expect the unexpected. We attempt to dampen or reinforce the unexpected results of our probes in the course of ongoing safe-to-fail experiments. This is a more costly approach if the system turns out to belong to the Complicated domain for some time period. There is also the difficulty of determining what is safe-to-fail. What is safe-to-fail for a tribe, for example, having one person taste some new plant that the tribe stumbled across, may not be safe-to-fail for the person doing the tasting if the plant turns out to be poisonous. Determining what is safe-to-fail in the first place itself requires a Probe-Sense-Respond mode of decision-making. The mistakes made from a Complex decision-making approach are, in a sense, more aware than those made from a Complicated decision-making approach. In the Complex domain approach, we are aware we can make mistakes (failed experiments), whereas in the Complicated domain approach, mistakes are unforeseen: eventually, the Complicated domain assumption turns out to be invalid once the perceived system is influenced by factors that were not part of the analysis in the Sense-Analyze-Respond decision-making.

The cost of assuming the world belongs to the Complex domain is less than assuming otherwise.

How To Do Things

It is necessary to think of the world as Complex, and it is insufficient to think of the world as Simple, Complicated, or Chaotic. To thrive in the world, it is necessary to wield the Probe-Sense-Respond mode of decision-making. It is necessary to conduct safe-to-fail experiments, amplifying desired outcomes and dampening undesirable ones. This implies optimizing for learning. To optimize for learning, it is useful to focus on systems awareness, communication, and minimizing feedback loops. Because we are people, it is also useful to remember that people are the ones who do the things.

Endnotes

1 Kay, John (2014). Obliquity. Retrieved 5 Dec 2016.
2 In this example, I am using the word “complex” to describe, what in mathematics, is called “chaos”. Part of the difficulty of understanding complexity is due to different words being used in different domains of our knowledge to describe the same thing. Because in non-mathematical contexts the word that is used is “complex”, I am sticking with “complex” here for the sake of consistency.
3 See https://en.wikipedia.org/wiki/Lorenz_system. The parameters picked here are specifically the ones resulting in chaotic solutions to the Lorenz system.
4 There is an important distinction between a categorization framework and a sense-making framework. In a categorization framework, framework precedes data. In a sense-making framework, data precedes the framework. Sense-making is “a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively”. See: Klein, G., Moon, B. and Hoffman, R.F. (2006). Making sense of sensemaking I: alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73.
5 Snowden, Dave (2010). The Cynefin Framework. Retrieved 5 Dec 2016. Also see: Snowden, Dave (2009). How to organize a Children’s Party. Retrieved 5 Dec 2016.

Experimental Protocol for Protecting Local Storage from Cross-Site Scripting (XSS)

Update: DO NOT USE THIS. Thanks to @pfrazee for pointing out that I didn’t really protect against Cross-Site Scripting (XSS). The protocol described here fails at its goal. The failure is that an attacker can use XSS to execute the protocol itself and gain access to decrypted data. In other words, this protocol does nothing.

Solution: Use Content-Security-Policy to protect against XSS.
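For example, a server could send a restrictive policy along the lines of the following sketch; the exact directives depend on what the application actually needs to load.

var http = require('http');

http.createServer(function (req, res) {
    // Only allow scripts (and other resources) from this site's own origin,
    // which blocks injected inline scripts and scripts from attacker-controlled hosts.
    res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<html><body>hello</body></html>');
}).listen(8080);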

Local storage in the browser is vulnerable to Cross-Site Scripting (XSS) (see: Local Storage). A single mistake in sanitizing HTML or a compromise in the JavaScript supply chain (CDNs, libraries, etc.) can lead to arbitrary JavaScript being able to read local storage and expose any data within.

Assuming that XSS will occur, we can protect data in local storage by encrypting the data. This way, when an attacker gains access to local storage, the data will be protected by encryption.

There are two problems with encrypting local storage. The first problem is where to store the key. The second problem is trusting JavaScript encryption and all the disagreements that it leads to.

To address the first problem, we can store the encryption key for local storage in a cookie. We can completely bypass the second problem by encrypting and decrypting server-side.

By storing the encryption key in an HttpOnly, Secure, path-specific cookie, we protect it from XSS attacks. However, the cookie remains vulnerable to Cross-Site Request Forgery (CSRF) (see: Cross-Site Request Forgery (CSRF)). The use of CSRF tokens addresses this vulnerability (see: Synchronizer CSRF Tokens).

What follows is an experimental protocol for securing local storage with encryption, where the encryption key is stored in an HttpOnly, Secure, path-specific cookie protected by CSRF tokens.

Encryption protocol:

1. Client requests “/” from the server.
2. Server includes CSRF token in the response.
3. Client, using CSRF token, sends POST to “/encrypt”.
4. Server encrypts the request payload using crypto key from Cookie header, and responds with ciphertext.
4a. If client request does not contain crypto key in Cookie header, the server generates new crypto key and includes it in “Set-Cookie” header in the response (HttpOnly, Secure, path=/decrypt and path=/encrypt).
5. Client stores ciphertext response in local storage.

Decryption protocol:

1. Client requests “/” from the server.
2. Server includes CSRF token in the response.
3. Client retrieves ciphertext from local storage.
4. Client, using CSRF token, sends POST to “/decrypt”.
5. Server decrypts the request payload using crypto key from Cookie header, and responds with plaintext.
5a. If client request does not contain crypto key in Cookie header, server responds with a failure status code of some kind.
6. Client uses plaintext response and discards it when no longer needed.

A proof of concept demo is here: Node.js Github Gist.

* the demo does not set the Cookie to be Secure, but that’s just to keep the demo simple.

 

Node.js control flow using EventEmitter

Recently, I read yet another blog post complaining about how concurrency is hard in Node.js. As is often the case, the examples highlighting the problems leave a lot to be desired.

Since I’ve written quite a bit of Node.js, I’d like to present an example of concurrency style in Node.js that isn’t a “strawman” often touted as the bane of Node.js’ existence.

EventEmitter

If you’ve been around Node.js for a little bit, you might have heard something along the lines of “everything is a stream”, or “everything is an event emitter.” There is a reason for that.

Let’s take the example from the blog post I mentioned:


function dostuff(callback) {
    task1(function (x) {
        task2(x, function (y) {
            task3(y, function (z) {
                if (z < 0) {
                    callback(0);
                } else {
                    callback(z);
                }
            });
        });
    });
}


If you find yourself doing the above, you’re either in a rush, or should stop and rethink your design. Here’s how to accomplish a similar thing using continuation passing:


var EventEmitter = require('events').EventEmitter;

var dostuffEmitter = new EventEmitter();
// x, y, and z stand in for whatever result each task's work produces before
// handing off to the next step.
dostuffEmitter.on('task1', function (callback) {
    dostuffEmitter.emit('task2', x, callback);
});
dostuffEmitter.on('task2', function (x, callback) {
    dostuffEmitter.emit('task3', y, callback);
});
dostuffEmitter.on('task3', function (y, callback) {
    dostuffEmitter.emit('finish', z, callback);
});
dostuffEmitter.on('finish', function (z, callback) {
    if (z < 0) {
        callback(0);
    } else {
        callback(z);
    }
});
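Kicking the chain off might look something like this (a usage sketch, not from the original listing):

dostuffEmitter.emit('task1', function (result) {
    console.log('dostuff finished with', result);
});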


Why is this “better”? Well, for one, adding error handling is straightforward:


var dostuffEmitter = new EventEmitter();
dostuffEmitter.on('task1', function (callback) {
    if (error) {
        dostuffEmitter.emit('error', error);
        return;
    }
    dostuffEmitter.emit('task2', x, callback);
});
dostuffEmitter.on('task2', function (x, callback) {
    if (error) {
        dostuffEmitter.emit('error', error);
        return;
    }
    dostuffEmitter.emit('task3', y, callback);
});
dostuffEmitter.on('task3', function (y, callback) {
    if (error) {
        dostuffEmitter.emit('error', error);
        return;
    }
    dostuffEmitter.emit('finish', z, callback);
});
dostuffEmitter.on('finish', function (z, callback) {
    if (z < 0) {
        callback(0);
    } else {
        callback(z);
    }
});


We can add telemetry like so:


dostuffEmitter.on('task2', function (x, callback) {
    if (error) {
        dostuffEmitter.emit('error', error);
        return;
    }
    dostuffEmitter.emit('telemetry', someMetricHere);
    dostuffEmitter.emit('task3', y, callback);
});


We can alter control flow without rewriting the “callback hell”:


dostuffEmitter.on('task1', function (callback) {
    dostuffEmitter.emit('task3', x, callback);
});
dostuffEmitter.on('task2', function (x, callback) {
    dostuffEmitter.emit('finish', y, callback);
});
dostuffEmitter.on('task3', function (y, callback) {
    dostuffEmitter.emit('task2', z, callback);
});


In general, the EventEmitter approach will be the better idea. I hope you find some of these hints interesting and that they make writing control flow in Node.js a bit more friendly.

Amazon AWS “Simple Log Service”

Recently, Amazon AWS announced availability of Access Logs for Elastic Load Balancers. One of the key points from the announcement was that by selecting a simple checkbox the Access Logs will magically appear in an S3 bucket of your choice, neatly organized according to time, with unique identifiers where necessary to avoid conflicts:

All Buckets/<bucket-name>/AWSLogs/<account-number>/elasticloadbalancing/<region>/<year>/<month>/<day>

With file names looking something like:

<account-number>_elasticloadbalancing_<region>_<bucket-name>_<year><month><day>T<hour><minutes>Z_<loadbalancer-ip>_<unique>.log

That’s convenient.

The contents of the files? For HTTP connections, every line is:

<ISO-8601 date> <bucket-name> <requesting-ip:port> .. blah blah.. "GET http://<hostname>:<port>/?QUERY_STRING HTTP/1.1"

That’s interesting. And how long can that query string in the log be? I ran a little test using a Node.js script. The test script ran out of memory after successfully making a request with over 16,777,216 characters in the query string. And by “successfully”, I mean that I had a log file delivered to my S3 bucket with that query string present in it.
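The original test script isn’t shown here; a sketch of that kind of probe might look like the following, with ELB_HOSTNAME standing in for the load balancer under test.

var http = require('http');

var size = 1024;
(function probe() {
    var query = new Array(size + 1).join('a'); // a query string of `size` characters
    http.get('http://ELB_HOSTNAME/?' + query, function (res) {
        console.log(size + ' characters -> HTTP ' + res.statusCode);
        res.resume();
        size *= 2;
        probe();
    }).on('error', function (err) {
        console.log('failed at ' + size + ' characters: ' + err.message);
    });
})();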

Are you thinking what I’m thinking?

If one sets up an AWS Elastic Load Balancer without any instances backing it, it will return 503 errors (Service Unavailable). But.. *drumroll* it will log those attempts… with the query string included. Log collection infrastructure just became a whole lot simpler.
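A minimal version of the idea (a sketch, not the drifter-rsyslog plugin itself) could read messages from stdin and ship each one to the instance-less load balancer as a URI-encoded query string; ELB_HOSTNAME is again a placeholder.

var http = require('http');
var readline = require('readline');

var rl = readline.createInterface({ input: process.stdin });
rl.on('line', function (message) {
    // The ELB answers 503 (no instances behind it), but the access log entry,
    // query string included, still lands in the S3 bucket.
    http.get('http://ELB_HOSTNAME/?m=' + encodeURIComponent(message), function (res) {
        res.resume(); // we only care that the request was logged
    }).on('error', function () {
        // ignore delivery errors in this sketch
    });
});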

If you’re interested in testing this out, I built a thing: drifter-rsyslog. It is an rsyslog Node.js plugin that takes messages via stdin and turns them into HTTP GET requests with the message URI-encoded in the query string (it’s brand new, probably with bugs, but it’s a start). If you don’t like Node.js, it should be straightforward to build something similar.

Happy logging! And if you’d like me to set up this sort of collection infrastructure for you, let me know.

DISTRIBUTE ALL THE THINGS

Tonight, I had the opportunity to present at the Austin JavaScript Meetup:

In 2013, writing software requires orchestration of multiple “machines”, be it EC2 instances, database replicas, or mobile clients. Despite this fact, quickly prototyping systems that ignore this reality remains the state of the art. In this talk, we will introduce many distributed concepts, demonstrate how they work, and show how they can be grown from a single machine into a globally distributed system… in JavaScript! (and Node.js). We will discuss DHTs, gossip protocols, object capabilities, peer-to-peer in the browser, Ken protocol, Indeed’s Boxcar, Lambda Architecture, SOA, and how they all fit together with others.

If you are interested in a world beyond Node.js on Rails, join us and get a taste of what’s possible to build today and get a peek into the distributed Internet of tomorrow.

The link to the presentation: DISTRIBUTE ALL THE THINGS

How not to design distributed applications

This is an initial response to Anant Narayanan’s inquiry as to where I disagreed with his talk. I want to thank him for being kind enough to engage me on Twitter, which prompted me to write this. Perhaps I have misinterpreted some statements or seen a fault where there was none. I’m happy to have that discussion in the comments.

The 2013 Realtime Conference looks amazing. If you haven’t seen the talks, go ahead and treat yourself here: http://2013.realtimeconf.com/video/. It is well worth your time, inspirational, and entertaining.

However, there was one talk that stood out, which struck me intuitively as “all sorts of wrong”, and that was Anant Narayanan’s “Message Passing vs. Data Synchronization” (http://vimeo.com/77352415). This is a write up of my thoughts as Anant kindly wanted to dig into what I thought and where I disagreed.

Anant rightfully points out that simple implementations of Message Passing do not cut it. There are things to consider in the design of distributed systems and he highlights: persistence, fault tolerance, scaling, consistency, and security. All of these have been solved before, as he points out, but then he goes on to state his thesis that none of the code implementing those solutions belongs in our application code. This is where “all sorts of wrong” begins.

The entire talk serves as a warning against going along with the thesis, for the simple reason that if we abstract persistence, fault tolerance, scaling, consistency, and security to something else without understanding what we’re doing, we risk relying on a wrong implementation, as shown throughout the talk. Also, what is it that we’re building at that point and why are we getting paid to do it?

The talk states that “message passing is just a primitive” and that we should abstract away from it. This is fine, as long as the abstractions do not lose touch with the asynchronous and faulty nature of message passing. More importantly, the abstractions should also be correct. It is very easy to cross the fine line from abstraction into ambiguation, and the examples presented in the talk do just that.

We are shown a strawman abstraction comparison between the progression from DOM manipulation to jQuery to Angularjs/Ember.js, which is contrasted with the progression from WebSockets to Socket.io to “?”. The talk goes on to suggest that the progression from jQuery to Angularjs and Ember.js is equivalent to a progression from Socket.io to persistent, fault tolerant, scalable, consistent, and secure applications. The talk proposes we are “just” one abstraction step from realizing the ultimate distributed dream. I’m sorry, but this is nowhere near a fair comparison and it overly encourages the viewer to make gross simplifications which will end up badly. Why? Because the proposed solution is “Data Synchronization.”

In describing what “Data Synchronization” is, the talk posits that “most apps observe and modify data” without pausing and asking how desirable that is in a distributed system in the first place. A common pattern does not imply a correct pattern. This is the wrong metaphor to begin with, and it leads at first to what appear on the surface to be similar solutions, but then diverges quite rapidly into the wrong types of solutions for distributed systems. To contrast, consider a distributed application that is composed of services where “services respond to events and emit commands.” I can make both metaphors fit, but the latter one is a better choice for distributed system design.

Next, the talk posits that “data reflects state.” This is ok on its own, but again, the metaphor implied in this statement is the wrong kind. This is because Anant speaks as though he is implying a global application state. To contrast, there is a different kind of state, which would be distributed application state that requires no synchronization. If an application is designed using services that are properly bounded and do not require access to each other’s data, there is no data in the entire application that would require synchronization. As before, one of these metaphors is appropriate for distributed systems and one of them isn’t.

The talk proceeds to describe a chat system where the chat information is stored in some synchronized data store into which we can insert rows and which can replay rows on demand when a user needs to see them. We have systems now large enough that I can point to Twitter’s architecture as an existence proof that this is not a way to design a distributed chat application. It is the classic “it works until it doesn’t” design, and if we start where the talk suggests, we will end up with a grand rewrite once we reach a certain threshold. We already know how to build these systems, and we could start with the correct design, but this would require us to familiarize ourselves with persistence, fault tolerance, scaling, consistency, and security concerns, which the talk attempts to convince us is not necessary via statements like: “you want this layer of data synchronization”. No, actually, you don’t.

Next, the talk takes on fault tolerance by arguing that checking for errors (as a result of message passing) does not belong in our application code. This only holds until we have to receive confirmation that the command (a message) the application sent was actually processed, and at that point, we will find ourselves checking for errors. Additionally, because we are working on a distributed system, sometimes the simplest way of correcting an error will be through user action and not some form of automatic retry. This is a property of open systems, and to build those, we cannot assume the abstraction of Data Synchronization.

When the talk addresses security, I cannot fault this stance, because web application security is broken across the industry. We have abused the Access Control List (ACL) model to death and most have forgotten about (or never thought to look for) a better alternative which is Object Capabilities. It is, however, a complex topic and a subject for another post. If you are interested you can check out http://www.erights.org/elib/capability/index.html, which is a great repository on the subject. To demonstrate that this is not some fringe technology, you could think about what OAuth2 (http://tools.ietf.org/html/rfc6749) and Bearer Token usage (http://tools.ietf.org/html/rfc6750) give you and how one would implement a client on top of an API using Bearer Tokens instead of a classic RESTful web API.

The talk sums up the above examples by stating that this is why we shouldn’t concern ourselves with Message Passing. I argue that due to what I outlined above we should absolutely concern ourselves with Message Passing and other distributed concerns. Otherwise, we’ll end up implementing systems that will break from poor design.

The talk then demonstrates the “advantages” of Data Synchronization, and the first slide asks “why not directly store state?”. For an answer to that, I refer you to Nathan Marz’s Lambda Architecture talk (http://www.infoq.com/presentations/Complexity-Big-Data) and let you answer that for yourself instead of accepting Anant’s answer. Additionally, there are multiple advantages realized using Event Sourcing and CQRS (see http://en.wikipedia.org/wiki/Domain-driven_design) that one gives up by “directly storing state.” We are then shown how distributed counters are “inefficient” by demonstration of a distributed counter implemented the wrong way. For how distributed counters can be implemented, take a look at Distributed Counters in Cassandra (http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf), or consider the problem of probabilistic counters described in Big Data Counting: How to Count a Billion Distinct Objects Using Only 1.5 KB Of Memory (http://highscalability.com/blog/2012/4/5/big-data-counting-how-to-count-a-billion-distinct-objects-us.html). It’s not an easy problem.

Anant leaves us with the question “where is the Angular or Ember of data”? I would say that perhaps Joyent’s Manta (http://www.joyent.com/products/manta) is a good start. However, Data Synchronization, as presented, is definitely not the answer.

Acknowledgments:

I would like to thank Scott Bellware for taking the time to review this post and suggesting improvements.

Implementing Unforgeable Actor Addresses

Actor Model of Computation is a great example of a decentralized system. However, one of the features lacking in most actor model implementations I have come across is the unforgeability of actor addresses. Recently, I had the opportunity to write about the issue on Dale Schumacher’s “It’s Actors All The Way Down” blog:  “Towards a Universal Implementation of Unforgeable Actor Addresses”.