Experimental Protocol for Protecting Local Storage from Cross-Site Scripting (XSS)

Update: DO NOT USE THIS. Thanks to @pfrazee for pointing out that I didn’t really protect against Cross-Site Scripting (XSS). The protocol described here fails at its goal. The failure is that an attacker can use XSS to execute the protocol itself and gain access to decrypted data. In other words, this protocol does nothing.

Solution: Use Content-Security-Policy to protect against XSS.
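For example, a minimal policy that only allows scripts (and other resources) from the page's own origin might look like the following; real applications will need to tune the directives for their own asset and script sources:

```
Content-Security-Policy: default-src 'self'; script-src 'self'
```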

Local storage in the browser is vulnerable to Cross-Site Scripting (XSS) (see: Local Storage). A single mistake in sanitizing HTML or a compromise in the JavaScript supply chain (CDNs, libraries, etc.) can lead to arbitrary JavaScript being able to read local storage and expose any data within.

Assuming that XSS will occur, we can protect data in local storage by encrypting the data. This way, when an attacker gains access to local storage, the data will be protected by encryption.

There are two problems with encrypting local storage. The first is where to store the key. The second is trusting JavaScript encryption and all the disagreements that this leads to.

To address the first problem, we can store the encryption key for local storage in a cookie. We can completely bypass the second problem by encrypting and decrypting server-side.

By storing the encryption key in an HttpOnly, Secure, path-specific cookie, we protect it from XSS attacks. However, the cookie remains vulnerable to Cross-Site Request Forgery (CSRF) (see: Cross-Site Request Forgery (CSRF)). The use of CSRF tokens addresses this vulnerability (see: Synchronizer CSRF Tokens).

What follows is an experimental protocol for securing local storage with encryption, where the encryption key is stored in an HttpOnly, Secure, path-specific cookie protected by CSRF tokens.

Encryption protocol:

1. Client requests “/” from the server.
2. Server includes CSRF token in the response.
3. Client, using CSRF token, sends POST to “/encrypt”.
4. Server encrypts the request payload using crypto key from Cookie header, and responds with ciphertext.
4a. If client request does not contain crypto key in Cookie header, the server generates new crypto key and includes it in “Set-Cookie” header in the response (HttpOnly, Secure, path=/decrypt and path=/encrypt).
5. Client stores ciphertext response in local storage.

Decryption protocol:

1. Client requests “/” from the server.
2. Server includes CSRF token in the response.
3. Client retrieves ciphertext from local storage.
4. Client, using CSRF token, sends POST to “/decrypt”.
5. Server decrypts the request payload using crypto key from Cookie header, and responds with plaintext.
5a. If client request does not contain crypto key in Cookie header, server responds with a failure status code of some kind.
6. Client uses plaintext response and discards it when no longer needed.

A proof-of-concept demo is available here: Node.js GitHub Gist.

* the demo does not set the Cookie to be Secure, but that’s just to keep the demo simple.



Node.js control flow using EventEmitter

Recently, I read yet another blog post complaining about how concurrency is hard in Node.js. As is often the case, the examples highlighting the problems leave a lot to be desired.

Since I’ve written quite a bit of Node.js, I’d like to present an example of a concurrency style in Node.js that isn’t the “strawman” often touted as the bane of Node.js’ existence.


If you’ve been around Node.js for a little bit, you might have heard something along the lines of “everything is a stream”, or “everything is an event emitter.” There is a reason for that.

Let’s take the example from the blog post I mentioned:

If you find yourself doing the above, you’re either in a rush, or should stop and rethink your design. Here’s how to accomplish a similar thing using continuation passing:

Why is this “better”? Well, for one, adding error handling is straightforward:

We can add telemetry like so:

We can alter control flow without rewriting the “callback hell”:

In general, the EventEmitter will be a better idea. I hope you’ll find some of these hints interesting and that they make writing control flow in Node.js a bit more friendly.

Amazon AWS “Simple Log Service”

Recently, Amazon AWS announced availability of Access Logs for Elastic Load Balancers. One of the key points from the announcement was that by selecting a simple checkbox the Access Logs will magically appear in an S3 bucket of your choice, neatly organized according to time, with unique identifiers where necessary to avoid conflicts:

All Buckets/<bucket-name>/AWSLogs/<account-number>/elasticloadbalancing/<region>/<year>/<month>/<day>

With file names looking something like:


That’s convenient.

The contents of the files? For HTTP connections, every line is:

<ISO-8601 date> <bucket-name> <requesting-ip:port> .. blah blah.. "GET http://<hostname>:<port>/?QUERY_STRING HTTP/1.1"

That’s interesting. And how long can that query string in the log be? I ran a little test using a Node.js script. The test script ran out of memory after successfully making a request with over 16,777,216 characters in the query string. And by “successfully”, I mean that I had a log file delivered to my S3 bucket with that query string present in it.

Are you thinking what I’m thinking?

If one sets up an AWS Elastic Load Balancer without any instances backing it, it will return 503 errors (Service Unavailable). But.. *drumroll* it will log those attempts… with the query string included. Log collection infrastructure just became a whole lot simpler.

If you’re interested in testing this out, I built a thing: drifter-rsyslog. It is an rsyslog Node.js plugin that takes messages via stdin and turns them into HTTP GET requests with the message URI encoded in the query string (it’s brand new, probably with bugs, but it’s a start). If you don’t like Node.js, it should be straightforward to build something similar.

Happy logging! And if you’d like me to set up this sort of collection infrastructure for you, let me know.


Tonight, I’ve had the opportunity to present at the Austin JavaScript Meetup:

In 2013, writing software requires orchestration of multiple “machines”, be it EC2 instances, database replicas, or mobile clients. Despite this fact, quickly prototyping systems that ignore this reality remains the state of the art. In this talk, we will introduce many distributed concepts, demonstrate how they work, and show how they can be grown from a single machine into a globally distributed system… in JavaScript! (and Node.js). We will discuss DHTs, gossip protocols, object capabilities, peer-to-peer in the browser, Ken protocol, Indeed’s Boxcar, Lambda Architecture, SOA, and how they all fit together with others.

If you are interested in a world beyond Node.js on Rails, join us and get a taste of what’s possible to build today and a peek into the distributed Internet of tomorrow.

The link to the presentation: DISTRIBUTE ALL THE THINGS

How not to design distributed applications

This is an initial response to Anant Narayanan’s inquiry as to where I disagreed with his talk. I want to thank him for being kind enough to engage me on Twitter, which prompted me to write this. Perhaps I have misinterpreted some statements or seen a fault where there was none. I’m happy to have that discussion in the comments.

The 2013 Realtime Conference looks amazing. If you haven’t seen the talks, go ahead and treat yourself here: http://2013.realtimeconf.com/video/. It is well worth your time, inspirational, and entertaining.

However, there was one talk that stood out, which struck me intuitively as “all sorts of wrong”, and that was Anant Narayanan’s “Message Passing vs. Data Synchronization” (http://vimeo.com/77352415). This is a write up of my thoughts as Anant kindly wanted to dig into what I thought and where I disagreed.

Anant rightfully points out that simple implementations of Message Passing do not cut it. There are things to consider in the design of distributed systems and he highlights: persistence, fault tolerance, scaling, consistency, and security. All of these have been solved before, as he points out, but then he goes on to state his thesis that none of the code implementing those solutions belongs in our application code. This is where “all sorts of wrong” begins.

The entire talk serves as a warning against going along with the thesis, for the simple reason that if we abstract persistence, fault tolerance, scaling, consistency, and security to something else without understanding what we’re doing, we risk relying on a wrong implementation, as shown throughout the talk. Also, what is it that we’re building at that point and why are we getting paid to do it?

The talk states that “message passing is just a primitive” and that we should abstract away from it. This is fine, as long as the abstractions do not lose touch with the asynchronous and faulty nature of message passing. More importantly, the abstractions should also be correct. It is very easy to cross the fine line from abstraction into ambiguation, and the examples presented in the talk do just that.

We are shown a strawman comparison between the progression from DOM manipulation to jQuery to AngularJS/Ember.js, contrasted with the progression from WebSockets to Socket.io to “?”. The talk goes on to suggest that the progression from jQuery to AngularJS and Ember.js is equivalent to a progression from Socket.io to persistent, fault-tolerant, scalable, consistent, and secure applications. The talk proposes we are “just” one abstraction step from realizing the ultimate distributed dream. I’m sorry, but this is nowhere near a fair comparison, and it encourages the viewer to make gross simplifications that will end badly. Why? Because the proposed solution is “Data Synchronization.”

In describing what “Data Synchronization” is, the talk posits that “most apps observe and modify data” without pausing to ask how desirable that is in a distributed system in the first place. A common pattern does not imply a correct pattern. This is the wrong metaphor to begin with, and it leads at first to what appear on the surface to be similar solutions, but then diverges quite rapidly into the wrong types of solutions for distributed systems. To contrast, consider a distributed application composed of services where “services respond to events and emit commands.” I can make both metaphors fit, but the latter is the better choice for distributed system design.

Next, the talk posits that “data reflects state.” This is fine on its own, but again, the implied metaphor is the wrong kind, because Anant speaks as though he means a single global application state. To contrast, there is a different kind of state: distributed application state that requires no synchronization. If an application is designed using services that are properly bounded and do not require access to each other’s data, there is no data in the entire application that requires synchronization. As before, one of these metaphors is appropriate for distributed systems and one of them isn’t.

The talk proceeds to describe a chat system where the chat information is stored in some synchronized data store into which we can insert rows and which can replay rows on demand when a user needs to see them. We now have systems large enough that I can point to Twitter’s architecture as an existence proof that this is not the way to design a distributed chat application. It is the classic “it works until it doesn’t” design, and if we start where the talk suggests, we will end up with a grand rewrite once we reach a certain threshold. We already know how to build these systems, and we could start with the correct design, but this would require us to familiarize ourselves with persistence, fault tolerance, scaling, consistency, and security concerns, which the talk attempts to convince us is unnecessary via statements like “you want this layer of data synchronization”. No, actually, you don’t.

Next, the talk takes on fault tolerance by arguing that checking for errors (as a result of message passing) does not belong in our application code. This only holds until we have to receive confirmation that the command (a message) the application sent was actually processed, and at that point, we will find ourselves checking for errors. Additionally, because we are working on a distributed system, sometimes the simplest way of correcting an error will be through user action and not some form of automatic retry. This is a property of open systems, and to build those, we cannot assume the abstraction of Data Synchronization.

When the talk addresses security, I cannot fault this stance, because web application security is broken across the industry. We have abused the Access Control List (ACL) model to death and most have forgotten about (or never thought to look for) a better alternative which is Object Capabilities. It is, however, a complex topic and a subject for another post. If you are interested you can check out http://www.erights.org/elib/capability/index.html, which is a great repository on the subject. To demonstrate that this is not some fringe technology, you could think about what OAuth2 (http://tools.ietf.org/html/rfc6749) and Bearer Token usage (http://tools.ietf.org/html/rfc6750) give you and how one would implement a client on top of an API using Bearer Tokens instead of a classic RESTful web API.

The talk sums up the above examples by stating that this is why we shouldn’t concern ourselves with Message Passing. I argue that due to what I outlined above we should absolutely concern ourselves with Message Passing and other distributed concerns. Otherwise, we’ll end up implementing systems that will break from poor design.

The talk then demonstrates the “advantages” of Data Synchronization, and the first slide asks “why not directly store state?”. For an answer, I refer you to Nathan Marz’s Lambda Architecture talk (http://www.infoq.com/presentations/Complexity-Big-Data) and let you answer that for yourself instead of accepting Anant’s answer. Additionally, there are multiple advantages realized using Event Sourcing and CQRS (see http://en.wikipedia.org/wiki/Domain-driven_design) that one gives up by “directly storing state.” We are then shown how distributed counters are “inefficient” by demonstration of a distributed counter implemented the wrong way. For how distributed counters can be done well, take a look at Distributed Counters in Cassandra (http://www.datastax.com/wp-content/uploads/2011/07/cassandra_sf_counters.pdf), or consider the problem of probabilistic counting described in Big Data Counting: How to Count a Billion Distinct Objects Using Only 1.5 KB Of Memory (http://highscalability.com/blog/2012/4/5/big-data-counting-how-to-count-a-billion-distinct-objects-us.html). It’s not an easy problem.

Anant leaves us with the question “where is the Angular or Ember of data?” I would say that perhaps Joyent’s Manta (http://www.joyent.com/products/manta) is a good start. However, Data Synchronization, as presented, is definitely not the answer.


I would like to thank Scott Bellware for taking the time to review this post and suggesting improvements.

Implementing Unforgeable Actor Addresses

Actor Model of Computation is a great example of a decentralized system. However, one of the features lacking in most actor model implementations I have come across is the unforgeability of actor addresses. Recently, I had the opportunity to write about the issue on Dale Schumacher’s “It’s Actors All The Way Down” blog:  “Towards a Universal Implementation of Unforgeable Actor Addresses”.