
A simple explanation of SSL/TLS

What is SSL/TLS?

SSL, which stands for Secure Sockets Layer, was originally created by Netscape Communications to secure eCommerce applications. Version 1.0 was never released but version 2.0 was released in 1995 followed by version 3.0 in 1996. SSL was adopted as an open standard by the Internet Engineering Task Force and renamed to TLS, which stands for Transport Layer Security. TLS version 1.0 was released in 1999. Version 1.1 of TLS was released in 2006 followed by version 1.2 in 2008.

In the interest of full disclosure, Netscape Communications was purchased by AOL Inc. in 1999. I was employed by AOL Inc. from September 2002 through December 2003. I don’t feel this necessarily biases me with respect to this discussion, but you may choose to interpret this information however you wish.

SSL and TLS are protocols for secure communication over an insecure medium by two parties who have not necessarily previously interacted. Typically SSL/TLS is used over TCP connections, though a variant, DTLS, exists for use over UDP as well. SSL/TLS is the mechanism used to secure the HTTPS protocol.

SSL and TLS provide security by way of three key mechanisms:

  • Endpoint Authentication
  • Message Integrity
  • Secrecy

Endpoint Authentication

Endpoint Authentication means that while communicating electronically with a party at the other end of a communication channel, I can be certain that the party I’m communicating with is actually the party I think I’m communicating with. In SSL/TLS, this is provided by the use of server and optionally client digital certificates as well as a trust network in the form of a database of Certificate Authority (CA) certificates.

Message Integrity

Message Integrity means that while communicating electronically with a party at the other end of a communication channel, I can be certain that the content of the message I receive has not been altered while in transit. The data I receive is exactly what the sender sent. Message Integrity is provided by the inclusion of a MAC (Message Authentication Code) with each message.

Secrecy

Most people think only of Secrecy when they think of SSL/TLS. Secrecy means that while communicating electronically with a party at the other end of a communication channel, other parties observing my conversation cannot know the content. Secrecy is provided by SSL/TLS through the use of symmetric encryption.

Why do we need SSL/TLS?

The reason we need SSL/TLS is because we often need to communicate over insecure media. The Internet is an insecure medium. When you send network packets from your computer to another computer via network infrastructure not directly under your control, you can’t be certain what exactly happens to them in between. You can’t even be certain they will ever arrive at their intended destination. “Network infrastructure not directly under your control” can also mean the space all around you if you happen to be using a wireless network at a Starbucks, for example.

Unsecured communications are vulnerable to both active and passive attacks. Man-in-the-middle attacks can be either active or passive; Firesheep would be an example of a passive attack. The bottom line is that when you send data over an insecure medium, you don’t know where that data is going or who is going to intercept it, and when you receive data over an insecure medium, you can’t be sure where it came from or who sent it. An unsecured communication channel over an insecure network cannot be trusted.

How does SSL/TLS work?

I won’t go too far into details, but here’s a set of illustrations that show both the problems and how SSL/TLS solves them. The illustrations will chronicle a riveting love story between Steve and Sally with Joe passing messages between them. Steve and Sally will initially just pass messages to Joe, trusting him to deliver them unaltered to their true love. We’ll see what Joe can do to the messages in transit and we’ll figure out ways to keep Joe from being such a jerk. In this way, we’ll derive the fundamentals that SSL/TLS uses to provide Endpoint Authentication, Message Integrity, and Secrecy.

Let’s meet the cast in our story

Steve is our manly hero who is in love with Sally.

Sally is our beautiful heroine who is in love with Steve.

Joe is the villain who is jealous of Steve and Sally’s love and who is also greedy and completely lacking integrity.

He said, She said

Scenario: Steve wants to send a love letter to Sally, but he doesn’t know exactly where Sally lives, so he writes it and gives it to Joe because Joe says he knows where Sally lives.

Steve → Joe: “I love you, marry me”
Joe → Sally: “Go to hell”
Sally → Joe: “I hate you, asshole”
Joe → Steve: “Send me money”

The problem is that Joe, acting as the middle man, has altered the communication before passing it on. What Steve and Sally need is a way of protecting their messages from Joe.

Symmetric Key Cryptography

Symmetric Key Cryptography is a form of cryptography where the same key used to encrypt a message can also be used to decrypt the encrypted message.

In cryptography, we speak of data as being in plaintext or cyphertext. Plaintext is unencrypted whereas cyphertext has been obscured using some cryptographic algorithm.

A symmetric key encryption algorithm is an invertible function that takes plaintext and a key as input and produces cyphertext. The inverse of the function takes the same key and the cyphertext and produces the original plaintext.

Encrypt(key, plaintext) -> cyphertext
Decrypt(key, cyphertext) -> plaintext

To be useful for exchanging information securely between two parties, the key must be known to both of them. This is known as a shared secret.
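
To make this concrete, here’s a minimal sketch of symmetric encryption using Erlang’s crypto module. The module name, the choice of AES-256 in CTR mode, and the messages are purely illustrative (crypto:crypto_one_time/5 needs a reasonably recent Erlang/OTP, 22 or later); this is the general idea, not the actual TLS record protocol.

-module(symmetric_demo).
-export([demo/0]).

demo() ->
    Key = crypto:strong_rand_bytes(32),   %% the shared secret (256 bits)
    IV  = crypto:strong_rand_bytes(16),   %% per-message nonce, sent along with the cyphertext
    Plaintext = <<"I love you, marry me">>,
    %% Encrypt(key, plaintext) -> cyphertext
    Cyphertext = crypto:crypto_one_time(aes_256_ctr, Key, IV, Plaintext, true),
    %% Decrypt(key, cyphertext) -> plaintext; the same key recovers the original
    Plaintext = crypto:crypto_one_time(aes_256_ctr, Key, IV, Cyphertext, false),
    ok.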

Steve Uses Encryption

Scenario: Steve chooses a key and uses it to encrypt his love letter to Sally. Steve gives the encrypted letter to Joe to give to Sally.

Steve → Joe: “@#$%!”
Joe → Sally: “@#$%!”
Joe: “???”
Sally: “???”

Now Joe can’t read Steve’s letter. Unfortunately, Sally can’t read it either. Steve needs to share the key he chose to encrypt his love letter with Sally.

Steve Shares his Key

Scenario: Steve chooses a key and uses it to encrypt his love letter to Sally. Steve gives the encrypted letter and the key to Joe to give to Sally.

Steve → Joe: “I love you, marry me + key”
Joe → Sally: “Go to hell + key”
Sally → Joe: “I hate you, asshole”
Joe → Steve: “Send me money”

Now Sally can read Steve’s letter. But now Joe can read Steve’s letter too. Worse, Joe can choose a different key, change the message, encrypt it with his own key, and give Sally the altered letter along with his key. We’re back where we started. Steve needs a way to get the key he used to encrypt his letter to Sally without Joe being able to access it.

Public Key Cryptography

Public Key Cryptography works similarly to symmetric key cryptography except that there are a pair of keys rather than just one. The pair of keys consists of a public key and a private key. The public key can be used to encrypt data and the private key can be used to decrypt it. The public key can be freely shared while the private key must be kept secret. If you generate a public/private key pair, anyone who has your public key can encrypt a message and send it to you and it doesn’t matter who sees the encrypted message because nobody can decrypt it without the private key, which only you have.

Encrypt(key_pub, plaintext) -> cyphertext
Decrypt(key_priv, cyphertext) -> plaintext

This method of encryption totally avoids the need for a shared secret. Our hero and heroine could each choose a public/private key pair, exchange the public keys through Joe, and just encrypt all their love letters with the other’s public key and send the encrypted letters through Joe as well. The problem is that public key cryptography algorithms are orders of magnitude slower than symmetric key cryptography algorithms. Fortunately, we can get the best of both worlds. All we need to do is choose a symmetric key and share it securely by encrypting it with a public key. Now we can exchange a shared secret through Joe and Joe can’t read it.

Additionally, Public Key Cryptography can be used in the reverse direction to produce a digital signature. This is done by encrypting a digest of a message using the private key. Anyone with the public key can decrypt it and compare the result to the digest of the message. In this way, anyone can verify that the signature was generated by the holder of the private key.

Sign(key_priv, plaintext) -> signature
Verify(key_pub, plaintext, signature) -> match/no-match
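
Here’s a hedged sketch of both operations using Erlang’s public_key module. RSA, the 2048-bit key size, and the module layout are my own illustrative choices (public_key:generate_key({rsa, ...}) requires a fairly recent OTP release); real SSL/TLS negotiates its key exchange and signature algorithms per cipher suite.

-module(pubkey_demo).
-export([demo/0]).

-include_lib("public_key/include/public_key.hrl").

demo() ->
    %% Generate a throwaway RSA key pair and derive the public half from it.
    PrivKey = public_key:generate_key({rsa, 2048, 65537}),
    #'RSAPrivateKey'{modulus = N, publicExponent = E} = PrivKey,
    PubKey = #'RSAPublicKey'{modulus = N, publicExponent = E},

    %% Encrypt(key_pub, plaintext) -> cyphertext, Decrypt(key_priv, cyphertext) -> plaintext
    Secret = <<"a freshly chosen symmetric key">>,
    Cyphertext = public_key:encrypt_public(Secret, PubKey),
    Secret = public_key:decrypt_private(Cyphertext, PrivKey),

    %% Sign(key_priv, plaintext) -> signature, Verify(key_pub, plaintext, signature) -> match/no-match
    Message = <<"I love you, marry me">>,
    Signature = public_key:sign(Message, sha256, PrivKey),
    true = public_key:verify(Message, sha256, Signature, PubKey),
    ok.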

Steve Asks For Sally’s Public Key

Scenario: Steve wants Sally’s public key so he can send her an encrypted love letter and the key he used to encrypt it. Joe gives Steve Joe’s public key. Steve encrypts his love letter with his symmetric key and he encrypts the symmetric key with Joe’s public key and he gives both to Joe. Joe now can decrypt the symmetric key and then he can decrypt the love letter.

Steve → Joe: “Send me your public key”
Joe → Steve: “Here you go” (Joe’s own public key; Sally never sees the request)

Steve → Joe: “I love you, marry me + key”
Joe → Sally: “Go to hell + key”
Sally → Joe: “I hate you, asshole”
Joe → Steve: “Send me money”

The problem now is that Steve thinks he has gotten Sally’s public key but he’s really gotten Joe’s public key. He thinks he’s sending a message that only Sally can read but he’s wrong. Steve needs a way to know that he’s gotten Sally’s public key rather than Joe’s.

Trusted Third Party

Meet Sam, a new cast member in our story. Steve has met Sam and has gotten Sam’s public key. Sally has met Sam and has gotten Sam’s public key. Joe doesn’t like Sam because, unlike Joe, Sam is very trustworthy.

Sam is able to digitally sign Steve and Sally’s public keys. The way this is really done is by packaging up the public key with some metadata about it in something called a certificate. Importantly, the certificate contains the public key and the name of the owner.
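
As a rough illustration of what checking a certificate looks like in code, here’s a hedged sketch using Erlang’s public_key module. The function name and file path are made up; pem_decode/1, pkix_decode_cert/2, and pkix_verify/2 are the relevant library calls.

%% Decode a PEM-encoded certificate and check its signature against the
%% public key of the CA that supposedly signed it.
verify_certificate(CertPath, CaPublicKey) ->
    {ok, Pem} = file:read_file(CertPath),
    [{'Certificate', Der, not_encrypted} | _] = public_key:pem_decode(Pem),
    OtpCert = public_key:pkix_decode_cert(Der, otp),  %% subject name, public key, validity, ...
    case public_key:pkix_verify(Der, CaPublicKey) of
        true  -> {ok, OtpCert};
        false -> {error, bad_signature}
    end.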

Scenario: Sally generates a certificate with her name and public key and asks Sam to sign it. Sam signs it and returns the signed version to Sally. Steve now asks Sally for her certificate through Joe and she gives it to him, also through Joe. Steve can now encrypt his love letter with a symmetric key and encrypt the symmetric key with Sally’s public key and send both to Sally through Joe. Joe can’t read the messages in transit.

Steve → Joe: “Send me your public key”
Joe → Sally: “Send me your public key”
Sally → Joe: “Here you go”
Joe → Steve: “Here you go”

Steve → Joe: “I love you, marry me + key”
Joe → Sally: “Go to hell + key”
Sally → Joe: “I hate you, asshole”
Joe → Steve: “???”

The problem now is that while Joe can’t know the content of the message Steve is sending to Sally, he can still choose his own symmetric key, send any message he wants to Sally, encrypt it with his symmetric key, encrypt his symmetric key with Sally’s public key, and send his own encrypted message and encrypted key to Sally. Sally won’t know that Joe altered the message in transit. Joe won’t be able to send a message to Steve using the symmetric key that Steve chose, because that key was encrypted with Sally’s public key and Joe doesn’t have Sally’s private key. Steve will probably know something’s wrong, but Sally won’t.

Message Authentication Codes

The final piece of the puzzle comes in the form of Message Authentication Codes, or MACs. A MAC is a hash of the combination of a message and a shared secret key. When two parties are communicating over an insecure medium and both have access to a shared key, the sender can generate a MAC of the message and send both the message and the MAC. The recipient can generate the MAC of the message using the same key and know if the message has been altered in transit based on whether the calculated MAC matches the one received with the message.
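
A minimal sketch of the idea in Erlang, using HMAC-SHA256 from the crypto module (crypto:mac/4 exists in OTP 22.1 and later; older releases have crypto:hmac/3). TLS negotiates its own MAC or AEAD construction per cipher suite, so treat this as the general principle only.

-module(mac_demo).
-export([seal/2, check/3]).

%% Sender: compute a MAC over the message with the shared key and send both.
seal(Key, Message) ->
    {Message, crypto:mac(hmac, sha256, Key, Message)}.

%% Recipient: recompute the MAC with the same key and compare.
check(Key, Message, Mac) ->
    case crypto:mac(hmac, sha256, Key, Message) of
        Mac -> ok;                  %% matches: the message was not altered
        _   -> {error, tampered}    %% mismatch: the message or MAC changed in transit
    end.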

Using a MAC, Steve and Sally can exchange a public key, a shared secret, and any number of messages, all through Joe without Joe being able to read the messages or alter them.

Steve → Joe: “Send me your public key”
Joe → Sally: “Send me your public key”
Sally → Joe: “Here you go”
Joe → Steve: “Here you go”

Steve → Joe: “I love you, marry me + key + MAC”
Joe → Sally: “I love you, marry me + key + MAC”
Sally → Joe: “I will! + MAC”
Joe → Steve: “I will! + MAC”

Now our hero and heroine can live happily ever after…

A Less Contrived Explanation

I’ve taken a lot of liberties with the details of the protocol in the illustrations above. In actuality, an SSL/TLS conversation is quite a bit more complicated. Nevertheless, I’ve covered all the principles involved. To give these illustrations more context, Steve could be you, Sally could be the website of your bank, and Joe could be any number of switches or proxies operating between your computer and your bank’s computers and that may or may not be controlled by unscrupulous people wishing to drain your bank account.

Who’s Sam? Sam represents an entity you may not have heard of. Sam is a Certificate Authority, or CA. A CA’s certificate is a certificate that is allowed to digitally sign other certificates. A CA certificate may or may not be signed by some other CA; if not, it’s known as a root certificate. CA certificates are distributed in a trust database, which is basically just a list of trusted root certificates, although it may contain some non-root certificates as well. Your browser may ship with its own trust database (Firefox contains its own) or it may use the trust database of your OS (Chrome and Internet Explorer do this).

Who controls the CAs? CAs are owned by companies or other organizations that are presumably trustworthy. Who decides what CA owners are trustworthy? That’s done by companies that make browsers, like Mozilla and Microsoft. Ultimately, it’s up to you to decide whether you trust Mozilla and Microsoft to make these decisions for you. Regardless of whether your browser uses its own trust database or the trust database of your OS, it likely gives you the opportunity to manage your trust database for yourself. The trouble is that most people wouldn’t know how to do this, much less how to make good decisions. Most people just have to trust Mozilla and Microsoft.

I won’t go too deep into the details, but let’s just take a minute to examine how the SSL/TLS protocol really works. The following describes what the most basic SSL/TLS conversation looks like. It can get a lot more complicated than this, but now that you know the principles from the earlier illustrations, all of this should make a lot of sense.

First, a client will contact a server to establish a TCP connection. Next, the client will send a Client Hello message which contains the protocol version the client wants to use, some random data (the client random), and a list of cypher suites the client is willing to use. The Client Hello message is sent in plaintext so anyone observing the conversation will see the client random value. That’s OK because it’s just going to be combined with other random data later.

If the server accepts the protocol version and cypher suites offered by the client, the server will respond with a Server Hello message followed by a Server Certificate message and a Server Hello Done message. The Server Hello message contains the negotiated protocol version and cypher suite and the server’s own random data (the server random). The important thing to note here is that the server sends its certificate in the Server Certificate message, which should have previously been digitally signed by a CA that the client trusts.

Next, the client will verify that the certificate is owned by the organization that the client intended to contact, typically by comparing the Common Name field of the X.509 certificate to the domain name of the website it was trying to connect to, and by validating that the certificate was signed by a CA that it trusts and that the digital signature is valid. The client will then send a Client Key Exchange message that contains what is called a premaster secret, which the client generates. The premaster secret is encrypted using the public key of the server provided to the client in the server’s certificate. Only the server that the client is intending to communicate with should have the private key corresponding to the public key in the server’s certificate, so only that server should be able to decrypt the premaster secret.

Both the client and server will separately derive what’s called the master secret from the premaster secret and the client and server random data. The master secret is used by both client and server to derive the symmetric keys and hash keys used for the remainder of the conversation. While these are all shared, it turns out that client and server use separate symmetric keys to encrypt the data they send. The keys used to calculate the message MACs are also separate for client and server as well as separate from the encryption keys.

Next, the client will send a Change Cipher Spec message, which signals that everything the client sends from this point on will be protected using the newly negotiated cypher suite and keys, followed by a Client Finished message. The Client Finished message contains a hash of the entire SSL/TLS handshake as seen by the client using the client hash key. The server will verify that the hash generated by the client matches the hash that the server generates for the conversation using the client’s hash key. If the hashes don’t match then the server knows that the handshake has been manipulated in transit.

If the handshake hash from the client is validated by the server, the server will send a Change Cipher Spec message followed by a Server Finished message. The Server Finished message contains a hash of the entire handshake conversation as seen by the server and hashed using the hash key of the server. The client will generate the hash of the handshake conversation using the server’s hash key and compare it to the hash received from the server. If it doesn’t match then the client will know the handshake was manipulated in transit.

At this point the handshake is completed and application data can be sent and received according to whatever application protocol is in use. Any message that is sent is encrypted using the symmetric key of the sender and coupled with a MAC generated from the message combined with the hash key of the sender. Upon receipt, the message is decrypted using the symmetric key of the sender and the MAC is validated using the hash key of the sender. Remember that both sides have their own symmetric and hash keys but all four keys are possessed by both sides.

The client can validate that it is communicating with the intended server because it validated the server’s certificate. Both client and server can be certain of the integrity of the data they receive by verifying the MAC that accompanies each message. All data sent and received following the handshake is encrypted using keys derived from the premaster secret which was shared by the client with the server using the server’s public key. These components have, respectively, provided the SSL/TLS conversation with endpoint authentication, message integrity, and secrecy.
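
For completeness, here’s roughly what driving such a handshake looks like from Erlang using the ssl application. The host name is just an example and the cacertfile path is an assumption about where your OS keeps its trust database, but verify_peer is what turns on the certificate checks described above.

%% Open a TLS connection, verifying the server certificate against a CA
%% bundle, then send and receive application data over it.
connect_example() ->
    {ok, _} = application:ensure_all_started(ssl),
    {ok, Socket} = ssl:connect("example.com", 443,
                               [binary,
                                {active, false},
                                {verify, verify_peer},
                                {cacertfile, "/etc/ssl/certs/ca-certificates.crt"},
                                {server_name_indication, "example.com"}]),
    %% The handshake described above has completed; application data can now flow.
    ok = ssl:send(Socket, <<"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n">>),
    {ok, Response} = ssl:recv(Socket, 0),
    ok = ssl:close(Socket),
    Response.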

But wait, there are two endpoints involved in any TCP conversation and only one of them was authenticated with a certificate. In the example I’ve just covered, only the server provided a certificate. In fact, SSL/TLS has a facility for the server to demand during the handshake that the client also provide a certificate. While the authenticity of the server is always verified since the Server Certificate message is a required part of the handshake, the request by the server for a client certificate is optional. Most of the time, this facility is not used. This is because clients are typically authenticated using a password at the application layer after the SSL/TLS conversation is already established. If everyone could be authenticated using certificates all the time, the world would be a better place. Unfortunately, key management is somewhat complicated. For this reason, client certificates are most often used for server to server communication.


Erlang n-squared gen_server

One of the nice things about Erlang is its amazing performance. However, in certain circumstances its performance is surprisingly poor.

As I pointed out in my previous post, work is done in an Erlang application by Erlang processes and each process has an input queue of messages.

The standard way of using Erlang is with OTP. OTP is an acronym that stands for Open Telecom Platform. OTP is a framework and set of support libraries that handle general problems like spawning supervised processes and building whole applications with them as well as a wide variety of general purpose support modules.

Typically, the processes in an OTP application that do most of the work will implement the gen_server behavior. In general, a gen_server process will wait for a message to arrive in its message queue and then process it before waiting for another. If two or more messages arrive in the process’s message queue while it is handling the current one, the process will immediately begin processing the next one at the front of the queue when it finishes.
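
For readers who haven’t used it, here’s a minimal, made-up gen_server module. The point is simply that each callback handles exactly one message taken from the front of the queue before control returns to the gen_server loop.

-module(counter_server).
-behaviour(gen_server).

-export([start_link/0, increment/0, count/0]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

increment() ->
    gen_server:cast(?MODULE, increment).   %% asynchronous message to the server

count() ->
    gen_server:call(?MODULE, count).       %% synchronous call (more on this below)

init([]) ->
    {ok, 0}.

%% Each callback below processes one message pulled from the front of the queue.
handle_call(count, _From, N) ->
    {reply, N, N}.

handle_cast(increment, N) ->
    {noreply, N + 1}.

handle_info(_Other, N) ->
    {noreply, N}.

terminate(_Reason, _N) ->
    ok.

code_change(_OldVsn, N, _Extra) ->
    {ok, N}.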

Pulling the next message from the front of the queue is very fast. More importantly, it’s an O(1) time complexity operation with respect to the number of messages waiting to be processed. It’s done with syntax that looks like this:

receive
    Message -> handle_message(Message)
end

In this example, the first message in the queue will be removed from the queue, bound to the variable Message, and passed as a parameter to the function handle_message/1.

However, Erlang has a facility that allows messages to be processed out of order. This is done using an Erlang mechanism called selective receive. Selective receive will scan the message queue of a process from the front to the back looking for a message matching a particular pattern. If no match is found, a selective receive will wait for a matching message to be added to the end. Selective receive is done using syntax like this:

receive
    some_message -> got_the_message_i_was_looking_for()
end

In this code snippet, the message queue will be scanned for a message matching the single atom some_message, waiting for it if necessary, and when found, it will be removed from the queue and the function got_the_message_i_was_looking_for/0 will be called.

The problem here is that while pulling a message from the front of the queue has O(1) time complexity, a selective receive has O(N) time complexity with respect to the number of messages waiting in the queue to be processed. This is fine if your gen_server process doesn’t handle a lot of messages or doesn’t make use of selective receive. However, if your process is a high throughput server process, the use of selective receive can be a big problem.

The most common Erlang facility that makes use of selective receive is gen_server:call/2 or gen_server:call/3. These functions make synchronous calls to other gen_server processes. The way this is implemented is by sending a message to the other process (which is normally asynchronous) and then waiting for a particular pattern of response message using selective receive. Regardless of the time complexity of selective receive, it’s generally not advisable to wait synchronously for work to be done in a high throughput server process, so this usually isn’t an issue.
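
Conceptually, a synchronous call built on top of asynchronous sends looks something like the sketch below. This is a simplification, not the actual OTP implementation, but it shows where the selective receive comes from.

%% Send the request asynchronously, then selectively receive the one reply
%% tagged with our unique reference, skipping everything else in the queue.
call(Server, Request, Timeout) ->
    Ref = erlang:monitor(process, Server),
    Server ! {call, {self(), Ref}, Request},
    receive
        {reply, Ref, Reply} ->
            erlang:demonitor(Ref, [flush]),
            Reply;
        {'DOWN', Ref, process, Server, Reason} ->
            exit(Reason)
    after Timeout ->
        erlang:demonitor(Ref, [flush]),
        exit(timeout)
    end.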

The real problem is typically more subtle. This is because a subset of OTP library calls are implemented in terms of selective receive. For example, the OTP library function for sending a packet to a UDP port is implemented by sending a message with the payload to an Erlang port followed by a selective receive to get the result. If, for example, your application sends UDP packets to localhost port 514 to log messages to syslog, you might assume that you could do that directly from a high throughput server process. If you were to do that, your application will probably work fine most of the time. However, if your application ever has a spike in workload or a stall in processing that causes your high throughput server process to get a bit behind, it may take a long time to catch up. If an Erlang process were to have N messages in its message queue and the processing of each message required sending a UDP packet then the O(N) nature of selective receive means that processing N messages has O(N^2) time complexity.

If an Erlang process is continuing to receive more messages at a constant rate, it’s possible for it to get far enough behind that it takes more time to process a message than the average time between message arrivals. In this case, it will never get caught up. Since each message in a process’s message queue takes memory, the accumulation of messages will cause the Erlang VM to eventually use all available RAM, followed by increasingly severe swapping.

Here’s a simple demonstration of the problem. The following module definition will send N messages to itself, handle each by sending a UDP packet, and print the length of time it takes to drain its message queue.

-module(time_udp).

-export([start/1]).

start(N) ->
    {ok, Socket} = gen_udp:open(0),
    Seq = lists:seq(1, N),
    %% Fill this process's own message queue with N messages.
    lists:foreach(fun(_I) -> self() ! message end, Seq),
    Start = erlang:now(),
    %% Drain the queue. Each gen_udp:send/4 performs a selective receive
    %% internally, scanning past every message still waiting in the queue.
    lists:foreach(fun(_I) ->
            receive _Message -> gen_udp:send(Socket, {127, 0, 0, 1}, 1234, <<"message">>) end
        end, Seq),
    End = erlang:now(),
    io:format("processed ~p messages in ~p seconds~n", [N, time_diff(Start, End)]).

time_diff({StartMega, StartSecs, StartMicro}, {EndMega, EndSecs, EndMicro}) ->
    ((EndMega - StartMega) * 1000000.0) + (EndSecs - StartSecs) + ((EndMicro - StartMicro) / 1000000.0).

Now calling start/1 with increasing values of N will illustrate the quadratic relationship between N and the time it takes to send N UDP packets. If the relationship were linear, doubling N should roughly double the time. Instead, as the following output shows, doubling N roughly multiplies the time by 4 which is exactly what would be expected if the relationship were quadratic.

$ erl
Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Eshell V5.10.4  (abort with ^G)
1> time_udp:start(10000).
processed 10000 messages in 0.343503 seconds
ok
2> time_udp:start(20000).
processed 20000 messages in 1.222438 seconds
ok
3> time_udp:start(40000).
processed 40000 messages in 4.574205 seconds
ok
4> time_udp:start(80000).
processed 80000 messages in 17.498623 seconds
ok
5>

If sending a message to a UDP socket were implemented with an Erlang built-in function rather than a selective receive, this would not be a problem. Without writing your own native Erlang module, there is no way to avoid n-squared time complexity when sending UDP packets while using gen_server.*

So how do you write an Erlang application that doesn’t suffer from this problem and still use OTP? The answer is gen_server2. gen_server2 is actually a fork of the OTP gen_server module but with some significant modifications. The objective of the modifications is to minimize the time spent scanning the message queue when doing a selective receive. It accomplishes this by first draining the message queue into local memory before handling the first message in the queue the same way that gen_server would have done. By doing this, the cost of scanning the message queue when doing a selective receive is limited to any messages that have arrived in the message queue since handling of the current message started.

While gen_server2 can solve the n-squared problem caused by a large message backlog in the gen_server processes you’ve written for your application, it will not eliminate the problem entirely for any OTP application. The reason is that the same problem exists in all supervisor processes using the OTP supervisor behavior. For very busy supervisors, the problem can be severe since the protocol for a supervisor starting a new child process involves a selective receive in the supervisor to get the result of the child process’s initialization. Additionally, the Erlang VM will automatically start a number of support processes that are implemented using gen_server.

One such system process that is easy to overload is the error_logger process. Generally applications don’t block waiting for the error_logger process to do work, so an accumulation of messages in the error_logger process just causes increased memory and CPU utilization until it catches up (assuming it ever does).

You might think that if your application doesn’t send UDP packets, it’s safe from this problem (other than the supervisor and error_logger cases) so you don’t need gen_server2. While I’ve confirmed that this problem exists within the gen_udp implementation, I strongly suspect that the same issue exists within other OTP library calls, though I’ve not specifically identified any. Since gen_server2 behaves identically to gen_server under normal circumstances and performs strictly better under abnormal but not necessarily unusual circumstances, I strongly recommend using gen_server2 rather than gen_server in your Erlang applications.

* It is possible to not use the gen_udp module to send UDP messages and handle the result message asynchronously, avoiding the selective receive performed in the gen_udp implementation. However, doing so would eliminate the encapsulation within the gen_udp implementation of the format of the result message. It’s possible to do this, but not necessarily a good idea.


Some surprising Erlang behavior

Erlang is in many ways a fantastic language. Its syntax is a little foreign, at first, to someone like myself coming from a background in C and C++. The language and the runtime environment have some really nice qualities, though. When I began working with it, I quickly acquired an appreciation for it. I won’t get too deep into the details of the language and why I like it, but I will start off with a couple of key features relevant to what follows.

First, work is done in an Erlang application by Erlang processes. Each process is a lightweight thread and has an identifier called a pid. Each process has an input queue of messages. An Erlang process generally operates by receiving a message from its message queue, doing some work to process the message, and then waiting for another message to arrive in its queue. Some of the work that a process may do might involve generating and sending messages to another process using the other process’ pid. Under normal circumstances, sending a message from one process to another process is asynchronous, meaning the send operation does not block waiting for the receiving process to handle the message or even necessarily for it to be added to the queue of the receiving process.
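
As a tiny illustration of that model, here’s the sort of thing you might type in an Erlang shell (purely illustrative):

Pid = spawn(fun() ->
          %% this process waits for one message in its queue, prints it, and exits
          receive Msg -> io:format("worker got ~p~n", [Msg]) end
      end),
%% '!' is asynchronous: it returns immediately, without waiting for the
%% receiver to handle (or even dequeue) the message
Pid ! hello.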

Second, an Erlang node is an instance of the Erlang virtual machine, an OS-level process. Erlang nodes can be easily joined together so that a single Erlang application can span many physical hosts. Once a node has joined an Erlang cluster, it becomes trivial for processes on remote hosts to communicate with one another. Sending a message to a remote process is done with the pid of the remote process, just like a local process. This allows for Location Transparency, one of the nice features of Erlang.

Erlang has gotten a reputation as a platform that is very stable. Joe Armstrong, one of the original authors of Erlang, has famously claimed that the Ericsson AXD301 switch, developed using Erlang, achieved NINE nines of reliability.

Having heard this, I was extremely surprised to identify a case where an Erlang application performs not just poorly but literally comes to a screeching halt. The problem occurs when one of the nodes in an Erlang cluster suddenly goes network silent. This can occur for a variety of reasons. For example, the node may have kernel panicked, it may have started swapping heavily, or a network cable connecting one of the physical hosts in the cluster may have gotten unplugged. When this condition occurs, messages sent to processes on the node which has gone dark are buffered up to a limit, but once the buffer fills up, sending a message to a process on that node goes from being asynchronous to being synchronous. Erlang processes not sending messages to the failed node still continue to do work as normal, but any process sending a message to the failed node will halt.

The Erlang runtime will monitor the other nodes to which the local node is attached. If a host where one of the nodes is running goes network silent then after some period of time (defaulting to about 1 minute), the other nodes will decide that the network silent node is down. The length of time before a non-responsive node is considered down is tune-able using an Erlang kernel parameter, net_ticktime. You might think that the processes waiting to send messages to processes on the down node would get unblocked when the target node is considered down, but that (mostly) doesn’t happen. It turns out that once the target node is considered down, blocked senders will get unblocked once every 7 seconds, likely tune-able using the Erlang kernel parameter net_setuptime.
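
Both settings live in the kernel application environment. Here’s a sketch of what tuning them might look like in a sys.config; the values are arbitrary examples, not recommendations.

[
 {kernel, [
   {net_ticktime, 20},    %% roughly how long a silent node is tolerated before being considered down (default 60)
   {net_setuptime, 3}     %% seconds allowed for setting up a connection to another node (default 7)
 ]}
].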

I’ve come up with a fairly easy way to demonstrate this problem given two Linux VMs. Let’s call them faucet-vm and sink-vm. Both need Erlang installed.

On the sink-vm host, create sink.erl with these contents:

-module(sink).
-export([start/0, entry/0]).

start() ->
    erlang:register(?MODULE, erlang:spawn(?MODULE, entry, [])).

entry() ->
    loop(0).

loop(X) ->
    receive ping ->
        io:format("~p: ping ~p~n", [timeasfloat(), X]),
        loop(X + 1)
    end.

timeasfloat() ->
    {Mega, Sec, Micro} = os:timestamp(),
    ((Mega * 1000000) + Sec + (Micro * 0.000001)).

Now compile and start the sink process:

$ erlc sink.erl
$ erl -sname sink -setcookie monster
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:4:4] [async-threads:0] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)
(sink@sink-vm)1> sink:start().
true
(sink@sink-vm)2>

On the faucet-vm host, create faucet.erl with these contents:

-module(faucet).
-export([start/0, entry/0]).

start() ->
    erlang:spawn(?MODULE, entry, []).

entry() ->
    Sink = rpc:call('sink@sink-vm', erlang, whereis, [sink]),
    loop(Sink, 0).

loop(Sink, X) ->
    io:format("~p: sending ping ~p~n", [timeasfloat(), X]),
    Sink ! ping,
    loop(Sink, X + 1).

timeasfloat() ->
    {Mega, Sec, Micro} = os:timestamp(),
    ((Mega * 1000000) + Sec + (Micro * 0.000001)).

Now compile and start the faucet process:

$ erlc faucet.erl
$ erl -sname faucet -setcookie monster
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:2:2] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)
(faucet@faucet-vm)1> faucet:start().

This spews horrific amounts of console output on both faucet-vm and sink-vm consoles. If you don’t see tons of console output on both sides, you’ve probably done something wrong.

In a separate console on sink-vm, use iptables to block all IP traffic between sink-vm and faucet-vm:

# iptables -A INPUT -s faucet-vm -j DROP && iptables -A OUTPUT -d faucet-vm -j DROP

The output of the two processes will halt shortly after the traffic between them is blocked. Which one stops first actually depends on whether the sink process is able to keep up with the faucet process before the traffic is blocked. About 60 seconds after halting, each node should report that the other node is DOWN. The number of messages the faucet claims to have sent will be higher than the number the sink has received. The difference will be the number of messages the faucet has buffered to send to the sink before blocking. After the faucet reports that the sink is DOWN, it will report one sent message every 7 seconds.

Here’s the tail of the output of the sink process when I tried it:

1440398109.640845: ping 45404
1440398109.640864: ping 45405
1440398109.640881: ping 45406
1440398109.640899: ping 45407
1440398109.640917: ping 45408
1440398109.640935: ping 45409
1440398109.640953: ping 45410
1440398109.640971: ping 45411
1440398109.640988: ping 45412
1440398109.641006: ping 45413
1440398109.641022: ping 45414
1440398109.64104: ping 45415
1440398109.641057: ping 45416
(sink@sink-vm)2>
=ERROR REPORT==== 23-Aug-2015::23:36:03 ===
** Node faucet@faucet-vm not responding **
** Removing (timedout) connection **

And here’s the tail of the output of the faucet process when I tried it:

1440398099.86504: sending ping 63339
1440398099.865061: sending ping 63340
1440398099.865107: sending ping 63341
1440398099.865159: sending ping 63342
1440398099.865199: sending ping 63343
1440398099.865224: sending ping 63344
1440398099.865246: sending ping 63345
1440398099.865293: sending ping 63346
1440398099.865328: sending ping 63347
1440398099.865364: sending ping 63348
1440398157.560028: sending ping 63349
(faucet@faucet-vm)2>
=ERROR REPORT==== 23-Aug-2015::23:35:57 ===
** Node 'sink@sink-vm' not responding **
** Removing (timedout) connection **
1440398164.566233: sending ping 63350
1440398171.574962: sending ping 63351
1440398178.582127: sending ping 63352
(faucet@faucet-vm)2>

The implications of this behavior are rather severe. If you have an Erlang application running on a cluster of N nodes, you can’t assume that if one of the nodes goes network silent you will only lose 1/N of the total capacity of the cluster. If there are critical processes attempting to communicate with processes on the failed node then they will eventually be unable to do any work at all, including work that is unrelated to the failed node.

I haven’t yet tried this, but it is likely possible to solve this problem by routing all messages to remote processes through a proxy process. Each remote node can have a registered local proxy and the sender would find the appropriate proxy depending on the node of the remote process using erlang:node/1. Each node proxy could be monitored by a separate process that detects when the proxy has become unresponsive due to the remote node going down. When hung, the proxy can either be restarted to keep its message queue from growing too large or it can just be killed until the remote node becomes available again. At the very least, Location Transparency is lost.
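
To make the idea concrete, here’s a rough, untested sketch of such a per-node proxy. All of the names are made up, and a real version would need supervision, monitoring of the proxy, and re-registration as nodes come and go.

-module(node_proxy).
-export([start/1, send/2, init/1]).

%% Register one local proxy process per remote node.
start(Node) ->
    erlang:register(proxy_name(Node), erlang:spawn(?MODULE, init, [Node])).

%% Senders route messages to remote pids through the proxy for the target's node.
send(Pid, Msg) when is_pid(Pid) ->
    proxy_name(erlang:node(Pid)) ! {forward, Pid, Msg},
    ok.

init(_Node) ->
    loop().

loop() ->
    receive
        {forward, Pid, Msg} ->
            Pid ! Msg,    %% if the remote node has gone dark, only the proxy blocks here
            loop()
    end.

proxy_name(Node) ->
    list_to_atom("proxy_" ++ atom_to_list(Node)).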

An astute reader might notice that my use of iptables to simulate a network silent node is imperfect. Specifically, iptables will not block ARP traffic. It’s conceivable that ARP traffic might interfere with the experiment. It turns out it does not. I’ve specifically tested this by also blocking ARP traffic using arptables and obtained identical results. I left ARP and arptables out of the example for simplicity.

Additionally, I’ve carefully experimented with various iptables rules to simulate nodes becoming unavailable in other ways with similar results. Specifically, using a REJECT rule for the INPUT chain rather than DROP results in an ICMP destination port unreachable packet being returned to the faucet-vm host but the faucet process will still block. Using REJECT --reject-with icmp-host-unreachable will generate ICMP destination host unreachable packets returned to the faucet-vm host but this will also leave the faucet process blocked.

It turns out the only way to get the sending process (mostly) unblocked is if the sending node receives a TCP RST packet from the target host. I tested this by using REJECT --reject-with tcp-reset and allowing outbound RST packets. This effectively simulates a node whose host is still running but where Erlang is not. The reason this only mostly unblocks the sending process is that the rate of sending is still significantly lower than if the target node is up. I measured the rate of pings “sent” by the faucet under these conditions to be roughly 1/10th the rate when the sink node is up and traffic is not blocked.

One might think this problem is just a bug, and that if it were a bug, it might be limited to a particular version. After all, the example output above is from R15B01. If this behavior is a bug, it’s been around for a while and it hasn’t been fixed yet. I’ve reproduced it with at least three different versions. The earliest version was R15B01 and the newest is the current version, OTP 18.0.

The conclusion to be drawn is that when designing a distributed application using Erlang, you have to be aware of this behavior and either work around it one way or another or else accept that your application is unlikely to be highly available.

Edit:

After spending more time investigating this problem and work-arounds, I can now say for sure that the solution I suggested above can be made to work. However, it seems that there is a simpler solution. Instead of using the ! operator or erlang:send/2, it is possible to completely mitigate this problem by using erlang:send/3, passing [nosuspend, noconnect] as the options parameter.

If only nosuspend is used, the problem is mitigated up until the point where the remote node is identified as DOWN. Once that happens, it’s back to one message per 7 seconds until communication with the remote host is re-established. This suggests that the 7 second delay is the timeout waiting to connect to the remote node once it has been removed from the node list. Also using noconnect avoids blocking for 7 seconds on each send after the remote node has been determined to be down, which solves the problem in question but potentially causes others. Using noconnect means that you have to have some other mechanism for re-establishing communication with remote nodes once they become available again. This isn’t necessarily challenging but it needs to be considered when designing your application.
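
Applied to the faucet example above, the loop might look something like this. handle_full_buffer/2 and handle_down_node/2 are hypothetical placeholders for whatever your application should do in those cases.

loop(Sink, X) ->
    io:format("~p: sending ping ~p~n", [timeasfloat(), X]),
    case erlang:send(Sink, ping, [nosuspend, noconnect]) of
        ok ->
            loop(Sink, X + 1);
        nosuspend ->
            %% the distribution buffer for the remote node is full
            handle_full_buffer(Sink, X);
        noconnect ->
            %% there is no connection to the remote node and we asked not to set one up
            handle_down_node(Sink, X)
    end.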