Mathematics, Number Theory

Congruence of exponents

What is the congruence of exponents? The term is, so far, rather vague, but I’d like to use it to refer to something quite specific. Allow me to explain precisely what I mean.

First, let’s take a look at the Carmichael function. The Carmichael function is also known as the least universal exponent function. It represents the period of the modular exponential cycle for a given positive integer m. The following congruence relationship describes this cycle, and is a known property of the Carmichael function:

a^b ≡ a^(b + λ(m)) (mod m)

where a is an integer, m is a positive integer, b is an integer such that b ≥ e_max of m, and where e_max of m is the largest exponent in the prime factorization of m.

It’s important to note that b can be any integer greater than or equal to e_max of m. Let’s say that c = b + λ(m). It follows that c ≥ e_max of m, since b ≥ e_max of m and λ(m) ≥ 1. Therefore:

a^c ≡ a^(c + λ(m)) (mod m)

But remember that we defined c = b + λ(m). We can combine the previous two congruences:

a^b ≡ a^(b + λ(m)) = a^c ≡ a^(c + λ(m)) (mod m)

Since c = b + λ(m), we can rewrite the last term as:

a^(c + λ(m)) = a^(b + λ(m) + λ(m)) = a^(b + (2 * λ(m)))

So now we have:

a^b ≡ a^(b + λ(m)) ≡ a^(b + (2 * λ(m))) (mod m)

If we define d = b + (2 * λ(m)), we can follow the same process to show that:

a^b ≡ a^(b + λ(m)) ≡ a^(b + (2 * λ(m))) ≡ a^(b + (3 * λ(m))) (mod m)

We can continue this process an arbitrary number of times. In this fashion, I’ll show by mathematical induction that the following is true:

a^b ≡ a^(b + (n * λ(m))) (mod m)

where a is an integer, n is a non-negative integer, m is a positive integer, and b is an integer where b ≥ e_max of m, the largest exponent in the prime factorization of m. The basis statement is this:

P(0):

a^b ≡ a^(b + (0 * λ(m))) = a^b (mod m)

which shows that the basis statement is true. The induction hypothesis is this:

P(k):

a^b ≡ a^(b + (k * λ(m))) (mod m)

where k ≥ 0. The induction step is this:

P(k + 1):

a^(b + ((k + 1) * λ(m))) = a^(b + (k * λ(m)) + λ(m))

If we substitute f for b + (k * λ(m)), we get:

a^(b + (k * λ(m)) + λ(m)) = a^(f + λ(m))

Because b ≥ e_max of m, k ≥ 0, and λ(m) ≥ 1, we know that f ≥ e_max of m. Therefore:

a^f ≡ a^(f + λ(m)) (mod m)

We can make the same substitution in the induction hypothesis statement. Therefore, by the induction hypothesis:

a^b ≡ a^(b + (k * λ(m))) = a^f ≡ a^(f + λ(m)) (mod m)

which completes the proof.

We’ve shown that when n is non-negative:

a^b ≡ a^(b + (n * λ(m))) (mod m)

However, if we require the additional stipulation that b + (n * λ(m)) ≥ e_max of m, then n can be any integer. We can therefore say that for the given integers a, b, and n, and the positive integer m:

a^b ≡ a^(b + (n * λ(m))) (mod m)

where b ≥ e_max of m, and where b + (n * λ(m)) ≥ e_max of m.

If we define another integer, c, such that c = b + (n * λ(m)), what we’ve done is establish a modular congruence relationship between b and c, mod λ(m). We can show this relationship like so:

b ≡ c (mod λ(m))

We can now restate the earlier congruence in terms of b and c. Given the integers a, b, and c, and the positive integer, m:

a^b ≡ a^c (mod m)

if all of the following are also true:

  • b ≥ e_max of m
  • c ≥ e_max of m
  • b ≡ c (mod λ(m))

This final congruence and constraints are what I will call the congruence of exponents.
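To make this concrete, here’s a short Python sketch that checks the three conditions and verifies the resulting congruence by brute force. The helper names (`factorize`, `carmichael`, `congruent_exponents`) are my own, not standard library functions:

```python
from math import gcd
from functools import reduce

def factorize(m):
    """Return the prime factorization of m as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

def carmichael(m):
    """Carmichael function λ(m), built as the lcm of λ(p^k) over prime powers."""
    def lam_prime_power(p, k):
        if p == 2 and k >= 3:
            return 2 ** (k - 2)
        return (p - 1) * p ** (k - 1)
    lams = [lam_prime_power(p, k) for p, k in factorize(m).items()]
    return reduce(lambda x, y: x * y // gcd(x, y), lams, 1)

def congruent_exponents(m, b, c):
    """True if a**b ≡ a**c (mod m) is guaranteed for every integer a."""
    e_max = max(factorize(m).values())
    return b >= e_max and c >= e_max and (b - c) % carmichael(m) == 0

# Spot-check with m = 12 (= 2^2 * 3), where e_max = 2 and λ(12) = 2.
m = 12
assert congruent_exponents(m, 3, 5)
for a in range(-20, 20):
    assert pow(a, 3, m) == pow(a, 5, m)
```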

In my next post, I’ll use the congruence of exponents to show something interesting about recursive modular exponentiation.

Mathematics, Number Theory

Improving on Fermat’s factoring method

In my last post I described Fermat’s factoring method. Fermat’s method is very simple but it’s not the most effective. However, I’ve discovered a significant improvement. It’s not going to beat the Quadratic sieve or the General number field sieve, but it’s a substantial improvement nonetheless.

As far as I know, the improvement I describe here is an original discovery. I’ve done a bit of research looking at other improvements to Fermat’s method to see if I could find this method described elsewhere, but with no success. That being said, this method is rather simple so I’d be surprised if I was really the first person to find it.

The simplicity of this method is illustrated by the fact that its proof relies only on the basic rules of modular arithmetic. The rules are fairly simple. I’ll go over them here for background.

First, the most basic concept of modular arithmetic is the congruence relation. Modular congruence represents the equivalence of the members of a congruence class, modulo some integer n. The members of a congruence class are equivalent in the sense that they share the same remainder when divided by n. The relationship between two members of the same congruence class, a and b, (mod n), is represented as

a ≡ b (mod n)

Next, addition is distributive. This means that for any two integers, a and b, (mod n)

(a + b) mod n = ((a mod n) + (b mod n)) mod n

which means that the remainder of a divided by n plus the remainder of b divided by n, reduced once more mod n, is equal to the remainder of the sum of a and b divided by n.

Next, multiplication is distributive. This means that for any two integers, a and b, (mod n)

(a * b) mod n = ((a mod n) * (b mod n)) mod n

which means that the remainder of a divided by n multiplied by the remainder of b divided by n, reduced once more mod n, is equal to the remainder of the product of a and b divided by n.

Last, common factors can be removed. This means that for any two integers, a and b, (mod n), and a third integer, c where c evenly divides a, b, and n

a ≡ b (mod n) implies that (a / c) ≡ (b / c) (mod (n / c))
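These rules are easy to check numerically. Here’s a quick Python sketch illustrating all three; the particular numbers are arbitrary examples of mine:

```python
# Two integers in the same congruence class, mod 12: 38 ≡ 26 (mod 12).
a, b, n = 38, 26, 12
assert a % n == b % n

# Addition rule: reduce, add, then reduce once more.
x, y = 17, 40
assert (x + y) % n == ((x % n) + (y % n)) % n

# The multiplication rule works the same way.
assert (x * y) % n == ((x % n) * (y % n)) % n

# Cancellation: 2 divides 38, 26, and 12, so 38 ≡ 26 (mod 12)
# implies 19 ≡ 13 (mod 6).
c = 2
assert (a // c) % (n // c) == (b // c) % (n // c)
```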

Now, let’s examine the factoring problem again. Let’s assume that we want to factor an odd semiprime, m with factors p and q. Our goal is to discover p and q, knowing only m.

What can we possibly know about p and q? At first glance, it would seem like the answer to that question is “not much”. However, it turns out we can know a great deal.

Let’s try to enumerate some things we know:

  • m is odd
  • p is prime
  • q is prime
  • p is odd, because m is odd and p is a factor of m, which means that 2 cannot be a factor of either p or q
  • q is odd, because m is odd and q is a factor of m, which means that 2 cannot be a factor of either p or q
  • p + q is even, because both p and q are odd and the sum of two odd numbers is an even number
  • pq is even, because both p and q are odd and the difference of two odd numbers is an even number
  • φ(m) is even, where φ is Euler’s totient function, because φ(m) = (p – 1)(q – 1) and both p – 1 and q – 1 are even because both p and q are odd

Note that all these except for the first three, which were given, follow from the distributive modular addition rule, (mod 2).

That’s already quite a lot, isn’t it? We won’t stop there.

As I demonstrated in the post about Fermat’s method, we also know that because m is odd, m is the difference of two squares, which we’ll call u^2 and v^2, where u = (p + q) / 2 and v = (p – q) / 2, assuming we arbitrarily choose p to be the larger of p and q if they are not equal. Note, again, that u and v are both integers because both p + q and p – q are even.

Here’s something new, though. We also know that u and v have opposite parity. By that I mean that if u is even then v must be odd and vice versa. We know this because m is the difference of u^2 and v^2 and m is odd. If u is even then u^2 is even. If u is odd then u^2 is odd. The same is true of v and v^2. If u^2 and v^2 were both even or both odd then their difference would be even. Therefore one of u^2 and v^2 must be even and the other must be odd, and likewise one of u and v must be even and the other must be odd.

Remember that Fermat’s method iterates through all the possible values of u beginning with ceil(√m) until it finds the one where u^2 – m is a square. What if we could know whether u is even or odd? If we could, then we could reduce the number of iterations by half. For example, if we know that u is even, we can start with the first even number greater than or equal to ceil(√m) and then increment by 2 for each iteration rather than 1.

It turns out that we can know if u is even or odd with complete certainty. The way we can do that is by using both the modular arithmetic distribution rules we already talked about and by examining the possible combinations of sums and products of congruence classes, (mod 4). There are only 4 congruence classes, (mod 4). Also because we know that p and q are both odd we only have to consider the odd ones. There are only 2 odd congruence classes, (mod 4), which are 1 and 3. p, q, and m must all be congruent with either 1 or 3, (mod 4). By first examining what the possible products of 1 and 3 are and then comparing that to the congruence class of m, (mod 4), we can know which congruence class combinations, (mod 4), that p and q could possibly be members of. Here are the possible multiplications of 1 and 3:

(1 * 1) ≡ 1 (mod 4)
(1 * 3) ≡ 3 (mod 4)
(3 * 3) ≡ 1 (mod 4)

Note that if p and q are both congruent with 1, (mod 4), or both congruent with 3, (mod 4), then m will be congruent with 1, (mod 4). Otherwise m will be congruent with 3, (mod 4), and one of p or q will be congruent with 1, (mod 4), and the other will be congruent with 3, (mod 4). Now let’s see what the possible sums could be of those congruence class combinations:

(1 + 1) ≡ 2 (mod 4)
(1 + 3) ≡ 0 (mod 4)
(3 + 3) ≡ 2 (mod 4)

From this we can conclude that if both p and q are congruent with 1, (mod 4), or both are congruent with 3, (mod 4), then their sum will be congruent with 2, (mod 4). Otherwise their sum will be congruent with 0, (mod 4). Therefore, if m is congruent with 1, (mod 4), then the sum of p and q is congruent with 2, (mod 4), and if m is congruent with 3, (mod 4), then the sum of p and q is congruent with 0, (mod 4).

Remember that u = (p + q) / 2. This means that p + q = 2u. Now, depending on whether m is congruent with 1, (mod 4), or with 3, (mod 4), we can know that one of the following is true:

2u ≡ 2 (mod 4) (if m ≡ 1 (mod 4))
2u ≡ 0 (mod 4) (if m ≡ 3 (mod 4))

Because 2 divides 2u, 0, 2, and 4, we can remove the common factor of 2 from both sides of the congruence and the modulus in both these expressions. Therefore the above is equivalent to the following:

u ≡ 1 (mod 2) (if m ≡ 1 (mod 4))
u ≡ 0 (mod 2) (if m ≡ 3 (mod 4))

A number that is congruent with 0, (mod 2), is an even number and a number that is congruent with 1, (mod 2), is an odd number. Therefore, if m is congruent with 3, (mod 4), then u is even and if m is congruent with 1, (mod 4), then u is odd.
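In code, this observation is a one-liner. Here’s a small Python sketch (the function name is mine) that predicts the parity of u directly from m, checked against the factors for a few small odd semiprimes:

```python
def u_is_even(m):
    """For an odd semiprime m = pq, u = (p + q) / 2 is even
    exactly when m ≡ 3 (mod 4)."""
    return m % 4 == 3

# Check the prediction against the factors directly.
for p, q in [(3, 5), (3, 7), (5, 7), (5, 11), (7, 11), (349, 191)]:
    m = p * q
    u = (p + q) // 2
    assert u_is_even(m) == (u % 2 == 0)
```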

Let’s look at an example, which will hopefully make this more clear. Let’s consider the case where

p = 349
q = 191
m = 66659
ceil(√m) = 259
u = (349 + 191) / 2 = 270

Now, because 66659 ≡ 3 (mod 4), we can know that u is even. Rather than starting our search for u at 259 and incrementing by 1 on each iteration, we can start at 260 and increment by 2. Our search for u will take only 6 iterations rather than 12. We’ve reduced the number of iterations by half.

But can we do better than a 2x improvement? We likely can! We can guarantee ourselves a 2x improvement if we use (mod 4), but we can get a better improvement with larger mod values. Let’s see how we do with (mod 12).

There are 6 odd congruence classes, (mod 12), which are 1, 3, 5, 7, 9, and 11. Let’s see what the possible multiplications of these classes are:

(1 * 1) ≡ 1 (mod 12)
(1 * 3) ≡ 3 (mod 12)
(1 * 5) ≡ 5 (mod 12)
(1 * 7) ≡ 7 (mod 12)
(1 * 9) ≡ 9 (mod 12)
(1 * 11) ≡ 11 (mod 12)
(3 * 3) ≡ 9 (mod 12)
(3 * 5) ≡ 3 (mod 12)
(3 * 7) ≡ 9 (mod 12)
(3 * 9) ≡ 3 (mod 12)
(3 * 11) ≡ 9 (mod 12)
(5 * 5) ≡ 1 (mod 12)
(5 * 7) ≡ 11 (mod 12)
(5 * 9) ≡ 9 (mod 12)
(5 * 11) ≡ 7 (mod 12)
(7 * 7) ≡ 1 (mod 12)
(7 * 9) ≡ 3 (mod 12)
(7 * 11) ≡ 5 (mod 12)
(9 * 9) ≡ 9 (mod 12)
(9 * 11) ≡ 3 (mod 12)
(11 * 11) ≡ 1 (mod 12)

This is a much larger set of possible combinations than was the set of multiplications of classes (mod 4). To make it easier to evaluate them, let’s organize them into a table by congruence class:

Products ≡ 1 (mod 12): (1 * 1), (5 * 5), (11 * 11)
Products ≡ 3 (mod 12): (1 * 3), (3 * 5), (3 * 9), (7 * 9), (9 * 11)
Products ≡ 5 (mod 12): (1 * 5), (7 * 11)
Products ≡ 7 (mod 12): (1 * 7), (5 * 11)
Products ≡ 9 (mod 12): (1 * 9), (3 * 3), (3 * 7), (3 * 11), (5 * 9), (9 * 9)
Products ≡ 11 (mod 12): (1 * 11), (5 * 7)

Now let’s see what the possible sums of these factor pairs are, (mod 12), grouped by the congruence class of their product.

Product ≡ 1 (mod 12): (1 + 1) ≡ 2, (5 + 5) ≡ 10, (11 + 11) ≡ 10 (mod 12)
Product ≡ 3 (mod 12): (1 + 3) ≡ 4, (3 + 5) ≡ 8, (3 + 9) ≡ 0, (7 + 9) ≡ 4, (9 + 11) ≡ 8 (mod 12)
Product ≡ 5 (mod 12): (1 + 5) ≡ 6, (7 + 11) ≡ 6 (mod 12)
Product ≡ 7 (mod 12): (1 + 7) ≡ 8, (5 + 11) ≡ 4 (mod 12)
Product ≡ 9 (mod 12): (1 + 9) ≡ 10, (3 + 3) ≡ 6, (3 + 7) ≡ 10, (3 + 11) ≡ 2, (5 + 9) ≡ 2, (9 + 9) ≡ 6 (mod 12)
Product ≡ 11 (mod 12): (1 + 11) ≡ 0, (5 + 7) ≡ 0 (mod 12)

From this, we can conclude that

u ≡ 1 or u ≡ 5 (mod 6) (if m ≡ 1 (mod 12))
u ≡ 0 or u ≡ 2 or u ≡ 4 (mod 6) (if m ≡ 3 (mod 12))
u ≡ 3 (mod 6) (if m ≡ 5 (mod 12))
u ≡ 2 or u ≡ 4 (mod 6) (if m ≡ 7 (mod 12))
u ≡ 1 or u ≡ 3 or u ≡ 5 (mod 6) (if m ≡ 9 (mod 12))
u ≡ 0 (mod 6) (if m ≡ 11 (mod 12))

Let’s re-examine the example we checked earlier factoring m = 66659. Because 66659 ≡ 11 (mod 12), we know that u ≡ 0 (mod 6). Now when factoring 66659, we can start checking u at 264 and we can increment by 6 on each iteration. We’ve reduced the number of iterations required to only 2. This is a 6x improvement.
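This stride trick is easy to implement. Here’s a Python sketch (the names are mine) of Fermat’s method restricted to a single known congruence class for u:

```python
from math import isqrt

def fermat_with_stride(m, residue, modulus):
    """Fermat's method, trying only u ≡ residue (mod modulus).
    Returns (p, q) with p >= q."""
    u = isqrt(m)
    if u * u < m:
        u += 1  # start at ceil(sqrt(m))
    # Advance u to the first value in the right congruence class.
    u += (residue - u) % modulus
    while True:
        v2 = u * u - m
        v = isqrt(v2)
        if v * v == v2:  # u^2 - m is a perfect square
            return u + v, u - v
        u += modulus

# m = 66659 ≡ 11 (mod 12), so u ≡ 0 (mod 6): start at 264, step by 6.
assert fermat_with_stride(66659, 0, 6) == (349, 191)
```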

Can we do any better than 6x? Not with such a small value of m. Let’s try something much larger and see how well we can do.

p = 433024223
q = 413158523
m = 178907648397902629
ceil(√m) = 422974761
u = (433024223 + 413158523) / 2 = 423091373

Let’s see how well we do with (mod 96). I won’t build a table of all the possible combinations of products and sums of the congruence classes (mod 96). In this example, 178907648397902629 ≡ 37 (mod 96). I’ll tell you what such a table would show, however. It would show that one of the following is true:

u ≡ 13 (mod 48)
u ≡ 19 (mod 48)
u ≡ 29 (mod 48)
u ≡ 35 (mod 48)

How does this help us though? Which one do we start with? If we choose one and it’s the wrong one, how do we decide it’s the wrong one?

We can’t know which of these congruences is actually true. If we were to choose one that is not the right one and ignore the others, we could follow it all the way to the highest possible value of u before we could know it was the wrong one. We could potentially make Fermat’s method far worse than its worst case, which is obviously not an improvement.

Instead, what we can do is apply Fermat’s method four times in parallel. As soon as one of the parallel attempts finds the answer, the others can stop. How much of an improvement can we get this way over Fermat’s method? It turns out that using this approach the improvement is 1/2 the modulus divided by the number of possible congruences for u. In this case, the improvement is 12x because 48 / 4 = 12.
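Here’s a sequential Python sketch of that idea (the names are mine): one cursor per candidate class, advanced in lockstep, which mimics running the classes in parallel. A real implementation could put each class on its own core.

```python
from math import isqrt

def fermat_multi_class(m, residues, modulus):
    """Fermat's method trying several candidate classes for u in lockstep."""
    base = isqrt(m)
    if base * base < m:
        base += 1  # ceil(sqrt(m))
    # One cursor per candidate congruence class of u.
    cursors = [base + ((r - base) % modulus) for r in residues]
    while True:
        for i, u in enumerate(cursors):
            v2 = u * u - m
            v = isqrt(v2)
            if v * v == v2:
                return u + v, u - v
            cursors[i] = u + modulus

# For m = 66659 we deduced u ≡ 0 (mod 6); pretend we only knew that
# u is even, i.e. u ≡ 0, 2, or 4 (mod 6), and search all three classes.
assert fermat_multi_class(66659, [0, 2, 4], 6) == (349, 191)
```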

Can we do better than 12x? We can! If we choose a modulus of 1260, there are 24 possible modulus values for u. I won’t bore you with the details of what they are. With 24 possible modulus values for u, that means that using a modulus of 1260, we can get a 39.375x improvement over Fermat’s method.

If you want to examine what the possible modulus values for u are given a modulus for m, here is a short Python program that calculates them:


import sys

m = int(sys.argv[1])  # the modulus
a = int(sys.argv[2])  # the congruence class, mod m, of the number to factor

# If the modulus is even, only odd residues can multiply to an odd number,
# so step over the even ones.
step = 2 if m % 2 == 0 else 1
mods = {}
for i in range(1, m, step):
    for j in range(i, m, step):
        mm = (i * j) % m
        # u = (p + q) / 2, so record half the sum of the residue pair.
        mods.setdefault(mm, set()).add(((i + j) % m) // 2)

print(sorted(mods[a]))

Why did I choose the modulus values of 96 and 1260? Here’s where I digress into conjecture. It appears that integers with low totient values are good candidate modulus values for this method. Here totient refers to Euler’s totient function and by “integers with low totient values”, I mean integers with a low ratio of totient to value.

For example, compare the possible modulus values for u when m is 11, 12, and 13. Note that 11 and 13 are prime and thus their totients are 10 and 12 respectively. They have high totient to value ratios. On the other hand, 12 has a totient of 4. It has a totient to value ratio that is relatively low. The range for the number of possible modulus values for u using a modulus of 11 is either 5 or 6. The range of the number of possible modulus values of u using a modulus of 13 is either 6 or 7. By contrast, the range of the number of possible modulus values of u using a modulus of 12 is between 1 and 3.

Other than my own limited observations, I have no evidence that low totient to value integers are more effective than high totient to value integers. I have no mathematical proof.

The question, then, is how useful is this factoring method? I’m not positive, but it appears that this method is not going to be more effective than the quadratic sieve or the general number field sieve methods. The value of this method depends entirely on how quickly the number of possible modulus values for u increases with respect to the low totient modulus. This is again conjecture, but as far as I can tell, the rate of improvement of this method decreases as the modulus value increases. I’ll go into a deeper analysis of the performance of this method in a future post. For now, I think the general number field sieve method is safe in its place on top.

Mathematics, Number Theory

Fermat’s factoring method

In my last post I discussed the RSA problem. Involved with the RSA problem is the fact that being able to easily factor the product of two large primes would allow one to easily derive the RSA private key from the corresponding public key, rendering RSA useless.

So how would one go about factoring m, the product of two large primes, p and q?

The most naive approach is called trial division. It begins by dividing m first by 2, then 3, and so on, until you find a number that divides m with no remainder. If m is not divided evenly by 2 then you can skip all multiples of 2, and in fact you can skip all composite numbers. This method will find the smaller factor, which leads directly to the larger one, no later than when you try the square root of m. This is far from the most efficient method.
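The trial-division procedure just described can be sketched in a few lines of Python (the function name is mine; for simplicity it skips even candidates rather than all composites):

```python
def trial_division(m):
    """Find a factor pair of m by trial division.
    Returns (p, q) with p * q == m and q the smallest factor."""
    if m % 2 == 0:
        return m // 2, 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return m // d, d
        d += 2  # m is odd, so skip even candidates
    return m, 1  # m is prime

assert trial_division(15) == (5, 3)
```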

For certain subsets of integer factoring problems, Fermat’s method can be vastly more efficient than trial division. The closer p is to q, the more efficient Fermat’s method is. Fermat’s method was first described by Pierre de Fermat in the 17th century.

Fermat’s method is based on the fact that all positive odd integers can be represented as the difference of two squares. This is easy to show using simple algebra. If we say that

m = pq

and we make the qualification that m is odd, we can perform some substitutions that will allow us to show that m is the difference of two squares. If we arbitrarily say that p is larger or equal to q then we can define

u = (p + q) / 2

and

v = (p – q) / 2

then we can say that

m = u^2 – v^2

which can easily be shown to be true as

u^2 – v^2 = ((p + q) / 2)^2 – ((p – q) / 2)^2
= ((p^2 + 2pq + q^2) / 4) – ((p^2 – 2pq + q^2) / 4) = 4pq / 4 = pq = m

But saying that m is the difference of u^2 and v^2 is not necessarily the same as saying that m is the difference of two squares, since u and v may not be integers. We did, after all, define them as fractions. However, if u and v are integers then m is the difference of two squares. It is easy to show that u and v are integers.

We started with the qualification that m is odd. If that’s the case then both p and q are odd, since if either p or q were even then 2 would be a factor of one of them, which would mean that 2 would be a factor of m. Since both p and q are odd, both their sum and difference must be even, because an odd number plus an odd number is an even number and an odd number minus an odd number is an even number. Therefore, both their sum and difference are evenly divided by 2, which means that both u and v are integers. m is therefore the difference of two squares.

So what is Fermat’s method? How does it work? It’s pretty simple, in fact. It involves searching through the possible values of u and v. u must be greater than or equal to √m; u is equal to √m when v is 0 and m is a square. It is always true, even when m is prime, that m = u^2 – v^2 when u is (m + 1) / 2 and v is (m – 1) / 2, which corresponds to the trivial factors of m, namely m and 1:

((m + 1) / 2)^2 – ((m – 1) / 2)^2 = ((m^2 + 2m + 1) / 4) – ((m^2 – 2m + 1) / 4) = 4m / 4 = m

Fermat’s method begins by choosing a value, which we’ll call u’, with initial value ceil(√m). If u’^2 – m is a square then u = u’ and v = √(u’^2 – m). Otherwise, we increment u’ and test it again. Eventually this method will find u in u – ceil(√m) + 1 iterations. Having found u and v, we can easily derive p and q as

p = u + v

and

q = u – v
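Putting the steps together, here’s a straightforward Python sketch of Fermat’s method (the function name is mine):

```python
from math import isqrt

def fermat_factor(m):
    """Factor an odd integer m via Fermat's method. Returns (p, q), p >= q."""
    u = isqrt(m)
    if u * u < m:
        u += 1  # start the search at ceil(sqrt(m))
    while True:
        v2 = u * u - m
        v = isqrt(v2)
        if v * v == v2:  # u^2 - m is a perfect square
            return u + v, u - v
        u += 1

# 15 = 4^2 - 1^2, so the search succeeds immediately at u = 4.
assert fermat_factor(15) == (5, 3)
```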

Why is this an effective method? It isn’t always. If p and q are far apart then Fermat’s method may be much less effective than trial division. In the worst case, when m is prime and p = m and q = 1, the number of iterations required is ((m + 1) / 2) – √m. However, when p and q are close together, the number of iterations can be very small. Let’s look at an example. Let’s say that p is 2153, q is 1933, and m is 4161749.

u = (2153 + 1933) / 2 = 2043

ceil(√m) = 2041

In this example, we will find that u is 2043 in only 3 iterations. We’ve just used Fermat’s method to factor a 22-bit semiprime in only 3 steps!

Let’s make sure everything adds up.

v^2 = 2043^2 – m = 12100 = 110^2, so v = 110
p = 2043 + 110 = 2153
q = 2043 – 110 = 1933

What about when p and q are further apart? Let’s try a different example. This time we’ll say that p is 1373347, q is 3, and m is 4120041. Notice that in this example, m has very similar magnitude to the previous example.

u = (1373347 + 3) / 2 = 686675

ceil(√m) = 2030

Look what a difference there is in this example. This time, using Fermat’s method we would have to go through 684646 iterations to find u! That’s much worse than trial division, which would require at most 2029 iterations even without skipping composite numbers.

Clearly Fermat’s method is not useful for all factoring problems. Also, clearly it’s rather good for a subset of them. But why?

This graph illustrates the relationship between u and v for the function u = f(v) = √(m + v^2). The slope of the curve is 0 at v = 0 and, as v increases, the curve is asymptotic to the line u = v.

[Figure: fermat_u_v — graph of u versus v]

Fermat’s method is effective at factoring numbers with factors that are close together because in that case v is relatively small. Because the slope of the curve is close to 0 for small values of v, a small change in u translates to a large change in v. Incrementing u by 1 when u is close to √m means skipping many possible values of v.

So this is great, we can use Fermat’s method to solve real factoring problems. These days the recommendation is to use a public key with 2048 bits, which means we would need to factor a 2048 bit number to obtain the private key from the public key. We’re going to examine a much simpler problem. Let’s see how well Fermat’s method does on the RSA-100 challenge, which is a challenge to factor a 100 decimal digit number – a number with only 330 bits. In this example

p = 40094690950920881030683735292761468389214899724061
q = 37975227936943673922808872755445627854565536638199
m = 1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139

p and q are very close together right? This should be no problem for Fermat’s method. Let’s see how many iterations we would have to use.

u = (40094690950920881030683735292761468389214899724061 + 37975227936943673922808872755445627854565536638199) / 2
= 39034959443932277476746304024103548121890218181130

ceil(√m) = 39020571855401265512289573339484371018905006900195

So to factor m, Fermat’s method requires 14387588531011964456730684619177102985211280935 iterations to find u. That’s much better than trial division. In fact, it’s approximately 2712 times better. But it still takes about 1.44e+46 iterations! That’s a lot of iterations. If you could check 1,000,000 values of u per second per CPU core, that would still take 1.44e+40 seconds. That’s about 4.6e+32 CPU-years. Remember, this was only a 330-bit number. Clearly Fermat’s method is not very useful for factoring 2048-bit RSA keys.

There are actually much better methods of factoring large semiprimes than Fermat’s method. The two fastest currently known methods are called the Quadratic sieve and the General number field sieve. As it happens, both of these methods are rooted in Fermat’s method. Look for a future post to read a hopefully simple explanation of how they work.

Cryptography, Security

The RSA problem

The RSA algorithm is heavily relied upon to secure communication on the Internet. Primarily this is done in the form of certificates used to secure SSL/TLS connections as is done to secure the HTTPS protocol. But would you believe that the RSA algorithm cannot be shown mathematically to be secure? It’s true! This fact is known as the RSA problem.

To understand why the RSA problem exists, it is necessary to understand how the RSA algorithm works. It turns out there’s not that much to it. RSA is a public/private key encryption algorithm, also known as an asymmetric key encryption algorithm. With a public/private key pair, a value can be encrypted using the public key and can only be decrypted using the corresponding private key. I’ll first explain how to generate and use an RSA public/private key pair and then attempt to describe the mathematics behind why it works. Once that’s out of the way, it will be possible to explain the challenge of mathematically proving that the RSA algorithm is secure.

To generate an RSA public/private key pair, you first choose two large prime numbers. Let’s call them p and q. The product of p and q we’ll call n. Using p and q, knowing they are prime, we can easily compute φ(n) = (p – 1)(q – 1) where φ is Euler’s totient function. Next, we’ll choose an exponent, which we’ll call e, that is less than φ(n) and coprime with φ(n). Next we’ll calculate the modular multiplicative inverse of e (mod φ(n)), which we’ll call d. The combination of n and e is the public key. The combination of n and d is the private key.

To encrypt a value m, which must be less than n, using the public key we calculate

c = m^e (mod n)

To decrypt c using the private key we can obtain the original value m by calculating

m = c^d (mod n)

Why does this work? The answer lies in Euler’s theorem. Euler’s theorem proves that for any coprime integers a and n,

a^φ(n) ≡ 1 (mod n)

If you’re not familiar with modular congruence, this states that if you divide a^φ(n) by n, the remainder will be 1.

What we need to do is show that (m^e)^d ≡ m (mod n). Note that (m^e)^d can also be written as m^(ed).

Since we chose d such that ed ≡ 1 (mod φ(n)), we can also say that ed = 1 + hφ(n) where h is some positive integer. Therefore

m^(ed) ≡ m^(1 + hφ(n)) ≡ m(m^(hφ(n))) ≡ m(m^φ(n))^h (mod n)

Now, using Euler’s theorem we can convert the last term into

m(1)^h ≡ m (mod n)

which is what we intended to show. As long as m is a positive integer less than n, m is equal to the least positive member of the congruence class of (m^e)^d (mod n), meaning m is the remainder when dividing (m^e)^d by n. (Strictly speaking, Euler’s theorem applies only when m is coprime with n, but the result holds for every m < n because n is a product of distinct primes.) Note that if m is too small then m^e may be less than n and it may be possible to derive m from m^e by taking the eth root of m^e.
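Here’s a toy end-to-end run of the algorithm in Python, using small, well-known example primes. Real keys use primes hundreds of digits long; this only illustrates the arithmetic. Note that `pow(e, -1, phi)` computes the modular inverse and requires Python 3.8+:

```python
from math import gcd

# Key generation with toy primes.
p, q = 61, 53
n = p * q                   # 3233, part of both keys
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)         # private exponent: inverse of e mod phi
assert (e * d) % phi == 1

# Encrypt and then decrypt a message m < n.
m = 65
c = pow(m, e, n)            # c = m^e (mod n)
assert pow(c, d, n) == m    # m = c^d (mod n)
```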

So what’s the problem? The problem is that the security of the RSA algorithm rests on an unproven assumption. The assumption is that it is easier to find two very large prime numbers than it is to factor their product. It turns out it’s very easy to find large primes, or rather to find very large integers that are very likely prime. For example, choosing two random 512-bit numbers that are likely prime can be done in a small fraction of a second with a modern computer. However, using the most efficient currently known integer factorization algorithm, factoring the 1024-bit product of those two primes would take many thousands of CPU-years of computation.

But why does the complexity of factorizing the product of two primes matter with respect to the security of RSA? The reason is that if the two primes chosen, p and q can be obtained by factoring their product, n, which is part of the public key, then it becomes trivial to calculate φ(n), and then it becomes trivial to calculate d. Being able to easily calculate d given n means that the private key can easily be recovered from the public key. Being able to easily recover the private key from the public key would completely invalidate the security of the RSA algorithm.

So, again, what’s the problem? It takes far longer to factor the product of two large prime numbers than it takes to choose them. We’re safe using RSA, right? Well, the problem is that while the currently known algorithms for factoring the product of two large primes take far longer to factor than it takes to choose them, nobody has so far been able to prove that a much faster algorithm doesn’t exist. That doesn’t mean that one does exist, just that we can’t say for sure that one does not.

In fact, at least one such algorithm is known to exist, but it is currently impractical to implement on the scale necessary to defeat RSA. The algorithm is known as Shor’s algorithm and it relies on quantum computers able to process numbers as large as n, which currently do not exist for values of n currently used in RSA public keys. Quantum computers are getting more and more powerful all the time, however, so it’s a good bet that the days of RSA are numbered since eventually quantum computers will be able to factor numbers large enough to make the use of RSA impractical.

Database

Redis Transactions

Redis is a pretty awesome piece of software. If you’re not familiar with it, Redis is an in-memory key-value store, similar to Memcached. Like Memcached, Redis has get, set, increment, and decrement commands for string value keys. Unlike Memcached, it supports numerous other data type primitives including hash maps, sets, sorted sets, and lists. It also supports persistence, replication, and isolated and atomic transactions. Today I’m going to show you how Redis transactions work.

First, note that Redis is single-threaded. This simplifies the implementation of transactions in Redis server code since a batch of operations for a transaction can be performed in isolation from other operations without the need for locking keys. The lack of key locks means that Redis does not have to worry about deadlock detection or avoidance.

However, providing isolation without locking keys implies that the operations within a transaction must be performed in batch. There is no opportunity for a client to read a key within a transaction, get the result, and perform alternative Redis operations within the same transaction. After beginning a transaction with the MULTI command, rather than returning the result of any operation directly to the client, Redis returns a status of QUEUED, indicating the operation has been queued in memory by the server for later execution. Once all the operations to be performed within the transaction have been queued, the client can execute them atomically with the EXEC command. If the transaction is successful, a list containing the results of the operations performed will be returned to the client.

Allow me to present an example using redis-cli, the command line Redis client utility:

$ redis-cli
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> GET foo
QUEUED
127.0.0.1:6379> SET foo bar
QUEUED
127.0.0.1:6379> EXEC
1) (nil)
2) OK
127.0.0.1:6379>

This simple use of Redis transactions is fine when all that needs to be done is atomically perform multiple operations. But what if you need to get the value of a Redis key and then update it? If all you do is read and then write, the write will be atomic, so there’s not even a need for a transaction in that case. But how do you ensure that your read and write are not racing with some other read and write trying to make the same change? Or some conflicting change?

For example, suppose a feature of a distributed application works with three Redis keys. Let’s call them A, B, and C. Keys A and B are strings that can have any value and key C holds a hash of A and B. When either A or B is updated, the application must read the other and update C. If, when updating key A, the application simply reads key B, calculates the new hash, and updates keys A and C atomically in a transaction, it may be that another concurrent process has updated keys B and C between the time the first process read B and wrote A and C.

This is an example of an application storing denormalized data. Any time an application stores denormalized data, this type of concurrency issue is possible.

Fortunately, Redis provides a way to handle this case correctly even though it won’t return key values between MULTI and EXEC. The solution lies in the WATCH command.

The way the WATCH command works is this: if a watched key changes before the transaction is committed with an EXEC command, the transaction fails. In fact, this is the only way for a Redis transaction to fail without communication being severed between client and server. If the WATCH command is performed before the read operation, it won’t be possible to wind up with inconsistent data in Redis.

Here’s the same concurrent scenario played out in redis-cli. First, we prepare for the transaction in a first terminal:

127.0.0.1:6379> WATCH B
OK
127.0.0.1:6379> GET B
"bar"
127.0.0.1:6379>

Now in a second terminal, we watch and get A and then update C:

127.0.0.1:6379> WATCH A
OK
127.0.0.1:6379> GET A
"foo"
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET B baz
QUEUED
127.0.0.1:6379> SET C 80338e79d2ca9b9c090ebaaa2ef293c7
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) OK
127.0.0.1:6379>

Now back in the first terminal, we complete the first transaction:

127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET A baz
QUEUED
127.0.0.1:6379> SET C c3c23db5285662ef7172373df0003206
QUEUED
127.0.0.1:6379> EXEC
(nil)
127.0.0.1:6379>

The result of nil indicates that the transaction failed.

Assuming the desired behavior is still to set the key A to “baz”, the application should respond to a failed transaction by retrying the entire transaction from the beginning, including the WATCH and the GET. If the application is making heavy use of a key in this way resulting in frequent transaction collisions, it will increase the workload of both Redis and the application server, but it will never wind up with inconsistent denormalized data in the Redis database.
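The retry loop can be sketched in Python. What follows is a toy in-memory model of the WATCH/MULTI/EXEC semantics, not a real Redis client; the class and helper names are mine. It shows the shape of the retry an application should perform when EXEC reports failure:

```python
import hashlib

class ToyRedis:
    """A toy in-memory model of Redis WATCH/MULTI/EXEC semantics (not a real client)."""
    def __init__(self):
        self.data = {}
        self.versions = {}  # per-key version counters used to detect modification

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

    def watch(self, key):
        # Remember the key's version at WATCH time.
        return (key, self.versions.get(key, 0))

    def exec_multi(self, watched, ops):
        # EXEC fails (returns None) if any watched key changed since WATCH.
        for key, version in watched:
            if self.versions.get(key, 0) != version:
                return None
        for key, value in ops:
            self.set(key, value)
        return ['OK'] * len(ops)

def update_a_and_c(r, new_a):
    """Set A and recompute C = hash(A, B), retrying if B changes concurrently."""
    while True:
        watched = [r.watch('B')]
        b = r.get('B') or ''
        c = hashlib.md5((new_a + b).encode()).hexdigest()
        if r.exec_multi(watched, [('A', new_a), ('C', c)]) is not None:
            return  # transaction committed

r = ToyRedis()
r.set('B', 'bar')
update_a_and_c(r, 'baz')
print(r.get('A'))  # baz
```

The essential point is that the WATCH, the GET, and the hash calculation are all inside the loop, so a failed EXEC restarts the whole read-compute-write cycle with fresh data.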

Programming, Security

A simple explanation of SSL/TLS

What is SSL/TLS?

SSL, which stands for Secure Sockets Layer, was originally created by Netscape Communications to secure eCommerce applications. Version 1.0 was never released but version 2.0 was released in 1995 followed by version 3.0 in 1996. SSL was adopted as an open standard by the Internet Engineering Task Force and renamed to TLS, which stands for Transport Layer Security. TLS version 1.0 was released in 1999. Version 1.1 of TLS was released in 2006 followed by version 1.2 in 2008.

In the interest of full disclosure, Netscape Communications was purchased by AOL Inc. in 1999. I was employed by AOL Inc. from September 2002 through December 2003. I don’t feel this necessarily biases me in respect to this discussion but you may choose to interpret this information however you wish.

SSL and TLS are protocols for secure communication over an insecure medium between two parties who have not necessarily previously interacted. Typically SSL/TLS is used over TCP connections, though a variant called DTLS exists for use over UDP as well. SSL/TLS is the mechanism used to secure the HTTPS protocol.

SSL and TLS provide security by way of three key mechanisms:

  • Endpoint Authentication
  • Message Integrity
  • Secrecy

Endpoint Authentication

Endpoint Authentication means that while communicating electronically with a party at the other end of a communication channel, I can be certain that the party I’m communicating with is actually the party I think I’m communicating with. In SSL/TLS, this is provided by the use of server and optionally client digital certificates as well as a trust network in the form of a database of Certificate Authority certificates.

Message Integrity

Message Integrity means that while communicating electronically with a party at the other end of a communication channel, I can be certain that the content of the message I receive has not been altered while in transit. The data I receive is exactly what the sender sent. Message Integrity is provided by the inclusion of a MAC with each message.

Secrecy

Most people think only of Secrecy when they think of SSL/TLS. Secrecy means that while communicating electronically with a party at the other end of a communication channel, other parties observing my conversation cannot know the content. Secrecy is provided by SSL/TLS through the use of symmetric encryption.

Why do we need SSL/TLS?

The reason we need SSL/TLS is because we often need to communicate over insecure media. The Internet is an insecure medium. When you send network packets from your computer to another computer via network infrastructure not directly under your control, you can’t be certain what exactly happens to it in between. You can’t even be certain it will ever arrive at its intended destination. “Network infrastructure not directly under your control” can also mean the space all around you if you happen to be using a wireless network at a Starbucks, for example.

Vulnerabilities to unsecured communications can include active or passive attacks. Man in the middle attacks can be either active or passive. Firesheep would be an example of a passive attack. The bottom line is that when you send data over an insecure medium, you don’t know where that data is going or who is going to intercept it and when you receive data over an insecure medium, you can’t be sure where it came from or who sent it. An unsecured communication channel over an insecure network cannot be trusted.

How does SSL/TLS work?

I won’t go too far into details, but here’s a set of illustrations that show both the problems and how SSL/TLS solves them. The illustrations will chronicle a riveting love story between Steve and Sally with Joe passing messages between them. Steve and Sally will initially just pass messages to Joe, trusting him to deliver them unaltered to their true love. We’ll see what Joe can do to the messages in transit and we’ll figure out ways to keep Joe from being such a jerk. In this way, we’ll derive the fundamentals that SSL/TLS uses to provide Endpoint Authentication, Message Integrity, and Secrecy.

Let’s meet the cast in our story

Steve is our manly hero who is in love with Sally.
Steve

Sally is our beautiful heroine who is in love with Steve.
Sally

Joe is the villain who is jealous of Steve and Sally’s love and who is also greedy and completely lacking integrity.
Joe

He said, She said

Scenario: Steve wants to send a love letter to Sally, but he doesn’t know exactly where Sally lives, so he writes it and gives it to Joe because Joe says he knows where Sally lives.

Steve
“I love you, marry me”
“Send me money”
Joe
“Go to hell”
“I hate you, asshole”
Sally

The problem is that Joe, acting as the middle man, has altered the communication before passing it on. What Steve and Sally need is a way of protecting their messages from Joe.

Symmetric Key Cryptography

Symmetric Key Cryptography is a form of cryptography where the same key used to encrypt a message can also be used to decrypt the encrypted message.

In cryptography, we speak of data as being in plaintext or cyphertext. Plaintext is unencrypted whereas cyphertext has been obscured using some cryptographic algorithm.

A symmetric key encryption algorithm is an invertible function that takes plaintext and a key as input and produces cyphertext. The inverse of the function takes the same key and the cyphertext and produces the original plaintext.

Encrypt(key, plaintext) -> cyphertext
Decrypt(key, cyphertext) -> plaintext

To be useful to exchange information securely between two parties, the key must be known to both of them. This is known as a shared secret.
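Here’s the idea sketched in Python. The cipher below is a toy XOR scheme, hopelessly insecure compared to the real ciphers SSL/TLS uses, but it demonstrates the defining property of symmetric cryptography: one key both encrypts and decrypts.

```python
from itertools import cycle

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR the data with a repeating key.
    Insecure -- real SSL/TLS uses ciphers like AES -- but the same
    function both encrypts and decrypts, because XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b'shared secret'
cyphertext = xor_crypt(key, b'I love you, marry me')
plaintext = xor_crypt(key, cyphertext)  # applying the same key again inverts it
print(plaintext)  # b'I love you, marry me'
```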

Steve Uses Encryption

Scenario: Steve chooses a key and uses it to encrypt his love letter to Sally. Steve gives the encrypted letter to Joe to give to Sally.

Steve
“@#$%!”
“???”
Joe
“@#$%!”
“???”
Sally

Now Joe can’t read Steve’s letter. Now Sally can’t read it either. Steve needs to share the key he chose to encrypt his love letter with Sally.

Steve Shares his Key

Scenario: Steve chooses a key and uses it to encrypt his love letter to Sally. Steve gives the encrypted letter and the key to Joe to give to Sally.

Steve
“I love you, marry me + key”
“Send me money”
Joe
“Go to hell + key”
“I hate you, asshole”
Sally

Now Sally can read Steve’s letter. But now Joe can read Steve’s letter. Worse, Joe can also choose a different key and change the message and encrypt it with his key and give Sally the altered letter and Joe’s key. We’re back where we started. Steve needs a way to get the key he used to encrypt his letter to Sally without Joe being able to access it.

Public Key Cryptography

Public Key Cryptography works similarly to symmetric key cryptography except that there are a pair of keys rather than just one. The pair of keys consists of a public key and a private key. The public key can be used to encrypt data and the private key can be used to decrypt it. The public key can be freely shared while the private key must be kept secret. If you generate a public/private key pair, anyone who has your public key can encrypt a message and send it to you and it doesn’t matter who sees the encrypted message because nobody can decrypt it without the private key, which only you have.

Encrypt(key_pub, plaintext) -> cyphertext
Decrypt(key_priv, cyphertext) -> plaintext

This method of encryption totally avoids the need for a shared secret. Our hero and heroine could each choose a public/private key pair, exchange the public keys through Joe and just encrypt all their love letters with the other’s public key and send the encrypted letters through Joe as well. The problem is that public key cryptography algorithms are orders of magnitude slower than symmetric key cryptography algorithms. Fortunately, we can get the best of both worlds. All we need to do is choose a symmetric key and share that securely by encrypting it with a public key. Now we can exchange a shared secret through Joe and Joe can’t read it.

Additionally, Public Key Cryptography can be used in the reverse direction to produce a digital signature. This is done by encrypting a digest of a message using the private key. Anyone with the public key can decrypt it and compare it to the digest of the message. In this way, anyone can verify that the signature was generated by the holder of the private key.

Sign(key_priv, plaintext) -> signature
Verify(key_pub, plaintext, signature) -> match/no-match
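To make the digital signature idea concrete, here’s a toy RSA sketch in Python. The primes are absurdly small (real keys are thousands of bits) and I’m omitting the padding schemes real signatures require, but the mechanics are the same: the digest is encrypted with the private exponent and anyone holding the public key can check it.

```python
import hashlib

# Toy RSA key pair with tiny primes (p=61, q=53) -- for illustration only.
n = 61 * 53            # 3233, the public modulus
e = 17                 # public exponent
d = 2753               # private exponent: (17 * 2753) % 3120 == 1

def digest(message: bytes) -> int:
    # Squeeze a hash into the tiny key space; real schemes use proper padding.
    return int.from_bytes(hashlib.sha256(message).digest(), 'big') % n

def sign(message: bytes) -> int:
    # "Encrypt" the digest with the private key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the public key (n, e) can check the signature.
    return pow(signature, e, n) == digest(message)

sig = sign(b'I love you, marry me')
print(verify(b'I love you, marry me', sig))  # True
```

A tampered message would fail verification (barring a digest collision in this toy key space). The same key pair works in the encryption direction too: encrypting a small integer with `pow(m, e, n)` and recovering it with `pow(c, d, n)`, which is how a symmetric key would be shared.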

Steve Asks For Sally’s Public Key

Scenario: Steve wants Sally’s public key so he can send her an encrypted love letter and the key he used to encrypt it. Joe gives Steve Joe’s public key. Steve encrypts his love letter with his symmetric key and he encrypts the symmetric key with Joe’s public key and he gives both to Joe. Joe now can decrypt the symmetric key and then he can decrypt the love letter.

Steve
“Send me your public key”
“Here you go”
Joe
 
Sally
Steve
“I love you, marry me + key”
“Send me money”
Joe
“Go to hell + key”
“I hate you, asshole”
Sally

The problem now is that Steve thinks he has gotten Sally’s public key but he’s really gotten Joe’s public key. He thinks he’s sending a message that only Sally can read but he’s wrong. Steve needs a way to know that he’s gotten Sally’s public key rather than Joe’s.

Trusted Third Party

Meet Sam, a new cast member in our story. Steve has met Sam and has gotten Sam’s public key. Sally has met Sam and has gotten Sam’s public key. Joe doesn’t like Sam because, unlike Joe, Sam is very trustworthy.

Sam is able to digitally sign Steve and Sally’s public keys. The way this is really done is by packaging up the public key with some metadata about it in something called a certificate. Importantly, the certificate contains the public key and the name of the owner.

Scenario: Sally generates a certificate with her name and public key and asks Sam to sign it. Sam signs it and returns the signed version to Sally. Steve now asks Sally for her certificate through Joe and she gives it to him, also through Joe. Steve can now encrypt his love letter with a symmetric key and encrypt the symmetric key with Sally’s public key and send both to Sally through Joe. Joe can’t read the messages in transit.

Steve
“Send me your public key”
“Here you go”
Joe
“Send me your public key”
“Here you go”
Sally
Steve
“I love you, marry me + key”
“???”
Joe
“Go to hell + key”
“I hate you, asshole”
Sally

The problem now is that while Joe can’t know the content of the message Steve is sending to Sally, he can still choose his own symmetric key, write any message he wants to Sally, encrypt it with his symmetric key, encrypt his symmetric key with Sally’s public key, and send his own encrypted message and encrypted key to Sally. Sally won’t know that Joe altered the message in transit. Joe won’t be able to send a message to Steve using the symmetric key that Steve chose, because he can’t learn it without Sally’s private key. Steve will probably know something’s wrong, but Sally won’t.

Message Authentication Codes

The final piece of the puzzle comes in the form of Message Authentication Codes, or MACs. A MAC is a hash of the combination of a message and a shared secret key. When two parties are communicating over an insecure medium and both have access to a shared key, the sender can generate a MAC of the message and send both the message and the MAC. The recipient can generate the MAC of the message using the same key and know if the message has been altered in transit based on whether the calculated MAC matches the one received with the message.
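Python’s standard library makes this easy to demonstrate with HMAC, one common MAC construction:

```python
import hmac
import hashlib

key = b'shared secret'
message = b'I love you, marry me'

# Sender computes a MAC over the message with the shared key and sends both.
mac = hmac.new(key, message, hashlib.sha256).hexdigest()

# Recipient recomputes the MAC over what arrived; a mismatch means tampering.
def check(msg: bytes, received_mac: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)

print(check(message, mac))        # True
print(check(b'Go to hell', mac))  # False
```

Note the constant-time comparison via `hmac.compare_digest`; comparing MACs with `==` can leak timing information.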

Using a MAC, Steve and Sally can exchange a public key, a shared secret, and any number of messages, all through Joe without Joe being able to read the messages or alter them.

Steve
“Send me your public key”
“Here you go”
Joe
“Send me your public key”
“Here you go”
Sally
Steve
“I love you, marry me + key + MAC”
“I will! + MAC”
Joe
“I love you, marry me + key + MAC”
“I will! + MAC”
Sally

Now our hero and heroine can live happily ever after…

A Less Contrived Explanation

I’ve taken a lot of liberties with the details of the protocol in the illustrations above. In actuality, an SSL/TLS conversation is quite a bit more complicated. Nevertheless, I’ve covered all the principles involved. To give these illustrations more context, Steve could be you, Sally could be the website of your bank, and Joe could be any number of switches or proxies operating between your computer and your bank’s computers and that may or may not be controlled by unscrupulous people wishing to drain your bank account.

Who’s Sam? Sam represents an entity you may not have heard of. Sam is a Certificate Authority, or CA. A CA is an entity whose certificate is allowed to digitally sign other certificates. A CA’s certificate may or may not be signed by some other CA; if not, it’s known as a root certificate. CA certificates are distributed in a trust database which is basically just a list of trusted root certificates, although it may contain some non-root certificates as well. Your browser may have been downloaded with its own trust database (Firefox contains its own trust database) or it may use the trust database of your OS (Chrome and Internet Explorer do this).

Who controls the CAs? CAs are owned by companies or other organizations that are presumably trustworthy. Who decides which CA owners are trustworthy? That’s done by the companies that make browsers, like Mozilla and Microsoft. Ultimately, it’s up to you to decide whether you trust Mozilla and Microsoft to make these decisions for you. Regardless of whether your browser uses its own trust database or the trust database of your OS, it likely gives you the opportunity to manage your trust database for yourself. The trouble is that most people wouldn’t know how to do this, much less how to make good decisions. Most people just have to trust Mozilla and Microsoft.

I won’t go too deep into the details, but let’s just take a minute to examine how the SSL/TLS protocol really works. The following describes what the most basic SSL/TLS conversation looks like. It can get a lot more complicated than this, but now that you know the principles from the earlier illustrations, all of this should make a lot of sense.

First, a client will contact a server to establish a TCP connection. Next, the client will send a Client Hello message which contains the protocol version the client wants to use, some random data (the client random), and a list of cypher suites the client supports. The Client Hello message is sent in plaintext so anyone observing the conversation will see the client random value. That’s OK because it’s just going to be combined with other random data later.

If the server accepts the protocol version and cypher suites offered by the client, the server will respond with a Server Hello message followed by a Server Certificate message and a Server Hello Done message. The Server Hello message contains the negotiated protocol version and cypher suite and some random data (the server random). The important thing to note here is that the server sends its certificate in the Server Certificate message which should have previously been digitally signed by a CA that the client trusts.

Next, the client will verify that the certificate is owned by the organization that the client intended to contact, typically by comparing the Common Name field of the X.509 certificate to the domain name of the website it was trying to connect to and by validating that the certificate was signed by a CA that it trusts and that the digital signature is valid. The client will then send a Client Key Exchange message that contains what is called a premaster secret. The premaster secret is randomly generated by the client and encrypted using the public key of the server provided to the client in the server’s certificate. Only the server that the client is intending to communicate with should have the private key corresponding to the public key in the server’s certificate, so only that server should be able to decrypt the premaster secret.
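As an aside, this validation is exactly what a TLS library performs for you by default. A sketch using Python’s standard ssl module (the hostname and port in the commented-out part are placeholders):

```python
import ssl

# The default context verifies the server's certificate chain against the
# platform trust database and checks the certificate against the hostname.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# Actually connecting would then look like this (not executed here,
# since it requires network access):
# import socket
# with socket.create_connection(('example.com', 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname='example.com') as tls:
#         print(tls.version())
```

Disabling either of those two settings reintroduces the "Steve trusts Joe’s key" problem from the illustrations, which is why doing so is almost always a mistake.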

Both the client and server will separately derive what’s called the master secret from the premaster secret combined with the client and server random data. The master secret is used by both client and server to derive the symmetric keys and hash keys used for the remainder of the conversation. While these are all shared, it turns out that client and server use separate symmetric keys to encrypt the data they send. The keys used to calculate the message MACs are also separate for client and server as well as separate from the encryption keys.

Next, the client will send a Change Cipher Spec message, signaling that everything the client sends from this point on will be protected using the newly negotiated cipher and keys, followed by a Client Finished message. The Client Finished message contains a hash of the entire SSL/TLS handshake as seen by the client using the client hash key. The server will verify that the hash generated by the client matches the hash that the server generates for the conversation using the client’s hash key. If the hashes don’t match then the server knows that the handshake has been manipulated in transit.

If the handshake hash from the client is validated by the server, the server will send a Change Cipher Spec message followed by a Server Finished message. The Server Finished message contains a hash of the entire handshake conversation as seen by the server and hashed using the hash key of the server. The client will generate the hash of the handshake conversation using the server’s hash key and compare it to the hash received from the server. If it doesn’t match then the client will know the handshake was manipulated in transit.

At this point the handshake is completed and application data can be sent and received according to whatever application protocol is in use. Any message that is sent is encrypted using the symmetric key of the sender and coupled with a MAC generated using the message combined with the hash key of the sender. Upon receipt, the message is decrypted using the symmetric key of the sender and the MAC is validated using the hash key of the sender. Remember that both sides have their own symmetric and hash keys but all four keys are possessed by both sides.

The client can validate that it is communicating with the intended server because it validated the server’s certificate. Both client and server can be certain of the integrity of the data they receive by verifying the MAC that accompanies each message. All data sent and received following the handshake is encrypted using keys derived from the premaster secret which was shared by the client with the server using the server’s public key. These components have, respectively, provided the SSL/TLS conversation with endpoint authentication, message integrity, and secrecy.

But wait, there are two endpoints involved in any TCP conversation and only one of them was authenticated with a certificate. In the example I’ve just covered, only the server provided a certificate. In fact, SSL/TLS has a facility for the server to demand during the handshake that the client also provide a certificate. While the authenticity of the server is always verified since the Server Certificate message is a required part of the handshake, the request by the server for a client certificate is optional. Most of the time, this facility is not used. This is because clients are typically authenticated using a password at the application layer after the SSL/TLS conversation is already established. If everyone could be authenticated using certificates all the time, the world would be a better place. Unfortunately, key management is somewhat complicated. For this reason, client certificates are most often used for server to server communication.

Programming

Erlang n-squared gen_server

One of the nice things about Erlang is its amazing performance. However, in certain circumstances its performance is surprisingly poor.

As I pointed out in my previous post, work is done in an Erlang application by Erlang processes and each process has an input queue of messages.

The standard way of using Erlang is with OTP. OTP is an acronym that stands for Open Telecom Platform. OTP is a framework and set of support libraries that handle general problems like spawning supervised processes and building whole applications with them as well as a wide variety of general purpose support modules.

Typically processes in an OTP application that do most of the work will implement the gen_server behavior. In general, a gen_server process will wait for a message to arrive in its message queue and then process it before waiting for another. If in between beginning to handle a message and finishing, two or more messages arrive in the process message queue, the process will immediately begin processing the next one at the front of the queue.

Pulling the next message from the front of the queue is very fast. More importantly, it’s an O(1) time complexity operation with respect to the number of messages waiting to be processed. It’s done with syntax that looks like this:

receive
    Message -> handle_message(Message)
end

In this example, the first message in the queue will be removed from the queue and then bound to the variable Message and then it will be passed as a parameter to the function handle_message/1.

However, Erlang has a facility that allows messages to be processed out of order. This is done using an Erlang mechanism called selective receive. Selective receive will scan the message queue of a process from the front to the back looking for a message matching a particular pattern. If no match is found, a selective receive will wait for a matching message to be added to the end. Selective receive is done using syntax like this:

receive
    some_message -> got_the_message_i_was_looking_for()
end

In this code snippet, the message queue will be scanned for a message matching the single atom some_message, waiting for it if necessary, and when found, it will be removed from the queue and the function got_the_message_i_was_looking_for/0 will be called.

The problem here is that while pulling a message from the front of the queue has O(1) time complexity, using selective receive has O(N) time complexity with respect to the number of messages waiting in the queue to be processed. This is fine if your gen_server process doesn’t handle a lot of messages or doesn’t make use of selective receive. However, if your process is a high throughput server process, the use of selective receive can be a big problem.
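The difference is easy to picture with a toy mailbox sketched in Python (this is my own model, not Erlang semantics): popping the front is O(1), but a selective receive must scan every queued message looking for a match.

```python
from collections import deque

def selective_receive(mailbox: deque, pattern):
    """Scan the mailbox front-to-back for the first message matching
    `pattern`, remove it, and return it. This walks up to N queued
    messages -- O(N) -- unlike popleft(), which is O(1)."""
    for i, msg in enumerate(mailbox):
        if msg == pattern:
            del mailbox[i]
            return msg
    return None  # a real Erlang selective receive would block and wait here

# A backlog of ordinary work with the wanted reply at the very back:
mailbox = deque(['work'] * 100000 + ['udp_reply'])
print(selective_receive(mailbox, 'udp_reply'))  # udp_reply
print(len(mailbox))  # 100000
```

If every handled message triggers one such scan over a backlog of N messages, the total work is N scans of up to N messages each, which is where the O(N^2) behavior discussed below comes from.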

The most common Erlang facility that would make use of selective receive would be calling gen_server:call/2 or gen_server:call/3. These functions make synchronous calls to other gen_server processes. The way this is implemented is by sending a message to the other process (which is normally asynchronous) and then waiting for a particular pattern of response message using selective receive. Regardless of the time complexity of selective receive, it’s generally not advisable to wait synchronously for work to be done in a high throughput server process, so this usually isn’t an issue.

The real problem is typically more subtle. This is because a subset of OTP library calls are implemented in terms of selective receive. For example, the OTP library function for sending a packet to a UDP port is implemented by sending a message with the payload to an Erlang port followed by a selective receive to get the result. If, for example, your application sends UDP packets to localhost port 514 to log messages to syslog, you might assume that you could do that directly from a high throughput server process. If you were to do that, your application will probably work fine most of the time. However, if your application ever has a spike in workload or a stall in processing that causes your high throughput server process to get a bit behind, it may take a long time to catch up. If an Erlang process were to have N messages in its message queue and the processing of each message required sending a UDP packet then the O(N) nature of selective receive means that processing N messages has O(N^2) time complexity.

If an Erlang process is continuing to receive more messages at a constant rate, it’s possible for it to get far enough behind that it takes more time to process a message than the average time between messages arriving. In this case, it will never get caught up. Since each message in a process’s message queue takes memory, the accumulation of messages will cause the Erlang VM to eventually use all available RAM followed by increasingly severe swapping.

Here’s a simple demonstration of the problem. The following module definition will send N messages to itself, handle each by sending a UDP packet, and print the length of time it takes to drain its message queue.

-module(time_udp).

-export([start/1]).

start(N) ->
    {ok, Socket} = gen_udp:open(0),
    Seq = lists:seq(1, N),
    lists:foreach(fun(_I) -> self() ! message end, Seq),
    Start = erlang:now(),
    lists:foreach(fun(_I) ->
            receive _Message -> gen_udp:send(Socket, {127, 0, 0, 1}, 1234, <<"message">>) end
        end, Seq),
    End = erlang:now(),
    io:format("processed ~p messages in ~p seconds~n", [N, time_diff(Start, End)]).

time_diff({StartMega, StartSecs, StartMicro}, {EndMega, EndSecs, EndMicro}) ->
    ((EndMega - StartMega) * 1000000.0) + (EndSecs - StartSecs) + ((EndMicro - StartMicro) / 1000000.0).

Now calling start/1 with increasing values of N will illustrate the quadratic relationship between N and the time it takes to send N UDP packets. If the relationship were linear, doubling N should roughly double the time. Instead, as the following output shows, doubling N roughly multiplies the time by 4 which is exactly what would be expected if the relationship were quadratic.

$ erl
Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:4:4] [async-threads:10] [kernel-poll:false]

Eshell V5.10.4  (abort with ^G)
1> time_udp:start(10000).
processed 10000 messages in 0.343503 seconds
ok
2> time_udp:start(20000).
processed 20000 messages in 1.222438 seconds
ok
3> time_udp:start(40000).
processed 40000 messages in 4.574205 seconds
ok
4> time_udp:start(80000).
processed 80000 messages in 17.498623 seconds
ok
5>

If sending a message to a UDP socket were implemented with an Erlang built-in function rather than a selective receive, this would not be a problem. Without writing your own native Erlang module, there is no way to avoid n-squared time complexity when sending UDP packets while using gen_server.*

So how do you write an Erlang application that doesn’t suffer from this problem and still use OTP? The answer is gen_server2. gen_server2 is actually a fork of the OTP gen_server module but with some significant modifications. The objective of the modifications is to minimize the time spent scanning the message queue when doing a selective receive. It accomplishes this by first draining the message queue into local memory before handling the first message in the queue the same way that gen_server would have done. By doing this, the cost of scanning the message queue when doing a selective receive is limited to any messages that have arrived in the message queue since handling of the current message started.
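The drain-first idea can be sketched in Python (again a toy mailbox of my own devising, not real gen_server2 internals):

```python
from collections import deque
import queue

class DrainingServer:
    """Toy illustration of the gen_server2 approach: drain the shared
    mailbox into a process-local buffer first, so any later selective
    scan only walks messages that arrived after draining, not the
    whole backlog."""
    def __init__(self):
        self.mailbox = queue.Queue()  # shared, like an Erlang message queue
        self.buffer = deque()         # local memory, cheap O(1) pops

    def drain(self):
        # Move everything currently queued into local memory.
        while True:
            try:
                self.buffer.append(self.mailbox.get_nowait())
            except queue.Empty:
                break

    def next_message(self):
        if not self.buffer:
            self.drain()
        return self.buffer.popleft() if self.buffer else None

srv = DrainingServer()
for i in range(3):
    srv.mailbox.put(('work', i))
print(srv.next_message())  # ('work', 0)
```

A selective receive performed while handling `('work', 0)` would now scan only the (usually short) shared mailbox, not the buffered backlog, which is what bounds the cost.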

While gen_server2 can solve the n-squared problem caused by a large message backlog in the gen_server processes you’ve written for your application, it will not eliminate the problem entirely for any OTP application. The reason is that the same problem exists in all supervisor processes using the OTP supervisor behavior. For very busy supervisors, the problem can be severe since the protocol for a supervisor starting a new child process involves a selective receive in the supervisor to get the result of the child process’s initialization. Additionally, the Erlang VM will automatically start a number of support processes that are implemented using gen_server.

One such system process that is easy to overload is the error_logger process. Generally applications don’t block waiting for the error_logger process to do work, so an accumulation of messages in the error_logger process just causes increased memory and CPU utilization until it catches up (assuming it does so).

You might think that if your application doesn’t send UDP packets, it’s safe from this problem (other than the supervisor and error_logger) so you don’t need gen_server2. While I’ve tracked down this problem for certain to exist within the gen_udp implementation, I strongly suspect that the same issue exists within other OTP library calls, though I’ve not specifically identified any. Since gen_server2 behaves identically to gen_server under normal circumstances and strictly better than gen_server under abnormal but not necessarily unusual circumstances, I strongly recommend using gen_server2 rather than gen_server in your Erlang applications.

* It is possible to not use the gen_udp module to send UDP messages and handle the result message asynchronously, avoiding the selective receive performed in the gen_udp implementation. However, doing so would eliminate the encapsulation within the gen_udp implementation of the format of the result message. It’s possible to do this, but not necessarily a good idea.

Programming

Some surprising Erlang behavior

Erlang is in many ways a fantastic language. Its syntax is, at first, a little foreign to someone like myself coming from a background in C and C++. The language and the runtime environment have some really nice qualities, though, and when I began working with it, I quickly acquired an appreciation for it. I won’t get too deep into the details of the language and why I like it, but I will start off with a couple of key features relevant to what follows.

First, work is done in an Erlang application by Erlang processes. Each process is a lightweight thread and has an identifier called a pid. Each process has an input queue of messages. An Erlang process generally operates by receiving a message from its message queue, doing some work to process the message, and then waiting for another message to arrive in its queue. Some of the work that a process may do might involve generating and sending messages to another process using the other process’ pid. Under normal circumstances, sending a message from one process to another process is asynchronous, meaning the send operation does not block waiting for the receiving process to handle the message or even necessarily for it to be added to the queue of the receiving process.
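As a minimal illustration of this model (the module and function names here are my own, not part of the examples later in this post):

```erlang
-module(echo).
-export([demo/0, loop/0]).

%% A process that waits for {From, Msg} and sends Msg back to From.
loop() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},
            loop()
    end.

%% Spawn the echo process, send it a message, then wait for the reply.
demo() ->
    Pid = spawn(?MODULE, loop, []),  % Pid identifies the new process
    Pid ! {self(), hello},           % the send itself is asynchronous
    receive
        {Pid, Reply} -> Reply
    end.
```

Calling echo:demo() from a shell returns hello. Note that the ! send returns immediately; only the explicit receive blocks.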

Second, an Erlang node is an instance of the Erlang interpreter, an OS-level process. Erlang nodes can be easily joined together so that a single Erlang application can span many physical hosts. Once a node has joined an Erlang cluster, it becomes trivial for processes in remote hosts to communicate with one another. Sending a message to a remote process is done with the pid of the remote process, just like a local process. This allows for Location Transparency, one of the nice features of Erlang.

Erlang has gotten a reputation as a platform that is very stable. Joe Armstrong, one of the original authors of Erlang, has famously claimed that the Ericsson AXD301 switch, developed using Erlang, has achieved NINE nines of reliability.

Having heard this, I was extremely surprised to identify a case where an Erlang application performs not just poorly but literally comes to a screeching halt. The problem occurs when one of the nodes in an Erlang cluster suddenly goes network silent. This can occur for a variety of reasons: the node may have kernel panicked, it may have started swapping heavily, or a network cable connecting one of the physical hosts in the cluster may have been unplugged. When this condition occurs, messages sent to processes on the node which has gone dark are buffered up to a limit, but once the buffer fills up, sending messages to processes on that node goes from being asynchronous to being synchronous. Erlang processes not sending messages to the failed node continue to do work as normal, but any process sending a message to the failed node will halt.

The Erlang runtime will monitor the other nodes to which the local node is attached. If a host where one of the nodes is running goes network silent, then after some period of time (defaulting to about 1 minute) the other nodes will decide that the network-silent node is down. The length of time before a non-responsive node is considered down is tunable using an Erlang kernel parameter, net_ticktime. You might think that the processes waiting to send messages to processes on the down node would get unblocked when the target node is considered down, but that (mostly) doesn’t happen. It turns out that once the target node is considered down, blocked senders will get unblocked once every 7 seconds, likely tunable using the Erlang kernel parameter net_setuptime.
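For example, net_ticktime can be lowered when starting a node (the 20-second value here is arbitrary, and all nodes in a cluster should agree on it):

```shell
# Start a node with a shorter net_ticktime (in seconds) so that a
# network-silent peer is declared down sooner than the ~60s default.
erl -sname mynode -setcookie monster -kernel net_ticktime 20
```

From a running node, net_kernel:get_net_ticktime() reports the value in effect.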

I’ve come up with a fairly easy way to demonstrate this problem given two Linux VMs. Let’s call them faucet-vm and sink-vm. Both need Erlang installed.

On the sink-vm host, create sink.erl with these contents:

-module(sink).
-export([start/0, entry/0]).

start() ->
    erlang:register(?MODULE, erlang:spawn(?MODULE, entry, [])).

entry() ->
    loop(0).

loop(X) ->
    receive ping ->
        io:format("~p: ping ~p~n", [timeasfloat(), X]),
        loop(X + 1)
    end.

timeasfloat() ->
    {Mega, Sec, Micro} = os:timestamp(),
    ((Mega * 1000000) + Sec + (Micro * 0.000001)).

Now compile and start the sink process:

$ erlc sink.erl
$ erl -sname sink -setcookie monster
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:4:4] [async-threads:0] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)
(sink@sink-vm)1> sink:start().
true
(sink@sink-vm)2>

On the faucet-vm host, create faucet.erl with these contents:

-module(faucet).
-export([start/0, entry/0]).

start() ->
    erlang:spawn(?MODULE, entry, []).

entry() ->
    Sink = rpc:call('sink@sink-vm', erlang, whereis, [sink]),
    loop(Sink, 0).

loop(Sink, X) ->
    io:format("~p: sending ping ~p~n", [timeasfloat(), X]),
    Sink ! ping,
    loop(Sink, X + 1).

timeasfloat() ->
    {Mega, Sec, Micro} = os:timestamp(),
    ((Mega * 1000000) + Sec + (Micro * 0.000001)).

Now compile and start the faucet process:

$ erlc faucet.erl
$ erl -sname faucet -setcookie monster
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:2:2] [async-threads:0] [hipe] [kernel-poll:false]

Eshell V5.9.1  (abort with ^G)
(faucet@faucet-vm)1> faucet:start().

This spews horrific amounts of console output on both faucet-vm and sink-vm consoles. If you don’t see tons of console output on both sides, you’ve probably done something wrong.

In a separate console on sink-vm, use iptables to block all IP traffic between sink-vm and faucet-vm:

# iptables -A INPUT -s faucet-vm -j DROP && iptables -A OUTPUT -d faucet-vm -j DROP

The output of the two processes will halt shortly after the traffic between them is blocked. Which one stops first depends on whether the sink process was able to keep up with the faucet process before the traffic was blocked. About 60 seconds after halting, each node should report that the other node is DOWN. The number of messages the faucet claims to have sent will be higher than the number the sink has received; the difference is the number of messages the faucet had buffered to send to the sink before blocking. After the faucet reports that the sink is DOWN, it will report one sent message every 7 seconds.

Here’s the tail of the output of the sink process when I tried it:

1440398109.640845: ping 45404
1440398109.640864: ping 45405
1440398109.640881: ping 45406
1440398109.640899: ping 45407
1440398109.640917: ping 45408
1440398109.640935: ping 45409
1440398109.640953: ping 45410
1440398109.640971: ping 45411
1440398109.640988: ping 45412
1440398109.641006: ping 45413
1440398109.641022: ping 45414
1440398109.64104: ping 45415
1440398109.641057: ping 45416
(sink@sink-vm)2>
=ERROR REPORT==== 23-Aug-2015::23:36:03 ===
** Node faucet@faucet-vm not responding **
** Removing (timedout) connection **

And here’s the tail of the output of the faucet process when I tried it:

1440398099.86504: sending ping 63339
1440398099.865061: sending ping 63340
1440398099.865107: sending ping 63341
1440398099.865159: sending ping 63342
1440398099.865199: sending ping 63343
1440398099.865224: sending ping 63344
1440398099.865246: sending ping 63345
1440398099.865293: sending ping 63346
1440398099.865328: sending ping 63347
1440398099.865364: sending ping 63348
1440398157.560028: sending ping 63349
(faucet@faucet-vm)2>
=ERROR REPORT==== 23-Aug-2015::23:35:57 ===
** Node 'sink@sink-vm' not responding **
** Removing (timedout) connection **
1440398164.566233: sending ping 63350
1440398171.574962: sending ping 63351
1440398178.582127: sending ping 63352
(faucet@faucet-vm)2>

The implications of this behavior are rather severe. If you have an Erlang application running on a cluster of N nodes, you can’t assume that if 1 of the nodes goes network silent that you will only lose 1/N total capacity of the cluster. If there are critical processes attempting to communicate with processes on the failed node then they will eventually be unable to do any work at all, including work that is unrelated to the failed node.

I haven’t yet tried this, but it is likely possible to solve this problem by routing all messages to remote processes through a proxy process. Each remote node can have a registered local proxy, and the sender would find the appropriate proxy for a given remote process using erlang:node/1. Each node proxy could be monitored by a separate process that detects when the proxy has become unresponsive because the remote node has gone down. When hung, the proxy can either be restarted to keep its message queue from growing too large, or it can simply be killed until the remote node becomes available again. At the very least, Location Transparency is lost.
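A rough sketch of that proxy idea, with all names hypothetical and the monitoring/restart machinery omitted:

```erlang
-module(node_proxy).
-export([start/1, send/2, loop/1]).

%% One proxy per remote node, registered locally under a name
%% derived from the node name (a hypothetical naming scheme).
start(Node) ->
    erlang:register(proxy_name(Node), erlang:spawn(?MODULE, loop, [Node])).

%% Senders route through the local proxy instead of sending directly;
%% erlang:node/1 selects the proxy for the target pid's node.
send(Pid, Msg) ->
    proxy_name(erlang:node(Pid)) ! {forward, Pid, Msg},
    ok.

%% Only the proxy blocks if Node goes dark; its senders stay unblocked.
loop(Node) ->
    receive
        {forward, Pid, Msg} ->
            Pid ! Msg,
            loop(Node)
    end.

proxy_name(Node) ->
    erlang:list_to_atom("proxy_" ++ erlang:atom_to_list(Node)).
```

A separate monitor (not shown) would watch each proxy’s message queue length and kill or restart it when the remote node is unreachable.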

An astute reader might notice that my use of iptables to simulate a network silent node is imperfect. Specifically, iptables will not block ARP traffic. It’s conceivable that ARP traffic might interfere with the experiment. It turns out it does not. I’ve specifically tested this by also blocking ARP traffic using arptables and obtained identical results. I left ARP and arptables out of the example for simplicity.

Additionally, I’ve carefully experimented with various iptables rules to simulate nodes becoming unavailable in other ways with similar results. Specifically, using a REJECT rule for the INPUT chain rather than DROP results in an ICMP destination port unreachable packet being returned to the faucet-vm host but the faucet process will still block. Using REJECT --reject-with icmp-host-unreachable will generate ICMP destination host unreachable packets returned to the faucet-vm host but this will also leave the faucet process blocked.

It turns out the only way to get the sending process (mostly) unblocked is if the sending node receives a TCP RST packet from the target host. I tested this by using REJECT --reject-with tcp-reset and allowing outbound RST packets. This effectively simulates a node whose host is still running but where Erlang is not. The reason this only mostly unblocks the sending process is that the rate of sending is still significantly lower than if the target node is up. I measured the rate of pings “sent” by the faucet under these conditions to be roughly 1/10th the rate when the sink node is up and traffic is not blocked.
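For reference, the REJECT variants described above look like this as iptables rules on sink-vm (hostnames as in the example):

```shell
# Variant 1: reject with ICMP host unreachable -- the faucet
# process still blocks.
iptables -A INPUT -s faucet-vm -j REJECT --reject-with icmp-host-unreachable

# Variant 2: reject TCP with a RST -- the only case that (mostly)
# unblocks the sender; simulates a running host with no Erlang node.
iptables -A INPUT -s faucet-vm -p tcp -j REJECT --reject-with tcp-reset
```

For the tcp-reset variant, the generated RST packets must be allowed outbound for the faucet-vm host to actually receive them.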

One might think this problem is just a bug, and if it were a bug, it might be limited to a particular version. After all, the example output above is from R15B01. If this behavior is a bug, it’s been around for a while and it hasn’t been fixed yet. I’ve reproduced it with at least three different versions; the earliest was R15B01 and the newest is the current version, OTP 18.0.

The conclusion to be drawn is that when designing a distributed application using Erlang, you have to be aware of this behavior and either work around it one way or another or else accept that your application is unlikely to be highly available.

Edit:

After spending more time investigating this problem and work-arounds, I can now say for sure that the solution I suggested above can be made to work. However, it seems that there is a simpler solution. Instead of using the ! operator or erlang:send/2, it is possible to completely mitigate this problem by using erlang:send/3, passing [nosuspend, noconnect] as the options parameter.

If only nosuspend is used, the problem is mitigated up until the point where the remote node is identified as DOWN. Once that happens, it’s back to one message per 7 seconds until communication with the remote host is re-established. This suggests that the 7 second delay is the timeout for attempting to connect to the remote node once it has been removed from the node list. Also using noconnect avoids blocking for 7 seconds on each send after the remote node has been determined to be down, which solves the problem in question but potentially causes others: using noconnect means you have to have some other mechanism for re-establishing communication with remote nodes once they become available again. This isn’t necessarily challenging, but it needs to be considered when designing your application.
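A sketch of a wrapper around erlang:send/3 along these lines (the module name and the policy of simply reporting a drop are my own; a real application would decide what to do with unsendable messages):

```erlang
-module(safe_send).
-export([send/2]).

%% Send Msg to Pid without ever blocking on a dark node.
%% Returns ok if the message was handed off, dropped otherwise.
send(Pid, Msg) ->
    case erlang:send(Pid, Msg, [nosuspend, noconnect]) of
        ok        -> ok;
        nosuspend -> dropped;  % distribution buffer to the node is full
        noconnect -> dropped   % no connection; reconnecting is our job
    end.
```

In the faucet example, replacing "Sink ! ping" with safe_send:send(Sink, ping) would keep the faucet running at full speed while the sink node is dark, at the cost of silently dropping the pings that could not be delivered.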