
Proving 50-Year-Old Sorting Networks Optimal: Part 2

Posted on September 1st 2021, tagged sorting-networks

In the previous post I gave an overview of sorting networks, the problem of finding minimal-size sorting networks and the state of the art at the time I started looking into this. Here I will begin describing my method for finding minimal-size sorting networks. With that method I was able to prove for the first time that the smallest known 11- and 12-channel sorting networks are in fact of minimal size.

I wrote up a detailed description including proofs as part of my paper on this. For this blog, I want to give a more concise overview that focuses on the main ideas.

If you haven’t read the previous post yet, I recommend that you do so before continuing here, as I will build on the method by M. Codish, L. Cruz-Filipe, M. Frank, P. Schneider-Kamp [1] that I described there.

Partial Sorting Networks

Previously I mentioned that I came up with my method by combining a computer search—as used before—with the idea behind Van Voorhis’s bound, which so far had only been used separately.

Before taking a detailed look at Van Voorhis’s bound, I want to revisit the generate-and-prune approach by Codish et al. [1] and present some of the sorting network properties it uses—but in a slightly different way. Initially, I did this to gain a better understanding of how generate-and-prune behaves, but later it turned out to be exactly what I needed to incorporate the idea behind Van Voorhis’s bound into a similar search procedure.

For the generate-and-prune approach, we start building a sorting network from the empty comparator network by appending one comparator at a time. For each comparator network $c$ we encounter, we then use the set of possible zero-one output sequences $\text{outputs}(c)$ to compare candidates and exclude some of them. At no point in the algorithm, though, do we care about what $c$ looks like apart from $\text{outputs}(c)$.1

I thought it would be a good idea to make this explicit. For that I introduced the notion of a partial sorting network on a set of zero-one sequences. As we will mostly work with zero-one sequences, I will use the term sequence set to refer to a set of such sequences, all having the same length.

Given a sequence set $X$, a comparator network $c$ is a partial sorting network on $X$ if and only if for every sequence $x \in X$, the network $c$ produces a sorted output when applied to $x$.2 For the sequence set $B^n$ of all length-$n$ sequences, a partial sorting network on $B^n$ is exactly the same as an $n$-channel sorting network. On the other end, any comparator network is a partial sorting network on the empty set $\emptyset$.

As an example of a partial sorting network between those extremes, consider this network:

[Figure: the 4-channel comparator network $[1,3]\,[2,4]\,[1,2]$]

The sequence $(0, 0, 1, 0)$ will pass through that network unchanged and is not sorted, so we don’t have a sorting network. But there is a set of 12 sequences, depicted below, that all will be sorted by this network, making it a partial sorting network on that sequence set.

[Figure: a set of 12 length-4 input sequences and the intermediate sequence sets after applying each comparator of the above network, ending in the set of 5 sorted length-4 sequences]
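
To make the definition and the example above concrete, here is a minimal Python sketch; the representation (zero-one tuples) and the helper names (`apply_comparator`, `is_partial_sorting_network`, and so on) are my own choices for illustration, not anything from the paper:

```python
def apply_comparator(seq, comparator):
    """Apply a single comparator [i, j] (1-based channels, i < j) to a sequence."""
    i, j = comparator
    seq = list(seq)
    if seq[i - 1] > seq[j - 1]:
        seq[i - 1], seq[j - 1] = seq[j - 1], seq[i - 1]
    return tuple(seq)

def apply_network(seq, network):
    """Apply a comparator network (a list of comparators) to a single sequence."""
    for comparator in network:
        seq = apply_comparator(seq, comparator)
    return seq

def is_sorted(seq):
    """Check whether a zero-one sequence is sorted (non-decreasing)."""
    return all(a <= b for a, b in zip(seq, seq[1:]))

def is_partial_sorting_network(network, X):
    """Check whether `network` is a partial sorting network on the sequence set X."""
    return all(is_sorted(apply_network(x, network)) for x in X)

# The network from the figure above is not a sorting network, because
# (0, 0, 1, 0) passes through it unchanged and is not sorted.
network = [(1, 3), (2, 4), (1, 2)]
assert apply_network((0, 0, 1, 0), network) == (0, 0, 1, 0)
assert not is_partial_sorting_network(network, {(0, 0, 1, 0)})
```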

We can also define the minimal size for partial sorting networks. Where $s(n)$ is the size of the smallest $n$-channel sorting network, we use $s(X)$ to denote the size of the smallest partial sorting network on $X$. For example $s(\emptyset) = 0$ and $s(B^n) = s(n)$. We use the same syntax $s(c)$ to refer to the size of a specific sorting network $c$.

So when the argument is an integer, $s(n)$ is the minimal size among $n$-channel sorting networks, when the argument is a sequence set, $s(X)$ is the minimal size among partial sorting networks on $X$, and when the argument is a sorting network, $s(c)$ is its size.

If we want to check whether a comparator network is a partial sorting network or want to construct partial sorting networks incrementally, it is again useful to compute the sequence set of possible outputs. Now this depends on the sequence set of inputs we consider. Given the inputs $X$ and a comparator network $c$, I use $X^c$ to denote the sequence set of possible outputs of $c$ when restricting the input to the sequences in $X$.

That choice of notation might seem a bit surprising at first, but it mirrors the fact that taking the outputs of one comparator network as inputs for another is the same as concatenating both networks. As we write the operations of a network from left to right, we have $(X^c)^d = X^{cd}$ for all sequence sets $X$ and all comparator networks $c$ and $d$.3
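
Continuing the sketch from above (and reusing its `apply_network` helper), computing $X^c$ is a one-liner, and the identity $(X^c)^d = X^{cd}$ can be spot-checked on a small, arbitrarily chosen input set:

```python
def outputs(X, network):
    """Compute X^c: the set of outputs of `network` on the input sequence set X."""
    return {apply_network(x, network) for x in X}

# Spot-check (X^c)^d == X^{cd} by splitting the example network into c and d.
c, d = [(1, 3), (2, 4)], [(1, 2)]
X = {(0, 0, 1, 0), (1, 0, 1, 0), (0, 1, 0, 1)}
assert outputs(outputs(X, c), d) == outputs(X, c + d)
```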

A Useful Recurrence

After defining partial sorting networks and extending our definition of minimal size to them, we can give a very useful recurrence for $s(X)$, the minimal size of a partial sorting network on the sequence set $X$. The following equation holds as long as $s(X) > 0$, i.e. as long as there is no network using only unconditional exchanges that can sort all sequences in $X$.

$$ s(X) = 1 + \min_{1 \le i < j \le n} s(X^{[i, j]}) $$

Here $[i, j]$ ranges over all comparators that can be present in a standard form network. In the previous post we saw that any sorting network can be converted into standard form without changing the number of comparators. The same is almost true for any comparator network. We can orient all comparators $[i, j]$ so that $i < j$, but we cannot always get rid of all unconditional exchanges. What we can do, using the same rewrite rules as before, is to make sure that all comparators precede all unconditional exchanges. We’re going to relax our definition of standard form to allow such trailing unconditional exchanges as long as there is no equivalent comparator network without them. This doesn’t change what standard form means when considering sorting networks, but extends its definition to cover all comparator networks and thus also partial sorting networks.

Now let $c$ be a standard form, minimal size partial sorting network on $X$. As conversion to standard form doesn’t change the size, there is such a minimal size network in standard form. We are still assuming $s(X) > 0$, so $c$ has at least one comparator. This allows us to write $c$ as $[k, l]c'$, where $[k, l]$ is the first comparator of $c$ and $c'$ is the remaining suffix.

We also get $s(X) = s(c) = 1 + s(c') = 1 + s(X^{[k, l]})$. To see that this holds, we need to convince ourselves that $c'$ is a minimal size partial sorting network on $X^{[k, l]}$. It is a partial sorting network on $X^{[k, l]}$, as we know that $X^c$ is sorted, so $X^{[k, l]c'} = (X^{[k, l]})^{c'}$ is also sorted. If there were a smaller partial sorting network $d$ on $X^{[k, l]}$, we could prepend $[k, l]$ and construct a partial sorting network on $X$ that is smaller than $c$; but $c$ is of minimal size for $X$, so $c'$ must be of minimal size for $X^{[k, l]}$.

We also have $s(X) \le 1 + s(X^{[i, j]})$ for all $i, j$. Again, given a minimal size partial sorting network $d$ on $X^{[i, j]}$, we can prepend $[i, j]$ and construct a partial sorting network on $X$ of size $1 + s(X^{[i, j]})$, so $s(X)$ cannot be larger than that. This alone gives us $s(X) \le 1 + \min_{1 \le i < j \le n} s(X^{[i, j]})$. If we then consider that among the choices for $[i, j]$ we will find $[k, l]$, for which we have $s(X) = 1 + s(X^{[k, l]})$, we can turn the inequality into an equality and get the recurrence from above.

Recursive Computation

Given such a recurrence, a natural question to ask is whether we can use it to compute $s(X)$. For now we will ignore any practical concerns like run-time or memory consumption, but even then, that recurrence alone doesn’t quite allow us to compute $s(X)$.

There are two obstacles that we need to overcome. First, the recurrence assumes $s(X) > 0$, so we need a test for that. By definition we have $s(X) = 0$ exactly when $X$ can be sorted using only unconditional exchanges, or equivalently by applying a fixed permutation to all sequences in $X$. If that is the case, we can find such a permutation by defining the position weight $w(i)$ as the number of sequences $x \in X$ for which $x_i = 1$, i.e. the number of sequences that have a one in the given position. Any permutation $\sigma$ which rearranges the positions such that their weights $w(i)$ are non-decreasing will sort all sequences in a set $X$ with $s(X) = 0$. We can apply this procedure to any set $X$ and then check whether the result indeed contains only sorted sequences.4
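
This test fits in a few lines. The following self-contained sketch (with a hypothetical name `is_zero_cost`) computes the position weights, permutes every sequence so that the weights are non-decreasing, and checks the result:

```python
def is_zero_cost(X):
    """Check whether s(X) = 0, i.e. whether a single fixed permutation of the
    positions (unconditional exchanges only) sorts every sequence in X."""
    if not X:
        return True
    n = len(next(iter(X)))
    # w(i): number of sequences in X with a one at position i.
    weights = [sum(x[i] for x in X) for i in range(n)]
    # A permutation that orders the positions by non-decreasing weight.
    order = sorted(range(n), key=lambda i: weights[i])
    # Apply the permutation to every sequence and check that everything is sorted.
    permuted = [tuple(x[i] for i in order) for x in X]
    return all(all(a <= b for a, b in zip(x, x[1:])) for x in permuted)

assert is_zero_cost({(1, 0, 0, 0), (1, 1, 0, 0)})       # one permutation sorts both
assert not is_zero_cost({(0, 0, 1, 0), (0, 1, 0, 0)})   # no single permutation sorts both
```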

This gives us a base-case for a recursive computation, but there is a second obstacle: we might run into a cycle when applying the recurrence, causing infinite recursion.5 Whenever the recursion ends up in a cycle, we could remove the comparators on that cycle and still get the same set of outputs. This means anything that contains a cycle cannot have minimal size, so excluding those choices from the recurrence would not change the result.

It turns out that we can avoid cycles by excluding so-called redundant comparators.6 A comparator is redundant with respect to a sequence set $X$ if applying it to $X$ either leaves $X$ unchanged or is equivalent to applying an unconditional exchange to $X$. That definition gives us a direct argument for why such a comparator cannot be part of a minimal size partial sorting network: we can either remove it or replace it with the equivalent exchange, reducing the size by one in either case. I’m calling the comparators $[i, j]$ with $i < j$ that are not redundant with respect to $X$ the successors of $X$ and denote them by $\mathcal{U}(X)$.

This gives us an improved recurrence

$$ s(X) = 1 + \min_{[i, j] \in\, \mathcal{U}(X)} s(X^{[i, j]}). $$

To me it seemed intuitively true that this is enough to avoid any cycles and this was also backed up by experimental evidence from implementing this. My intuition was that any non-redundant comparator affects at least one pair of sequences in such a way that they both get closer to being in the same order. For a more formal statement and proof of this see Definition 39 and Lemma 40 in my paper.

This removes the last obstacle and—in theory—we can now compute $s(X)$ via

$$ s(X) = \begin{cases} 1 + \displaystyle\min_{[i, j] \in\, \mathcal{U}(X)} s(X^{[i, j]}) & \text{if } s(X) > 0, \\ 0 & \text{otherwise.} \end{cases} $$
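
Put together, a deliberately naive, self-contained Python sketch of this computation might look as follows. The names are mine, and the non-redundancy test encodes my reading of the definition above (comparing $X^{[i,j]}$ against $X$ and against the unconditional exchange of channels $i$ and $j$ applied to $X$); termination relies on the cycle-freeness argument referenced above.

```python
from itertools import combinations

def apply_to_set(X, comparator):
    """Compute X^[i, j]: apply a single comparator to every sequence in X."""
    i, j = comparator
    result = set()
    for x in X:
        if x[i - 1] > x[j - 1]:
            y = list(x)
            y[i - 1], y[j - 1] = y[j - 1], y[i - 1]
            result.add(tuple(y))
        else:
            result.add(x)
    return result

def exchange_set(X, i, j):
    """Unconditionally exchange channels i and j in every sequence of X."""
    def swap(x):
        y = list(x)
        y[i - 1], y[j - 1] = y[j - 1], y[i - 1]
        return tuple(y)
    return {swap(x) for x in X}

def is_zero_cost(X):
    """Check whether s(X) = 0 (same position-weight test as in the sketch above)."""
    if not X:
        return True
    n = len(next(iter(X)))
    weights = [sum(x[i] for x in X) for i in range(n)]
    order = sorted(range(n), key=lambda i: weights[i])
    permuted = [tuple(x[i] for i in order) for x in X]
    return all(all(a <= b for a, b in zip(x, x[1:])) for x in permuted)

def successors(X):
    """The non-redundant comparators U(X): all [i, j] with i < j whose application
    neither leaves X unchanged nor acts like the unconditional exchange of i and j."""
    n = len(next(iter(X)))
    result = []
    for i, j in combinations(range(1, n + 1), 2):
        Y = apply_to_set(X, (i, j))
        if Y != set(X) and Y != exchange_set(X, i, j):
            result.append((i, j))
    return result

def s(X):
    """Minimal size of a partial sorting network on X, computed naively via the recurrence."""
    if is_zero_cost(X):
        return 0
    return 1 + min(s(apply_to_set(X, c)) for c in successors(X))
```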

Outlook

In practice though, using a direct implementation of this recursive formulation is not at all effective at computing $s(X)$. Apart from avoiding redundant comparators, this explores the same tree that we already saw in the last post. The difference is that this does a depth first search instead of a breadth first search, which is the reason why we needed to avoid cycles in the first place. Unlike the breadth first search, a depth first search does not explore candidate networks in order of increasing size, so it cannot terminate as soon as the first complete (partial) sorting network is found, but has to explore the whole tree. Additionally, we already saw that the tree contains a lot of redundant subtrees, as exploited in the prune step of the generate-and-prune method.

So what did we gain from doing this and why did I explore this in the first place? The answer to that is: dynamic programming.

The final recursive formula shows that $s(X)$ can be computed from the values of several $s(Y_0), \ldots, s(Y_k)$, where each $Y_i$ is smaller in some sense. In the context of dynamic programming, a problem with that property is said to have optimal substructure. In general, dynamic programming is a—not very descriptive—umbrella term for exploiting overlap between the subproblems of a problem that has optimal substructure. A well-known application of that idea, arguably the simplest, is memoization, which caches the results of all subproblems computed so far and reuses a cached result whenever the same subproblem is encountered again.
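
As a small illustration of that simplest form, the naive `s` from the sketch above (reusing its `is_zero_cost`, `apply_to_set` and `successors` helpers) turns into a memoized dynamic program by keying a cache on the sequence set, for example like this:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s_memo(X):
    """Memoized s(X); X must be passed as a frozenset of equal-length tuples."""
    if is_zero_cost(X):
        return 0
    return 1 + min(s_memo(frozenset(apply_to_set(X, c))) for c in successors(X))

# Usage: s_memo(frozenset(X)) for any sequence set X given as a set of tuples.
```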

Dynamic programming can be much more though: we can change what it means for subproblems to overlap, we do not have to use depth first search, we can adapt the exploration order depending on already computed subproblems, we can compute sound approximations of subproblems and iteratively refine those, etc.

This gives us a framework that allows us to combine the idea behind Van Voorhis’s bound with some of the ideas behind the pruning done by the generate-and-prune method. Exploring this in more detail will be part of the next posts in this series. I’m not sure how much I will cover on this blog and in what time frame—that depends on how much time I can find for writing—but you can already find all the details in the paper. Also, let me know if there is something specific you would like to see covered.

References

[1]
Michael Codish, Luís Cruz-Filipe, Michael Frank, and Peter Schneider-Kamp. 2016. Sorting nine inputs requires twenty-five comparisons. Journal of Computer and System Sciences 82, 3 (May 2016), 551–563.
[2]
Donald E. Knuth. 1998. Section 5.3.4 "Networks for Sorting". In The Art of Computer Programming: Volume 3: Sorting and Searching (2nd ed.). Addison-Wesley.

  1. We also need to keep track of $c$’s length, but as we generate candidates of stepwise increasing length, that happens implicitly.

  2. In the last post I used uppercase letters to denote sorting networks, following the paper by Codish et al. From this post on I will use lowercase letters, as I make frequent use of both sorting networks and sequence sets and want to distinguish them. Using uppercase letters for sequence sets felt more natural to me.

  3. This also matches the conventions often used in computational group theory where (semi-)group actions are from the right and written as exponentials.

  4. If we are concerned about the performance of these checks, we can also use the fact that any sequence set $X$ with $s(X) = 0$ cannot contain two distinct sequences with the same number of ones—any permutation will leave one of them unsorted. There are only $n+1$ possible values for the number of ones in a sequence, so by the pigeonhole principle, when $|X| > n + 1$ we have two such sequences and thus $s(X) > 0$. Otherwise, in general, we still need to perform the described test, as there are sequence sets with $|X| \le n + 1$ and $s(X) > 0$. Applying additional constraints to the sequence sets we consider allows us to avoid that check even in that case. In the paper I’m doing this by defining a family of well-behaved sequence sets.

  5. We cannot have infinite recursion without cycles, as there are only a finite number of sequence sets of a given sequence length.

  6. This is the same notion as described in exercise 51 of [2], just recast in terms of partial sorting networks.