Encoding Matrix Rank for SAT Solvers

Posted on December 7th 2018, tagged sat

I’m working on a problem where I want to use a SAT solver to check that a property $P(v_1, \ldots, v_n)$ holds for a bunch of vectors $v_1, \ldots, v_n$, but I don’t care about the basis choice. In other words, I want to check whether an arbitrary invertible linear transform $T$ exists so that the transformed vectors have a certain property, i.e. $P(T(v_1), \ldots, T(v_n))$. I solved this by finding an encoding that constrains the rank of a matrix. With that I can simply encode $P(M v_1, \ldots, M v_n)$ where $M$ is a square matrix constrained to have full rank, and which therefore is invertible.

There is nothing particularly novel about my encoding, but there are many ways to approach this so I wanted to share my solution.

I assume that encoding field operations is no problem. Currently I’m working in the finite field $\mathbb{F}_2$, so encoding to propositional logic is trivial. When working in other fields, using an SMT solver might be more convenient, although other finite fields can also be encoded to propositional logic without too much hassle.
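To make the $\mathbb{F}_2$ case concrete: field addition is exclusive or and field multiplication is conjunction, so the arithmetic maps directly onto boolean connectives. A minimal sketch of this mapping (the helper names are mine, not from any particular solver library):

```python
# Over GF(2), field operations are boolean connectives:
# addition is exclusive or, multiplication is conjunction.
# These helper names are illustrative, not a real solver API.

def gf2_add(a: int, b: int) -> int:
    """Field addition in GF(2) (XOR)."""
    return a ^ b

def gf2_mul(a: int, b: int) -> int:
    """Field multiplication in GF(2) (AND)."""
    return a & b

def gf2_dot(u, v):
    """Dot product over GF(2): AND the entries pairwise, XOR the products."""
    acc = 0
    for a, b in zip(u, v):
        acc ^= a & b
    return acc
```

A circuit built from these gates over the unknown matrix entries can then be turned into CNF clauses with the usual Tseitin transformation.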

When we want to check the rank of a matrix by hand, we’re probably going to use Gaussian elimination to transform the matrix into row echelon form (with arbitrary non-zero leading entries). We then get the rank as the number of non-zero rows. The iterative nature of Gaussian elimination, where we are swapping rows and adding multiples of rows to other rows, doesn’t lead to a nice encoding. The naive way to encode this would require a copy of the whole matrix for each step of the algorithm.
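For reference, the by-hand procedure over $\mathbb{F}_2$ looks like this. This is a plain imperative rank computation, not an encoding (the function name is mine); the encoding below has to agree with it:

```python
# Gaussian elimination over GF(2): row additions are XORs.
# This computes the rank directly; it is the reference against
# which a decomposition-based encoding can be checked.

def rank_gf2(rows):
    """Rank of a matrix over GF(2); rows are lists of 0/1 entries."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        # find a pivot row at or below position `rank`
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]  # row swap
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                # add (= XOR) the pivot row to clear this column entry
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank
```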

What we can use instead is the fact that Gaussian elimination effectively computes an LU decomposition of a matrix. After performing Gaussian elimination on a matrix $A$, the result will be a matrix $U$ in row echelon form, so that there are matrices $P, L$ with $PA = LU$, where $P$ is a permutation matrix and $L$ is a lower unitriangular matrix (triangular with the diagonal all ones). The permutation matrix $P$ corresponds to the swapping of rows and the matrix $L$ corresponds to adding multiples of a row to rows below it. While Gaussian elimination may swap rows after already adding multiples of a row to it, it will never move a row $i$ above a row $j$ when a multiple of row $j$ was already added to row $i$. That explains why $L$ is still unitriangular even when we swap rows.

You might have noticed that I ignored the details for non-square matrices until now. I will assume that our matrix is wider than tall. As column rank and row rank are the same for a matrix, this is not a restriction: we can transpose a tall matrix into a wide one. For the $PA = LU$ decomposition of an $m \times n$ matrix $A$ with $m \le n$, the matrices $P$ and $L$ will be $m \times m$ and $U$ will be $m \times n$.

$$\underbrace{\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}}_{P} \cdot \underbrace{\begin{pmatrix} 0 & 2 & 6 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}}_{L} \cdot \underbrace{\begin{pmatrix} 1 & 1 & 1 \\ 0 & 2 & 4 \\ 0 & 0 & 2 \end{pmatrix}}_{U}$$

In general the permutation matrix $P$ is not uniquely determined. In iteration $i$ there might be multiple rows $j \ge i$ with the fewest zeros on the left, so different choices will lead to different LU decompositions:

$$\underbrace{\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{P} \cdot \underbrace{\begin{pmatrix} 0 & 2 & 6 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac{1}{2} & -\frac{1}{2} & 1 \end{pmatrix}}_{L} \cdot \underbrace{\begin{pmatrix} 2 & 4 & 6 \\ 0 & 2 & 6 \\ 0 & 0 & 1 \end{pmatrix}}_{U}$$

For full rank matrices, though, once we fix a $P$, the requirement of $L$ being unitriangular and $U$ being in row echelon form completely determines them. For lower rank matrices, some of the entries of $L$ don’t affect the result, so we need to force them to 0 if we want a unique decomposition.

This is already much nicer to encode. The properties of a matrix being unitriangular, a permutation matrix, or in row echelon form all have straightforward encodings. A matrix product is also easy to encode. Nevertheless, we can improve a bit more: we can get rid of the permutation matrix and obtain a uniquely determined decomposition instead.

To do this we need to relax the row echelon form to something slightly less constrained. Row swaps are performed to get a non-zero entry into the leftmost position possible. Assuming the matrix is full rank, we could just relax the constraint that this non-zero entry is the leftmost among all remaining rows and instead take the leftmost non-zero entry of the current row. We still require that all entries below that non-zero entry are zero, but there might be other non-zero entries below and left of it. We wouldn’t even need to select the leftmost non-zero entry and could instead select any non-zero entry. Choosing the leftmost makes the choice unique and adds a nice symmetry, as we require all entries left of and below it to be zero.

This corresponds to running Gaussian elimination on a suitably column-permuted matrix, so that we never require row swaps. This works fine in the full rank case, but may not work for lower ranks. To get around that issue we simply allow, and skip, all-zero rows anywhere in the matrix. A succinct characterization of the resulting matrices is this: the first non-zero entry of each row is the last non-zero entry of its column. I’m not aware of a name for these matrices; if you know what they are called, please let me know.
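This characterization is easy to check directly. A sketch (the function name is mine; a Python truthiness test stands in for the field’s zero test):

```python
# Check that the first non-zero entry of each row is the last
# non-zero entry of its column. All-zero rows are allowed anywhere.

def is_relaxed_echelon(rows):
    """Check the relaxed row echelon form described above."""
    for i, row in enumerate(rows):
        # column of the row's first non-zero entry, if any
        lead = next((j for j, x in enumerate(row) if x), None)
        if lead is None:
            continue  # all-zero row, allowed anywhere
        # every entry below the leading entry must be zero
        if any(rows[k][lead] for k in range(i + 1, len(rows))):
            return False
    return True
```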

With this we can write any square or wide matrix $A$ as $A = LU'$ where $U'$ is of this form and $L$ is lower unitriangular. If $A$ is full rank this is unique; otherwise it is unique up to the entries of $L$ that are multiplied just with zeros. The rank still corresponds to the number of non-zero rows in $U'$, although they may be anywhere in the matrix.

$$\underbrace{\begin{pmatrix} 0 & 2 & 6 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix}}_{A} = \underbrace{\begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ \frac{1}{2} & \frac{1}{2} & 1 \end{pmatrix}}_{L} \cdot \underbrace{\begin{pmatrix} 0 & 2 & 6 \\ 2 & 0 & -6 \\ 0 & 0 & 1 \end{pmatrix}}_{U'}$$

Encoding this for a SAT or SMT solver is now straightforward: we encode the required form of $L$ and the required form of $U'$, both simple constraints on entries being zero, non-zero or one, as well as the required number of non-zero rows of $U'$. The uniqueness of the decomposition ensures that the SAT solver will not spend time exploring the same matrix expressed in different ways.
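As a sanity check of the rank characterization, here is a brute-force sketch over all $2 \times 3$ matrices over $\mathbb{F}_2$: every decomposition $A = LU'$ satisfying the form constraints has exactly $\operatorname{rank}(A)$ non-zero rows in $U'$, and at least one decomposition exists. This enumerates assignments instead of calling a SAT solver, but the constraints it checks are exactly the ones the encoding would state; all names are my own:

```python
from itertools import product

def rank_gf2(rows):
    """Reference rank over GF(2) via Gaussian elimination."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_relaxed_echelon(rows):
    """First non-zero entry of each row is the last non-zero of its column."""
    for i, row in enumerate(rows):
        lead = next((j for j, x in enumerate(row) if x), None)
        if lead is not None and any(rows[k][lead] for k in range(i + 1, len(rows))):
            return False
    return True

def matmul_gf2(X, Y):
    """Matrix product over GF(2): XOR together the selected rows of Y."""
    out = []
    for xrow in X:
        acc = [0] * len(Y[0])
        for k, x in enumerate(xrow):
            if x:
                acc = [a ^ b for a, b in zip(acc, Y[k])]
        out.append(acc)
    return out

def check_all_2x3():
    """Verify the decomposition-based rank count for every 2x3 GF(2) matrix."""
    checked = 0
    for bits in product([0, 1], repeat=6):
        A = [list(bits[:3]), list(bits[3:])]
        found = False
        for l21 in (0, 1):
            L = [[1, 0], [l21, 1]]  # lower unitriangular
            for ubits in product([0, 1], repeat=6):
                U = [list(ubits[:3]), list(ubits[3:])]
                if is_relaxed_echelon(U) and matmul_gf2(L, U) == A:
                    found = True
                    # non-zero rows of U' count the rank of A
                    assert sum(any(r) for r in U) == rank_gf2(A)
        assert found  # a decomposition exists for every A
        checked += 1
    return checked
```

In a real encoding, the non-zero-row count would become a cardinality constraint over indicator variables for the rows of $U'$.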

Update: For full rank matrices you can also just ask for the existence of an inverse matrix, which is even simpler to encode but has different runtime behavior. As usual with SAT solvers, it’s always worth trying different approaches, as it’s hard to estimate which will be faster for a given problem.
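For intuition, the inverse-based variant can be brute-forced over $\mathbb{F}_2$ for $2 \times 2$ matrices: a matrix $M$ has some $N$ with $MN = I$ exactly when it has full rank (here checked via the determinant). The names are mine:

```python
from itertools import product

def matmul2_gf2(X, Y):
    """2x2 matrix product over GF(2)."""
    return [[(X[i][0] & Y[0][j]) ^ (X[i][1] & Y[1][j]) for j in range(2)]
            for i in range(2)]

def has_inverse(M):
    """Search for an N with M N = I over GF(2)."""
    I = [[1, 0], [0, 1]]
    return any(matmul2_gf2(M, [[n[0], n[1]], [n[2], n[3]]]) == I
               for n in product([0, 1], repeat=4))

def full_rank(M):
    """2x2 over GF(2): full rank iff the determinant ad + bc is one."""
    return ((M[0][0] & M[1][1]) ^ (M[0][1] & M[1][0])) == 1

# the two characterizations agree on all 16 matrices
for m in product([0, 1], repeat=4):
    M = [[m[0], m[1]], [m[2], m[3]]]
    assert has_inverse(M) == full_rank(M)
```

The SAT encoding of this variant just asserts $MN = I$ for a second unknown matrix $N$, with no form constraints at all, which is why it is simpler but explores a differently shaped search space.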