
Refactoring Varisat: 2. Clause Storage and Unit Propagation

Posted on March 2nd 2019, tagged algorithms, refactoring-varisat, rust, sat

This is the second post in my series about refactoring varisat. Since the last post I started implementing some of the core data structures and algorithms of a CDCL based SAT solver: clause storage and unit propagation. In this post I will explain how these parts work and the rationale behind some of the decisions I made.

Developer Documentation

For the new varisat code base I’m not just documenting the public API but am also adding a lot of developer documentation. The developer documentation for the latest master is automatically built. While I try to document everything as I write the code, I also plan to continuously revisit and improve the existing documentation. I’m also open to feedback here. If you think some code needs additional documentation or find existing documentation not clear enough, feel free to file an issue about this.

The Context Struct

Internally a SAT solver uses a bunch of different data structures that store different information derived from the formula. To solve the satisfiability problem different routines use and update different subsets of these structures, deriving further information. All these data structures are bundled in an outer containing data structure that I call the context.

Most routines work with several of these data structures at the same time. These routines also often call sub-routines that work on a different subset of these data structures. As a result, most SAT solvers I’m aware of implement most routines as methods of this context structure (although it often goes by a different name).

This isn’t great for documenting the dependencies between different routines and data structures. If there was a clear hierarchy of data structures and how they’re used, nesting them accordingly would solve this problem. This doesn’t work too well for SAT solvers though. No matter how you organize the data structures, there are always some routines that use data structures far apart. Even if you find a way to mostly avoid this, you never know the requirements of new techniques you might add later.

Nevertheless this is the approach taken by most SAT solvers and the first version of varisat. There is a second problem, though, and it is specific to rust. The borrow checker requires one to be much more careful when passing references to functions. For a solver written in C++ it is always possible to pass a reference to the outer context structure to give a function access to any subset of the contained data structures. In rust this only works if no data is borrowed elsewhere at the same time.

For the first version of varisat I worked around this by passing references to different contained data structures individually. Code written using this workaround becomes hard to read and change. Passing all the extra parameters clutters the code and accessing a new data structure in one place requires coordinated changes in many only slightly related places. The only upside is that the code and data dependencies are clearly documented.

Ideally I’d like to document which structures are accessed by each function in its declaration, without repeating that information in the function’s body. This would be enough to statically ensure that rust’s aliasing invariants are upheld. In a way this would allow partial borrowing, which already works within a function, to work everywhere. While I was thinking about this, I came across Niko Matsakis’ blog post about this limitation of rust. While it gave me some hope that rust will allow some form of partial borrowing in the future, I needed this sooner.

This is why I came up with the partial_ref library. It uses macros and trait based meta-programming to emulate the feature I need. The implementation is inspired by the techniques I’ve learned from the frunk crate. With this I can annotate the fields of the context struct and then declare partial references like this: partial!(Context, mut ClauseAllocP, mut ClauseDbP, BinaryClausesP). The types ending in P are marker types declaring which contained data structures are borrowed. Re-borrowing a subset of a partial reference can be done with the borrow or split_borrow methods, which infer the required parts while statically checking the borrowing rules. The individual parts can be accessed using the part or split_part methods. The split_ variants allow simultaneous borrows of non-overlapping or non-mutable partial references. This still requires calls to part or borrow in many places, but avoids repeating the list of used data structures everywhere.
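To make this more concrete, here is a minimal sketch modeled on partial_ref’s documentation. The stub types and the collect_garbage/compact functions are made up for illustration, and the exact macro syntax may differ between versions of the crate:

```rust
use partial_ref::{part, partial, PartialRef, PartialRefTarget};

// Stub types standing in for the real solver data structures.
#[derive(Default)]
struct ClauseAlloc;
#[derive(Default)]
struct ClauseDb;

// Declare a marker type for each part of the context.
part!(pub ClauseAllocP: ClauseAlloc);
part!(pub ClauseDbP: ClauseDb);

#[derive(PartialRefTarget, Default)]
pub struct Context {
    #[part(ClauseAllocP)]
    clause_alloc: ClauseAlloc,
    #[part(ClauseDbP)]
    clause_db: ClauseDb,
}

// The declaration documents exactly which parts are used and how.
fn collect_garbage(mut ctx: partial!(Context, mut ClauseAllocP, mut ClauseDbP)) {
    let _db = ctx.part_mut(ClauseDbP);
    // Sub-routines needing a subset of the parts re-borrow the reference;
    // the required parts are inferred and checked at compile time.
    compact(ctx.borrow());
}

fn compact(mut ctx: partial!(Context, mut ClauseAllocP)) {
    let _alloc = ctx.part_mut(ClauseAllocP);
}
```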

All this happens at compile time. The generated code uses a single pointer to represent a partial reference. This can lead to better code generation than any of the more verbose workarounds, as they can’t guarantee that the individual references point into the same containing structure.

All in all I’m very happy with how using partial_ref turned out so far.

The Clause Allocator

Varisat is a conflict driven clause learning (CDCL) based SAT solver. A CDCL based solver works on a Boolean formula in conjunctive normal form (CNF), adding and removing clauses, while keeping the formula equisatisfiable to the input formula. As such it needs to store the clauses of the current formula.

A dedicated allocator is used to speed up allocation while at the same time reducing memory usage and fragmentation. Instead of using an individual Vec<Lit> for each clause there is a single Vec<LitIdx> used as a buffer for storing all clauses. LitIdx is the underlying integer type used to represent literals. For each clause in the buffer, the literals are preceded by a clause header. The header contains the length of the clause as well as other metadata associated with each clause.

By making use of rust’s #[repr(transparent)] it is possible to safely store both the header and the literals in the same vector. This also allows us to define a dynamically sized type for clauses and safely cast slices of our buffer into references of this clause type.

Storing such references in other data structures isn’t feasible though. The context data structure would become self-referential, with all the problems this brings in rust. Another problem is that we couldn’t grow the buffer while references pointing into it exist. Instead we define a new type for long lived clause references which stores an offset into the buffer. This also allows us to use an integer type smaller than a pointer, saving memory in all the places where many clauses are referenced.

Whenever a new clause is allocated it is simply appended to the buffer. Clauses are never deleted from the buffer; they are just marked as deleted. Reclaiming the space used by deleted clauses is handled outside of the clause allocator, by creating a new allocator with a fresh buffer and copying over just the non-deleted clauses.
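The following is a minimal sketch of this allocation scheme, not varisat’s actual code: the header is collapsed into a single length word here, while the real header also packs in the per-clause metadata using #[repr(transparent)] types:

```rust
/// Underlying integer type used to represent literals.
type LitIdx = u32;

/// A long lived clause reference: an offset into the allocator's buffer.
/// As a u32 it is smaller than a pointer on 64-bit targets.
#[derive(Copy, Clone)]
pub struct ClauseRef(u32);

#[derive(Default)]
pub struct ClauseAlloc {
    buffer: Vec<LitIdx>,
}

impl ClauseAlloc {
    /// Append a new clause to the buffer: the header first, then the literals.
    pub fn add_clause(&mut self, lits: &[LitIdx]) -> ClauseRef {
        let offset = self.buffer.len() as u32;
        self.buffer.push(lits.len() as LitIdx); // simplified one-word header
        self.buffer.extend_from_slice(lits);
        ClauseRef(offset)
    }

    /// Resolve a clause reference to the clause's literals.
    pub fn clause_lits(&self, cref: ClauseRef) -> &[LitIdx] {
        let offset = cref.0 as usize;
        let len = self.buffer[offset] as usize;
        &self.buffer[offset + 1..][..len]
    }
}
```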

The Clause Database

The clause allocator provides storage for clauses, but it doesn’t keep track of the allocated clauses. This is done by the clause database. It stores references to all clauses of the current formula, which are used for garbage collection. It also stores a partition of the clauses into four different tiers. Three tiers are for redundant clauses. Redundant clauses are those where the formula with them is equivalent to the formula without them. This is the case for newly learned clauses. The remaining tier consists of the irredundant clauses, whose removal may change the solution set. Initially all clauses of the input formula are considered irredundant, even though some might actually be redundant.

The partition into redundant and irredundant clauses allows us to remove some redundant clauses from time to time, which is needed to keep solving performance and memory use in check. The more clauses are stored, the slower the solving becomes. On the other hand, learning new clauses is how a CDCL solver makes progress. Splitting the redundant clauses into three tiers is part of the heuristic used to decide which clauses to remove. I’ll write more about that in a later post.
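A hypothetical sketch of this bookkeeping could look like the following; the tier names and fields are made up for illustration, and ClauseRef is the offset-based reference from the allocator sketch above:

```rust
/// A clause reference as used by the clause allocator (see above).
#[derive(Copy, Clone)]
pub struct ClauseRef(u32);

/// The four tiers of the clause partition. The names are illustrative.
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum Tier {
    Irred, // irredundant clauses: removal may change the solution set
    Core,  // three redundant tiers, used by the clause removal heuristic
    Mid,
    Local,
}

#[derive(Default)]
pub struct ClauseDb {
    /// References to all clauses of the current formula, used for garbage
    /// collection.
    clauses: Vec<ClauseRef>,
    /// The same clauses partitioned into the four tiers.
    by_tier: [Vec<ClauseRef>; 4],
}
```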

Binary Clauses

Clauses with only two literals, called binary clauses, are handled separately from other clauses. A binary clause $x \vee y$ is equivalent to the implications $\neg x \to y$ and $\neg y \to x$. Knowing that one of the literals is false, we can derive that the other must be true, without looking at any other literals. We also never want to forget binary clauses, as they are useful for making progress and take up little storage. Instead of storing them in the clause allocator, each literal has a list of literals implied through binary clauses. As we won’t forget binary clauses, there is no need to remember which are redundant and which aren’t. This results in just a Vec<Lit> per literal, where each binary clause results in one entry in each of two such vectors. There is no need to store any extra metadata, giving a very compact representation. While most solvers special case binary clauses, not all solvers use this approach.
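As a minimal sketch, assuming literals are encoded as integer codes where flipping the lowest bit negates the literal, this storage could look like:

```rust
/// Sketch of the binary clause storage; the names are illustrative. Literals
/// are encoded as usize codes where `lit ^ 1` is the negation of `lit`.
#[derive(Default)]
pub struct BinaryClauses {
    /// For each literal, the literals implied when it becomes true.
    implied: Vec<Vec<usize>>,
}

impl BinaryClauses {
    /// Adding the clause (x OR y) records both directions of the implication:
    /// not-x implies y, and not-y implies x.
    pub fn add_binary_clause(&mut self, x: usize, y: usize) {
        self.implied[x ^ 1].push(y);
        self.implied[y ^ 1].push(x);
    }
}
```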

Watchlists and Unit Propagation

We already considered how binary clauses allow us to derive the value of variables given the assignment of other variables. This can be generalized to long clauses. A literal that is false can be removed from a disjunction without changing the result. Thus we can remove false literals to simplify a clause. When all but one literal are removed, the clause becomes a unit clause. To satisfy a unit clause we need to assign the remaining literal. It can also happen that all literals are false and we end up with an empty clause. This means that our assignment is not compatible with the clause and is called a conflict.

As we’re going to repeatedly assign literals and then unassign them when we backtrack, we’re not going to actually remove any literals. Instead we look for clauses where all but one literal are assigned false, so that the remaining literal has to be true to satisfy the clause.

Even though we’re not removing any literals, we still say that a clause becomes unit when all but one of its literals are assigned false. The process of iteratively assigning true to the remaining literal of a clause that became unit is called unit propagation.

SAT solvers spend most of their time doing unit propagation. Therefore it is important to make it as efficient as possible. A naive way to perform unit propagation would be to iterate through all clauses, and for each clause count the non-false literals. This requires looking at every clause whenever a single literal is assigned.

As a first improvement we could keep track of which clauses contain which literals. Then, whenever a literal is assigned false, only clauses containing that literal can become unit and we would only have to look at those. The approach taken by CDCL based solvers improves this further.

The algorithm used is based on a simple observation: a clause cannot become unit as long as it has two non-false literals. This might sound like an obvious statement, but it nevertheless allows us to speed up unit propagation quite a bit. Instead of tracking all literals of all clauses, we track two non-false literals per unsatisfied clause. These literals are called watched literals. For each literal we keep a list of clauses where it is among the two watched literals. These lists are called watchlists. As long as the watched literals aren’t assigned false, we don’t care what happens to the other literals of a clause, as there is no way for that clause to become unit. Only when one of the watched literals is assigned false do we process the clause. Ignoring the conflict case for now, three things can happen: 1) we find that the clause has a true literal and thus is satisfied, 2) the clause is unit and we get a new assignment, or 3) we find a replacement non-false literal for the watched literal that was assigned.

To illustrate this consider this example where $a$ and $b$ are assigned true, $x$ and $y$ are unassigned and $g$ was just assigned false. The watched literals are underlined.

  1. $\neg b \vee \underline{g} \vee \underline{x} \vee a$, finding the true literal $a$, resulting in $\neg b \vee g \vee \underline{x} \vee \underline{a}$.

    Here the clause is satisfied. Making a true literal a watched literal handles backtracking. This ensures that we’re again watching two non-false literals as soon as the true literal becomes unassigned.

  2. $\neg a \vee \underline{g} \vee \neg b \vee \underline{\neg x}$, where the clause becomes unit, implying $\neg x$.

    Here the new assignment $\neg x$ is found. The watched literals do not change. This is compatible with backtracking. The clause can only become non-unit when the assignment to $\neg g$, and with it the implied assignment to $\neg x$, is removed. At that point both watched literals are non-false.

  3. $\neg y \vee \underline{g} \vee \neg b \vee \underline{x}$, finding the non-false literal $\neg y$, resulting in $\underline{\neg y} \vee g \vee \neg b \vee \underline{x}$.

    Here we just maintain two watched non-false literals.

There are a few additional optimizations implemented: The watched literals are always moved to the beginning of the clause, so we know which literals are watched without scanning the watchlists. Also, the watchlists contain a “blocking literal” for each watched clause, which is just another literal of the clause. The blocking literal allows us to detect some satisfied clauses without accessing the memory used for clause storage.
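Putting the pieces together, here is a compact sketch of processing a single watchlist, covering the three cases above and the blocking literal fast path. This is illustrative rather than varisat’s actual code; the representation (usize literal codes where lit ^ 1 is the negation, clauses stored as Vec<usize> with the watched literals at positions 0 and 1) is made up to keep the example self-contained:

```rust
#[derive(Copy, Clone)]
struct Watch {
    clause: usize,   // index of the watched clause
    blocking: usize, // another literal of the clause, for the fast path
}

/// Process all clauses watching `false_lit`, which was just assigned false.
/// `value[lit]` is the literal's current assignment, `None` means unassigned.
/// Returns the newly implied literals, or Err(clause) on a conflict.
fn propagate_lit(
    false_lit: usize,
    watches: &mut Vec<Vec<Watch>>,
    clauses: &mut [Vec<usize>],
    value: &mut Vec<Option<bool>>,
) -> Result<Vec<usize>, usize> {
    let mut implied = vec![];
    let mut kept = vec![];
    let watch_list = std::mem::take(&mut watches[false_lit]);

    'watches: for (i, &watch) in watch_list.iter().enumerate() {
        // Fast path: a true blocking literal means the clause is satisfied.
        if value[watch.blocking] == Some(true) {
            kept.push(watch);
            continue;
        }
        let lits = &mut clauses[watch.clause];
        if lits[0] == false_lit {
            lits.swap(0, 1); // make sure the false watched literal is lits[1]
        }
        let other = lits[0];
        // Case 1: the other watched literal is true, the clause is satisfied.
        if value[other] == Some(true) {
            kept.push(Watch { clause: watch.clause, blocking: other });
            continue;
        }
        // Case 3: search for a non-false literal to take over the watch.
        for j in 2..lits.len() {
            if value[lits[j]] != Some(false) {
                lits.swap(1, j);
                watches[lits[1]].push(watch);
                continue 'watches;
            }
        }
        kept.push(watch);
        if value[other] == Some(false) {
            // Conflict: every literal is false. Keep the unprocessed watches.
            kept.extend_from_slice(&watch_list[i + 1..]);
            watches[false_lit] = kept;
            return Err(watch.clause);
        }
        // Case 2: the clause became unit, assign the remaining literal true.
        value[other] = Some(true);
        value[other ^ 1] = Some(false);
        implied.push(other);
    }

    watches[false_lit] = kept;
    Ok(implied)
}
```

A full propagation loop would repeat this for every newly assigned literal, together with the binary implication lists, until no new assignments are found or a conflict is reached.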

Currently the implementation of unit propagation uses only safe abstractions that perform bounds checking. For varisat 0.1 I used unchecked accesses that relied on the correctness of other parts of the solver. This resulted in pervasive unsafe annotations. When this rewrite is fully functional, I plan to carefully benchmark the difference and only remove bounds checks where necessary. Right now I expect that I will end up with unsafe code that manually performs a minimal number of bounds checks. This should get me the performance I want, without relying on global invariants for memory safety.

What’s Next?

The next steps ahead are implementing conflict analysis and backtracking. I already implemented some supporting code for this that I haven’t written about yet. Together with a dummy branching heuristic this is enough to solve some small formulas, so I also plan to add a public API and a command line interface.

If you don’t want to miss future posts, you can subscribe to the RSS feed or follow me on Twitter.