This post is one in a category called my_research. See this post for further explanations.

Problem solving is an important part of much of computer science research. After all, what is an algorithm if not an instruction for a computer on how to solve some specific problem? When asked about my research, I often get the feeling that people expect me to name a concrete problem that I am solving, something like “I work on generating high school timetables” or “I work on developing the algorithms that Netflix uses to suggest movies for you”. Instead, what I usually end up answering is “I work on solving all problems”. While there is some exaggeration for comedic effect in that statement, there is also some truth. In this post I will go into more detail on what I mean by solving all problems at once. As with all posts in the my_research category, the main goal here is to illustrate my opinions, not to provide full technical details. This Wikipedia page is a good starting point for further reading.

When asked what problem I work on, I am faced with a dilemma. The “right” answer to that question is neither very informative nor interesting to someone without a background in the field. During my PhD I worked on the Maximum Satisfiability (MaxSAT) problem. Very informally, the MaxSAT problem gives you a set of mathematical constraints over Boolean variables and asks you to find an assignment for those variables that satisfies the constraints “as well as possible”.

*Example: A concrete example of a MaxSAT instance is a set of three disjunctions (logical ors) over two variables, x and y, and their negations: (x or y), (not x), and (not y). The first constraint is satisfied if either x or y is set to true. The second constraint is satisfied if x is set to false, and the third one if y is set to false. Already this simple formula gives some intuition on the difficulty of the problem. Notice that all three constraints cannot be satisfied at the same time; the best we can do is two out of three.*
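To make the example above concrete, here is a minimal brute-force sketch in Python that enumerates all four assignments and counts satisfied clauses. This is emphatically not how real MaxSAT solvers work; the variable names and clause representation are illustrative choices of mine.

```python
from itertools import product

# The example instance: three clauses over variables x and y.
# Each clause is a list of (variable, polarity) literals;
# polarity True means the variable itself, False its negation.
clauses = [
    [("x", True), ("y", True)],  # x or y
    [("x", False)],              # not x
    [("y", False)],              # not y
]

def satisfied(clause, assignment):
    """A clause is satisfied if any of its literals matches the assignment."""
    return any(assignment[var] == polarity for var, polarity in clause)

# Try every assignment of x and y and keep the best one found.
best_count, best_assignment = 0, None
for x, y in product([False, True], repeat=2):
    assignment = {"x": x, "y": y}
    count = sum(satisfied(c, assignment) for c in clauses)
    if count > best_count:
        best_count, best_assignment = count, assignment

print(best_count)  # 2 -- no assignment satisfies all three clauses
```

Of course, enumerating all assignments takes time exponential in the number of variables, which is exactly why dedicated MaxSAT solving techniques exist.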

A very valid question to ask at this point is: what is the point of solving MaxSAT? While I will defend the value of doing research and studying problems without worrying about the practical applications, there are also some pragmatic reasons for being able to argue that your research has practical applications (*cough* funding *cough*). There are two main reasons for my interest in MaxSAT: 1) the abstract nature of the problem makes it (slightly) simpler to analyze, while 2) the existence of *reductions* allows MaxSAT solving techniques to be applied to many other problems that have more practical applications. In these posts I want to focus on reason 2) and discuss how MaxSAT solving can be applied to solving other problems. In order to do that, I will start with a brief introduction to reductions.

To understand reductions, imagine that you are trying to solve a super complicated problem. In addition to being difficult, your problem might also be really specific, maybe one that no one else has worked on before. While you could try developing your own solution from scratch, most interesting problems tend to be difficult enough to make developing solutions to them very time consuming and prone to errors. Often a better idea is to change your viewpoint. If you can write an algorithm that converts your problem to an instance of MaxSAT, and another one that converts MaxSAT solutions back to solutions of your problem, then you can apply MaxSAT solving techniques that have been developed over decades. The program that converts your problem to MaxSAT and back is called a MaxSAT encoding of your problem. Due to some nice technical details (that we will get into later), developing MaxSAT encodings is often significantly easier than solving problems. We note that encodings are more often called reductions.
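As a toy illustration of a reduction, consider minimum vertex cover: pick as few vertices as possible so that every edge of a graph touches a picked vertex. A standard encoding makes each edge a *hard* clause (it must be covered) and each vertex a *soft* clause (we would prefer not to pick it). The sketch below is my own illustrative example with a made-up graph, and it uses brute force in place of an actual MaxSAT solver, so that it stays self-contained.

```python
from itertools import product

# A small made-up graph: a triangle a-b-c with a pendant edge c-d.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
vertices = sorted({v for edge in edges for v in edge})

# The encoding: for each edge (u, v), a hard clause "u or v" (the edge
# must be covered); for each vertex v, a soft clause "not v" (each picked
# vertex costs one falsified soft clause).
hard = [[(u, True), (v, True)] for u, v in edges]
soft = [[(v, False)] for v in vertices]

def satisfied(clause, assignment):
    return any(assignment[var] == polarity for var, polarity in clause)

# Stand-in for a MaxSAT solver: enumerate assignments, keep the feasible
# one that falsifies the fewest soft clauses.
best_cost, best_cover = None, None
for bits in product([False, True], repeat=len(vertices)):
    assignment = dict(zip(vertices, bits))
    if not all(satisfied(c, assignment) for c in hard):
        continue  # hard clauses must hold
    cost = sum(not satisfied(c, assignment) for c in soft)
    if best_cost is None or cost < best_cost:
        best_cost = cost
        # Converting the MaxSAT solution back: picked vertices = true variables.
        best_cover = {v for v in vertices if assignment[v]}

print(best_cost, best_cover)  # the minimum cover uses 2 vertices
```

The point is that the problem-specific work lives entirely in building `hard` and `soft` and in reading the cover back out of the assignment; everything between those two steps could be handed to an off-the-shelf MaxSAT solver.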

To summarise, I develop MaxSAT solving techniques so that someone else can make use of MaxSAT solvers to solve their more “interesting” problem. In that sense my aim is to solve all of the problems.