Category Archives: my_research

Excellent performance in the MaxSAT Evaluation

The MaxSAT Evaluation is an annual event that assesses the state of the art in MaxSAT solving technology. The evaluation has two tracks: the complete track, where the goal is to find a provably optimal solution as fast as possible, and the incomplete track, where the goal is to find as good a solution as possible within a limited time.

This year we submitted a solver called Loandra to the incomplete track of the competition. Loandra implements core-boosted linear search, an idea described in this paper, which I worked on during my visit to Melbourne last fall. The results of the Evaluation were presented yesterday at the SAT conference. I have only seen a summary, but I am happy to say that Loandra performed very well, achieving the highest score on the unweighted 300s track and the second-highest on the weighted 300s track. Overall a very nice result; I am very pleased.

I’d like to thank the organizers of the MaxSAT Evaluation as well as my co-authors, Emir Demirovic and Peter Stuckey! I am currently on holiday, but once I get back I will put Loandra up on GitHub as well. It will also eventually be made available here.

EDIT: Loandra is now available at: https://github.com/jezberg/loandra and on the software pages.

 

My thesis won an award

My PhD thesis, Solving Optimization Problems via Maximum Satisfiability: Encodings and Re-Encodings (available from e-thesis), was awarded one of the University of Helsinki's four doctoral dissertation awards for 2018. I am grateful for and honoured by this recognition. Thank you to everyone who played a role in my nomination, and congratulations to the three other recipients.

More information can be found in the University’s own news story.

Research visits during Spring 2019

I will be travelling to Toronto in the spring of 2019 for a 3-month research visit to the University of Toronto, hosted by Prof. Fahiem Bacchus. Prof. Bacchus is a very well-known researcher, both in my field and in many others. I am looking forward to working with him and learning from him. I will be leaving Helsinki on 29.1. and coming back on 27.4. My sincerest thanks go both to Fahiem for agreeing to host me and to the Helsinki Institute for Information Technology for financially supporting the trip.

MR: Measuring difficulty

This post is one in the series explaining my research. Even though the post should be self-contained, it does make more sense in the context of the other posts in the series. (Look for the “my_research” category).

In the last post we used the following travel plan problem to discuss the differences between decision and optimization problems.

[Figure: the "map" of pairwise distances between the northern capitals, as introduced in the previous post]

Recall that the scenario we consider is one where you are planning a round trip that visits all of the northern capitals and returns to Helsinki. The decision problem in this setting asks whether there exists any route that is shorter than some given bound B. This problem has a yes or no answer. In contrast, the optimization problem asks for the shortest possible route; it does not have a yes or no answer, instead the answer is a number. In this post we will investigate how difficult these problems are to solve.

A reasonable question to ask at this point is: how do you measure difficulty in computer science? Is there a scale that assigns a difficulty value to a problem? Can we, for example, say that the decision problem has difficulty 5? The short answer is: no. The longer answer is: kind of. There are several well-defined classes and hierarchies of problems that allow grouping "similar" problems together. Furthermore, most researchers in the field believe that a problem higher up in the hierarchy is more difficult to solve than one lower down. However, there are many open questions related to the relationship between problem classes and difficulty; the best known is probably the P vs. NP problem. We will look into the hierarchy of problems in more detail in later posts.

While saying anything concrete about the difficulty of a single problem is not easy, we do have another option: we can compare two problems with each other. Let's get back to our travel planning and ask: is the decision or the optimization problem more difficult? Another way of phrasing the same question is: if we had a solution to one problem, could we use it to get a solution to the other? Let's start by assuming that we have a magic box that can find us the shortest route visiting all cities in the graph. We do not know how it works (nor do we care); we just assume that it does. With this box it is fairly easy to solve any decision problem of the form "is there a route of length at most K?" for some positive number K. We simply use the magic box to compute the length of the shortest route and compare it to K. If it is at most K, we know that a suitable route exists. If it is larger than K, we know that no such route exists (since the box gave us the shortest possible route). So in this sense, the decision problem is no more difficult than the optimization problem.
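To make this concrete, here is a minimal Python sketch of the idea. The function passed in as shortest_route_length stands in for the magic optimization box and is purely hypothetical; given such a box, answering any decision question takes a single call and a comparison.

```python
def decide_route_within(graph, bound, shortest_route_length):
    """Answer the decision problem "is there a route of length at most `bound`?"
    using one call to a (hypothetical) optimization oracle."""
    # The oracle returns the length of the shortest round trip in the graph.
    optimum = shortest_route_length(graph)
    # If even the shortest route fits within the bound, the answer is yes;
    # if it does not, no route can, since every other route is at least as long.
    return optimum <= bound
```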

How about the other way? Assume we have another magic box that, given any number K, tells us whether there exists a route that visits all cities and has length at most K. Can we use this box to solve the optimization problem? While it might at first glance seem like the answer should be no, there actually is a way (assume, for simplicity, that all route lengths are whole kilometres). You first ask whether there is a route of length at most 1km. If the answer is yes, you know that 1km is the length of the shortest possible route. If the answer is no, you then ask if there is a route of length at most 2km. Again, if the answer is yes, you know that 2km has to be the length of the shortest route, since you already know there is no route of length 1km or less. You keep going this way until your box answers yes, at which point you have found the length of the shortest route. This is a very simple scheme; any readers familiar with computer science should have no problem coming up with improvements (and any readers not familiar with computer science who figure out improvements should consider a change of career).
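Here is a sketch of both the simple scheme above and one of the improvements it hints at. The decision oracle route_exists_within is again hypothetical, and whole-kilometre route lengths are assumed; the binary-search variant needs far fewer oracle calls than the one-kilometre-at-a-time loop.

```python
def shortest_length_linear(graph, route_exists_within):
    """Find the length of the shortest route by asking the (hypothetical)
    decision oracle about 1km, 2km, 3km, ... until it answers yes."""
    k = 1
    while not route_exists_within(graph, k):
        k += 1
    return k

def shortest_length_binary(graph, route_exists_within, upper_bound):
    """The same idea with binary search between 1km and a known upper bound,
    for example the length of any route we have already found.
    Uses roughly log2(upper_bound) oracle calls instead of up to upper_bound."""
    lo, hi = 1, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if route_exists_within(graph, mid):
            hi = mid       # a route of length at most mid exists; search lower
        else:
            lo = mid + 1   # no such route; the optimum is larger than mid
    return lo
```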

So there we go. A solution to either problem can be used to solve the other one. So both are equally difficult, right? Not quite. The final point I want to consider in this post is how much extra work we needed in order to get the solution to the other problem. Given a solution method for the optimization problem, we do not have to do anything extra to solve the decision problems: use the method once and you can immediately answer every possible decision question. On the other hand, given a solution method for the decision problem, you need to use it many times in order to solve the optimization problem. Even worse, the number of times you need to use your decision solver depends on the particular graph. For the simple scheme I described, you need as many calls to the decision solver as the length of the shortest route. Even though the number of calls can be decreased significantly, it will always depend on the particular graph; there is no constant number of calls that suffices to solve the optimization problem in every graph. So, as expected, the optimization problem is, in this sense, more difficult.

 

MR: Deciding vs. Optimizing

For an explanation of the my_research category, please see Introducing: Explaining my research.

Having established how all problems can be solved at once, I next want to discuss the limitations of this idea. To recap the last post: assume we are interested in solving a problem A but have no idea how to do so. However, we do know that there exists a fairly similar problem B for which many good solution approaches are known. Then, instead of spending time and effort on developing a solution to problem A, we should probably try to find a reduction from problem A to problem B. A reduction is a function that allows us to: i) view an instance of problem A as an instance of problem B and ii) use the solution to problem B to obtain a solution to problem A. In the next few posts I want to go into some more detail on reductions. Specifically, I wish to discuss the criteria under which we can expect a reduction from A to B to exist. Not every problem can be reduced to every other problem. In my opinion there are two main factors that affect the existence of reductions: the types and the complexities of the problems A and B. In this post I will discuss the two main types of problems encountered in computer science research: decision problems and optimization problems.

Consider a scenario in which you own a plane and are planning a flying trip to visit all the northern capitals (Helsinki, Stockholm, Oslo, Copenhagen, and Reykjavik). To help with the planning, you look up the pairwise distances between the cities and construct the following "map".

[Figure: Graph.png, the "map" of pairwise distances between the five northern capitals]

As we can see, even for a relatively small number of cities like this, the order in which you visit them can affect the length of your trip considerably. For example, travelling Helsinki -> Stockholm -> Copenhagen -> Reykjavik -> Oslo -> Helsinki results in a route that is 7399km long, while going Helsinki -> Copenhagen -> Oslo -> Stockholm -> Reykjavik -> Helsinki requires 8740km, which is over a thousand kilometres more.

Imagine now that your plane only has enough fuel to travel 7300km. At first glance it is not clear whether you can visit the four other cities and get back to Helsinki while travelling only 7300km. Figuring out whether you can is a decision problem, i.e. a problem with a yes or no answer. Notice that in the general case an instance of the decision problem consists of both a graph and a bound: we are asking whether there exists a route in the graph whose length is at most the bound. For this instance, the answer to the decision problem "is there a route that visits all the northern capitals and is at most 7300km long?" is no. This can be verified by enumerating all 4! = 24 different routes.
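The brute-force check is easy to write down. A small caveat: the distance table below is only an illustrative placeholder, as I am not copying the exact pairwise distances from the map, but the enumeration itself is exactly the "try all 4! = 24 routes" argument.

```python
from itertools import permutations

# Illustrative placeholder distances in km; the real values are the ones on the map above.
DIST = {
    ("Helsinki", "Stockholm"): 400, ("Helsinki", "Oslo"): 790,
    ("Helsinki", "Copenhagen"): 890, ("Helsinki", "Reykjavik"): 2420,
    ("Stockholm", "Oslo"): 420, ("Stockholm", "Copenhagen"): 520,
    ("Stockholm", "Reykjavik"): 2560, ("Oslo", "Copenhagen"): 480,
    ("Oslo", "Reykjavik"): 1780, ("Copenhagen", "Reykjavik"): 2120,
}

def dist(a, b):
    # Distances are symmetric, so look the pair up in either order.
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def tour_length(order):
    """Length of the round trip Helsinki -> (the other cities in this order) -> Helsinki."""
    stops = ("Helsinki",) + order + ("Helsinki",)
    return sum(dist(a, b) for a, b in zip(stops, stops[1:]))

OTHERS = ("Stockholm", "Oslo", "Copenhagen", "Reykjavik")

def route_within(bound):
    """Decision problem: is some round trip at most `bound` km long?"""
    return any(tour_length(order) <= bound for order in permutations(OTHERS))

print(route_within(7300))  # tries all 4! = 24 orderings of the other cities
```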

Having learnt that 7300km worth of fuel will not cut it, you go out and get some more. As you are cheap, you would however like to get away with using as little fuel as possible. Hence you are interested in figuring out the shortest possible route that visits all cities and returns to Helsinki. This is an optimization problem. In contrast to the decision problem, an optimization problem does not ask for a yes or no answer; instead the answer is the length of the shortest possible route. The interested reader can use this tool to verify that the 7399km-long route Helsinki -> Stockholm -> Copenhagen -> Reykjavik -> Oslo -> Helsinki is the shortest possible.
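Reusing the helpers from the sketch above (still with placeholder distances rather than the real ones), the optimization problem is just the minimum over the same 24 orderings:

```python
def shortest_tour_length():
    """Optimization problem: the length of the shortest round trip."""
    return min(tour_length(order) for order in permutations(OTHERS))

print(shortest_tour_length())  # with the real distances this would print 7399
```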

To summarize, a decision problem is one with a "yes or no" answer, while an optimization problem asks for "the best" answer. Before ending this post I want to highlight another important difference between deciding and optimizing. Consider a situation in which you have a candidate solution to either the decision problem (is there a route shorter than some bound?) or the optimization problem (how long is the shortest route?). If I am looking for a route that is shorter than 7500km and you give me one, I can easily use the map to verify that the route you've given me is indeed shorter than 7500km. In particular, I do not need to be able to solve the problem myself in order to verify that the solution you give me is correct. However, if I am looking for the shortest possible route and you give me one that is 7421km long, I have no way of verifying that it is indeed the shortest one, except for solving the problem on my own (or taking your word for it). For the interested reader I will mention that the property of solutions to decision problems being easily verifiable holds even if the answer is "no". If I am looking for a route shorter than 6000km, there exist standard forms of "proof" that you could present to me in order to convince me that no such route exists. If you are interested in learning more you can start here.

 

 

MR: Solving all the problems at once

This post is one in a category called my_research. See this post for further explanation.

Problem solving is an important part of much of computer science research. After all, what is an algorithm if not an instruction for a computer on how to solve some specific problem? When people ask about my research I often get the feeling that they expect me to name a concrete problem I am solving, something like "I work on generating high school timetables" or "I work on developing the algorithms that Netflix uses to suggest movies for you". Instead, what I usually end up answering is "I work on solving all problems". While there is some exaggeration for comedic effect in that statement, there is also some truth. In this post I will go into some more detail on what I mean by solving all problems at once. To reiterate the aim of the my_research category, the main goal here is to illustrate my opinions, not to provide full technical details. This Wikipedia page is a good starting point for further reading.

When asked what problem I work on, I am faced with a dilemma. The "right" answer to that question is neither very informative nor interesting to someone without a background in the field. During my PhD I worked on the Maximum Satisfiability (MaxSAT) problem. Very informally, a MaxSAT instance gives you a set of mathematical constraints over Boolean variables and asks you to find an assignment to those variables that satisfies the constraints "as well as possible".

Example: A concrete example of a MaxSAT instance is the set {(x ∨ y), (¬x), (¬y)}, where we have three disjunctions (logical ors) over two variables, x and y, and their negations. The first constraint is satisfied if either x or y is set to true. The second constraint is satisfied if x is set to false, and the third one if y is set to false. Already this simple formula gives some intuition about the difficulty of the problem. Notice that all three constraints cannot be satisfied at the same time; the best we can do is two out of three.
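For an instance this small we can simply try all four assignments of x and y and count how many of the three constraints each one satisfies; a short Python check confirms that the best we can do is two out of three.

```python
from itertools import product

# The three constraints of the example instance: (x or y), (not x), (not y).
clauses = [
    lambda x, y: x or y,
    lambda x, y: not x,
    lambda x, y: not y,
]

best = 0
for x, y in product([False, True], repeat=2):
    satisfied = sum(clause(x, y) for clause in clauses)  # number of satisfied constraints
    best = max(best, satisfied)

print(best)  # prints 2: no assignment of x and y satisfies all three constraints
```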

A very valid question to ask at this point is: what is the point of solving MaxSAT? While I will defend the value of doing research and studying problems without worrying about practical applications, there are also pragmatic reasons for being able to argue that your research has practical applications (*cough* funding *cough*). There are two main reasons for my interest in MaxSAT: 1) the abstract nature of the problem makes it (slightly) simpler to analyze, and 2) the existence of reductions allows MaxSAT solving techniques to be applied to many other problems that have more practical applications. In these posts I want to focus on reason 2) and discuss how MaxSAT solving can be applied to solving other problems. In order to do that I will start with a brief introduction to reductions.

To understand reductions, imagine that you are trying to solve a super complicated problem. In addition to being difficult, your problem might also be really specific, maybe one that no one else has worked on before. While you could try developing your own solution from scratch, most interesting problems tend to be difficult enough to make developing solutions to them very time consuming and error prone. Often a better idea is to change your viewpoint. If you can write an algorithm that converts your problem into an instance of MaxSAT and another one that converts MaxSAT solutions back into solutions of your problem, then you can apply MaxSAT solving techniques that have been developed over decades. The program that converts your problem to MaxSAT and back is called a MaxSAT encoding of your problem. Due to some nice technical details (that we will get into later), developing a MaxSAT encoding is often significantly easier than solving the problem directly. We note that encodings are more often called reductions.
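Putting the pieces of the previous paragraph into one picture, the overall workflow looks like the skeleton below. All names here are placeholders rather than any particular library's API: encode and decode are the two halves of the reduction, and maxsat_solver stands for an off-the-shelf MaxSAT solver.

```python
def solve_via_maxsat(problem_instance, encode, maxsat_solver, decode):
    """Skeleton of solving a problem by reduction to MaxSAT.
    `encode`, `maxsat_solver` and `decode` are hypothetical placeholders."""
    maxsat_instance = encode(problem_instance)        # view your problem as a MaxSAT instance
    maxsat_solution = maxsat_solver(maxsat_instance)  # reuse decades of solver development
    return decode(maxsat_solution)                    # map the answer back to your problem
```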

To summarise, I develop MaxSAT solving techniques so that someone else can make use of MaxSAT solvers to solve their more “interesting” problem. In that sense my aim is to solve all of the problems.

Introducing: Explaining my research

When thinking about what to do with this blog, a number of different ideas came to mind. The one I decided to actually try to implement is a series of posts attempting to explain my research to a wider audience.


I often find that explaining my research is fairly difficult, even when talking with other computer science researchers. I do not think this is because the topics I work on are especially difficult; in fact, I am sure there are a number of people in the world who could have developed the same results given the same opportunities I have been given. Instead, I think the main difficulty in explaining my contributions to the field is that they build on a large amount of previous research and results. To be able to detail the contributions of a specific publication to someone without a background in the field, I first have to explain most of the work on which my publication builds, and to do that I need to explain the work on which that previous work builds, and so on.

The aim of these posts is to give an overview of the topics necessary for understanding my research and to highlight my personal views on them. The long-term goal is to be able to explain each of my publications. However, in order to get there, I first have to discuss a number of topics, including optimization problems, complexity theory, reductions and NP-hardness, and constraint programming with an emphasis on propositional logic, SAT solving, and MaxSAT solving. As the intended audience of these posts is people without any background in computer science (or even in any natural science), I will simplify many of the technical details. In other words, these posts will not cover the topics thoroughly and should be seen as an introduction that is sufficient for understanding my research. I will attempt to provide links to further resources on each topic.

The first post in this series with actual content will be on optimization problems. Until then!