Category Archives: my_research

Good performance at the MSE2021

The results of the 2021 MaxSAT Evaluation were just announced at the 2021 SAT conference (by me :)). My solver Loandra performed very well, taking first place in the weighted 300s track and fourth in all other tracks. The source code of that version of Loandra is already available at the evaluation website and in its repository.


I would like to thank my co-organizers of the Evaluation and the organizers of the SAT conference. I am looking forward to taking a closer look at the results.

The spring of presentations continues

A week ago I gave the second talk at the Simons Institute, this time at the Theoretical Foundations of SAT/SMT Solving seminar. The event webpage can be found here. I had the opportunity to talk about my work on preprocessing for Maximum Satisfiability, covering work done both during and after my PhD.

I would again like to thank Marijn Heule for considering me as a speaker for the event. The next time you can hear me speak is at Jakob Nordström's video seminar on June 4th. I will talk about Abstract Cores in the IHS algorithm for MaxSAT, covering my recent paper from SAT 2020. Looking forward to seeing you there!

Invited Talks at the Simons Institute

This spring I have been very fortunate to have been invited to give two talks at different seminars run by the Simons Institute in California.

The first talk, on solving MaxSAT, was given on the 13th of April together with Matti Järvisalo. The talk was part of the Beyond Satisfiability seminar; a recording of it can be found here.

The second talk will be on MaxSAT preprocessing on May 5th as a part of the Theoretical Foundations of SAT/SMT Solving workshop. More information can be found here. As the talk is online, I invite anyone who is interested to attend. I am sure the talks will be interesting; personally, I am looking forward to hearing about SAT and QBF preprocessing from Benjamin and Martina.

I would like to thank the organisers of the various seminars of the Simons Institute for considering me as a speaker in their events.

My Thesis won an Award

I am happy to let you know that my thesis, Solving Optimization Problems via Maximum Satisfiability: Encodings and Re-Encodings, was awarded the international ACP Doctoral Thesis Award by the Association for Constraint Programming. I will be receiving the award and giving a presentation on my thesis on September 11th at the CP 2020 conference. The conference is virtual and free to attend; I hope to see you all there.

I am honoured to receive the prize and want to thank the ACP for the recognition.
My presentation can be found on Google Drive.

A news article with more info can be found at: https://www.helsinki.fi/en/news/data-science-news/international-dissertation-award-to-berg

Best Paper award at SAT 2020


I am happy to let you know that my paper titled Abstract Cores in Implicit Hitting Set MaxSat Solving has been awarded a best paper award at the 2020 SAT conference. The paper reports on work that was started during my visit to Toronto in 2019. We propose a technique that seeks to address an inherent drawback of the so-called implicit hitting set approach to MaxSAT solving, one of the most successful approaches to solving MaxSAT instances corresponding to real-world applications. Abstract cores seek to reduce the number of cores that the algorithm needs to extract before terminating, without increasing the complexity of the core extraction steps too much. The instantiation of this idea in the MaxHS solver was one of the best performing approaches in the 2020 MaxSAT Evaluation.

I would like to thank the conference organizers for the recognition and for putting together an event that, considering the current circumstances, was probably the best possible way of organizing SAT 2020. I would also like to thank my co-authors, especially Prof. Fahiem Bacchus for his efforts on the implementation and on obtaining the experimental results.

New papers accepted

The new decade has started well for me with two accepted publications already.
The first one, Preprocessing in Incomplete MaxSAT Solving, is joint work with Marcus Leivo and Matti Järvisalo, and is also Marcus's first publication! The paper investigates the effect that preprocessing has on the non-optimal solutions of MaxSAT instances. It ties my recent work on incomplete MaxSAT solving to my earlier work on MaxSAT preprocessing, and I am very excited to push this line of research further. It will be published in the proceedings of the 2020 ECAI conference. I will most likely be travelling there myself to present this work; we are also looking into proposing a tutorial for the conference.

The second paper, Core-Guided and Core-Boosted Search for CP, extends and generalises my work on incomplete MaxSAT solving to the more general finite-domain constraint programming paradigm. The paper is joint work with Graeme Gange, Emir Demirovic and Peter Stuckey from Australia. I am surprised and pleased to see that our previous work on MaxSAT carries over so well to CP solving. The paper will be published in the proceedings of the 2020 CPAIOR conference. I will most likely not travel there myself, but I am sure that my co-authors will do the paper justice.

I’d like to thank all my co-authors for their work; it’s been a pleasure collaborating, and hopefully we can continue doing so. The papers will be added to this webpage once the publishing process is done. If you want to read them earlier, feel free to send me an e-mail.

Excellent performance in the MaxSAT Evaluation

The MaxSAT Evaluation is an annual event meant to evaluate the state of the art in MaxSAT solving technology. The evaluation has two tracks: the complete track, where the goal is to find a provably optimal solution as fast as possible, and the incomplete track, where the goal is to find as good a solution as possible within a limited time.

This year we submitted a solver called Loandra to the incomplete track of the competition. Loandra implements core-boosted linear search, an idea described in this paper that I worked on during my visit to Melbourne last fall. The results of the evaluation were presented yesterday at the SAT conference. I’ve only seen a summary, but I am happy to say that Loandra performed very well, achieving the highest score in the unweighted 300s track and the second highest in the weighted 300s track. Overall a very nice result; I am very pleased.

I’d like to thank the organizers of the MaxSAT evaluation as well as my co-authors, Emir Demirovic and Peter Stuckey! I am currently on holiday, but once I get back I will put Loandra up on GitHub as well. It will also eventually be made available here.

EDIT: Loandra is now available at: https://github.com/jezberg/loandra and on the software pages.


My thesis won an award

My PhD thesis, Solving Optimization Problems via Maximum Satisfiability: Encodings and Re-Encodings (available from e-thesis), was awarded one of the University of Helsinki's four doctoral dissertation awards for 2018. I am grateful and honoured by this recognition. Thank you to everyone who played a role in my nomination, and congratulations to my three fellow recipients.

More information can be found in the University’s own news story.

Research visits during Spring 2019

I will be travelling to Toronto in the spring of 2019 for a three-month research visit to the University of Toronto, hosted by Prof. Fahiem Bacchus. Prof. Bacchus is a very well-known researcher, both in my field and in many others. I am looking forward to working with him and learning from him. I will be leaving Helsinki on January 29th and coming back on April 27th. My sincerest thanks go to both Fahiem for agreeing to host me and the Helsinki Institute for Information Technology for financially supporting the trip.

MR: Measuring difficulty

This post is one in the series explaining my research. Even though the post should be self-contained, it does make more sense in the context of the other posts in the series. (Look for the “my_research” category).

In the last post we used the following travel plan problem to discuss the differences between decision and optimization problems.

[Figure: a graph of the northern capitals and the routes between them]

Recall that the scenario we consider is one where you are planning a round trip to visit all of the northern capitals and come back to Helsinki. The decision problem in this setting asks whether or not there exists any route that is shorter than some given bound K. This problem has a yes or no answer. In contrast, the optimization problem asks for the shortest possible route and does not have a yes or no answer; instead, the answer is a number. In this post we will investigate how difficult these problems are to solve.
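
To make the two problems concrete, here is a minimal Python sketch. The distances are made up purely for illustration (they are not the numbers from the figure), and both functions simply try every possible visiting order, which is fine for a handful of cities.

```python
from itertools import permutations

# Hypothetical, made-up distances in km between some of the capitals.
DIST = {
    ("Helsinki", "Stockholm"): 400, ("Helsinki", "Oslo"): 790,
    ("Helsinki", "Copenhagen"): 890, ("Stockholm", "Oslo"): 420,
    ("Stockholm", "Copenhagen"): 520, ("Oslo", "Copenhagen"): 480,
}

def dist(a, b):
    # Distances are symmetric; look the pair up in either order.
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def shortest_round_trip(cities, start="Helsinki"):
    """Optimization problem: the length of the shortest route that starts at
    `start`, visits every other city exactly once, and returns to `start`."""
    others = [c for c in cities if c != start]
    best = float("inf")
    for order in permutations(others):  # brute force over all visiting orders
        route = (start, *order, start)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        best = min(best, length)
    return best

def exists_route_at_most(cities, K, start="Helsinki"):
    """Decision problem: yes/no -- is there such a round trip of length at most K?"""
    others = [c for c in cities if c != start]
    for order in permutations(others):
        route = (start, *order, start)
        if sum(dist(a, b) for a, b in zip(route, route[1:])) <= K:
            return True
    return False

cities = ["Helsinki", "Stockholm", "Oslo", "Copenhagen"]
print(shortest_round_trip(cities))         # 2190 with these made-up distances
print(exists_route_at_most(cities, 2000))  # False: no route is that short
```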

A reasonable question to ask at this point is: how do you measure difficulty in computer science? Is there a scale that assigns a difficulty value to a problem? Can we, for example, say that the decision problem has difficulty 5? The short answer is: no. The longer answer is: kind of. There are several well-defined classes and hierarchies of problems that allow grouping “similar” problems together. Furthermore, most researchers in the field believe that a problem higher up in the hierarchy is more difficult to solve than one lower down. However, there are many open questions related to the relationship between problem classes and difficulty; the most well-known one is probably the P vs. NP problem. We will look into the hierarchy of problems in more detail in later posts.

While saying anything concrete about the difficulty of a single problem is not easy, we do have another option: we can compare two problems with each other. Let’s get back to our travel planning and ask: is the decision or the optimization problem more difficult? Another way of phrasing the same question is: if we had a solution to one problem, could we use that to get a solution to the other? Let’s start by assuming that we have a magic box that can find us the shortest route visiting all cities in the graph. We do not know how it works (nor do we care), we just assume that it does. With this box it is fairly easy to solve any decision problem of the form “is there a route of length at most K” for some positive number K. We simply use the magic box to compute the length of the shortest route and compare it to K. If it is at most K, we know that a route of length at most K exists. If it isn’t, we know that no such route exists (since the box gave us the shortest possible route). So in this sense, the decision problem is no more difficult than the optimization problem.
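
As a tiny sketch of this direction: below, shortest_route_length is a hypothetical stand-in for the magic box (it just returns a made-up value); answering any decision question then takes exactly one call to it.

```python
def shortest_route_length():
    # Hypothetical stand-in for the optimization "magic box"; a real one would
    # be an exact solver, for example the brute force sketched above.
    return 2190  # made-up length (in km) of the shortest round trip

def route_exists_within(K):
    """Answer the decision question "is there a round trip of length at most K?"
    with a single call to the optimization box."""
    return shortest_route_length() <= K

print(route_exists_within(2500))  # True: the shortest route (2190 km) fits the bound
print(route_exists_within(2000))  # False: even the shortest route is longer than that
```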

How about the other way? Assume we have another magic box that, given any number K, tells us whether there exists a route that visits all cities and has length at most K. Can we use this box to solve the optimization problem? While it might at first glance seem like the answer should be no, there actually is a way. You first ask whether or not there is a route of length at most 1 km. If the answer is yes, you know you have found the length of the shortest possible route. If the answer is no, you then ask if there is a route of length at most 2 km. Again, if the answer is yes, that has to be the shortest possible, since you already know there is no route of length 1 km or shorter. You keep on going this way until your box answers yes, at which point you know you’ve found the shortest route. This is a very simple scheme; any readers familiar with computer science should have no problems coming up with improvements (and any readers not familiar with computer science who figure out improvements should consider a change of career).
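
Here is what that simple scheme looks like in Python. The decision box is again a hypothetical stub that secretly knows the (made-up) answer; the search itself only ever sees its yes/no replies.

```python
OPTIMUM = 2190  # hypothetical hidden shortest route length, in km

def route_exists_within(K):
    # Stand-in for the decision "magic box"; a real one would search the graph.
    return K >= OPTIMUM

def shortest_via_decision_box(decide, step=1):
    """Find the shortest route length using only yes/no answers, by asking
    "is there a route of length at most K?" for K = step, 2*step, 3*step, ..."""
    K = step
    while not decide(K):
        K += step
    return K

print(shortest_via_decision_box(route_exists_within))  # 2190, after 2190 calls to the box
```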

So there we go. A solution to either problem can be used to solve the other one. So both are equally difficult, right? Not quite. The final point I want to consider in this post is how much extra work we needed in order to get the solution to the other problem. Given a solution method for the optimization problem, we do not have to do anything extra to solve the decision problems: just use the solution method once and you can immediately answer any decision question. On the other hand, given a solution method for the decision problem, you need to use it many times in order to solve the optimization problem. Even worse, the number of times you need to use your decision solver depends on the particular graph. For the simple scheme I described, you need to use your decision solver as many times as the length of the shortest route. Even though the number of calls can be decreased significantly, it will always depend on the particular graph; there is no constant number of calls that suffices to solve the optimization problem on every graph. So, as expected, the optimization problem is, in this sense, more difficult.
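
For the curious, one standard improvement of the kind hinted at above is to binary search on K instead of increasing it one step at a time. A sketch under the same made-up assumptions as before: any route we already know gives an upper bound, and the number of yes/no questions drops to roughly the logarithm of that bound (though it still depends on the graph).

```python
OPTIMUM = 2190   # hypothetical hidden shortest route length, in km
calls = 0

def route_exists_within(K):
    # Stand-in for the decision "magic box", with a call counter for illustration.
    global calls
    calls += 1
    return K >= OPTIMUM

def shortest_via_binary_search(decide, upper_bound):
    """Binary search on K: roughly log2(upper_bound) calls to the decision box
    instead of one call per kilometre."""
    lo, hi = 1, upper_bound      # invariant: the shortest length lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid             # some route of length at most mid exists
        else:
            lo = mid + 1         # every route is longer than mid
    return lo

# Assume we already know some (not necessarily shortest) route of length 4000 km.
print(shortest_via_binary_search(route_exists_within, 4000))  # 2190
print(calls)                                                  # 12 calls with these numbers
```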