[m-users.] Examples of constraint solving tasks?

Paul Bone paul at bone.id.au
Mon Sep 22 21:53:17 AEST 2014


On Fri, Sep 19, 2014 at 10:14:42AM +0200, Ondrej Bojar wrote:
> Dear all,
> 
> it's been a while since I last used Mercury. (Don't worry, I still haven't found any substitute programming language; it just happened that all I've done in the meantime was so ugly that Perl fitted just right.)
> 
> Now I'd like to write a solver for an 'optimal riffle shuffle' problem.
> 
> I am given two linear sequences (a, b, c, d, .. and A, B, C, D, ..) and I want to merge them into one linearly sorted sequence (e.g. a, A, b, c, B, d, C, D, ...) subject to:
> - a number of (hard or soft) ordering constraints between the two sequences (b<B, C>d, ...)
> - minimal cost, which is some function (linear might be sufficient) of the node positions (e.g. b-c to make b and c as close as possible)
> 
> I've tried glpsol on an early formulation of the problem as linear optimization. Satisfying the (hard) ordering constraints is easy, as is the minimization of the cost, but I don't think there is a way to require the global linear order. One way to express this would be to say that each variable a, b, .., A, B, has to receive a distinct integer value.
> 
> I haven't really written any CS code, but I know Mercury is the basis of HAL and there used to be the G12 group. Are there any code examples that would illustrate the bits and pieces I might need?
> 
> Looking forward to your response and feeling very homesick for Australia and Melbourne in particular.
> 

I don't think there are any examples that are both 1) open source and 2) easy
to read.

There are two main ways to write solvers in Mercury: you can use Mercury's
backtracking to control your search, or you can choose not to use Mercury's
backtracking and implement the search yourself.  The first option is clearly
simpler; the benefit of the second option is that it gives you some extra
control, which can be useful for some problems.  For example, you can control
the order in which you try things, or how to prune away parts of your search
space that you know will not contain solutions.

Let's start by assuming that you're going to use Mercury's search.  I agree
with you that giving each symbol a value and requiring that all values are
distinct is a useful way to express the problem.

main(!IO) :-
    % Each solution is a list of four values, one position for each symbol.
    solutions(solve(4), Solutions),
    foldl(print_solution, Solutions, !IO).

solve(NumberOfValues, Values) :-
    assign_positions_to_values(NumberOfValues, Values),
    check_solution(Values).

assign_positions_to_values(N, Values).
    % Use a loop to return a list of N elements, where each element is a
    % number between 1 and N inclusive.  Each iteration should be
    % nondeterministic so that the predicate as a whole is nondeterministic.

check_solution(Values) :-
    % Check that there is a total ordering (no position occurs more than
    % once) and that all positions are between 1 and length(Values), plus
    % any other hard constraints, e.g. A < C (assuming there are always
    % four values):
    Values = [A, _B, C, _D],
    A < C.
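
To make that concrete, here is roughly what the whole program could look
like.  This is only a sketch I've just typed up, not tested code; the module
name, the pick_position helper and the example A < C constraint are all
things I've made up for the example.

:- module riffle.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.

:- implementation.
:- import_module int, list, solutions.

main(!IO) :-
    solutions(solve(4), Solutions),
    list.foldl(print_solution, Solutions, !IO).

:- pred print_solution(list(int)::in, io::di, io::uo) is det.

print_solution(Values, !IO) :-
    io.write(Values, !IO),
    io.nl(!IO).

:- pred solve(int::in, list(int)::out) is nondet.

solve(NumValues, Values) :-
    assign_positions_to_values(NumValues, NumValues, Values),
    check_solution(Values).

    % assign_positions_to_values(NumToAssign, NumPositions, Values):
    % Nondeterministically assign each of the NumToAssign values a position
    % between 1 and NumPositions.
    %
:- pred assign_positions_to_values(int::in, int::in, list(int)::out)
    is nondet.

assign_positions_to_values(NumToAssign, NumPositions, Values) :-
    ( NumToAssign = 0 ->
        Values = []
    ;
        pick_position(1, NumPositions, Pos),
        assign_positions_to_values(NumToAssign - 1, NumPositions, Values0),
        Values = [Pos | Values0]
    ).

    % pick_position(Lo, Hi, Pos):
    % Nondeterministically choose a position between Lo and Hi inclusive.
    %
:- pred pick_position(int::in, int::in, int::out) is nondet.

pick_position(Lo, Hi, Pos) :-
    Lo =< Hi,
    (
        Pos = Lo
    ;
        pick_position(Lo + 1, Hi, Pos)
    ).

:- pred check_solution(list(int)::in) is semidet.

check_solution(Values) :-
    % Total ordering: no position occurs more than once.
    list.sort_and_remove_dups(Values, Sorted),
    list.length(Values, Len),
    list.length(Sorted, Len),
    % An example hard constraint, assuming there are always four values.
    Values = [A, _B, C, _D],
    A < C.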

Next, add scoring of each solution (calculate your cost) so that you can
select the result with the lowest cost from the list returned by
solutions/2.  To add soft constraints, factor them into your scoring:
basically, add a penalty for each soft constraint that a solution breaks.
The details are really up to you because you're the one who understands the
problem that you're trying to solve.
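
For example, something along these lines (solution_cost, best_solution and
pick_cheaper are names I've made up here, and the cost below, which just
keeps the second and third symbols close together, is only a placeholder for
your real cost function):

    % Placeholder cost: keep the second and third symbols (b and c in your
    % example) as close together as possible.  Add a penalty here for every
    % soft constraint that the solution breaks.
    %
:- func solution_cost(list(int)) = int.

solution_cost(Values) = Cost :-
    ( Values = [_, B, C | _] ->
        Cost = abs(B - C)
    ;
        Cost = 0
    ).

    % Pick the cheapest of a non-empty list of solutions.
    %
:- pred best_solution(list(list(int))::in, list(int)::out) is semidet.

best_solution([Solution | Solutions], Best) :-
    list.foldl(pick_cheaper, Solutions, Solution, Best).

:- pred pick_cheaper(list(int)::in, list(int)::in, list(int)::out) is det.

pick_cheaper(New, Best0, Best) :-
    ( solution_cost(New) < solution_cost(Best0) ->
        Best = New
    ;
        Best = Best0
    ).

main would then call best_solution(Solutions, Best) and print only Best
rather than printing every solution.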

While that should be enough for your program to return the correct solution,
it can be slow for large problems, because the solution space is very large.
There are N! potential orderings for a list of N elements (for N = 20 that
is already about 2.4 x 10^18), so if N is large your program could be very
slow, as Mercury will test every single combination.

Constraint Logic Programming and Constraint Programming are the fields of
research that deal with optimising such searches, by pruning away parts of
the search tree that are known not to contain any solutions.  However, in
order to use these techniques with Mercury you must not use Mercury's
backtracking, and instead write your own code to decide what to test next.
(Unless you use impurity or something else that breaks the normal logical
semantics.)  This is more difficult and not really something that can be
explained fully in an e-mail; besides, there are online resources that do
this better, so search for "branch and bound".

Fortunately I can point you at some code that implements a branch-and-bound
search.  In my thesis I needed to take a regular conjunction:

    A, B, C, D

and choose how best to turn it into a parallel conjunction:

    (A, B, C) & D or
    A & (B, C, D)
    etc.

I wrote a branch and bound solver to do this:

https://github.com/PaulBone/mercury/blob/master/deep_profiler/autopar_find_best_par.m#L383

That's where the solver begins, but the interesting part is in
generate_parallelisations_loop at:

https://github.com/PaulBone/mercury/blob/master/deep_profiler/autopar_find_best_par.m#L607

I noticed that if I had more than, say, 80 goals in a conjunction then even
using branch and bound was too slow.  So this code checks whether we've
already spent a long time searching for the best solution, and if we have it
just picks the branch that looks good so far and commits to it.

There's a description of this algorithm in my thesis:
http://www.mercurylang.org/documentation/papers.html#pbone_phd_thesis

Let me know if you have any more questions.

I hope you're well, it's been a long time since we've spoken.

Cheers.


-- 
Paul Bone


