[m-dev.] Solvers, negation, existential quantification, and so forth

Mark Brown mark at cs.mu.OZ.AU
Thu Jun 1 19:09:56 AEST 2006

On 01-Jun-2006, Peter Schachte <schachte at csse.unimelb.edu.au> wrote:
> On Thu, Jun 01, 2006 at 03:38:31PM +1000, Mark Brown wrote:
> > On 01-Jun-2006, Ralph Becket <rafe at cs.mu.OZ.AU> wrote:
> > > RECOMMENDATION: we (somehow, perhaps by annotation on mode declarations)
> > > identify constraint posting predicates that may succeed, but leave the
> > > solver state inconsistent.
> > 
> > If we were to adopt such a scheme, then I'd suggest putting the explicit
> > annotation on the goals where success means consistent.
> Since you need to indicate which solver(s) they are incomplete
> relative to, you'll need to list the incomplete ones anyway.

I don't understand this point.

Elsewhere in the thread, you explained:
> This property needs to be parametric in the solver id involved.  This
> is because a primitive may render a *particular* solver complete, but
> it can't possibly promise to render *all* solvers complete.

I'm not entirely sure what you mean by a solver id.  For the FD library,
there is no such thing as a "solver" in the program, any more than there
is a solver for ordinary Mercury variables.  There is only a constraint
graph, for which each component may either be satisfiable or unsatisfiable;
labelling all of the variables in a component is sufficient to ensure that
we know which.  Whether there are other components of the constraint graph
which are unsatisfiable cannot be determined by such labelling.

In any case, there is no need for an id to tell you which component you
are working with.  The variables themselves tell you that.

(For other solvers, a "solver id" may be the right approach, though.)
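The component view above can be made concrete with a minimal Python sketch.
This is purely illustrative and not the FD library's representation: I encode
each constraint as the tuple of variables it links, and compute the connected
components of the resulting constraint graph.

```python
from collections import defaultdict

def components(variables, constraints):
    """Connected components of the constraint graph.

    `constraints` is a list of tuples, each naming the variables that
    one constraint links together (e.g. ("X", "Y", "Z") for X = Y + Z).
    Labelling every variable in one component decides that component's
    satisfiability, but says nothing about the other components.
    """
    adj = defaultdict(set)
    for linked in constraints:
        for v in linked:
            adj[v].update(set(linked) - {v})
    seen, comps = set(), []
    for v in variables:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# X = Y + Z links {X, Y, Z}; P `neq` Q links {P, Q}: two components,
# so labelling [X, Y, Z] tells us nothing about {P, Q}.
comps = components(["X", "Y", "Z", "P", "Q"], [("X", "Y", "Z"), ("P", "Q")])
print(comps)
```

Note that the variables themselves identify the component, which is the point
made above: no separate "solver id" is needed for this.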

> > However, I don't think that attaching this information to modes is
> > fine-grained enough.  Consider the goal `X = Y + Z' for FD variables X, Y
> > and Z.  A program will be solver complete for this goal if at least two of
> > the variables are ground at the time of the call (assuming a reasonable
> > implementation of '+'), but may not be if only one of them is ground.
> Hang on, I think that's exactly backward.  The constraint X = Y + Z
> with 2 or fewer ground variables is always satisfiable, so it's
> trivially complete.

Perhaps I wasn't clear.  The variables in question may already be
constrained (e.g., by X `neq` Y and Z = 0), but the unsatisfiability
may not be detected until either X or Y becomes ground.
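To see that this store really is unsatisfiable, here is a small Python sketch
(my own brute-force check, not anything from the FD library): no assignment
over a sample domain satisfies all three constraints, yet a propagator that
only wakes on groundness would not notice this until X or Y is labelled.

```python
def consistent(x, y, z):
    """The conjunction X `neq` Y, Z = 0, X = Y + Z, fully evaluated."""
    return x != y and z == 0 and x == y + z

# Exhaustive check over a sample domain: Z = 0 forces X = Y, which
# contradicts X `neq` Y, so there are no solutions at all.
dom = range(0, 3)
solutions = [(x, y, z) for x in dom for y in dom for z in dom
             if consistent(x, y, z)]
print(solutions)   # []
```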

> With 3 ground variables, it's pretty trivial to
> make it complete:  just calculate and do an equality check.  The
> incomplete case is if you've already got some funky constraints on all
> 3 variables that don't ground them, and you don't want to bother to
> check whether the relation holds.  If you don't mind checking, you're
> complete.  We want to define a goal as complete as long as it can't
> *introduce* failure without failing; it need not confirm the whole
> constraint store is satisfiable.  If the constraint store is already
> unsatisfiable, a goal for which the solver is solver complete can do
> anything it likes.  Ie, if S is the current constraint store and
> c(X,...) is a constraint, then we say the solver is solver complete
> for c iff
> 	forall S, X... : S /\ succeeds(c(X...)) => c(X...)

I agree, but I suspect we're talking about different problems.

For the goal

	X `neq` Y,
	Z = c(0),	% i.e., Z = 0
	X = Y + Z

the program is not generally solver complete, since this goal could
succeed even though it is inconsistent.

For the goal

	X `neq` Y,
	Z = c(0),
	X = Y + Z,
	labelling([X, Y, Z])

the program is solver complete, since all variables are labelled.

The point I was making above is that for the goal

	X `neq` Y,
	Z = c(0),		% A
	X = Y + Z,		% B
	labelling([X])		% C

the program _is_ solver complete, even though only X gets labelled.  The
proof of this is that line A causes Z to be ground, line C causes X to be
ground, and then (since two of the variables are ground) line B causes Y
to be ground.  This goal would indeed fail (assuming that the constraint
implementations are "checking").
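The groundness argument for lines A-C can be traced in a few lines of Python.
This is a hand-simulation of the propagation order, not real solver code, and
it assumes a "checking" implementation of `+` as stated above:

```python
def try_label_x(x):
    """Simulate labelling X in: X `neq` Y, Z = c(0), X = Y + Z."""
    z = 0          # line A grounds Z
    y = x - z      # line B: with X and Z ground, X = Y + Z grounds Y
    return x != y  # the X `neq` Y constraint is now fully checkable

# Since Z = 0 forces Y = X, the `neq` check fails for every choice of X,
# so the goal as a whole fails, as claimed.
assert not any(try_label_x(x) for x in range(-5, 6))
```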

More to the point, the following goal is fine:

	some [Y, Z] (
		X `neq` Y,
		Z = c(0),
		X = Y + Z
	)

Failure will occur at the latest when X eventually becomes ground, which
will have to occur before X is quantified away.

> > Peter Schachte pointed out to me that we may be able to express the condition
> > we want by using something like functional dependencies.  E.g.: the signature
> > for '+' could assert the dependencies
> > 
> > 	(X, Y -> Z), (X, Z -> Y), (Y, Z -> X)
> > 
> > which, if we have information about which variables are ground at the start
> > of the call, will tell us whether the program is solver complete for that
> > goal.
> I still think this could form the basis of a very nice replacement for
> Mercury's whole mode and determinism system, with a lot of research.

I have to agree :-).

(Mode segments, anyone?)

> But I don't think it can help CLP much, because you don't know at
> compile time which variables are ground.  That's too data-dependent.

See the addendum to my earlier post.

We don't need to check that all variables are ground.  We only need to
check the assertion that if the non-local variables are ground, then the
local variables become ground.  This is much easier to check.
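The check itself is just a closure computation over the functional
dependencies.  Here is a hypothetical sketch (the `ground_closure` helper and
the dependency encoding are my own, not part of any Mercury library): starting
from the variables known to be ground, repeatedly apply any dependency whose
premises are all ground.

```python
def ground_closure(initially_ground, deps):
    """Closure of a set of ground variables under functional dependencies.

    `deps` is a list of (premises, conclusion) pairs; e.g. the
    dependency (X, Y -> Z) is encoded as ({"X", "Y"}, "Z").
    """
    ground = set(initially_ground)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in deps:
            if premises <= ground and conclusion not in ground:
                ground.add(conclusion)
                changed = True
    return ground

# The dependencies for '+' quoted above.
plus_deps = [({"X", "Y"}, "Z"), ({"X", "Z"}, "Y"), ({"Y", "Z"}, "X")]

# If the non-local X is ground and Z is grounded locally, Y becomes ground:
print(ground_closure({"X", "Z"}, plus_deps))
```

The assertion to check is then simply: with all non-local variables in the
starting set, every local variable ends up in the closure.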


mercury-developers mailing list
Post messages to:       mercury-developers at cs.mu.oz.au
Administrative Queries: owner-mercury-developers at cs.mu.oz.au
Subscriptions:          mercury-developers-request at cs.mu.oz.au
