[m-dev.] Solvers, negation, existential quantification, and so forth

Peter Schachte schachte at csse.unimelb.edu.au
Thu Jun 1 17:10:52 AEST 2006


On Thu, Jun 01, 2006 at 03:38:31PM +1000, Mark Brown wrote:
> On 01-Jun-2006, Ralph Becket <rafe at cs.mu.OZ.AU> wrote:
> > RECOMMENDATION: we (somehow, perhaps by annotation on mode declarations)
> > identify constraint posting predicates that may succeed, but leave the
> > solver state inconsistent.
> 
> If we were to adopt such a scheme, then I'd suggest putting the explicit
> annotation on the goals where success means consistent.

Since you need to indicate which solver(s) they are incomplete
relative to, you'll need to list the incomplete ones anyway.

> However, I don't think that attaching this information to modes is
> fine-grained enough.  Consider the goal `X = Y + Z' for FD variables X, Y
> and Z.  A program will be solver complete for this goal if at least two of
> the variables are ground at the time of the call (assuming a reasonable
> implementation of '+'), but may not be if only one of them is ground.

Hang on, I think that's exactly backward.  The constraint X = Y + Z
with 2 or fewer ground variables is always satisfiable, so it's
trivially complete.  With 3 ground variables, it's pretty trivial to
make it complete:  just calculate and do an equality check.  The
incomplete case is if you've already got some funky constraints on all
3 variables that don't ground them, and you don't want to bother to
check whether the relation holds.  If you don't mind checking, you're
complete.  We want to define a goal as complete as long as it can't
*introduce* failure without failing; it need not confirm the whole
constraint store is satisfiable.  If the constraint store is already
unsatisfiable, a goal for which the solver is solver complete can do
anything it likes.  I.e., if S is the current constraint store and
c(X,...) is a constraint, then we say the solver is solver complete
for c iff

	forall S, X... : S /\ succeeds(c(X...)) => c(X...)

> Peter Schachte pointed out to me that we may be able to express the condition
> we want by using something like functional dependencies.  E.g.: the signature
> for '+' could assert the dependencies
> 
> 	(X, Y -> Z), (X, Z -> Y), (Y, Z -> X)
> 
> which, if we have information about which variables are ground at the start
> of the call, will tell us whether the program is solver complete for that
> goal.

I still think this could form the basis of a very nice replacement for
Mercury's whole mode and determinism system, with a lot of research.
But I don't think it can help CLP much, because you don't know at
compile time which variables are ground.  That's too data-dependent.
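For what it's worth, the closure computation Mark alludes to is simple
enough to state as code.  This is a hypothetical Python sketch (the
names and representation are mine, not compiler code):

```python
# Hypothetical fundep declarations for '+': each (lhs, rhs) pair says
# groundness of everything in lhs determines groundness of rhs.
PLUS_DEPS = [({'X', 'Y'}, 'Z'), ({'X', 'Z'}, 'Y'), ({'Y', 'Z'}, 'X')]

def closure(ground, deps):
    """Least fixpoint: keep adding variables determined by ground ones."""
    g = set(ground)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in deps:
            if lhs <= g and rhs not in g:
                g.add(rhs)
                changed = True
    return g

def solver_complete(ground, deps, all_vars):
    """The call is solver complete if the ground arguments determine
    all the remaining ones via the declared dependencies."""
    return closure(ground, deps) == set(all_vars)
```

With any two of the variables ground the closure reaches all three;
with only one it goes nowhere, which matches Mark's observation -- but,
as above, only if the groundness is known at compile time.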

> > This property is inherited by compound
> > goals.
> 
> How?  The "obvious" way to do it (propagating incompleteness upwards) would
> not be very accurate, since a conjunction containing incomplete conjuncts
> may itself be complete if the last conjunct labels all the variables.

That's why you need a separate "makes complete" component, to record
that a goal *removes* (not merely doesn't add) incompleteness.

> This is why I asked earlier for a theory about solver completeness.  The
> key questions I want to know about are:
> 
> 	1)  Given information about the completeness of atomic goals,
> 	how can we infer completeness information about compound goals
> 	containing them?
> 
> 	2)  What form does this information take?
> 
> 	3)  How much of this theory is valid for solvers in general, and
> 	how much of it is solver specific?  Is there any useful theory at
> 	all for solvers in general?

My answers:

2)  a tuple:  <set of solver ids the goal is incomplete for, set of
    solver ids the goal makes complete>

1) Abstract interpretation should work.  Abstractly execute the code,
   replacing each goal with the tuple described above.  Here are the
   abstractions of conjunction and disjunction.  Atomic goals abstract
   to their declarations.  Negation and if-then-else are left as an
   exercise.  main/2 must be abstracted as <empty,_>, and that must
   also be the abstraction of the program point before any call that
   passes an io.state.

	abstract_conjunction(<i1,c1>, <i2,c2>) =
		<(i1 union i2) setdiff c2, c2 union (c1 setdiff i2)>

	abstract_disjunction(<i1,c1>, <i2,c2>) =
		<i1 union i2, c1 intersect c2>

3)  I think this theory is useful in general, but it still requires
    all primitive constraints to be reliably declared.  The rest can
    be inferred or checked.
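The two abstractions above translate directly into code.  Here's a
sketch with the tuples represented as pairs of Python sets of solver
ids (the ids themselves are just illustrative strings):

```python
def abstract_conjunction(g1, g2):
    """<i1,c1> followed by <i2,c2>: whatever g2 makes complete is no
    longer incomplete afterwards, and g1's completions survive unless
    g2 reintroduces incompleteness for them."""
    (i1, c1), (i2, c2) = g1, g2
    return ((i1 | i2) - c2, c2 | (c1 - i2))

def abstract_disjunction(g1, g2):
    """<i1,c1> or <i2,c2>: incomplete if either branch is, and
    complete only for solvers both branches make complete."""
    (i1, c1), (i2, c2) = g1, g2
    return (i1 | i2, c1 & c2)

# E.g. posting an fd constraint incompletely, then labelling:
post = ({'fd'}, set())     # incomplete for fd, makes nothing complete
label = (set(), {'fd'})    # makes fd complete
```

Note that the conjunction of `post` and `label` comes out as
(empty, {'fd'}), i.e. complete, exactly the case Mark raised where a
final labelling conjunct repairs an incomplete conjunction.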

-- 
Peter Schachte              Most people would sooner die than think, in fact
schachte at cs.mu.OZ.AU        they do so.
www.cs.mu.oz.au/~schachte/      -- Bertrand Russell 
Phone: +61 3 8344 1338      
--------------------------------------------------------------------------
mercury-developers mailing list
Post messages to:       mercury-developers at cs.mu.oz.au
Administrative Queries: owner-mercury-developers at cs.mu.oz.au
Subscriptions:          mercury-developers-request at cs.mu.oz.au
--------------------------------------------------------------------------


