[m-dev.] Solvers, negation, existential quantification, and so forth

Mark Brown mark at cs.mu.OZ.AU
Mon Jun 5 15:18:16 AEST 2006

On 02-Jun-2006, Ian MacLarty <maclarty at cs.mu.OZ.AU> wrote:
> On Fri, Jun 02, 2006 at 04:57:33PM +1000, Mark Brown wrote:
> > On 02-Jun-2006, Ian MacLarty <maclarty at cs.mu.OZ.AU> wrote:
> > > On Thu, Jun 01, 2006 at 03:38:31PM +1000, Mark Brown wrote:
> > > > Peter Schachte pointed out to me that we may be able to express the condition
> > > > we want by using something like functional dependencies.  E.g.: the signature
> > > > for '+' could assert the dependencies
> > > > 
> > > > 	(X, Y -> Z), (X, Z -> Y), (Y, Z -> X)
> > > > 
> > > 
> > > Could you elaborate on exactly what these functional dependencies mean?
> > 
> > They express relationships between the groundness of solver variables.
> > `X, Y -> Z' means that (X is ground) and (Y is ground) implies (Z is ground).
> > 
> > > What is the difference between the above constraints and having the
> > > following three modes?
> > > 
> > > :- mode +(in, in, out) is det.
> > > :- mode +(in, out, in) is det.
> > > :- mode +(out, in, in) is det.
> > 
> > Mode declarations put constraints on the call patterns, so you wouldn't
> > be allowed to call this '+' unless two of the arguments are already ground.
> > 
> They say more than that.  The three modes above say that the value of
> any one of the arguments is a function of the value of the other two
> arguments.
> > With the constraint solving version of '+' the variables have inst `any'
> > at the time of call -- none of the variables need be ground.  Any time after
> > the call the relationship above will hold, so if X and Y become ground
> > sometime later on then Z will also.
> > 
> > The "functional dependencies" or "groundness assertions" or whatever we want
> > to call them provide extra information about the `any' insts which we
> > otherwise wouldn't have.
> > 
> Okay I see.  Though I think that extra information you're talking about
> is actually in those three mode declarations already.  There may be
> other modes with any insts, but those three ground modes give the
> "groundness assertions" you're talking of, don't they?

The modes entail the groundness assertions, but not vice-versa.  That is,
mode declarations give too much information.

(I should never have used the phrase "something like functional dependencies";
they are only superficially similar, and in particular they don't tell you
that some arguments are functionally dependent on others.)

> Of course using this information would mean doing something very
> different with the modes at compile time (for example you probably
> wouldn't want to generate code for those three modes, only the any
> mode).  My point is that the extra information can be expressed with the
> existing mode system as I see it.  Could you give an example of these
> "functional dependencies" that couldn't be expressed with the existing
> mode system?

It may be possible to derive all the groundness information that we need
from existing mode declarations.  I don't mind whether the information
comes from existing mode declarations or from some new declarations; at
this stage, I'm just interested in whether the groundness information is
of general use at all in helping the programmer ensure that their program
is solver complete (and helping them diagnose the problem when it isn't).

Here's an example where groundness information can help.  Take the
following definition of a function that adds a constant to a solver
variable (whose type is fdvar):

        :- func plusc(fdvar, int) = fdvar.
        :- mode plusc(ia, in) = ia is semidet.

        plusc(A, N) = A + c(N).

where we assume the existence of definitions for:

        % Addition:
        :- func fdvar + fdvar = fdvar.
        :- mode ia + ia = ia is semidet.

        % Create a new variable with a constant value.
        :- func c(int) = fdvar.
        :- mode c(in) = oa is det.

First, consider what the compiler will do when expanding the definition
to superhomogeneous form.  It will create something like:

        plusc(A, N) = V0 :-
                some [V1, V2] (
                        V1 = c(N),
                        V2 = A + V1,
                        V0 = V2
                ).

We really don't want to require the user to assert anything about V1 and
V2 -- they don't even appear in the original program!

Now assume we have the following groundness information (and please ignore
the concrete syntax):

	:- groundness A + B = C <= (A, B -> C), (A, C -> B), (B, C -> A).
	:- groundness c(A) = B <= (true -> B).

And for equality we would of course have:

	:- groundness A = B <= (A <-> B).

Reading these as boolean formulas, we can abstractly interpret the body of
plusc as follows:

	true -> V1,					% V1 = c(N)
	A /\ V1 -> V2, A /\ V2 -> V1, V1 /\ V2 -> A,	% V2 = A + V1
	V0 <-> V2					% V0 = V2

If we project away the existentially quantified variables V1 and V2, we
get the boolean formula A <-> V0.  So the inferred groundness information
for plusc is

	:- groundness plusc(A, N) = B <= (A <-> B).

which is as we would expect.
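The projection step above can be checked mechanically.  Here is a small
Python sketch (Python used purely as a checking tool; the encoding of the
groundness formulas as boolean functions is mine, not part of any proposed
implementation) that enumerates all truth assignments satisfying the body
formula of plusc and projects away V1 and V2:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Groundness formula for the body of plusc, read as a boolean formula
# over A, V0, V1, V2 (N does not appear, since c/1 grounds its output
# unconditionally).
def body(A, V0, V1, V2):
    return (implies(True, V1)          # V1 = c(N):    true -> V1
        and implies(A and V1, V2)      # V2 = A + V1:  A, V1 -> V2
        and implies(A and V2, V1)      #               A, V2 -> V1
        and implies(V1 and V2, A)      #               V1, V2 -> A
        and (V0 == V2))                # V0 = V2:      V0 <-> V2

# Project away the existentially quantified V1 and V2.
projected = {(A, V0)
             for A, V0, V1, V2 in product([False, True], repeat=4)
             if body(A, V0, V1, V2)}

# The projection should be exactly the formula A <-> V0.
expected = {(A, V0) for A, V0 in product([False, True], repeat=2)
            if A == V0}
print(projected == expected)
```

Running this prints True, confirming that the inferred groundness for
plusc is indeed A <-> V0.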

Now the payoff.  To check that no incompleteness can be introduced by a
call to plusc, it is sufficient to check that if all the head variables
of each clause are ground then all existentially quantified variables
are ground.  That is, we need to check that the formula
`(A /\ V0) -> (V1 /\ V2)' is entailed by the above formula for the body
of plusc, which it is.  Checking this is left as an exercise.

The above example shows that groundness information can help verify
completeness, particularly in the case that the compiler introduces
variables that are later quantified away.  To balance things a bit,
here's a case where it won't work.  Consider the goal

	L = min(X, Y),
	U = max(X, Y)

where min and max have the usual meaning.  The groundness information for
min and max is:

	:- groundness min(X, Y) = L <= (X, Y -> L).
	:- groundness max(X, Y) = U <= (X, Y -> U).

therefore the best we can infer for the goal in question is
(X, Y -> L, U).  In particular, we cannot infer (L, U, X -> Y), even
though it would be true (since either X = L and Y = U, or X = U and Y = L).
For a goal such as the one above, we may need some way to promise that Y
is ground.
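The failure of inference here is easy to confirm mechanically.  The
following Python sketch (a checking tool only; the boolean encoding is
mine) shows that the best abstract description of the goal does not
entail the dependency (L, U, X -> Y), even though that dependency holds
of the concrete min/max relation:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Conjunction of the groundness formulas for the goal:
# (X, Y -> L) /\ (X, Y -> U)
def goal(X, Y, L, U):
    return implies(X and Y, L) and implies(X and Y, U)

# Does the abstract description entail (L, U, X -> Y)?
entailed = all(implies(goal(X, Y, L, U),
                       implies(L and U and X, Y))
               for X, Y, L, U in product([False, True], repeat=4))
print(entailed)
```

This prints False: the assignment with X, L and U ground but Y not
ground satisfies the abstract description, so the analysis cannot rule
it out.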

Another way in which the groundness analysis is not strong enough is that
constraints may become entailed before some variables are ground, in which
case it is not necessary to ensure that the variables become ground later.
But I can't imagine any practical analysis that will help with this.

To conclude, we _can_ do something useful to help programmers ensure that
their programs are solver complete for main.  There will be cases (such
as the min/max example) where we are not accurate enough, but I think
these will be relatively rare.  Anyone have any good examples that
demonstrate otherwise?
