[m-dev.] Thinking about mode analysis and uniqueness

Nancy Mazur Nancy.Mazur at cs.kuleuven.ac.be
Mon Jan 6 18:25:45 AEDT 2003


Hi Ralph, 

this feels like a liveness analysis to me, in exactly the sense that we use
in the context of compile-time garbage collection.
As for possible sharing/aliasing: you might have a look at the reuse branch.
There we do a full analysis to collect (possible) aliasing information
between terms. But perhaps that analysis is too detailed for your purposes?

* Ralph Becket <rafe at cs.mu.OZ.AU> [2003-01-06 04:08]:
> Second attempt.
> 
> Properties of variables at some point in a computation:
> 
> bX	X is definitely bound to some (fully ground) value.
> 
> fX	X is definitely free.
> 
> sX	X is possibly shared (i.e. X may have been used in more than one
> 	construction).  Possible sharing may be asserted or inferred
> 	(see Sharing, below.)
> 
> uX	X is definitely unique (i.e. X has been used in at most one
> 	construction that is itself definitely unique).
> 
> dX	X is definitely dead (i.e. X may not be used again.)
> 
> lX	X is definitely alive.
> 
> X ~ Y	X and Y are possibly aliased.  ~ is an equivalence relation.
> 
> X < Y	X is possibly used in the construction bound to Y.  < is a
> 	partial order.
> 
> Sharing:
> 
> X is possibly shared if two of its possible aliases (which may include X
> itself) are possibly used in the constructions for two other variables
> which are not themselves possibly aliased:
> 
> 	(some [Y, Z, A, B] (X ~ Y, X ~ Z, Y < A, Z < B, not A ~ B))  =>  sX
> 
> This is a little subtle.  The constraint on A and B is most clearly
> stated as possibly_different(A, B); now
> 	possibly_different(A, B)   <=> not definitely_the_same(A, B)
> and 	definitely_the_same(A, B)   => possibly_the_same(A, B)
> so	not possibly_the_same(A, B) => not definitely_the_same(A, B)
> hence	not possibly_the_same(A, B) => possibly_different(A, B)
> and possibly_the_same is synonymous with ~.
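
(Just to check my reading of this rule on a made-up instance: after

	A = f(X),
	B = g(X)

we have X < A and X < B with A and B not aliased, so taking Y = Z = X in
the rule above (~ is reflexive) gives sX: X has been stored in two
distinct cells.)
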
> 
> Definitions:
> 
> 	fX <=> not bX.
> 	uX <=> not sX.
> 	lX <=> not dX.
> 
> Inference:
> 
> 	(sX, X ~ Y) => sY
> 	(dX, X ~ Y) => dY
> 
> 	(X < Y, sY) => sX
> 	(dX, X < Y) => dY

I'm not sure this is correct... 

Your definition X < Y: 
|| X < Y X is possibly used in the construction bound to Y.  < is a
||       partial order.

So this reflects situations like Y = f(X, ...).
If so, then the 'deadness' of X doesn't tell us anything about the deadness
of Y...
The other way around:
	X = f(Y, ...)
If X is dead, that does indeed imply that all the Y's are dead... but I
don't see how this rule will ever get applied, because in most situations
you are interested in proving that all the arguments are dead in order to
conclude that X is fully dead. And even then, you have to be sure that the
outermost constructor is not shared.

I don't know if I'm making myself clear here, but the thing is, I don't
understand the point of this rule ;-)
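
To make the worry concrete (f/1 and use/2 are made up):

	p(X, Z) :-
		Y = f(X),	% this gives X < Y
		use(Y, Z).	% X does not occur below the construction,
				% so X can be considered dead here, yet Y
				% is obviously still alive

So from dX and X < Y alone I would not want to conclude dY.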

> Goals:
> 
> I will use the notation Precondition | Goal | Postcondition
> or just Goal | Postcondition where Precondition is not of interest
> or just Precondition | Goal where Postcondition is not of interest.
> 
> We treat goals as operators on states of variables and write P(S) to
> denote the state resulting from successfully executing P starting in
> state S.
> 
> The Postcondition lists those properties that are explicitly affected
> by successfully executing the goal.  All others that do not follow are
> unaffected (so we don't have to write endless frame axioms.)
> 
> Aliasing, X = Y
> 
> 	lX, fX, lY, bY  |  X = Y  |  bX, X ~ Y
> 
> Test for equality, X = Y
> 
> 	lX, bX, lY, bY  |  X = Y  |
> 
> Construction, X = f(Y, ...)
> 
> 	lX, fX, lY, bY, ...  |  X = f(Y, ...)  |  bX, Y < X
> 
> Deconstruction, X = f(Y, ...)
> 
> 	lX, bX, lY, fY, ...  |  X = f(Y, ...)  |  bY, Y < X, ...
> 
> 	We may also have postcondition uY for some Y, depending upon the
> 	inst of X at this point.

I don't know how fine-grained you are planning to go (and I'm not sure
anymore what the goal of this mode analysis was in the first place), but
uY will only be the case if X is fully unique. Are you planning to
distinguish between uX and the uniqueness of the outermost functor of X?
Anyway, I have the feeling that you really ought to check the reuse branch
I've been working on. If the only difference is the degree of detail of
the analysis, then it's not difficult to change the level of detail by
simplifying the domains we use.
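
The distinction I have in mind shows up in situations like (pair/2 and
f/1 made up):

	q(Y, P, X) :-
		P = pair(Y, Y),	% Y now occurs in another construction...
		X = f(Y).	% ...so X's f/1 cell may be unique while
				% its argument Y is shared

A later deconstruction X = f(Y1) could then reuse X's cell, but it must
not claim uY1.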

> Test for equality with functor, X = f(Y, ...)
> 
> 	lX, bX, lY, bY, ...  |  X = f(Y, ...)  |
> 
> Procedure calls, p(X, Y, ...)
> 
> 	The pre- and postconditions for each call will depend upon the
> 	procedure used.
> 
> 	The precondition will require that all arguments be live.
> 
> 	Argument mode X	| Precondition	| Postcondition
> 	----------------+---------------+--------------
> 	free >> free	| fX		|
> 	free >> bound	| fX		| bX, sX
> 	free >> unique	| fX		| bX, uX
> 	bound >> bound	| bX		| sX
> 	unique >> bound	| bX, uX	| sX
> 	unique >> unique| bX, uX	|
> 	unique >> dead	| bX, uX	| dX
> 
> 	It is an error for the postcondition of p(X, Y, ...) to
> 	include (X ~ Y, uX, uY) (unless we also have (dX, dY) in the
> 	postcondition.)
> 
> 	This constraint forbids Peter's example where
> 
> 		:- mode p(di, uo, uo).
> 
> 		p(X, Y, Z) :- Y = X, Y = Z.

I wonder what the precision of the overall process will be. I have a
strong feeling that everything will end up shared at some point.
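
For example, if I read your call table correctly, even a call that merely
inspects its argument (say list__length, used bound >> bound) gets the
postcondition sX:

	p(Xs, N) :-
		% Xs is only inspected here, but with the bound >> bound
		% row above, the call's postcondition already includes sXs,
		% so any uniqueness of Xs is lost from this point on.
		list__length(Xs, N).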

> Conjunction, (P, Q), starting with state S
> 
> 	If PrecondP | P then S must entail PrecondP.
> 	If PrecondQ | Q then P(S) must entail PrecondQ.
> 	The resulting state is Q(P(S)).
> 
> Disjunction, (P ; Q), starting with state S
> 
> 	If PrecondP | P then S must entail PrecondP.
> 	If PrecondQ | Q then S must entail PrecondQ.
> 	If X is a nonlocal and P | bX then we must also have Q | bX
> 	and vice versa.
> 	For any nonlocals X and Y we must have
> 		if P | dX    then (P ; Q) | dX    (and similarly for Q),

I disagree. The fact that P leaves a variable dead does not mean that the
whole disjunction leaves it dead. What if Q contains some non-deterministic
call depending on X? In that case, even if X is not used 'after' (P ; Q),
it is unsafe to call it dead, as it may still be used upon backtracking.
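
Concretely, something like (q/2 and r/1 made up):

	(
		Y = 1		% X is not used here, so this branch gives dX
	;
		q(X, Y)		% nondet call that inspects X
	),
	r(Y)			% X does not occur after the disjunction either

Even if we reach r(Y) through the first branch, a later failure can still
backtrack into q(X, Y), so X's cell had better not have been reused in the
meantime.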

> 		if P | sX    then (P ; Q) | sX    (and similarly for Q),
> 		if P | X < Y then (P ; Q) | X < Y (and similarly for Q),
> 		if P | X ~ Y then (P ; Q) | X ~ Y (and similarly for Q).

Hmmm. What if P | sX and Q | X < Y? What is the result?
(P ; Q) | sX, sY? How does information get lost here?

I think you should be more specific about what the 'least upper bound' of
the different elements in your domain is... because it is a least upper
bound that you are computing here...
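
For instance, applying your disjunction rules literally to that case gives

	P | sX,  Q | X < Y   ==>   (P ; Q) | sX, X < Y

i.e. both facts are simply kept. Whether that then also forces sY is not
clear to me, since the inference rules only go from sY to sX, not the
other way around.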

> Negation, not P
> 
> 	Handled as ( if P then false else true ).
> 
> Conditional goals, ( if P then Q else R ), starting with state S
> 
> 	If X is a nonlocal then we may not have P | bX.
> 
> 	Otherwise handled as (P, Q ; R).
> 
> Initial state
> 
> 	The initial properties of each argument variable are taken from
> 	the argument mode (with initially free variables also being
> 	unique) and local variables are initially free and unique.
> 
> Variable Death
> 
> 	If at any point we have uX and for all Y s.t. X ~ Y we have that
> 	Y does not appear anywhere in the subsequent computation, then
> 	we infer that X is dead, hence dX.

"anywhere in the subsequent computation" may be a bit harder then you
think if you want to take non-deterministic procedure calls into
account... 

> Reuse criterion
> 
> 	A variable X's cell may be destructively updated (reused) if we
> 	have (uX, dX).
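
The kind of place where this pays off would then be something like
(pair/2 made up):

	swap(P0, P) :-
		P0 = pair(A, B),	% if uP0 and dP0 hold at this point...
		P = pair(B, A).		% ...P0's cell can be reused for P

which is also the kind of structure reuse we are after with CTGC.
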
> 
> I'll hold off posting thoughts on efficient implementation until I see
> what holes people can find in the above.

I'll repeat myself ;-) But this analysis is precisely what the liveness
analysis we're doing in the context of CTGC computes, and that analysis
is finer-grained than what you specify here.

So, my questions would be:
- what is the exact purpose of all these inferences?
- can't the liveness information we collect simply be used for your
  purposes? Or is that analysis too complex/time-consuming?

The thing is that the analysis we've implemented infers, for each variable
that is ever deconstructed, whether it becomes dead or not. If it's not
dead, the analysis will still say what it is aliased to. And if the aliases
die, then at some point the whole structure may die.

Another, finer-grained question...
I guess you don't need a fixpoint computation, because every bound argument
of a procedure call is automatically set to shared?

For the moment I have no further remarks... but I think we should talk
;-) Or else one of us will end up duplicating work here...

Ciao ciao, 
Nancy
