# [m-dev.] Uniqueness again

Ralph Becket rafe at cs.mu.OZ.AU
Fri Jan 31 14:02:01 AEDT 2003

I think I have a way to handle mode analysis for nested uniqueness
that would fit well with the current mode analyser and would only slow
things down when dealing with explicitly unique procedure arguments or
when destructive update optimization is turned on.  The idea is to keep
track of only as much information as is necessary to decide whether a
value is necessarily unique (and, optionally, reusable), but to defer
actually doing the analysis until we find an attempt to pass a value as
a (unique >> whatever) argument to a procedure, at which point it
becomes necessary to decide the uniqueness of the value in question.

The analysis incrementally constructs a possible use graph with edges (<)
between variables:
- X < X implicitly (this makes the analysis simpler);
- X < Y if X is possibly used in the construction of Y;
- Y < X if Y is possibly obtained by deconstructing X;
- X < Y, Y < X if X and Y are possible aliases (let us write X ~ Y as
shorthand for X < Y, Y < X.)
Each goal that is scheduled extends the graph; the resulting graph is
associated with the success of that goal.  The graph resulting from a
disjunction is obtained by taking the union of the graphs resulting from
each of the disjuncts.
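As a first cut, the graph really can be just a set of edges.  The
following is a minimal Python sketch of the operations above (it is an
illustration, not code from the Mercury compiler; all names are
invented):

```python
# Possible-use graph kept as a plain set of (X, Y) pairs, where the
# pair (X, Y) records the edge X < Y.  Variables are strings here.

class PossibleUseGraph:
    def __init__(self):
        self.edges = set()

    def add_var(self, x):
        # X < X implicitly (this makes the analysis simpler).
        self.edges.add((x, x))

    def construct(self, y, args):
        # X < Y for each X possibly used in the construction of Y.
        self.add_var(y)
        for x in args:
            self.add_var(x)
            self.edges.add((x, y))

    def deconstruct(self, x, args):
        # Y < X for each Y possibly obtained by deconstructing X.
        self.add_var(x)
        for y in args:
            self.add_var(y)
            self.edges.add((y, x))

    def alias(self, x, y):
        # X ~ Y, i.e. both X < Y and Y < X.
        self.add_var(x)
        self.add_var(y)
        self.edges |= {(x, y), (y, x)}

    @staticmethod
    def disjunction(graphs):
        # The graph after a disjunction is the union of the graphs
        # resulting from each of the disjuncts.
        g = PossibleUseGraph()
        for branch in graphs:
            g.edges |= branch.edges
        return g
```

Each scheduled goal would call one of these operations to extend the
graph associated with its success.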

X is necessarily unique iff it is not possibly shared.

All local variables start off being necessarily unique.  Head variables
start off necessarily unique only if their initial inst is unique.

X becomes possibly shared either
- as the result of a procedure call (this is tracked by the existing
analysis); or
- if we have some Y and Z s.t. X is possibly used by both Y and Z, Y and
Z are not possible aliases of one another, and either Y and Z are
possibly bound to different values or one of Y and Z is possibly
shared; or
- if we have some Y such that X is possibly used by Y and both X and Y
have appeared in the same procedure call in some scheduled goal.

The latter two conditions may be costly to decide, hence we only test
them when absolutely necessary.
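The deferred test might look like the following hypothetical sketch.
All of the inputs are assumptions invented for the illustration:
`edges` is the possible-use relation as (X, Y) pairs meaning X < Y,
`shared` is the set of variables already marked possibly shared by
procedure calls, `bound_differently` holds frozensets {Y, Z} of
variables possibly bound to different values, and `call_groups` lists
the argument sets of scheduled procedure calls:

```python
# On-demand test for possible sharing of X, per the three conditions
# above.  This is a sketch, not the Mercury compiler's implementation.

def possibly_shared(x, edges, shared, bound_differently, call_groups):
    if x in shared:
        return True                     # condition 1: procedure call
    users = {y for (a, y) in edges if a == x}   # all Y with X < Y
    # Condition 2: two users Y, Z that are not possible aliases of one
    # another, where Y and Z are possibly bound to different values or
    # one of them is possibly shared.
    for y in users:
        for z in users:
            if y == z:
                continue
            if (y, z) in edges and (z, y) in edges:
                continue                # Y ~ Z: possible aliases
            if (frozenset((y, z)) in bound_differently
                    or y in shared or z in shared):
                return True
    # Condition 3: X and some user Y of X appear in the same call.
    for y in users - {x}:
        if any(x in g and y in g for g in call_groups):
            return True
    return False
```

Because the check is quadratic in the number of users of X in the worst
case, it is only run at the point where a (unique >> whatever) argument
forces the question.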

A cell is reusable iff it is necessarily unique and dead (i.e. used
neither in the future nor on backtracking.)  Reuse optimization requires
deciding the liveness and necessary uniqueness of each necessarily bound
variable at each construction.  This optimization should probably only
be turned on at -O5 and above.
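Given the analysis results, the reuse test itself is cheap.  A one-line
sketch, with all input names invented for the illustration (`shared` is
the possibly-shared set from the analysis, `live_forward` and
`live_backward` the forward and backtracking liveness sets at the
construction):

```python
# Reusable iff necessarily unique (not possibly shared) and dead
# (live neither in the future nor on backtracking).
def reusable(x, shared, live_forward, live_backward):
    return (x not in shared
            and x not in live_forward
            and x not in live_backward)
```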

Since the possible use graph is used only on demand for costly analyses,
we could just use something like a list of edges as a first-cut
representation.  I don't think this approach would have any detectable
impact on compile times for code that does not use uniqueness.