[m-dev.] Data structures for constraint mode analysis
Ralph Becket
rafe at cs.mu.OZ.AU
Tue Jan 11 18:06:40 AEDT 2005
r.fothergill at ugrad.unimelb.edu.au, Tuesday, 11 January 2005:
> % The mode_constraints module contains a new approach to mode analysis set
> % out by dmo in chapter six of his thesis "Precise and Expressive Mode Systems
Just for your edification, s/dmo/DMO/ and s/thesis/dissertation/: the
thesis is the argument, the dissertation its expression (unless you're
American, in which case thesis = dissertation).
> % for Typed Logic Programming Languages" and the paper "Constraint based mode
> % analysis of Mercury" both available on the papers page of the mercury
> % website.
And s/mercury/Mercury/.
> % It associates with each program variable a set of boolean variables describing
> % where it has been bound and where it is free, and builds constraints
> % between all such variables based on the program.
That's not quite accurate: David's approach associates each "position"
in a goal with boolean variables corresponding to whether particular
program variables are constructed at that point or not.
> By solving these constraints
> % it can form a partial order on all goals in a conjunction, infer modes, and
> % check declarations. (The next step is to order the conjuncts, but that is
> % beyond the scope of my work.)
The first stage in DMO's analysis works at the level of identifying the
producers and consumers of program variables. From this we can infer a
partial order on the goals and then worry about things like subtyping
and uniqueness.
> % Mercury code for building constraints and solving them using ROBDDs already
> % exists; however, this has proved to be an order of magnitude slower than the
*Two* orders of magnitude in some cases...
> % existing mode analysis system. It has been hypothesised that a propagation-
> % based solver using less generic constraints, more specific to the kinds
> % of constraints mode analysis deals with, could do a better job.
> %
> % In the first case, I intend to work with existing code to build more
> % abstract constraints for mode analysis. The constraints will take the
> % form of the data structure mc_constraint(T) set out below.
What you're really looking at is extracting the high-level constraints
generated by DMO's code before they are processed (e.g. by the ROBDD).
> % Once this is done the constraints can be converted to various forms and
> % solved with existing solvers for testing purposes.
> %
> % The final goal will be to build a propagation based solver that deals with
> % these new constraints. The rough form this will take is described below
> % with some example data structures that would form the core of the approach.
[...]
> % This set of constraints is designed to completely represent all the
> % constraints for mode analysis in chapter 6 of the aforementioned thesis
> % and the paper on constraint mode analysis. They are complicated enough
> % that I won't go into them here, but for those who are familiar with the
> % work, you may want to check that these will cover all cases.
I think you mean DMO's constraints are complex, not your representation
(which aims at transparency).
> :- type mc_constraint(T)
> ---> equiv_bool(var(T), bool)
> % equiv_bool(X, yes) gives the constraint (X)
> % equiv_bool(X, no) gives the constraint not(X)
> ; equivalent(vars(T))
> % equivalent(Xs) gives (all Xi, Xj in Xs).(Xi<->Xj)
> ; implies(var(T), var(T))
> % implies(A, B) gives A->B
> ; equiv_disj(var(T), vars(T))
> % equiv_disj(X, [Y1,...,Yn]) gives X<->OR(Y1,...,Yn)
> ; at_most_one(vars(T))
> % at_most_one(Xs) gives
> % (all Xi, Xj in Xs).(i = j or not(Xi and Xj))
> ; disj(mc_constraints(T))
> % This has been included for the purposes of
> % representing the mode declaration constraints,
> % which are a disjunction of conjunctions of other
> % constraints. The intended form is:
> % disj([conj(...), ..., conj(...)])
> % Note disj([]) represents fail or false.
Don't mention `fail'. We're pure and we're not doing Prolog :-)
> ; conj(mc_constraints(T))
> % See disj.
> % Note that conj([]) is the empty constraint, or true.
> .
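As a sanity check on the constructors above, their intended semantics can be
spelled out as a tiny reference evaluator over complete truth assignments.
This is a hedged Python sketch, not the Mercury code; the tuple tags and the
`satisfies` name are invented for illustration:

```python
def satisfies(c, env):
    """Check constraint c against a complete assignment env: var -> bool.

    Constraints are modelled as tagged tuples mirroring mc_constraint(T):
    ("equiv_bool", x, val), ("equivalent", xs), ("implies", a, b),
    ("equiv_disj", x, ys), ("at_most_one", xs), ("disj", cs), ("conj", cs).
    """
    tag = c[0]
    if tag == "equiv_bool":
        return env[c[1]] == c[2]
    if tag == "equivalent":
        # All variables in the class take the same value.
        return len({env[x] for x in c[1]}) <= 1
    if tag == "implies":
        return (not env[c[1]]) or env[c[2]]
    if tag == "equiv_disj":
        return env[c[1]] == any(env[y] for y in c[2])
    if tag == "at_most_one":
        return sum(env[x] for x in c[1]) <= 1
    if tag == "disj":
        # disj([]) is false, matching the comment above.
        return any(satisfies(d, env) for d in c[1])
    if tag == "conj":
        # conj([]) is the empty constraint, i.e. true.
        return all(satisfies(d, env) for d in c[1])
    raise ValueError("unknown constraint: %r" % (tag,))
```

Enumerating assignments with such an evaluator gives a cheap oracle against
which a real propagation solver can be tested on small instances.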
>
>
>
> %------------------------------------------------------------------------------
> %
> % The following are intended for the solver.
> %
>
>
> % Here is the basic functioning of the solver. Constraints should be built
> % and annotated with the variables that participate in them. Each constraint
> % is given a unique id to identify it. Variables are similarly annotated with
> % the constraints that refer to them.
What are you going to do with this information?
> % Whenever a variable is bound its effect is propagated in the following
> % manner:
> % ConstraintInfo^var_map is consulted to find the constraints this variable
> % occurred in. The binding of this variable may imply many other bindings
> % based on these constraints, and these should be made and their propagation
> % scheduled. It may also result in the simplification of a constraint and/or
> % the construction of new annotations, the details of which are to be
> % decided. If the constraints are found to be inconsistent with the binding,
> % either backtracking is required or a mode error has occurred.
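The propagation loop described above can be sketched as a worklist algorithm.
This is a hedged Python illustration (the Mercury solver would differ); only
binary consequences are shown, and inconsistency raises rather than backtracks:

```python
from collections import deque

def propagate(implications, initial):
    """Propagate seed bindings to a fixpoint.

    implications: dict mapping a binding (var, value) to the list of
    bindings it forces, e.g. {("x", True): [("y", False)]}.
    initial: seed bindings, var -> bool.
    Returns the extended bindings; raises on an inconsistency, which
    would correspond to backtracking or reporting a mode error.
    """
    bindings = dict(initial)
    queue = deque(initial.items())
    while queue:
        var, val = queue.popleft()
        for (w, wval) in implications.get((var, val), []):
            if w in bindings:
                if bindings[w] != wval:
                    raise ValueError("inconsistent binding for %s" % w)
            else:
                bindings[w] = wval
                queue.append((w, wval))  # schedule further propagation
    return bindings
```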
I'd suggest pre-processing the constraints as far as possible to avoid
redundant computation. I've already written a small solver for DMO's
constraints; here are some of my thoughts:
(1) A huge number of constraints are equivalences between non-negated
variables. Identify the equivalence classes and replace each variable in
other constraints with its representative equivalence class member.
(2) Separate out equiv_bool constraints. A useful special case is that the
majority of predicates have only one mode, which can be seen as a
conjunction of equiv_bool constraints.
(3) The majority of the remaining constraints can be represented as
binary implications. Propagate through these constraints first. A
special representation for this class of constraint may be appropriate.
(4) One of the two remaining tricky cases is constraints of the form
X <=> Y1 \/ ... \/ Yn, which is always accompanied by
at_most_one([Y1, ..., Yn]). It's worth adding the constraints
not X => not Yi and Yi => X to your propagation graph if you're
specialising the representation of binary implications.
(5) For the mode constraints, you just need to filter out inconsistent
modes with each variable assignment, ensuring that you never exhaust the
set of options and, presumably, propagating the remaining constraints
when you get down to a single remaining option.
(6) I expect the propagation graph would allow you to make good
decisions when choosing which order to set variables. One would prefer
to assign variables that cause more propagation earlier rather than
later.
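Point (1) is essentially a union-find problem. A hedged Python sketch of
replacing each variable by a class representative (names invented here):

```python
def equivalence_representatives(equiv_constraints):
    """Compute a representative for each equivalence class.

    equiv_constraints: iterable of lists of variables known to be
    pairwise equivalent (non-negated), as in point (1).
    Returns a map from each variable to its class representative, so
    other constraints can be rewritten over representatives only.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for group in equiv_constraints:
        for x in group:
            find(x)                        # register every variable
        for a, b in zip(group, group[1:]):
            parent[find(a)] = find(b)      # union adjacent members

    return {x: find(x) for x in parent}
```

Substituting representatives before building the propagation graph shrinks
both the variable count and the number of binary implications in point (3).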
> % Furthermore, if the program is segmented into parts (SCCs) as it
> % currently is, calls to already processed SCCs can be used to bind a large
> % number of variables taken from one of the solution sets for that SCC
> % (the choice of solution set being another choicepoint).
You only need to deal with non-singleton SCCs if you are inferring the
modes for some predicates.
> :- type mc_binary_annotation == mc_binary_annotation(mc_type).
>
> :- type mc_binary_annotation(T) --->
> binding_implication(var(T), bool, var(T), bool).
> % binding_implication(X, Xval, Y, Yval) indicates that
> % if X has been bound to Xval, this can be propagated
> % by binding Y to Yval
Yukky names! Suggest s/mc_binary_annotation/mc_implication/ and
s/binding_implication/implies/.
It might be better to aggregate consequences of X being true (and X
being false).
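Aggregating consequences as suggested amounts to grouping the
binding_implication/4 terms by their trigger. A hedged Python sketch
(the function name is invented):

```python
def aggregate_implications(binding_implications):
    """Group binding_implication(X, Xval, Y, Yval) terms by trigger.

    binding_implications: iterable of (X, Xval, Y, Yval) tuples.
    Returns a dict mapping each (X, Xval) binding to the list of
    (Y, Yval) consequences, so propagation needs a single lookup
    per binding instead of scanning every implication.
    """
    table = {}
    for (x, xval, y, yval) in binding_implications:
        table.setdefault((x, xval), []).append((y, yval))
    return table
```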
> :- type constraint_id == int.
>
> :- type constraint == constraint(mc_type).
> % Note the potential to slip the binary constraint annotations in here as
> % a new field. I haven't done it yet as I'm not yet sure whether they'll
> % be used or not.
> :- type constraint(T) --->
> constraint(
> mc_constraint :: mc_constraint(T),
> % The original constraint
> cur_mc_constraint :: mc_constraint(T),
> % As bindings are made, propagation can be
> % performed not just through variables but through
> % constraints, simplifying them.
> participating_vars :: vars(T),
> cur_participating_vars :: vars(T),
> id :: constraint_id
> ).
Bear in mind that, for efficiency, you'll later want to remove fields
that change frequently (that is, frequently updating fields in larger
structures can be costly).
> :- type var_state --->
> var_state(
> is_bound :: maybe(bool),
> is_constrained :: list(constraint_id)
> ).
>
>
> :- import_module counter.
>
> :- type constraint_info --->
> constraint_info(
> constraint_map :: map(constraint_id, constraint(mc_type)),
> % Stores the constraints. Maps them to their ids
> var_map :: map(var(mc_type), var_state),
> % Maps each variable to the constraints it
> % participates in, and whether it is bound
> counter :: counter
> % To keep track of constraint_id's as the constraints
> % are added.
> ).
Ditto.
I'd advise going for the simplest possible solution first and then
adding bookkeeping if and where it turns out to be necessary.
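For experimenting with the simplest version first, the constraint_info
bookkeeping above could be prototyped along these lines. This is a hedged
Python mock-up, not the Mercury record; field names merely echo the proposal:

```python
import itertools

class ConstraintInfo:
    """Illustrative sketch of the constraint_info bookkeeping above."""

    def __init__(self):
        self.constraint_map = {}          # constraint_id -> constraint
        self.var_map = {}                 # var -> {"bound": None, "ids": [...]}
        self.counter = itertools.count()  # source of fresh constraint ids

    def add_constraint(self, constraint, participating_vars):
        """Store a constraint and annotate each variable with its id."""
        cid = next(self.counter)
        self.constraint_map[cid] = constraint
        for v in participating_vars:
            state = self.var_map.setdefault(v, {"bound": None, "ids": []})
            state["ids"].append(cid)
        return cid
```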
Good stuff!
-- Ralph
--------------------------------------------------------------------------
mercury-developers mailing list
Post messages to: mercury-developers at cs.mu.oz.au
Administrative Queries: owner-mercury-developers at cs.mu.oz.au
Subscriptions: mercury-developers-request at cs.mu.oz.au
--------------------------------------------------------------------------