[m-dev.] option-finetuning (generating trans-opts)

Zoltan Somogyi zs at cs.mu.OZ.AU
Wed Sep 13 14:29:31 AEDT 2000


On 13-Sep-2000, Fergus Henderson <fjh at cs.mu.OZ.AU> wrote:
> 	(a) use the existing trans-opt system
> 	    (drawback: requires more passes than you want)
> 
> 	(b) change the existing trans-opt system to do what you want
> 	    (drawback: the reduced precision might be bad for
> 	    termination analysis)
> 
> 	(c) add a new trans-opt system, in addition to the existing one,
> 	    with a different file extension
> 	    (drawback: increases complexity of the whole system; and
> 	    could make things very slow indeed if you want to do both
> 	    structure reuse inference and termination analysis at the
> 	    same time, since you might end up using both trans-opt
> 	    systems)
> 
> 	(d) add a new trans-opt system, but use the same file extension:
> 	    make the way .trans_opt files are used depend on a compilation
> 	    option.
> 	    (drawback: increases complexity of the whole system,
> 	    though in a different way than (c);
> 	    and might be harder to handle in Mmake)
> 
> I'm not sure which of these would be best.

I don't think any of them are satisfactory in the long term.

The rumor that Peter Schachte was referring to in another email is that
I have talked to a few people about replacing our current approach to
intermodule optimization with the approach described in the paper

Bueno, de la Banda, Hermenegildo, Marriott, Puebla and Stuckey:
A model for inter-module analysis and optimizing compilation

in LOPSTR 2000. Peter Stuckey calls this the latex model of compilation
because, just as with LaTeX, it allows the same command to be rerun several
times as part of a single update, with each run refining the previous result.
(I have a copy, if people want one.)

This approach allows the propagation of program analysis information along
circular dependencies, as long as the domain of analysis is finite, and the
analysis is monotonic: i.e. each iteration gets you more precision, and there
is a limit on how precise you can get. It also allows you to stop iterating
the propagation of analysis information at any time: if you do, you may get
less than full optimization, but will get correct results. (You can't stop
when you propagate the effects of changes in source code, but such propagation
is limited and cannot be circular.)
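The fixpoint iteration described above can be sketched as follows. This is
only an illustration, not Nancy's analysis or the paper's algorithm: the call
graph, the procedure names, and the "provably terminating" domain are all
invented. The point it shows is that starting from the safe assumption and
iterating monotonically means you can stop after any number of iterations and
still have correct, merely less precise, results.

```python
# Invented call graph: each procedure maps to the procedures it calls.
# "ping" and "pong" form a circular dependency.
calls = {
    "main": ["sort"],
    "sort": ["partition"],
    "partition": [],        # leaf procedure: trivially terminates
    "ping": ["pong"],       # circular dependency: never proven, but
    "pong": ["ping"],       # the safe answer "unknown" is still correct
}

def analyse(max_iterations):
    # Safe starting point: no procedure is known to terminate.
    proven = set()
    for _ in range(max_iterations):
        # A procedure is proven once all its callees are proven.
        new = {p for p, callees in calls.items()
               if all(c in proven for c in callees)}
        if new == proven:   # fixpoint reached: no more precision to gain
            break
        proven = new        # monotonic: the proven set only grows
    return proven
```

Stopping after one iteration proves only the leaf procedure; running to the
fixpoint also proves its callers, while the circular pair safely stays
unproven either way.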

It seems Nancy's analysis does not allow you to stop at an arbitrary point,
but I think this can be fixed reasonably simply, either with the current
.trans_opt file system or with the latex model. The idea is that instead of
modifying the interface of a procedure that can exploit reuse, you make another
procedure with the same semantics that does exploit reuse, and record somewhere
the circumstances under which you can call the reusing variant instead of the
original one. The latex model is strongly oriented towards multivariant
specialization, and the paper has worked out several of the crucial details
in a generic manner.
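The variant-selection idea above can be sketched as follows; the registry,
the procedure names, and the call-site facts are all invented for
illustration, and are not the paper's actual representation. The key property
is that the original procedure is never modified, so falling back to it is
always correct when a variant's recorded conditions do not hold.

```python
# Hypothetical registry: for each procedure, the semantically equivalent
# variants and the condition under which each may be substituted.
variants = {}

def register_variant(proc, variant, condition):
    """Record that `variant` has the same semantics as `proc` and may be
    called instead of it whenever `condition` holds at the call site."""
    variants.setdefault(proc, []).append((variant, condition))

def select(proc, call_site_facts):
    """Pick a reusing variant if its condition holds at this call site;
    otherwise fall back to the original, which is always correct."""
    for variant, condition in variants.get(proc, []):
        if condition(call_site_facts):
            return variant
    return proc

# Invented example: a reusing variant of append that may be called only
# when the first argument's cells are known to be dead at the call site.
register_variant("append/3", "append_reuse/3",
                 lambda facts: "arg1_dead" in facts)
```

A caller with the right liveness facts gets the reusing variant; any other
caller, or a caller of a procedure with no variants, gets the original.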

For compile time garbage collection and structure reuse, the "reusing" variant
of e.g. procedure p1 need not actually perform any reuse if the reuse would
have taken place inside procedure p2, called from p1, but a change in the
definition of p2 makes this impossible. The latex model allows this.

For destructive update, the caller of p1 will want to depend on the fact that
p1 returns the same pointer for an output argument as it was given for an input
argument. Since in this case you cannot substitute the generic version for the
specialized version, this case requires an extension of the latex model, but
only a small and relatively simple one.

Implementing the latex model would require writing a new mmake, preferably
in Mercury. It would need to duplicate many of the features of GNU make.
It would need to be integrated with the Mercury compiler, perhaps even
to the extent that it could invoke the compiler to read a source file,
and then send it a list of commands down a pipe: (1) to find out what
include_module items the file has, (2) to create the .int3 file, (3) to create
the .int2/.int files, (4..N) to perform analyses based on ever more accurate
information about other modules, and (N+1) to generate code, all without
reparsing the source file (though some tasks will require the compiler
to parse other files). It would be nice if the interface between this mmake2
and the Mercury compiler were generic, so that it could be applied to other
language implementations as well. For full generality, the system needs
to handle libraries, where the set of variants you have for each procedure
is fixed.
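The pipe-driven session described above could look roughly like this. The
compiler side is faked as an in-process object, and the command names and
their results are invented for illustration; the point is only the shape of
the protocol: parse once, then answer a sequence of commands against the same
parse tree.

```python
# Sketch of the mmake2 <-> compiler protocol: one long-lived compiler
# session per source file, driven by commands, with no reparsing.
class CompilerSession:
    def __init__(self, source_file):
        self.source_file = source_file
        self.parses = 0
        self._parse()            # the source file is parsed exactly once

    def _parse(self):
        # A real session would build and keep the parse tree here.
        self.parses += 1

    def send(self, command):
        # Every command reuses the stored parse tree; none reparses.
        if command == "list_include_modules":
            return ["child1", "child2"]          # placeholder result
        if command.startswith("make_"):          # make_int3, make_int, ...
            return f"wrote {self.source_file}.{command[5:]}"
        if command == "generate_code":
            return f"wrote {self.source_file}.c"
        raise ValueError(f"unknown command: {command}")

# mmake2 drives the session through the whole sequence of tasks.
session = CompilerSession("foo.m")
for cmd in ["list_include_modules", "make_int3", "make_int",
            "generate_code"]:
    session.send(cmd)
```

A real implementation would run the compiler as a child process and exchange
these commands over its stdin/stdout, but the bookkeeping is the same.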

This is therefore a reasonably big job, both in design and implementation,
not to be undertaken lightly. Who's interested?

Zoltan.
--------------------------------------------------------------------------
mercury-developers mailing list
Post messages to:       mercury-developers at cs.mu.oz.au
Administrative Queries: owner-mercury-developers at cs.mu.oz.au
Subscriptions:          mercury-developers-request at cs.mu.oz.au
--------------------------------------------------------------------------