[mercury-users] term_expansion (was Re: Microsoft Common Lisp?)

Peter Schachte pets at cs.mu.oz.au
Tue Feb 18 17:18:44 AEDT 1997

On Mon, 17 Feb 1997, Fergus Henderson wrote:
> Well, it's not part of the standard library, but we have (since release 0.6)
> provided source code for doing this in the `samples' directory that comes
> with the Mercury distribution.

This looks pretty good.

> Peter Schachte wrote:
> >     2.  term_expansion doesn't require users to monkey with Makefiles
> > 	to get their programs to work correctly.
> I consider the requirement that files that use language extensions
> have a different extension to be a feature, not a bug.  The different
> extension gives maintenance programmers a clue that something funny
> is going on here.

Even if we agree that a source file that is going to be transformed
should be marked in some way, putting this information in the file
name seems like a pretty bad idea.  This information belongs *in* the
file.  Right now you can read the source file to find out about all
its dependencies, except that to find its compile-time dependencies,
you have to look at the file name and then read the makefile.  If
someone hands you a program listing, they probably won't include the
makefile, and the listing might not even contain the file name.

But not all transformations will be language extensions; some will
simply be optimizations.  Perhaps partial execution will be able to
implement many of these sorts of optimizations, but there will always
be optimizations based on difficult-to-prove properties of predicates.
For example, it would be nice if some programs that use append/3 were
translated into an accumulator-passing style with difference pairs (I
know someone was/is working on this; I'm really talking about more
specialized sorts of optimizations, but this is the best example I can
think of right now).  But doing this requires the knowledge that
append(in,in,out) is associative.  This is probably pretty hard to
prove.  Anyway, I would argue that this optimization should be part of
the list library; that way users of the list library get the benefit
of this improvement without doing anything special.
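To make the append/3 example concrete, here is a sketch (hypothetical
Mercury, not actual library code -- the predicate names are mine) of the
kind of rewriting such a library-supplied transformation would do: a
naive append-based reverse, and the accumulator-passing version the
transformation could produce.  The rewrite is only valid because
append(in,in,out) is associative, which is exactly the hard-to-prove
property mentioned above.

```mercury
:- module rev_example.
:- interface.
:- import_module list.

:- pred naive_reverse(list(T)::in, list(T)::out) is det.
:- pred acc_reverse(list(T)::in, list(T)::out) is det.

:- implementation.

% O(N^2): each step appends a singleton to the end of the result.
naive_reverse([], []).
naive_reverse([X | Xs], Rev) :-
    naive_reverse(Xs, RevXs),
    list.append(RevXs, [X], Rev).

% O(N): the transformed, accumulator-passing version.
acc_reverse(Xs, Rev) :-
    acc_reverse_2(Xs, [], Rev).

:- pred acc_reverse_2(list(T)::in, list(T)::in, list(T)::out) is det.
acc_reverse_2([], Acc, Acc).
acc_reverse_2([X | Xs], Acc, Rev) :-
    acc_reverse_2(Xs, [X | Acc], Rev).
```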

True, this sort of capability would allow module implementors to do
all sorts of horrible things.  But remember that great quote from
Lawrence Flon:  "There never has been, nor will there ever be, any
programming language in which it is the least bit difficult to write
bad code."  I think the aim in designing a programming language should
be more to make it easy to write good programs than to make it hard to
write bad ones -- the latter is hopeless.

> >     3.  term_expansion handles, though not as smoothly as one might
> > 	like, the use of multiple language extensions in the same
> > 	file, while simple make rules won't.

> Hang on, here's another idea.
> 	PREPROCESSOR = foo | bar
> 	%.m: %.pp
> 		cat $< | $(PREPROCESSOR) > $@

This approach winds up running all the preprocessors on all files that
need any preprocessing.  This may be horribly inefficient, but more
importantly it may be wrong.  For example, there may be a
transformation that generates a chart parser from a grammar, and
another that generates an LALR parser from similar input.  You need to
be able to specify different transformations for different source
files to do this.  This approach also loses all information about
which language extensions each file uses.  All you know is that a
given file might use some extensions.  You have to read the makefile
just to see what the possible extensions are, and you have to read
every line of the source file to really know which extensions it uses.
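One could avoid the inefficiency within make itself -- here is a sketch
using GNU make target-specific variables, with hypothetical file and
preprocessor names -- by naming the transformation per source file
rather than globally:

```make
# Default: no preprocessing.
PREPROCESSOR = cat

# Per-file overrides (GNU make target-specific variables).
chart_parser.m:  PREPROCESSOR = gen_chart
lalr_parser.m:   PREPROCESSOR = gen_lalr

%.m: %.pp
	$(PREPROCESSOR) < $< > $@
```

But note that this still leaves the information in the makefile rather
than in the source file itself, which is exactly the objection above.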

> Using preprocessing to invent special-purpose languages or language
> extensions ... can ... make the source code more difficult to
> understand, and it's just one more thing that a maintenance programmer
> must learn before they can start being productive.

This is true.  Of course, the same thing can be said of library
packages.  Whenever you use a library package, you commit the
maintenance programmer to understanding that package.  One would hope
that the library package is designed well enough that it's not too
much of a burden.  And presumably using the package makes the source
code more compact, which (usually) makes maintenance easier.

The same argument applies to language extensions.  Sure, they may be
designed badly, and may be confusing, but I think it's fair to assume
that extensions that are actually used are used because they're well
enough designed that they make programming easier.  Many of them will
make maintenance easier too, for the same reasons (it's usually easier
to write and maintain smaller, higher-level programs).  Would you
rather maintain a compiler written in C, or one written in C+yacc+lex,
or one written in Eli?

In the long run, when Mercury is a popular and successful programming
language, I envision a dozen nifty transformations in the standard
Mercury library, including lexer and parser generators, extensions
for named term arguments with convenient accessor functions,
non-positional notation for predicate and function calls, hidden
state-passing extensions, expert system packages, user interface
specification languages, etc.  And, I would hope, there will be a lot
more available from commercial vendors and as free packages.  I can
imagine a time when much, if not most, Mercury code uses one or more
such packages.  In this world, I believe it will be important to make
things as easy as possible for the users of such packages.  And that
means not forcing them to fight with makefiles and gmake's flaws and
bugs.

-Peter Schachte      URL:  http://www.cs.mu.oz.au/~pets/
pets at cs.mu.OZ.AU     PGP:  finger pets at for key
    [A computer is] like an Old Testament god, with a lot of rules
    and no mercy.  -- Joseph Campbell
