[m-rev.] www diff: Delete the developer documentation from the www repository

Paul Bone paul at bone.id.au
Fri Feb 14 10:58:31 AEDT 2014


Delete the developer documentation from the www repository

This documentation is now maintained in the main source repository and the
webserver has been configured to find it there.

development/include/developer.inc:
    Add a comment directing maintainers to the new location of the
    documentation, together with a brief note on how the webserver finds
    the documentation there.

development/developers/allocation.html:
development/developers/bootstrapping.html:
development/developers/bytecode.html:
development/developers/c_coding_standard.html:
development/developers/coding_standards.html:
development/developers/compiler_design.html:
development/developers/developer_intro.html:
development/developers/gc_and_c_code.html:
development/developers/glossary.html:
development/developers/release_checklist.html:
development/developers/reviews.html:
development/developers/todo.html:
development/developers/work_in_progress.html:
    Delete this copy of the documentation.
---
 development/developers/allocation.html        |  543 -------
 development/developers/bootstrapping.html     |   50 -
 development/developers/bytecode.html          |  529 -------
 development/developers/c_coding_standard.html |  704 ---------
 development/developers/coding_standards.html  |  536 -------
 development/developers/compiler_design.html   | 1913 -------------------------
 development/developers/developer_intro.html   |  222 ---
 development/developers/gc_and_c_code.html     |   75 -
 development/developers/glossary.html          |  138 --
 development/developers/release_checklist.html |  189 ---
 development/developers/reviews.html           |  284 ----
 development/developers/todo.html              |  385 -----
 development/developers/work_in_progress.html  |  106 --
 development/include/developer.inc             |   11 +
 14 files changed, 11 insertions(+), 5674 deletions(-)
 delete mode 100644 development/developers/allocation.html
 delete mode 100644 development/developers/bootstrapping.html
 delete mode 100644 development/developers/bytecode.html
 delete mode 100644 development/developers/c_coding_standard.html
 delete mode 100644 development/developers/coding_standards.html
 delete mode 100644 development/developers/compiler_design.html
 delete mode 100644 development/developers/developer_intro.html
 delete mode 100644 development/developers/gc_and_c_code.html
 delete mode 100644 development/developers/glossary.html
 delete mode 100644 development/developers/release_checklist.html
 delete mode 100644 development/developers/reviews.html
 delete mode 100644 development/developers/todo.html
 delete mode 100644 development/developers/work_in_progress.html

diff --git a/development/developers/allocation.html b/development/developers/allocation.html
deleted file mode 100644
index b7dd0d0..0000000
--- a/development/developers/allocation.html
+++ /dev/null
@@ -1,543 +0,0 @@
-<html>
-<head>
-
-<title>
-	The Storage Allocation Scheme
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-This document describes
-the storage allocation system used by the LLDS code generator.
-
-<hr>
-
-<h2> FORWARD LIVENESS </h2>
-
-<p>
-
-Each goal has four sets of variables associated with it to give information
-about changes in liveness on forward execution. (Backward execution is a
-different matter; see a later part of this document.) These four sets are
-
-<ul>
-<li>	the pre-birth set
-<li>	the pre-death set
-<li>	the post-birth set
-<li>	the post-death set
-</ul>
-
-<p>
-
-The goal that contains the first value-giving occurrence of a variable
-on a particular computation path will have that variable in its pre-birth set;
-the goal that contains the last value-using occurrence of a variable on
-a particular computation path will have that variable in its post-death set.
-
-<p>
-
-The different arms of a disjunction or a switch are different computation
-paths. The condition and then parts of an if-then-else on the one hand
-and the else part of that if-then-else on the other hand are also different
-computation paths.
-
-<p>
-
-An occurrence is value-giving if it requires the code generator to associate
-some value with the variable. At the moment, the only value-giving occurrences
-are those that bind the variable. In the future, occurrences that don't bind
-the variable but give the address where it should later be put may also be
-considered value-giving occurrences.
-
-<p>
-
-An occurrence is value-using if it requires access to some value the code
-generator associates with the variable. At the moment we consider all
-occurrences to be value-using; this is a conservative approximation.
-
-<p>
-
-Mode correctness requires that all branches of a branched control structure
-define the same set of nonlocal variables; the exceptions are branches that
-cannot succeed, as indicated by the instmap at the end of the branch being
-unreachable. Such branches are considered by mode analysis to "produce"
-any variable they are required to produce by parallel branches.
-To make it easier to write code that tracks the liveness of variables,
-we implement this fiction by filling the post-birth sets of goals representing
-such non-succeed branches with the set of variables that must "magically"
-become live at the unreachable point at the end of the branch in order to
-match the set of live variables at the ends of the other branches.
-(Variables that have become live in the ordinary way before the unreachable
-point will not be included.) The post-birth sets of all other goals will be
-empty.
-
-<p>
-
-This guarantees that the set of variables born in each branch of a branched
-control structure will be the same, modulo variables local to each branch.
-
-<p>
-
-We can optimize the treatment of variables that are live inside a branched
-control structure but not after, because it is possible for the variable
-to be used in one branch without also being used in some other branches.
-Each variable that is live before the branched structure but not after
-must die in the branched structure. Branches in which the variable is used
-will include the variable in the post-death set of one of their subgoals.
-As far as branches in which the variable is not used are concerned, the
-variable becomes dead to forward execution as soon as control enters the
-branch.  In such circumstances, we therefore include the variable in the
-pre-death set of the goal representing the branch. (See below for the method
-we use for making sure that the values of such "dead" variables are still
-available to later branches into which we may backtrack and which may need
-them.)
-
-<p>
-
-This guarantees that the set of variables that die in each branch of a branched
-control structure will be the same, modulo variables local to each branch.
-
-<p>
-
-It is an invariant that in each goal_info, a variable will be included
-in zero, one or two of these four sets; and that if it is included in
-two sets, then these must be the pre-birth and post-death sets. (This
-latter will occur for singleton variables.)
-
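-<p>
-
-As a rough illustration only (the compiler represents these sets in its
-own Mercury data structures; the C names and the bitset representation
-below are invented for exposition), the four sets and the invariant
-above can be pictured like this:
-
-<pre>
-typedef unsigned int Varset;	/* one bit per variable */
-
-typedef struct Goal_Liveness_Struct {
-	Varset	pre_birth;	/* born just before the goal */
-	Varset	pre_death;	/* dead (forward) just before the goal */
-	Varset	post_birth;	/* "magically" born at an unreachable end */
-	Varset	post_death;	/* last value-use is inside the goal */
-} Goal_Liveness;
-
-/*
-** Returns 1 if the variable whose bit is var_bit satisfies the
-** invariant: it occurs in at most two of the four sets, and if it
-** occurs in two, they are the pre-birth and post-death sets.
-*/
-int
-liveness_invariant_holds(const Goal_Liveness *gl, Varset var_bit)
-{
-	int	n = 0;
-
-	n += (gl->pre_birth & var_bit) != 0;
-	n += (gl->pre_death & var_bit) != 0;
-	n += (gl->post_birth & var_bit) != 0;
-	n += (gl->post_death & var_bit) != 0;
-	if (n > 2) {
-		return 0;
-	}
-	return n < 2 ||
-		((gl->pre_birth & var_bit) && (gl->post_death & var_bit));
-}
-</pre>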
-<p>
-
-<hr>
-<hr>
-
-<h2> STORE MAPS </h2>
-
-<p>
-
-There are four kinds of situations in which the code generator must
-associate specific locations with every live variable, either to put
-those variables in those locations or to update its data structures
-to say that those variables are "magically" in those locations.
-
-<p>
-
-<ol>
-<li> At the ends of branched control structures, i.e. if-then-elses, switches
-   and disjunctions. All branches of a branched structure must agree exactly
-   on these locations.
-
-<li> At the start and end of the procedure.
-
-<li> At points at which execution may resume after a failure, i.e. at the
-   start of the else parts of if-then-elses, at the start of the second and
-   later disjuncts in disjunctions, and after negated goals.
-
-<li> Just before and just after calls and higher-order calls (but not
-   pragma_c_codes).
-</ol>
-
-<hr>
-
-<h3> Ends of branched control structures </h3>
-
-<p>
-
-We handle these by including a store_map field in the goal_infos of
-if_then_else, switch and disj goals.
-This field, like most other goal_info fields
-we will talk about in the rest of this document,
-is a subfield of the code_gen_info field of the goal_info.
-Through most of the compilation process,
-the code_gen_info field contains no information;
-its individual subfields are filled in
-during the various pre-passes of the LLDS code generator.
-The store map subfield
-is meaningful only from the follow_vars pass onwards.
-
-<p>
-
-The follow_vars pass fills this field of goals representing branched control
-structures with advisory information, saying where things that will be used
-in code following the branched structure should be.
-This advisory information may include duplicates (two variables
-mapped to the same location), it may miss some variables that are live at
-the end of the branched structure, and it may include variables that are
-not live at that point.
-
-<p>
-
-The store_map pass uses the advisory information left by the follow_vars pass
-to fill in these fields with definitive information. The definitive store maps
-guarantee that no two variables are allocated the same location, and they
-cover exactly the set of variables forward live at the end of the branched
-structure, plus the variables that are in the resume set of any enclosing
-resume point (see below).
-
-<p>
-
-The passes of the backend following store_map must not do anything to
-invalidate this invariant, which means that they must not rearrange the code
-or touch the field. The code generator will use these fields to know what
-variables to put where when flushing the expression cache at the end of
-each branch in a branched structure.
-
-<p>
-
-<hr>
-
-<h3> Starts and ends of procedures </h3>
-
-<p>
-
-We handle these using the mechanisms we use for the ends of branched
-structures, except the map of where things are at the start and where
-they should be at the end are computed by the code generator from the
-arg_info list.
-
-<p>
-
-<hr>
-
-
-<h3> Resumption points </h3>
-
-<p>
-
-We handle these through the resume_point subfield of the code_gen_info field
-in goal infos. During the liveness pass, we fill in this field for every goal
-that establishes a point at which execution may resume after backtracking.
-This means
-the conditions of if-then-elses (the resumption point is the start of
-the else part), every disjunct in a disjunction except the last (the
-resumption point is the start of the next disjunct), and goals inside
-negations (the resumption point is the start of the code following the
-negated goal). The value of this field will give the set of variables
-whose values may be needed when execution resumes at that point.
-Note that for the purposes of handling resumption points, it does not
-matter whether any part of an if-then-else, disjunction or negation
-can succeed more than once.
-
-<p>
-
-The resume_point field does not assign a location to these variables.
-The reason is that as an optimization, each conceptual resumption point
-is associated with either one or two labels, and if there are two labels,
-these will differ in where they expect these variables to be. The
-failure continuation stack entry created by the code generator
-that describes the resumption point will associate a resume map with
-each label, with each resume map assigning a location to each variable
-included in the resume vars set.
-
-<p>
-
-The usual case has two labels. The resume map of the first label maps each
-variable to its stack slot, while the resume map of the second label maps
-each variable to the location it was occupying on entry to the goal.
-The code emitted at the resumption point will have, in order, the first
-label, code that moves each variable from its location according to the
-first resume map to its location according to the second resume map
-(this will be a null operation if the two maps agree on the location
-of a variable), and then the second label. The idea is that any failure
-that occurs while all these
-variables are guaranteed to still be in their original locations can be
-implemented as a jump directly to the second label, while failures at
-other points (including those from code to the right of the disjunct itself,
-as well as failures from semidet or nondet calls inside the disjunct)
-will jump (directly or indirectly via a redo() or fail()) to the first
-label. The section on backward liveness below discusses how we make sure
-that at these points all the variables in the resume_point set are actually
-in their stack slots.
-
-<p>
-
-We can omit the first label and the code following it up to but not including
-the second label if we can guarantee that the first label will never be
-jumped to, directly or indirectly. We can give this guarantee for negated
-goals, conditions in if-then-elses and disjuncts in disjunctions that cannot
-succeed more than once if the goal concerned cannot flush any variable to
-the stack (which means it contains only inline builtins). We cannot give
-this guarantee for disjuncts in disjunctions that can succeed more than once
-even if the goal concerned contains only inline builtins, since in that case
-we may backtrack to the next disjunct after leaving the current disjunct.
-
-<p>
-
-We can omit the second label if we can guarantee that it will never be
-jumped to, directly or indirectly. We can give this guarantee if the goal
-concerned has no failure points before a construct (such as a call)
-that requires all the resumption point variables to be stored on the stack.
-
-<p>
-
-The resume_locs part of the resume_point field will say which labels
-will be needed.
-
-<p>
-
-It is an invariant that in a disjunction, the resume_point field of one
-disjunct must contain all the variables included in the resume_point fields
-of later disjuncts.
-
-<p>
-
-When one control structure that establishes a resumption point occurs inside
-another one, all the variables included in the relevant resume_point of the
-outer construct must appear in *all* the resume_point fields associated
-with the inner construct. This is necessary to make sure that in establishing
-the inner resumption point, we do not destroy the values of the variables
-needed to restart forward execution at the resumption point established
-by the outer construct. (See the section on resumption liveness below.)
-
-<p>
-
-When one control structure which establishes a resumption point occurs after
-but not inside another one, there is no such requirement; see the section
-on backward liveness below.
-
-<p>
-
-
-<hr>
-
-<p>
-
-<h3> Calls and higher order calls </h3>
-
-<p>
-
-We handle these by flushing all variables that are live after the call
-except those produced by the call. This is equivalent to the set of
-variables that are live immediately after the call, minus the pre-birth
-and post-birth sets of the call, which in turn is equivalent to the set
-of variables live before the call minus the pre-death and post-death
-sets of the call.
-
-<p>
-
-The stack allocation code and the code generator figure out the set of
-variables that need to be flushed at each call independently, but based
-on the same algorithm. Not attaching the set of variables to be saved
-to each call reduces the space requirement of the compiler.
-
-<p>
-
-The same applies to higher order calls.
-
-<p>
-
-
-<hr>
-<hr>
-
-<p>
-
-<h2> BACKWARD LIVENESS </h2>
-
-<p>
-
-There are three kinds of goals that can introduce nondeterminism: nondet
-disjunctions, nondet calls and nondet higher order calls. All code that
-executes after one of these constructs must take care not to destroy the
-variables that are needed to resume in those constructs. (We are *not*
-talking here about preserving variables needed for later disjuncts;
-that is discussed in the next section.)
-
-<p>
-
-The variables needed to resume after nondet calls and higher order calls
-are the variables saved across the call in the normal fashion. The variables
-needed to resume after nondet disjunctions are the variables included in
-any of the resume_point sets associated with the disjuncts of the disjunction.
-
-<p>
-
-The achievement of this objective is in two parts. First, the code generator
-makes sure that each of these variables is flushed to its stack slot before
-control leaves the construct that introduces nondeterminism. For calls and
-higher order calls this is done as part of the call mechanism. For nondet
-disjunctions, the code generator emits code at the end of every disjunct
-to copy every variable in the resume_point set for that disjunct into its
-stack slot, if it isn't there already. (The mechanism whereby these variables
-survive to this point is discussed in the next section.)
-
-<p>
-
-Second, the stack slot allocation pass makes sure that each of the variables
-needed to resume in a construct that introduces nondeterminism is allocated
-a stack slot that is not reused in any following code from which one can
-backtrack to that construct. Normally, this is all following code, but if
-the construct that introduced the nondeterminism is inside a cut (a some
-that changes determinism), then it means only the following code inside
-the cut.
-
-<p>
-
-
-<hr>
-<hr>
-
-<p>
-
-<h2> RESUMPTION LIVENESS </h2>
-
-<p>
-
-Variables whose values are needed when execution resumes at a resumption point
-may become dead in the goal that establishes the resumption point. Some points
-of failure that may cause backtracking to the resumption point may occur
-after some of these variables have become dead wrt forward liveness.
-However, when generating the failure code the code generator must know
-the current locations of these variables so it can pick the correct label
-to branch to (and possibly generate some code to shuffle the variables
-to the locations expected at the picked label).
-
-<p>
-
-When entering a goal that establishes a resumption point, the code generator
-pushes the set of variables that are needed at that resumption point onto
-a resumption point variables stack inside code_info. When we make a variable
-dead, we consult the top entry on this stack. If the variable being made dead
-is in that set, we do not forget about it; we just insert it into a set of
-zombie variables.
-
-<p>
-
-To allow a test of membership in the top element of this stack to function
-as a test of membership of *any* element of this stack, we enforce the
-invariant that each entry on this stack includes all the other entries
-below it as subsets.
-
-<p>
-
-At the end of the goal that established the resumption point, after popping
-the resumption point stack, the code generator will attempt to kill all the
-zombie variables again (after saving them on the stack if we can backtrack
-to the resumption point from the following code, which is possible only for
-nondet disjunctions). Any zombie variables that occur in the next entry of
-the resumption point stack will stay zombies; any that don't occur there
-will finally die (i.e. the code generator will forget about them, and
-release the space they occupy.)
-
-<p>
-
-The sets of zombie variables and forward live variables are always
-disjoint, since a variable is not made a zombie until it is no longer
-forward live.
-
-<p>
-
-It is an invariant that at any point in the code generator, the code
-generator's "set of known variables" is the union of "set of zombie
-variables" maintained by the code generator and the set of forward
-live variables as defined in the forward liveness section above.
-
-<p>
-
-
-<hr>
-<hr>
-
-<p>
-
-<h2> FOLLOW VARS </h2>
-
-
-<p>
-
-When the code generator emits code to materialize the value of a variable,
-it ought to put it directly into the location where it is required to be next.
-
-<p>
-
-The code generator maintains a field in the code_info structure that records
-advisory information about this. The information comes from the follow_vars
-pass, which fills in the follow_vars field in the goal info structure of some
-goals. Whenever the code generator starts processing a goal, it sets the field
-in the code_info structure from the field of the goal info structure of that
-goal, if that field is filled in.
-
-<p>
-
-The follow_vars pass will fill in this field for the following goals:
-
-<ul>
-<li> the goal representing the entire procedure definition
-<li> each arm of a switch
-<li> each disjunct of a disjunction
-<li> the condition, then-part and else-part of an if-then-else
-<li> the first goal following any non-builtin goal in a conjunction
-  (the builtin goals are non-complicated unifications and calls to
-  inline builtin predicates and functions)
-</ul>
-
-<p>
-
-The semantics of a filled in follow_vars field:
-<ul>
-<li> If it maps a variable to a real location, that variable should be put
-  in that location.
-
-<li> If it maps a variable to register r(-1), that variable should be put
-  in a currently free register.
-
-<li> If it does not map a variable to anything, that variable should be put
-  in its stack slot, if that stack slot is free; otherwise it should be put
-  in a currently free register.
-</ul>
-
-<p>
-
-The follow_vars field should map a variable to a real location if the
-following code will require that variable to be in exactly that location.
-For example, if the variable is an input argument of a call, it will
-need to be in the register holding that argument; if the variable is not
-an input argument but will need to be saved across the call, it will need
-to be in its stack slot.
-
-<p>
-
-The follow_vars field should map a variable to register r(-1) if the
-variable is an input to a builtin that does not require its inputs to
-be anywhere in particular. In that case, we would prefer that the
-variable be in a register, since this should make the code generated
-for the builtin somewhat faster.
-
-<p>
-
-When the code generator materializes a variable in a way that requires
-several accesses to the materialized location (e.g. filling in the fields
-of a structure), it should put the variable into a register even if
-the follow_vars field says otherwise.
-
-<p>
-
-Since there may be many variables that should be in their stack slots,
-and we don't want to represent all of these explicitly, the follow_vars
-field may omit any mention of these variables. This also makes it easier
-to merge follow_vars fields at the starts of branched control structures.
-If some branches want a variable in a register, their wishes should take
-precedence over the wishes of the branches that wish the variable to be
-in its stack slot or in which the variable is not used at all.
-
-<p>
-
-When the code generator picks a random free register, it should try to avoid
-registers that are needed for variables in the follow_vars map.
-
-<p>
-
-When a variable that is currently in its stack slot is supposed to be put
-in any currently free register for speed of future access, the code generator
-should refuse to use any virtual machine registers that are not real machine
-registers. Instead, it should keep the variable in its stack slot.
-
-<p>
-
-<hr>
-</body>
-</html>
-
diff --git a/development/developers/bootstrapping.html b/development/developers/bootstrapping.html
deleted file mode 100644
index 7b8b6a9..0000000
--- a/development/developers/bootstrapping.html
+++ /dev/null
@@ -1,50 +0,0 @@
-
-<html>
-<head>
-
-
-<title>
-	Bootstrapping
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-
-<hr>
-
-<h2> Changes that don't bootstrap </h2>
-
-<p>
-
-Sometimes changes need to be made to the Mercury system that mean
-previous versions of the compiler will no longer successfully compile
-the new version.
-<p>
-
-Whenever anyone makes a change which prevents bootstrapping with a
-previous version of the compiler, they should add a cvs tag to all the
-files in the relevant directories <em>before committing</em>, and
-mention this tag in the log message.  The tag should be of the form
-bootstrap_YYYYMMDD_<short_description_of_change>.
-<p>
-
-The rationale for the cvs tag is that it allows machines to be
-bootstrapped (if they didn't manage to do it in a daily build)
-by doing `cvs update -r<tag>' on the relevant build directory.
-After that compiler has been installed, a `cvs update -A' will remove
-the cvs sticky tags.
-<p>
-
-Optionally, a test should be added to the configuration script so
-that people installing from CVS don't use an outdated compiler to
-bootstrap.  Practically this may be difficult to achieve in some cases.
-
-<hr>
-
-</body>
-</html>
-
diff --git a/development/developers/bytecode.html b/development/developers/bytecode.html
deleted file mode 100644
index c765a0a..0000000
--- a/development/developers/bytecode.html
+++ /dev/null
@@ -1,529 +0,0 @@
-
-
-<html>
-<head>
-<title>
-	Information On The Mercury Bytecode Format
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-<h1>
-Information On The Mercury Bytecode Format</h1>
-<hr>
-
-<h2> Summary of types </h2>
-
-<dl>
-<dt> byte
-	<dd> 
-	unsigned char 0-255
-<dt> cstring
-		<dd>
-		Sequence of non-zero bytes terminated by zero-byte. <br>
-		XXX: May change this later to allow embedded
-		zero-bytes in strings.
-<dt> short
-		<dd>
-		2 bytes interpreted as signed short.
-		It is 2's complement and big-endian. (The
-		most significant byte is read first.)
-<dt> int
-		<dd>
-		4 bytes interpreted as a signed int.
-		It is 2's complement and big-endian.
-<dt> float
-		<dd>
-		8 bytes interpreted as floating point value.
-		It is IEEE-754 64-bit format and big-endian.
-<dt> list of T
-		<dd>
-		contiguous sequence of T
-<dt> determinism
-		<dd>
-		one byte interpreted as follows
-			<ul>
-			<li> 0 = det
-			<li> 1 = semidet
-			<li> 2 = multidet
-			<li> 3 = nondet
-			<li> 4 = cc_multidet
-			<li> 5 = cc_nondet
-			<li> 6 = erroneous
-			<li> 7 = failure
-			</ul>
-<dt> tag is one of
-		<dd>
-		<ul>
-		<li> 0 (byte) (simple tag) followed by
-			<ul>
-			<li> primary (byte)
-			</ul>
-		<li> 1 (byte) (complicated tag) followed by
-			<ul>
-			<li> primary (byte)
-			<li> secondary (int)
-			</ul>
-		<li> 2 (byte) (complicated constant tag) followed by
-			<ul>
-			<li> primary (byte)
-			<li> secondary (int)
-			</ul>
-		<li> 3 (byte) (enum tag)
-			(For enumeration of pure constants.)
-		<li> 4 (byte) (no_tag)
-		</ul>
-		XXX: Need explanation of all these.
-<dt> cons_id (constructor id) is one of:
-		<dd>
-		<ul>
-		<li> 0 (byte) (cons) followed by
-			<ul>
-			<li> functor name (cstring)
-			<li> arity (short)
-			<li> tag (tag)
-			</ul>
-		<li> 1 (byte) (int const) followed by
-			<ul>
-			<li> integer constant (int)
-			</ul>
-		<li> 2 (byte) (string const) followed by
-			<ul>
-			<li> string constant (cstring) <br>
-				XXX: no '\0' in strings!
-			</ul>
-		<li> 3 (byte) (float const) followed by
-			<ul>
-			<li> float constant (float)
-			</ul>
-		<li> 4 (byte) (pred const) followed by
-			<ul>
-			<li> module id (cstring)
-			<li> predicate id (cstring)
-			<li> arity (short)
-			<li> procedure id (byte)
-			</ul>
-		<li> 5 (byte) (code addr const) followed by
-			<ul>
-			<li> module id (cstring)
-			<li> predicate id (cstring)
-			<li> arity (short)
-			<li> procedure id (byte)
-			</ul>
-		<li> 6 (byte) (base type info const) followed by
-			<ul>
-			<li> module id (cstring)
-			<li> type name (cstring)
-			<li> type arity (byte)
-			</ul>
-		</ul>
-		Note that not all of these alternatives are
-		meaningful in all bytecodes that have arguments of
-		type cons_id. <br>
-		XXX: Specify exactly which cases are meaningful.
-<dt> op_arg (argument to an operator) is one of:
-		<dd>
-		<ul>
-		<li> 0 (byte) followed by 
-			<ul>
-			<li> variable slot (short)
-			</ul>
-		<li> 1 (byte) followed by
-			<ul>
-			<li> integer constant (int)
-			</ul>
-		<li> 2 (byte) followed by
-			<ul>
-			<li> float constant (float) XXX: not yet supported
-			</ul>
-		</ul>
-<dt> dir (direction of information movement in general unification)
-	  is one of:
-		<dd>
-		<ul>
-		<li> 0 (byte) to_arg
-		<li> 1 (byte) to_var
-		<li> 2 (byte) to_none
-		</ul>
-</dl>
-
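-<p>
-
-By way of illustration only (this document does not specify a decoder,
-and the function names below are invented), the short and int encodings
-above could be read back in C like this:
-
-<pre>
-typedef unsigned char Byte;
-
-/* 2 bytes, big-endian, 2's complement. */
-short
-read_short(const Byte *p)
-{
-	int	v = (p[0] << 8) | p[1];
-
-	if (v >= 0x8000) {
-		v -= 0x10000;	/* sign-extend from 16 bits */
-	}
-	return (short) v;
-}
-
-/* 4 bytes, big-endian, 2's complement. */
-int
-read_int(const Byte *p)
-{
-	unsigned long	u;
-
-	u = ((unsigned long) p[0] << 24) | ((unsigned long) p[1] << 16) |
-		((unsigned long) p[2] << 8) | (unsigned long) p[3];
-	if (u & 0x80000000UL) {
-		/* sign-extend from 32 bits without overflow */
-		return (int) (-(long) (0xFFFFFFFFUL - u) - 1);
-	}
-	return (int) u;
-}
-</pre>
-
-The 8-byte float would be assembled into 64 bits in the same big-endian
-way and then reinterpreted as a double, assuming the host also uses the
-IEEE-754 64-bit format.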
-
-<h2> Summary of Bytecodes </h2>
-
-<p>
-
-Note: Currently we specify only the static layout of bytecodes.
-We also need to specify the operational semantics of the bytecodes,
-which can be done by specifying state transitions on the abstract
-machine. That is, to specify the meaning of a bytecode, we simply
-say how the state of the abstract machine has changed from before
-interpreting the bytecode to after interpreting the bytecode.
-
-<p>
-
-<ul>
-<li> enter_pred (0)
-	<ul>
-	<li> predicate name (cstring)
-	<li> number of procedures in predicate (short)
-	</ul>
-
-<li> endof_pred (1)
-
-<li> enter_proc (2)
-	<ul>
-	<li> procedure id (byte) <br>
-		procedure id is used to distinguish the procedures
-		in a predicate. <br>
-		XXX: should use short instead?
-	<li> determinism of the procedure (determinism)
-	<li> label count (short) <br>
-		Number of labels in the procedure. Used for allocating a
-		table of labels in the interpreter.
-	<li> temp count (short) <br>
-		Number of temporary variables needed for this procedure. (?)
-	<li> length of list (short) <br>
-		Number of items in next arg
-	<li> list of
-		<ul>
-		<li> Variable info (cstring)
-		</ul>
-		XXX: we should also have typeinfo for each variable.
-	</ul>
-
-<li> endof_proc (3)
-
-<li> label (4)
-	<ul>
-	<li> Code label. (short)
-	</ul>
-	Used for jumps, switches, if-then-else, etc.
-
-<li> enter_disjunction (5)
-	<ul>
-	<li> label id (short) <br>
-		Label refers to the label immediately after the disjunction.
-	</ul>
-
-<li> endof_disjunction (6)
-
-<li> enter_disjunct (7)
-	<ul>
-	<li> label id (short) <br>
-		Label refers to label for next disjunct.
-	</ul>
-
-<li> endof_disjunct (8)
-	<ul>
-	<li> label id (short) <br>
-		Label refers to label for next disjunct.(?)
-		Is -1 if there is no next disjunct in this disjunction.
-	</ul>
-
-<li> enter_switch (9)
-	<ul>
-	<li> variable in slots on which we are switching (short)
-	<li> label immediately after the switch (short)
-	</ul>
-	We jump to the label after we've performed the switch.
-		label refers to label immediately after corresponding
-		endof_switch.
-
-<li> endof_switch (10)
-
-<li> enter_switch_arm (11)
-	<ul>
-	<li> constructor id (cons_id)
-	<li> label id (short)  <br>
-		label refers to label for next switch arm.
-	</ul>
-			
-<li> endof_switch_arm (12) 
-	<ul>
-	<li> label id (short)
-		Label id refers to label immediately before next switch arm. 
-		(?)
-	</ul>
-
-<li> enter_if (13)
-	<ul>
-	<li> else label id (short)
-	<li> follow label id (short) <br>
-		label refers to label at endof_if
-		Note that we must've pushed a failure context
-		before entering the enter_if. If the condition
-		fails, we follow the failure context.
-	<li> frame pointer tmp (short) <br>
-		XXX: hmm... dunno..
-	</ul>
-
-
-<li> enter_then (14)
-	<ul>
-	<li> frame pointer temp (short) <br>
-		XXX: what's this for?
-	</ul>
-	XXX: should have flag here? [I wrote this note in a meeting.
-	What in hell did I mean?]
-
-<li> endof_then (15) XXX: enter_else is a better name.
-	<ul>
-	<li> follow label (short) <br>
-		XXX: label just before endof_if ???
-	</ul>
-
-<li> endof_if (16)
-
-<li> enter_negation (17)
-	<ul>
-	<li> label id (short)
-	</ul>
-		label refers to label at endof_negation.
-		Note: As with if-then-else, we must push a failure
-		context just before entering enter_negation. If the
-		negation fails, we follow the failure context.
-
-<li> endof_negation (18)
-
-<li> enter_commit (19)
-	<ul>
-	<li> temp (short) <br>
-		XXX: what's this for?
-	</ul>
-	XXX: how does this work?
-
-<li> endof_commit (20)
-	<ul>
-	<li> temp (short) <br>
-		XXX: what's this for?
-	</ul>
-
-<li> assign (21)
-	<ul>
-	<li> Variable A in slots (short)
-	<li> Variable B in slots (short)
-	</ul>
-	A := B. Copy contents of slot B to slot A.
-
-<li> test (22)
-	<ul>
-	<li> Variable A in slots (short)
-	<li> Variable B in slots (short)
-	</ul>
-	Used to test atomic values (int, float, etc). Before entering
-	test, a failure context must be pushed. If the test fails,
-	the failure context is followed.
-	
-
-<li> construct (23)
-	<ul>
-	<li> variable slot (short)
-	<li> constructor id (cons_id)
-	<li> list length of next arg (short)
-	<li> list of:
-		<ul>
-		<li> variable slot (short)
-		</ul>
-	</ul>
-	Apply constructor to list of arguments (in list of variable slots)
-	and store result in a variable slot.
-
-<li> deconstruct (24)
-	<ul>
-	<li> variable slot Var (short)
-	<li> constructor id (cons_id)
-	<li> list length of next arg (short)
-	<li> list of:
-		<ul>
-		<li> variable slot (short)
-		</ul>
-	</ul>
-
-	<p>
-
-	If cons_id is:
-		<dl>
-		<dt> a functor applied to some args, 
-			<dd>
-			then remove functor and put args into variable slots.
-		<dt> an integer constant, 
-			<dd>
-			then check for equality of the constant and the 
-			value in the variable slot
-		<dt> a float constant, 
-			<dd>
-			then check for equality of the constant and the 
-			value in the variable slot.
-		<dt> anything else, 
-			<dd>
-			then makes no sense and interpreter should 
-			raise error. <br>
-			XXX: correct? 
-		</dl>
-
-	<p>
-
-	Note: We must push a failure context before entering deconstruct.
-	If the deconstruct fails (i.e. functor of Var isn't the same as
-	cons_id, or ints are not equal, or floats are not equal), then
-	we must follow the failure context.
-
-<li> complex_construct (25)
-	<ul>
-	<li> var (short)
-	<li> cons id (cons_id)
-	<li> list length (short)
-	<li> list of:
-		<ul>
-		<li> var (short)
-		<li> direction (dir)
-		</ul>
-	</ul>
-	
-	This is used for general unification using partially instantiated
-	terms. This is made possible by bromage's aliasing work.
-
-<li> complex_deconstruct (26)
-	<ul>
-	<li> variable slot (short)
-	<li> constructor id (cons_id)
-	<li> list length of next arg (short)
-	<li> list of
-		<ul>
-		<li> variable slot (short)
-		<li> direction (dir)
-		</ul>
-	</ul>
-	Note: This is a generalised deconstruct. The directions specify
-	which way bindings move. XXX: This is still not 100% crystal clear.
-
-<li> place_arg (27)
-	<ul>
-	<li> register number (byte)  <br>
-		XXX: Do we have at most 256 registers?
-	<li> variable number (short)
-	</ul>
-	Move argument from variable slot to register.
-	(Note: See notes for pickup_arg.) <p>
-
-	XXX: We will need to #include imp.h from the Mercury runtime,
-	since this specifies the usage of registers. For example, we
-	need to know whether we're using the compact or non-compact
-	register allocation method for parameter passing. (The compact
-	method reuses input registers as output registers. In the 
-	non-compact mode, input and output registers are distinct.)
-
-<li> pickup_arg (28)
-	<ul>
-	<li> register number (byte)
-	<li> variable number in variable slots (short)
-	</ul>
-	Move argument from register to variable slot. <p>
-
-	(Note: We currently don't make use of floating-point registers.
-	The datatype for pickup_arg in the bytecode generator allows
-	for distinguishing register `types', that is floating-point
-	register or normal registers. We may later want to spit out
-	another byte `r' or `f' to identify the type of register.)
-
-
-
-<li> call (29)
-	<ul>
-	<li> module id (cstring)
-	<li> predicate id (cstring)
-	<li> arity (short)
-	<li> procedure id (byte)
-	</ul>
-	XXX: If we call a Mercury builtin, the module name is `mercury_builtin'.
-	What if the user has a module called mercury_builtin?
-
-<li> higher_order_call (30)
-	<ul>
-	<li> var (short)
-	<li> input variable count (short)
-	<li> output variable count (short)
-	<li> determinism (determinism)
-	</ul>
-
-<li> builtin_binop (31)
-	<ul>
-	<li> binary operator (byte) <br>
-		This single byte is an index into a table of binary
-		operators.
-	<li> argument to binary operator (op_arg)
-	<li> another argument to binary operator (op_arg)
-	<li> variable slot which receives result of binary operation (short)
-	</ul>
-	XXX: Floating point operations must be distinguished from
-	int operations. In the interpreter, we should use a lookup table 
-	that maps bytes to the operations.
-
-<li> builtin_unop (32)
-	<ul>
-	<li> unary operator (byte)
-		An index into a table of unary operators.
-	<li> argument to unary operator (op_arg)
-	<li> variable slot which receives result of unary operation (short)
-	</ul>
-
-<li> builtin_bintest (33)
-	<ul>
-	<li> binary operator (byte)
-		An index into a table of binary operators.
-	<li> argument to binary test (op_arg)
-	<li> another argument to binary test (op_arg)
-	</ul>
-	Note we must first push a choice point which we may follow should
-	the test fail.
-
-<li> builtin_untest (34)
-	<ul>
-	<li> unary operator (byte)
-		An index into a table of unary operators.
-	<li> argument to unary operator (op_arg)
-	</ul>
-	Note we must first push a choice point which we may follow should
-	the test fail.
-
-<li> semidet_succeed (35)
-
-<li> semidet_success_check (36)
-
-<li> fail (37)
-
-<li> context (38)
-	<ul>
-	<li> line number in Mercury source that the current bytecode
-		line corresponds to. (short)
-	</ul>
-	XXX: Still not clear how we should implement `step' in a debugger
-	since a single context may have other contexts interleaved in it.
-
-<li> not_supported (39)
-	<p>
-
-	Some unsupported feature is used. Inline C in Mercury code,
-	for instance. Any procedure that contains inline C
-	(or is compiled Mercury?) must have the format:
-		<ul>
-		<li> enter_pred ...
-		<li> not_supported
-		<li> endof_pred
-		</ul>
-
-</ul>
-
-
-<hr>
-
-Comments? See our <a href = "../../contact.html" >contact</a> page.<br>
-
-Last update was $Date: 2002-11-04 13:12:29 $ by $Author: stayl $@cs.mu.oz.au. <br>
-</body>
-</html>
diff --git a/development/developers/c_coding_standard.html b/development/developers/c_coding_standard.html
deleted file mode 100644
index 8114482..0000000
--- a/development/developers/c_coding_standard.html
+++ /dev/null
@@ -1,704 +0,0 @@
-<html>
-<head>
-<title>
-	C Coding Standard for the Mercury Project
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-<h1>
-C Coding Standard for the Mercury Project</h1>
-<hr>
-
-These coding guidelines are presented in the briefest manner possible
-and therefore do not include rationales.  <p>
-
-Because the coding standard has been kept deliberately brief, there are
-some items missing that would be included in a more comprehensive 
-standard. For more on commonsense C programming, 
-consult the <a href="ftp://ftp.cs.toronto.edu/doc/programming/ihstyle.ps">
-Indian Hill C coding standard </a> or the 
-<a href="http://www.eskimo.com/~scs/C-faq/top.html">
-comp.lang.c FAQ</a>. <p>
-
-<h2>
-1. File organization</h2>
-
-<h3>
-1.1. Modules and interfaces</h3>
-
-We impose a discipline on C to allow us to emulate (poorly) the modules
-of languages such as Ada and Modula-3.
-<ul>
-<li>	Every .c file has a corresponding .h file with the
-	same basename. For example, list.c and list.h.
-
-<li>	We consider the .c file to be the module's implementation
-	and the .h file to be the module's interface. We'll
-	just use the terms `source file' and `header'.
-
-<li>	All items exported from a source file must be declared in
-	the header. These items include functions, variables, #defines,
-	typedefs, enums, structs, and so on. In short, an item is anything 
-	that doesn't allocate storage. 
-	Qualify function prototypes with the `extern' keyword.
-	Also, do qualify each variable declaration
-	with the `extern' keyword, otherwise storage for the
-	variable will be allocated in every source file that
-	includes the header containing the variable definition.
-
-<li>	We import a module by including its header.
-	Never give extern declarations for imported
-	functions in source files. Always include the header of the
-	module instead.
-
-<li>	Each header must #include any other headers on which it depends.
-	Hence it's imperative every header be protected against multiple
-	inclusion. Also, take care to avoid circular dependencies.
-
-<li>	Always include system headers using the angle brackets syntax, rather
-	than double quotes. That is 
-	<font color="#0000ff"><tt>#include <stdio.h></tt>
-	<font color="#000000">.
-
-	Mercury-specific headers should be included using the double
-	quotes syntax. That is
-	<font color="#0000ff"><tt>#include "mercury_module.h"</tt>
-	<font color="#000000">
-	Do not put root-relative or `..'-relative directories in
-	#includes.
-
-</ul>
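-
-For example (the module name and its contents are purely illustrative),
-a small module `counter' might consist of counter.h:
-<font color="#0000ff">
-<pre>
-#ifndef COUNTER_H
-#define	COUNTER_H
-
-typedef struct Counter_Struct {
-	int	value;
-} Counter;
-
-extern	int	counter_limit;
-
-extern	void	counter_init(Counter *c);
-extern	int	counter_next(Counter *c);
-
-#endif	/* not COUNTER_H */
-</pre>
-<font color="#000000">
-and a matching counter.c:
-<font color="#0000ff">
-<pre>
-#include "counter.h"
-
-int	counter_limit = 100;
-
-void
-counter_init(Counter *c)
-{
-	c->value = 0;
-}
-
-int
-counter_next(Counter *c)
-{
-	return c->value < counter_limit ? ++c->value : -1;
-}
-</pre>
-<font color="#000000">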
-
-<h3>
-1.2. Organization within a file</h3>
-
-<h4>
-1.2.1. Source files</h4>
-
-Items in source files should in general be in this order:
-<ul>
-<li>	Prologue comment describing the module.
-<li>	#includes of system headers (such as stdio.h and unistd.h)
-<li>	#includes of headers specific to this project. But note that
-	for technical reasons,
-	<font color="#0000ff">mercury_imp.h<font color=#000000">
-	must be the first #include.
-<li>	Any local #defines.
-<li>	Definitions of any local (that is, file-static) global variables.
-<li>	Prototypes for any local (that is, file-static) functions.
-<li>	Definitions of functions.
-</ul>
-
-Within each section, items should generally be listed in top-down order,
-not bottom-up.  That is, if foo() calls bar(), then the definition of
-foo() should precede the definition of bar().  (An exception to this rule
-is functions that are explicitly declared inline; in that case, the
-definition should precede the call, to make it easier for the C compiler
-to perform the desired inlining.)
-
-<h4>
-1.2.2. Header files</h4>
-
-Items in headers should in general be in this order:
-<ul>
-<li>	typedefs, structs, unions, enums
-<li>	extern variable declarations
-<li>	function prototypes
-<li>	#defines 
-</ul>
-
-However, it is probably more important to group items
-which are conceptually related than to follow this
-order strictly.  Also note that #defines which define
-configuration macros used for conditional compilation
-or which define constants that are used for array sizes
-will need to come before the code that uses them.
-But in general configuration macros should be isolated
-in separate files (e.g. runtime/mercury_conf.h.in
-and runtime/mercury_conf_param.h) and fixed-length limits
-should be avoided, so those cases should not arise often.
-<p>
-
-Every header should be protected against multiple inclusion
-using the following idiom:
-<font color="#0000ff">
-<pre>
-#ifndef MODULE_H
-#define	MODULE_H
-
-/* body of module.h */
-
-#endif	/* not MODULE_H */
-</pre>
-<font color="#000000">
-
-
-
-<h2>
-2. Comments</h2>
-
-<h3>
-2.1. What should be commented</h3>
-
-<h4>
-2.1.1. Functions</h4>
-
-Each function should have a one-line description of what it does.
-Additionally, both the inputs and outputs (including pass-by-pointer)
-should be described. Any side-effects not passing through the explicit
-inputs and outputs should be described. If any memory is allocated,
-you should describe who is responsible for deallocation. 
-If memory can change upon successive invocations (such as function-static
-data), mention it. If memory should not be deallocated by anyone
-(such as constant string literals), mention this.
-<p>
-Note: memory allocation for C code that must interface
-with Mercury code or the Mercury runtime should be
-done using the routines defined and documented in
-mercury/runtime/mercury_memory.h and/or mercury/runtime/mercury_heap.h,
-according to the documentation in those files,
-in mercury/trace/README, and in the Mercury Language Reference Manual.
-
-<h4>
-2.1.2. Macros</h4>
-
-Each non-trivial macro should be documented just as for functions (see above).
-It is also a good idea to document the types of macro arguments and
-return values, e.g. by including a function declaration in a comment.
-
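-For instance (the macro here is invented for the example):
-<font color="#0000ff">
-<pre>
-	/*
-	** MR_abs(int x):
-	**	Returns the absolute value of x.
-	**	Note that the argument may be evaluated twice.
-	*/
-	#define	MR_abs(x)	((x) < 0 ? -(x) : (x))
-</pre>
-<font color="#000000">
-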
-<h4>
-2.1.3. Headers</h4>
-
-Such function comments should be present in header files for each function
-exported from a source file. Ideally, a client of the module should 
-not have to look at the implementation, only the interface. 
-In C terminology, the header should suffice for
-working out how an exported function works.
-
-<h4>
-2.1.4. Source files</h4>
-
-Every source file should have a prologue comment which includes:
-<ul>
-<li>	Copyright notice.
-<li>	Licence info (e.g. GPL or LGPL).
-<li>	Short description of the purpose of the module.
-<li>	Any design information or other details required to understand
-	and maintain the module.
-</ul>
-
-<h4>
-2.1.5. Global variables</h4>
-
-Any global variable should be excruciatingly documented. This is
-especially true when globals are exported from a module.
-In general, there are very few circumstances that justify use of 
-a global. 
-
-<h3>
-2.2. Comment style</h3>
-
-Use comments of this form:
-<font color="#0000ff">
-<pre>
-	/*
-	** Here is a comment.
-	** And here's some more comment.
-	*/
-</pre>
-<font color="#000000">
-For annotations to a single line of code:
-<font color="#0000ff">
-<pre>
-	i += 3; /* Here's a comment about this line of code. */
-</pre>
-<font color="#000000">
-
-<h3>
-2.3. Guidelines for comments</h3>
-
-<h4>
-2.3.1. Revisits</h4>
-
-Any code that needs to be revisited because it is a temporary hack
-(or some other expediency) must have a comment of the form:
-<font color="#0000ff">
-<pre>
-	/*
-	** XXX: <reason for revisit>
-	*/
-</pre>
-<font color="#000000">
-
-The <reason for revisit> should explain the problem in a way
-that can be understood by developers other than the author of the
-comment.
-
-<h4>
-2.3.2. Comments on preprocessor statements</h4>
-
-The <tt>#ifdef</tt> constructs should 
-be commented like so if they extend for more than a few lines of code:
-<font color="#0000ff">
-<pre>
-#ifdef SOME_VAR
-	/*...*/
-#else	/* not SOME_VAR */
-	/*...*/
-#endif	/* not SOME_VAR */
-</pre>
-<font color="#000000">
-
-Similarly for 
-<font color="#0000ff"><tt>#ifndef</tt><font color="#000000">.
-<p>
-Use the GNU convention of comments that indicate whether the variable
-is true in the #if and #else parts of an #ifdef or #ifndef. For
-instance:
-<font color="#0000ff">
-<pre>
-#ifdef SOME_VAR
-#endif /* SOME_VAR */
-
-#ifdef SOME_VAR
-	/*...*/
-#else /* not SOME_VAR */
-	/*...*/
-#endif /* not SOME_VAR */
-
-#ifndef SOME_VAR
-	/*...*/
-#else	/* SOME_VAR */
-	/*...*/
-#endif	/* SOME_VAR */
-</pre>
-<font color="#000000">
-
-<h2>
-3. Declarations</h2>
-
-<h3>
-3.1. Pointer declarations</h3>
-
-Attach the pointer qualifier to the variable name.
-<font color="#0000ff">
-<pre>
-	char	*str1, *str2;
-</pre>
-<font color="#000000">
-
-<h3>
-3.2. Static and extern declarations</h3>
-
-Limit module exports to the absolute essentials. Make as much static
-(that is, local) as possible since this keeps interfaces to modules simpler.
-
-<h3>
-3.3. Typedefs</h3>
-
-Use typedefs to make code self-documenting. They are especially
-useful on structs, unions, and enums.
-
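-For example (an invented type, following the naming rules in section 4):
-<font color="#0000ff">
-<pre>
-	typedef enum {
-		MR_FRUIT_GUAVA,
-		MR_FRUIT_PAPAYA
-	} MR_Fruit;
-</pre>
-<font color="#000000">
-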
-<h2>
-4. Naming conventions</h2>
-
-<h3>
-4.1. Functions, function-like macros, and variables</h3>
-
-Use all lowercase with underscores to separate words.
-For instance, <tt>MR_soul_machine</tt>.
-
-<h3>
-4.2. Enumeration constants, #define constants, and non-function-like macros</h3>
-
-Use all uppercase with underscores to separate words.
-For instance, <tt>ML_MAX_HEADROOM</tt>.
-
-<h3>
-4.3. Typedefs</h3>
-
-Use first letter uppercase for each word, other letters lowercase and
-underscores to separate words.
-For instance, <tt>MR_Directory_Entry</tt>.
-
-<h3>
-4.4. Structs and unions</h3>
-
-If something is both a struct and a typedef, the
-name for the struct should be formed by appending `_Struct'
-to the typedef name:
-<font color="#0000ff">
-<pre>
-	typedef struct MR_Directory_Entry_Struct {
-		...
-	} MR_Directory_Entry;
-</pre>
-<font color="#000000">
-
-For unions, append `_Union' to the typedef name.
-
-<h3>
-4.5. Mercury specifics </h3>
-
-Every symbol that is externally visible (i.e. declared in a header
-file) should be prefixed with a prefix that is specific to the
-package that it comes from.
-
-For anything exported from mercury/runtime, prefix it with MR_.
-For anything exported from mercury/library, prefix it with ML_.
-
-<h2>
-5. Syntax and layout</h2>
-
-<h3>
-5.1. Minutiae</h3>
-
-Use 8 spaces to a tab. No line should be longer than 79 characters.
-If a statement is too long, continue it on the next line <em>indented 
-two levels deeper</em>. If the statement extends over more than two
-lines, then make sure the subsequent lines are indented to the
-same depth as the second line. For example:
-<font color="#0000ff">
-<pre>
-	here = is_a_really_long_statement_that_does_not_fit +
-			on_one_line + in_fact_it_doesnt_even_fit +
-			on_two_lines;
-
-	if (this_is_a_somewhat_long_conditional_test(
-			in_the_condition_of_an +
-			if_then))
-	{
-		/*...*/
-	}
-		
-</pre>
-<font color="#000000">
-
-<h3>
-5.2. Statements</h3>
-
-Use one statement per line.
-
-Here are example layout styles for the various syntactic constructs:
-
-<h4> 
-5.2.1. If statement</h4>
-
-Use the "/* end if */" comment if the if statement is larger than a page.
-
-<font color="#0000ff">
-<pre>
-/*
-** Curlies are placed in a K&R-ish manner.
-** And comments look like this.
-*/
-if (blah) {
-	/* Always use curlies, even when there's only
-	** one statement in the block.
-	*/
-} else {
-	/* ... */
-} /* end if */
-
-/*
-** if the condition is so long that the open curly doesn't 
-** fit on the same line as the `if', put it on a line of
-** its own
-*/
-if (a_very_long_condition() &&
-	another_long_condition_that_forces_a_line_wrap())
-{
-	/* ... */
-}
-
-</pre>
-<font color="#000000">
-
-<h4>
-5.2.2. Functions</h4>
-
-Function names are flush against the left margin. This makes it
-easier to grep for function definitions (as opposed to their invocations).
-In argument lists, put space after commas. And use the <tt>/* func */</tt>
-comment when the function is longer than a page.
-
-<font color="#0000ff">
-<pre>
-int
-rhododendron(int a, float b, double c) {
-	/* ... */
-} /* end rhododendron() */
-</pre>
-<font color="#000000">
-
-
-<h4>
-5.2.3. Variables</h4>
-
-Variable declarations shouldn't be flush left, however.
-<font color="#0000ff">
-<pre>
-int x = 0, y = 3, z;
-
-int a[] = {
-	1,2,3,4,5
-};
-</pre>
-<font color="#000000">
-
-
-<h4>
-5.2.4. Switches </h4>
-
-<font color="#0000ff">
-<pre>
-switch (blah) {
-	case BLAH1:
-		/*...*/
-		break;
-	case BLAH2: {
-		int i;
-
-		/*...*/
-		break;
-	}
-	default:
-		/*...*/
-		break;
-} /* switch */
-</pre>
-<font color="#000000">
-
-
-<h4>
-5.2.5. Structs, unions, and enums </h4>
-
-<font color="#0000ff">
-<pre>
-struct Point {
-	int	tag;
-	union 	cool {
-		int	ival;
-		double	dval;
-	} cool;
-};
-enum Stuff {
-	STUFF_A, STUFF_B /*...*/
-};
-</pre>
-<font color="#000000">
-
-<h4>
-5.2.6. Loops </h4>
-
-<font color="#0000ff">
-<pre>
-while (stuff) {
-	/*...*/
-}
-
-do {
-	/*...*/
-} while (stuff);
-
-for (this; that; those) {
-	/* Always use curlies, even if no body. */
-}
-
-/*
-** If no body, do this...
-*/
-while (stuff)
-	{}
-for (this; that; those)
-	{}
-
-</pre>
-<font color="#000000">
-
-<h3>
-5.3. Preprocessing </h3>
-
-<h4>
-5.3.1. Nesting</h4>
-
-Nested #ifdefs, #ifndefs and #ifs should be indented by two spaces for
-each level of nesting. For example:
-
-<font color="#0000ff">
-<pre>
-
-#ifdef GUAVA
-  #ifndef PAPAYA
-  #else /* PAPAYA */
-  #endif /* PAPAYA */
-#else /* not GUAVA */
-#endif /* not GUAVA */
-
-</pre>
-<font color="#000000">
-
-<h2>
-6. Portability</h2>
-
-<h3>
-6.1. Architecture specifics</h3>
-
-Avoid relying on properties of a specific machine architecture unless
-necessary, and if necessary localise such dependencies. One solution is
-to have architecture-specific macros to hide access to 
-machine-dependent code.
-
-Some machine-specific properties are:
-<ul>
-<li>	Size (in bits) of C builtin datatypes (short, int, long, float, 
-	double).
-<li>	Byte-order. Big- or little-endian (or other).
-<li>	Alignment requirements.
-</ul>
-
-<h3>
-6.2. Operating system specifics</h3>
-
-Operating system APIs differ from platform to platform. Although 
-most support standard POSIX calls such as `read', `write'
-and `unlink', you cannot rely on the presence of, for instance, 
-System V shared memory, or BSD sockets.
-<p>
-Adhere to POSIX-supported operating system calls whenever possible
-since they are widely supported, even by Windows and VMS.
-<p>
-When POSIX doesn't provide the required functionality, ensure that
-the operating system specific calls are localised. 
-
-<h3>
-6.3. Compiler and C library specifics</h3>
-
-ANSI C compilers are now widespread and hence we needn't pander to
-old K&R compilers. However compilers (in particular the GNU C compiler)
-often provide non-ANSI extensions. Ensure that any use of compiler
-extensions is localised and protected by #ifdefs.
-<p>
-Don't rely on features whose behaviour is undefined according to
-the ANSI C standard. For that matter, don't rely on C arcana 
-even if they <em>are</em> defined. For instance, 
-<tt>setjmp/longjmp</tt> and ANSI signals often have subtle differences
-in behaviour between platforms.
-<p>
-If you write threaded code, make sure any non-reentrant code is
-appropriately protected via mutual exclusion. The biggest cause
-of non-reentrant (non-threadsafe) code is function-static data.
-Note that some C library functions may be non-reentrant. This may
-or may not be documented in the man pages.
-
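-As one possible illustration (POSIX threads are used here purely as an
-example, not as a requirement of this standard), function-static data
-can be protected with a mutex:
-<font color="#0000ff">
-<pre>
-#include <pthread.h>
-
-static pthread_mutex_t	next_id_lock = PTHREAD_MUTEX_INITIALIZER;
-static int		next_id_counter = 0;
-
-int
-next_id(void)
-{
-	int	id;
-
-	pthread_mutex_lock(&next_id_lock);
-	id = ++next_id_counter;
-	pthread_mutex_unlock(&next_id_lock);
-	return id;
-}
-</pre>
-<font color="#000000">
-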
-<h3>
-6.4. Environment specifics</h3>
-
-This is one of the most important sections in the coding standard.
-Here we mention what other tools Mercury depends on.
-Mercury <em>must</em> depend on some tools, however every tool that
-is needed to use Mercury reduces the potential user base.
-<p>
-Bear this in mind when tempted to add YetAnotherTool<sup>TM</sup>.
-
-<h4>
-6.4.1. Tools required for Mercury</h4>
-
-In order to run Mercury (given that you have the binary installation), you need:
-<ul>
-<li>	A shell compatible with Bourne shell (sh)
-<li>	GNU make
-<li>	One of:
-	<ul>
-	<li>	The GNU C compiler
-	<li>	Any ANSI C compiler
-	</ul>
-</ul>
-
-In order to build the Mercury compiler, you need the above and also:
-<ul>
-<li>	gzip
-<li>	tar
-<li>	Various POSIX utilities: <br>
-	awk basename cat cp dirname echo egrep expr false fgrep grep head 
-	ln mkdir mv rmdir rm sed sort tail 
-<li>	Some Unix utilities: <br>
-		test true uniq xargs
-</ul>
-
-<p>
-
-In order to modify and maintain the source code of the Mercury compiler,
-you need the above and also:
-<ul>
-<li>	Perl <font color="#ff0000">XXX: Which version?<font color="#000000">
-<li>	CVS
-<li>	autoconf
-<li>	texinfo
-<li>	TeX
-</ul>
-
-<h4>
-6.4.2. Documenting the tools</h4>
-
-If further tools are required, you should add them to the above list.
-And similarly, if you eliminate dependence on a tool, remove
-it from the above list.
-
-<h2>
-7. Coding specifics</h2>
-
-<ul>
-
-<li>	Do not assume arbitrary limits in data structures. Don't
-	just allocate `lots' and hope that's enough. Either it's
-	too much or it will eventually hit the wall and have to be
-	debugged. 
-	Using highwater-marking is one possible solution for strings,
-	for instance.
-
-<li>	Always check return values when they exist, even those of malloc
-	and realloc (see the sketch after this list).
-
-<li>	Always give prototypes (function declarations) for functions.
-	When the prototype is in a header, import the header; do not
-	write the prototype for an extern function.
-
-<li>	Stick to ANSI C whenever possible. Stick to POSIX when
-	ANSI doesn't provide what you need. 
-	Avoid platform specific code unless necessary.
-
-<li>	Use signals with extreme austerity. They are messy and subject
-	to platform idiosyncrasies even within POSIX.
-
-<li>	Don't assume the sizes of C data types. Don't assume the
-	byteorder of the platform. 
-
-<li>	Prefer enums to lists of #defines. Note that enum constants
-	are of type int, hence if you want an enumeration of
-	chars or shorts, then you must use lists of #defines.
-
-<li>	Parameters to macros should be in parentheses.
-<font color="#0000ff">
-<pre>
-	#define STREQ(s1,s2)	(strcmp((s1),(s2)) == 0)
-</pre>
-<font color="#000000">
-
-</ul>
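-
-As a small illustration of the return-value rule above (plain malloc is
-used only for the sake of the example; C code that interfaces with
-Mercury should use the allocation routines mentioned in section 2.1.1):
-<font color="#0000ff">
-<pre>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-/*
-** copy_string(s):
-**	Returns a fresh copy of the string s.
-**	The caller is responsible for freeing the result.
-*/
-char *
-copy_string(const char *s)
-{
-	size_t	len = strlen(s) + 1;
-	char	*copy = malloc(len);
-
-	if (copy == NULL) {
-		fprintf(stderr, "copy_string: out of memory\n");
-		exit(EXIT_FAILURE);
-	}
-	memcpy(copy, s, len);
-	return copy;
-}
-</pre>
-<font color="#000000">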
-
-<hr>
-
-Comments?  See our <a href = "../../contact.html" >contact</a> page.<br>
-
-Note: This coding standard is an amalgam of suggestions from the
-entire Mercury team, not necessarily the opinion of any single author.
-</body>
-</html>
diff --git a/development/developers/coding_standards.html b/development/developers/coding_standards.html
deleted file mode 100644
index db7f402..0000000
--- a/development/developers/coding_standards.html
+++ /dev/null
@@ -1,536 +0,0 @@
-
-<html>
-<head>
-
-
-<title>
-	Mercury Coding Standard for the Mercury Project
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-<h1>
-Mercury Coding Standard for the Mercury Project</h1>
-<hr>
-
-
-<h2> Documentation </h2>
-
-<p>
-
-Each module should contain header comments
-which state the module's name, main author(s), and purpose,
-and give an overview of what the module does,
-the major algorithms and data structures it uses, etc.
-
-<p>
-
-Everything that is exported from a module should have sufficient documentation
-that it can be understood without reference
-to the module's implementation section.
-
-<p>
-
-Each procedure that is implemented using foreign code
-should have sufficient documentation about its interface
-that it can be implemented just by referring to that documentation,
-without reference to the module's implementation section.
-
-<p>
-
-Each predicate other than trivial access predicates
-should have a short comment describing what the predicate is supposed to do,
-and what the meaning of the arguments is.
-Ideally this description should also note any conditions
-under which the predicate can fail or throw an exception.
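-For example, a comment of roughly the following shape would do
-(the predicate and type names here are purely illustrative):
-
-<pre>
-
-		% lookup_colour(Map, Name, Colour):
-		%
-		% Colour is the colour that Name is mapped to in Map.
-		% Fails if Name is not present in Map.
-		%
-	:- pred lookup_colour(colour_map::in, string::in, colour::out)
-		is semidet.
-
-</pre>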
-
-<p>
-
-There should be a comment for each field of a structure saying
-what the field represents.
-
-<p>
-
-Any user-visible changes such as new compiler options or new features
-should be documented in the appropriate section of the Mercury documentation
-(usually the Mercury User's Guide and/or the Mercury Reference Manual).
-Any major new features should be documented in the NEWS file,
-as should even small changes to the library interface,
-or anything else that might cause anyone's existing code to break.
-
-<p>
-
-Any new compiler modules or other major design changes
-should be documented in `compiler/notes/compiler_design.html'.
-
-<p>
-
-Any feature which is incompletely implemented
-should be mentioned in `compiler/notes/work_in_progress.html'.
-
-<h2> Naming </h2>
-
-<p>
-
-Variables should always be given meaningful names,
-unless they are irrelevant to the code in question.
-For example, it is OK to use single-character names
-in an access predicate which just sets a single field of a structure,
-such as
-
-<pre>
-
-	bar_set_foo(Foo, bar(A, B, C, _, E), bar(A, B, C, Foo, E)).
-
-</pre>
-
-Variables which represent different states or different versions
-of the same entity should be named Foo0, Foo1, Foo2, ..., Foo.
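-For example (an illustrative sketch; the predicate names are made up):
-
-<pre>
-
-	add_two_entries(Entry1, Entry2, Table0, Table) :-
-		% Table0 is the initial table, Table1 the intermediate
-		% version, and Table the final version.
-		add_entry(Entry1, Table0, Table1),
-		add_entry(Entry2, Table1, Table).
-
-</pre>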
-
-<p>
-
-Predicates which get or set a field of a structure or ADT
-should be named bar_get_foo and bar_set_foo respectively,
-where bar is the name of the structure or ADT and foo is the name of the field.
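-For example, the corresponding declarations might look like this
-(a sketch only; the argument types are hypothetical):
-
-<pre>
-
-	:- pred bar_get_foo(bar::in, foo::out) is det.
-	:- pred bar_set_foo(foo::in, bar::in, bar::out) is det.
-
-</pre>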
-
-<h2> Coding </h2>
-
-<p>
-
-Your code should reuse existing code as much as possible.
-"Cut-and-paste" style reuse is strongly discouraged.
-
-<p>
-
-Your code should be efficient.
-Performance is quite a serious issue for the Mercury compiler.
-
-<p>
-
-No fixed limits please! 
-(If you really must have a fixed limit,
-include detailed documentation explaining why it was so hard to avoid.)
-
-<p>
-
-Only use DCG notation for parsing, not for threading implicit arguments.
-
-Use state variables for threading the IO state etc.
-The conventional IO state variable name is <code>!IO</code>.
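-
-<p>
-
-For example (a minimal sketch; the predicate name and its body are made up):
-
-<pre>
-
-	:- pred report(string::in, io::di, io::uo) is det.
-
-	report(Message, !IO) :-
-		io.write_string(Message, !IO),
-		io.nl(!IO).
-
-</pre>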
-
-<h2> Error handling </h2>
-
-<p>
-
-Code should check both for erroneous inputs from the user
-and for invalid data being passed from other parts of the Mercury compiler.
-You should also always check to make sure that
-the routines that you call have succeeded;
-make sure you don't silently ignore failures.
-(This last point almost goes without saying in Mercury,
-but is particularly important to bear in mind
-if you are writing any C code or shell scripts,
-or if you are interfacing with the OS.)
-
-<p>
-
-Calls to error/1 should always indicate an internal software error,
-not merely incorrect inputs from the user,
-or failure of some library routine or system call.
-In the compiler, use unexpected/2 or sorry/2 from compiler_util.m
-rather than error/1.  Use expect/3 from compiler_util rather than
-require/2.
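-For example, something along the following lines
-(a sketch only: it assumes that unexpected/2 takes the name of the current
-module and a message, as its arity suggests, and the surrounding code is
-made up):
-
-<pre>
-
-	% this_file is assumed to be a string constant naming this module.
-	( map.search(TypeTable, TypeCtor, Defn0) ->
-		Defn = Defn0
-	;
-		% This indicates a compiler bug, not bad user input.
-		unexpected(this_file, "lookup_type_defn: unknown type_ctor")
-	)
-
-</pre>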
-
-<p>
-
-Error messages should follow a consistent format.
-For compiler error messages, each line should start
-with the source file name and line number in "%s:%03d: " format.
-Compiler error messages should be complete sentences;
-they should start with a capital letter and end in a full stop.
-For error messages that are spread over more than one line
-(as are most of them),
-the second and subsequent lines should be indented two spaces.
-If the `--verbose-errors' option was set,
-you should print out additional text explaining in detail
-what the error message means and what the likely causes are.
-The preferred method of printing error messages
-is via the predicates in error_util.m;
-use prog_out__write_context and io__write_strings
-only if there is no way to add the capability you require to error_util.m.
-
-<p>
-
-Error messages from the runtime system should begin with the text
-"Mercury Runtime:", preferably by using the MR_fatal_error() routine.
-
-<p>
-
-If a system call or C library function that sets errno fails,
-the error message should be printed with perror()
-or should contain strerror(errno).
-If it was a function manipulating some file,
-the error message should include the filename.
-
-<h2> Layout </h2>
-
-<p>
-
-Each module should be indented consistently,
-with either 4 or 8 spaces per level of indentation.
-The indentation should be consistently done,
-either only with tabs or only with spaces.
-A tab character should always mean 8 spaces;
-if a module is indented using 4 spaces per level of indentation,
-this should be indicated by four spaces,
-not by a tab with tab stops set to 4.
-
-<p>
-
-Files that use 8 spaces per level of indentation
-don't need any special setup.
-Files that use 4 spaces per level of indentation
-should have something like this at the top,
-even before the copyright line:
-<pre>
-	% vim: ft=mercury ts=4 sw=4 et
-</pre>
-
-<p>
-
-No line should extend beyond 79 characters.
-The reason we don't allow 80 character lines is that
-these lines wrap around in diffs,
-since diff adds an extra character at the start of each line.
-
-<p>
-
-Since "empty" lines that have spaces or tabs on them
-prevent the proper functioning of paragraph-oriented commands in vi,
-lines shouldn't have trailing white space.
-They can be removed with a vi macro such as the following.
-(Each pair of square brackets contains a space and a tab.)
-
-<pre>
-	map ;x :g/[     ][      ]*$/s///^M
-</pre>
-
-<p>
-
-String literals that don't fit on a single line should be split
-by writing them as two or more strings concatenated using the "++" operator;
-the compiler will evaluate this at compile time,
-if --optimize-constant-propagation is enabled (i.e. at -O3 or higher).
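-For example (the variable name and message are illustrative):
-
-<pre>
-
-	Message = "this is a long message that would not fit " ++
-		"within the 79 character line length limit",
-
-</pre>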
-
-<p>
-
-Predicates that have only one mode should use predmode declarations
-rather than having a separate mode declaration.
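-That is, prefer the first of the following two forms
-(the predicate shown is hypothetical):
-
-<pre>
-
-	% Preferred: a combined predmode declaration.
-	:- pred write_diagnostic(string::in, io::di, io::uo) is det.
-
-	% Rather than separate pred and mode declarations.
-	:- pred write_diagnostic(string, io, io).
-	:- mode write_diagnostic(in, di, uo) is det.
-
-</pre>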
-
-<p>
-If-then-elses should always be parenthesized,
-except that an if-then-else that occurs as the else
-part of another if-then-else doesn't need to be parenthesized.
-The condition of an if-then-else can either be on the same
-line as the opening parenthesis and the `->',
-
-<pre>
-
-	( test1 ->
-		goal1
-	; test2 ->
-		goal2
-	;
-		goal
-	)
-
-</pre>
-
-or, if the test is complicated, it can be on a line of its own:
-
-<pre>
-
-	(
-		very_long_test_that_does_not_fit_on_one_line(VeryLongArgument1,
-			VeryLongArgument2)
-	->
-		goal1
-	;
-		test2a,
-		test2b
-	->
-		goal2
-	;
-		test3	% would fit on one line, but separate for consistency
-	->
-		goal3
-	;
-		goal
-	).
-
-</pre>
-
-<p>
-
-Disjunctions should always be parenthesized.
-The semicolon of a disjunction should never be at the
-end of a line -- put it at the start of the next line instead.
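-For example (a sketch with made-up goal names, switching on a maybe value):
-
-<pre>
-
-	(
-		MaybeValue = yes(Value),
-		process_value(Value, !IO)
-	;
-		MaybeValue = no,
-		report_missing_value(!IO)
-	)
-
-</pre>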
-
-<p>
-
-Predicates and functions implemented via foreign code should be formatted
-like this:
-
-<pre>
-:- pragma foreign_proc("C",
-        int__to_float(IntVal::in, FloatVal::out),
-        [will_not_call_mercury, promise_pure],
-"
-        FloatVal = IntVal;
-").
-</pre>
-
-The predicate name and arguments should be on a line on their own,
-as should the list of annotations.
-The foreign code should also be on lines of its own;
-it shouldn't share lines with the double quote marks surrounding it.
-
-<p>
-
-Type definitions should be formatted in one of the following styles:
-
-<pre>
-	:- type my_type
-		--->	my_type(
-				some_other_type	% comment explaining it
-			).
-
-	:- type my_struct --->
-		my_struct(
-			field1,			% comment explaining it
-			...
-		).
-
-	:- type some_other_type == int.
-
-	:- type foo
-		--->	bar(
-				int,		% comment explaining it
-				float		% comment explaining it
-			)
-		;	baz
-		;	quux.
-
-</pre>
-
-<p>
-
-If an individual clause is long, it should be broken into sections,
-and each section should have a "block comment" describing what it does;
-blank lines should be used to show the separation into sections.
-Comments should precede the code to which they apply, rather than following it.
-
-<pre>
-	%
-	% This is a block comment; it applies to the code in the next
-	% section (up to the next blank line).
-	%
-	blah,
-	blah,
-	blahblah,
-	blah,
-</pre>
-
-If a particular line or two needs explanation, a "line" comment
-
-<pre>
-	% This is a "line" comment; it applies to the next line or two
-	% of code
-	blahblah
-</pre>
-
-or an "inline" comment
-
-<pre>
-	blahblah	% This is an "inline" comment
-</pre>
-
-should be used.
-
-<h2> Structuring </h2>
-
-Code should generally be arranged so that
-procedures (or types, etc.) are listed in top-down order, not bottom-up.
-
-<p>
-
-Code should be grouped into bunches of related predicates, functions, etc.,
-and sections of code that are conceptually separate
-should be separated with dashed lines:
-
-<pre>
-
-%---------------------------------------------------------------------------%
-
-</pre>
-
-Ideally such sections should be identified
-by "section heading" comments identifying the contents of the section,
-optionally followed by a more detailed description.
-These should be laid out like this:
-
-<pre>
-
-%---------------------------------------------------------------------------%
-%
-% Section title
-%
-
-% Detailed description of the contents of the section and/or
-% general comments about the contents of the section.
-% This part may go on for several lines.
-%
-% It can even contain several paragraphs.
-
-The actual code starts here.
-
-</pre>
-
-For example
-
-<pre>
-
-%---------------------------------------------------------------------------%
-%
-% Exception handling
-%
-
-% This section contains all the code that deals with throwing or catching
-% exceptions, including saving and restoring the virtual machine registers
-% if necessary.
-%
-% Note that we need to take care to ensure that this code is thread-safe!
-
-:- type foo ---> ...
-
-</pre>
-
-Double-dashed lines, i.e.
-
-<pre>
-
-%---------------------------------------------------------------------------%
-%---------------------------------------------------------------------------%
-
-</pre>
-
-can also be used to indicate divisions into major sections.
-Note that these dividing lines should not exceed the 79 character limit
-(see above).
-
-<h2> Module imports </h2>
-
-Each group of :- import_module items should list only one module per line,
-since this makes it much easier to read diffs
-that change the set of imported modules.
-In the compiler, when e.g. an interface section imports modules
-from both the compiler and the standard library,
-there should be two groups of imports,
-the imports from the compiler first and then the ones from the library.
-For the purposes of this rule,
-consider the modules of mdbcomp to belong to the compiler.
-
-<p>
-
-Each group of import_module items should be sorted,
-since this makes it easier to detect duplicate imports and missing imports.
-It also groups together the imported modules from the same package.
-There should be no blank lines between
-the imports of modules from different packages,
-since blank lines make it harder to resort the group
-with a single editor command.
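-
-<p>
-
-For example, the import blocks of a compiler module might look like this
-(a sketch only; the particular modules listed are illustrative):
-
-<pre>
-
-	:- import_module hlds.
-	:- import_module hlds.hlds_module.
-	:- import_module parse_tree.
-	:- import_module parse_tree.prog_data.
-
-	:- import_module list.
-	:- import_module map.
-	:- import_module maybe.
-
-</pre>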
-
-<h2> Standard library predicates </h2>
-
-The descriptive comment for any predicate or function
-that occurs in the interface of a standard library module
-must be positioned above the predicate/function declaration.
-It should be formatted like the following example: 
-
-<pre>
-
-		% Description of predicate foo.
-		%
-	:- pred foo(...
-	:- mode foo(...
-</pre>
-
-A group of related predicate, mode and function declarations
-may be grouped together under a single description
-provided that it is formatted as above.
-If there is a function declaration in such a grouping
-then it should be listed before the others. 
-
-For example:
-
-<pre>
-	
-		% Insert a new key and corresponding value into a map.
-		% Fail if the key already exists.
-		%
-	:- func map.insert(map(K, V), K, V) = map(K, V).
-	:- pred map.insert(map(K, V)::in, K::in, V::in, map(K, V)::out) is det.
-
-</pre>
-
-The reason for using this particular style is that
-the reference manual for the standard library
-is automatically generated from the module interfaces,
-and we want to maintain a uniform appearance as much as is possible.
-
-<h2> Testing </h2>
-
-<p>
-
-Every change should be tested before being committed.
-The level of testing required depends on the nature of the change.
-If this change fixes an existing bug,
-and is unlikely to introduce any new bugs,
-then just compiling it and running some tests by hand is sufficient.
-If the change might break the compiler,
-you should run a bootstrap check (using the `bootcheck' script)
-before committing.
-If the change means that old versions of the compiler
-will not be able to compile the new version of the compiler,
-you must notify all the other Mercury developers.
-
-<p>
-
-In addition to testing before a change is committed,
-you need to make sure that the code will not get broken in the future
-by adding tests to the test suite.
-Every time you add a new feature,
-you should add some test cases for that new feature to the test suite.
-Every time you fix a bug, you should add a regression test to the test suite.
-
-<h2> Committing changes </h2>
-
-<p>
-
-Before committing a change, you should get someone else to review your changes. 
-
-<p>
-
-The file <a href="/web/20121002213713/http://www.mercury.csse.unimelb.edu.au/information/doc-latest/reviews.html">compiler/notes/reviews.html</a>
-contains more information on review policy.
-
-<hr>
-
-</body>
-</html>
-
diff --git a/development/developers/compiler_design.html b/development/developers/compiler_design.html
deleted file mode 100644
index cd52c3d..0000000
--- a/development/developers/compiler_design.html
+++ /dev/null
@@ -1,1913 +0,0 @@
-<html>
-<head>
-
-
-<title>
-	Notes On The Design Of The Mercury Compiler
-</title>
-</head>
-
-<body bgcolor="#ffffff" text="#000000">
-
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<p>
-This file contains an overview of the design of the compiler.
-
-<p>
-See also <a href="/web/20121002213721/http://www.mercury.csse.unimelb.edu.au/information/doc-latest/overall_design.html">overall_design.html</a>
-for an overview of how the different sub-systems (compiler,
-library, runtime, etc.) fit together.
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h2> OUTLINE </h2>
-
-<p>
-
-The main job of the compiler is to translate Mercury into C, although it
-can also translate (subsets of) Mercury to some other languages:
-Mercury bytecode (for a planned bytecode interpreter), MSIL (for the
-Microsoft .NET platform) and Erlang.
-
-<p>
-
-The top-level of the compiler is in the file mercury_compile.m,
-which is a sub-module of the top_level.m package.
-The basic design is that compilation is broken into the following
-stages:
-
-<ul>
-<li> 1. parsing (source files -> HLDS)
-<li> 2. semantic analysis and error checking (HLDS -> annotated HLDS)
-<li> 3. high-level transformations (annotated HLDS -> annotated HLDS)
-<li> 4. code generation (annotated HLDS -> target representation)
-<li> 5. low-level optimizations
-     (target representation -> target representation)
-<li> 6. output code (target representation -> target code)
-</ul>
-
-
-<p>
-Note that in reality the separation is not quite as simple as that.
-Although parsing is listed as step 1 and semantic analysis is listed
-as step 2, the last stage of parsing actually includes some semantic checks.
-And although optimization is listed as steps 3 and 5, it also occurs in
-steps 2, 4, and 6.  For example, elimination of assignments to dead
-variables is done in mode analysis; middle-recursion optimization and
-the use of static constants for ground terms is done during code generation;
-and a few low-level optimizations are done in llds_out.m
-as we are spitting out the C code.
-
-<p>
-
-In addition, the compiler is actually a multi-targeted compiler
-with several different back-ends.
-
-<p>
-
-mercury_compile.m itself supervises the parsing  (step 1),
-but it subcontracts the supervision of the later steps to other modules.
-Semantic analysis (step 2) is looked after by mercury_compile_front_end.m;
-high level transformations (step 3) by mercury_compile_middle_passes.m;
-and code generation, optimization and output (steps 4, 5 and 6)
-by mercury_compile_llds_backend.m, mercury_compile_mlds_backend.m
-and mercury_compile_erl_backend.m
-for the LLDS, MLDS and Erlang backends respectively.
-
-<p>
-
-The modules in the compiler are structured by being grouped into
-"packages".  A "package" is just a meta-module,
-i.e. a module that contains other modules as sub-modules.
-(The sub-modules are almost always stored in separate files,
-which are named only for their final module name.)
-We have a package for the top-level, a package for each main pass, and
-finally there are also some packages for library modules that are used
-by more than one pass.
-<p>
-
-Taking all this into account, the structure looks like this:
-
-<ul type=disc>
-<li> At the top of the dependency graph is the top_level.m package,
-     which currently contains only the mercury_compile*.m modules,
-     which invoke all the different passes in the compiler.
-<li> The next level down is all of the different passes of the compiler.
-     In general, we try to stick by the principle that later passes can
-     depend on data structures defined in earlier passes, but not vice
-     versa.
-     <ul type=disc>
-     <li> front-end
-          <ul type=disc>
-          <li> 1. parsing (source files -> HLDS)
-               <br> Packages: parse_tree.m and hlds.m
-          <li> 2. semantic analysis and error checking
-	       (HLDS -> annotated HLDS)
-               <br> Package: check_hlds.m
-	  <li> 3. high-level transformations
-	       (annotated HLDS -> annotated HLDS)
-               <br> Packages: transform_hlds.m and analysis.m
-          </ul>
-     <li> back-ends
-          <ul type=disc>
-          <li> a. LLDS back-end
-               <br> Package: ll_backend.m
-               <ul type=disc>
-               <li> 3a. LLDS-back-end-specific HLDS->HLDS transformations
-               <li> 4a. code generation (annotated HLDS -> LLDS)
-               <li> 5a. low-level optimizations (LLDS -> LLDS)
-               <li> 6a. output code (LLDS -> C)
-               </ul>
-          <li> b. MLDS back-end
-               <br> Package: ml_backend.m
-               <ul type=disc>
-               <li> 4b. code generation (annotated HLDS -> MLDS)
-               <li> 5b. MLDS transformations (MLDS -> MLDS)
-               <li> 6b. output code
-     	       (MLDS -> C or MLDS -> MSIL or MLDS -> Java, etc.)
-               </ul>
-          <li> c. bytecode back-end
-               <br> Package: bytecode_backend.m
-               <ul type=disc>
-               <li> 4c. code generation (annotated HLDS -> bytecode)
-               </ul>
-          <li> d. Erlang back-end
-               <br> Package: erl_backend.m
-               <ul type=disc>
-               <li> 4d. code generation (annotated HLDS -> ELDS)
-               <li> 6d. output code
-     	       (ELDS -> Erlang)
-               </ul>
-          <li> There's also a package backend_libs.m which contains
-	       modules which are shared between several different back-ends.
-          </ul>
-     </ul>
-<li> Finally, at the bottom of the dependency graph there is the package
-     libs.m.  libs.m contains the option handling code, and also library
-     modules which are not sufficiently general or sufficiently useful to
-     go in the Mercury standard library.
-</ul>
-
-<p>
-
-In addition to the packages mentioned above, there are also packages
-for the build system: make.m contains the support for the `--make' option,
-and recompilation.m contains the support for the `--smart-recompilation'
-option.
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h2> DETAILED DESIGN </h2>
-
-<p>
-This section describes the role of each module in the compiler.
-For more information about the design of a particular module,
-see the documentation at the start of that module's source code.
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-<p>
-
-The action is co-ordinated from mercury_compile.m or make.m (if `--make'
-was specified on the command line).
-
-
-<h3> Option handling </h3>
-
-<p>
-
-Option handling is part of the libs.m package.
-
-<p>
-
-The command-line options are defined in the module options.m.
-mercury_compile.m calls library/getopt.m, passing the predicates
-defined in options.m as arguments, to parse them.  It then invokes
-handle_options.m to postprocess the option set. The results are
-represented using the type globals, defined in globals.m.
-The globals structure is available in the HLDS representation,
-but it is passed around as a separate argument both before the HLDS is built
-and after the HLDS is no longer needed.
-
-<h3> Build system </h3>
-
-<p>
-
-Support for `--make' is in the make.m package,
-which contains the following modules:
-
-<dl>
-
-<dt> make.m
-	<dd>
-	Categorizes targets passed on the command line and passes
-	them to the appropriate module to be built.
-
-<dt> make.program_target.m
-	<dd>
-	Handles whole program `mmc --make' targets, including
-	executables, libraries and cleanup.
-
-<dt> make.module_target.m
-	<dd>
-	Handles targets built by a compilation action associated
-	with a single module, for example making interface files.
-
-<dt> make.dependencies.m
-	<dd>
-	Computes dependencies between targets and between modules.
-
-<dt> make.module_dep_file.m
-	<dd>
-	Records the dependency information for each module between
-	compilations.
-
-<dt> make.util.m
-	<dd>
-	Utility predicates.
-
-<dt> options_file.m
-	<dd>
-	Read the options files specified by the `--options-file'
-	option. Also used by mercury_compile.m to collect the value
-	of DEFAULT_MCFLAGS, which contains the auto-configured flags
-	passed to the compiler.
-
-</dl>
-
-The build process also invokes routines in compile_target_code.m,
-which is part of the backend_libs.m package (see below).
-
-<p>
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> FRONT END </h3>
-<h4> 1. Parsing </h4>
-<h5> The parse_tree.m package </h5>
-
-<p>
-The first part of parsing is in the parse_tree.m package,
-which contains the modules listed below
-(except for the library/*.m modules,
-which are in the standard library).
-This part produces the parse_tree.m data structure,
-which is intended to match up as closely as possible
-with the source code, so that it is suitable for tasks
-such as pretty-printing.
-
-<p>
-
-<ul>
-
-<li> <p> lexical analysis (library/lexer.m)
-
-<li> <p> stage 1 parsing - convert strings to terms. <p>
-
-	library/parser.m contains the code to do this, while
-	library/term.m and library/varset.m contain the term and varset
-	data structures that result, and predicates for manipulating them.
-
-<li> <p> stage 2 parsing - convert terms to `items'
-         (declarations, clauses, etc.)
-
-	<p>
-	The result of this stage is a parse tree that has a one-to-one
-	correspondence with the source code.  The parse tree data structure
-	definition is in prog_data.m and prog_item.m, while the code to create
-	it is in prog_io.m and its submodules prog_io_dcg.m (which handles
-	clauses using Definite Clause Grammar notation), prog_io_goal.m (which
-	handles goals), prog_io_pragma.m (which handles pragma declarations),
-	prog_io_typeclass.m (which handles typeclass and instance
-	declarations), prog_io_type_defn.m (which handles type definitions),
-	prog_io_mutable.m (which handles initialize, finalize
-	and mutable declarations), prog_io_sym_name.m (which handles parsing
-	symbol names and specifiers) and prog_io_util.m (which defines
-	types and predicates needed by the other prog_io*.m modules).
-	builtin_lib_types.m contains definitions about types, type constructors
-	and function symbols that the Mercury implementation needs to know
-	about.
-
-	<p>
-
-	The modules prog_out.m and mercury_to_mercury.m contain predicates
-	for printing the parse tree.
-	prog_util.m contains some utility predicates
-	for manipulating the parse tree,
-	prog_mode contains utility predicates
-	for manipulating insts and modes,
-	prog_type contains utility predicates
-	for manipulating types,
-	prog_type_subst contains predicates
-	for performing substitutions on types,
-	prog_foreign contains utility predicates
-	for manipulating foreign code,
-	prog_mutable contains utility predicates
-	for manipulating mutable variables,
-	prog_event contains utility predicates for working with events,
-	while error_util.m contains predicates
-	for printing nicely formatted error messages.
-
-<li><p> imports and exports are handled at this point (modules.m)
-
-	<p>
-	read_module.m has code to read in modules in the form of .m,
-	.int, .opt etc files.
-
-	<p>
-	modules.m has the code to write out `.int', `.int2', `.int3',
-	`.d' and `.dep' files.
-
-	<p>
-	write_deps_file.m writes out Makefile fragments.
-
-	<p>
-	file_names.m does conversions between module names and file names.
-	It uses java_names.m, which contains predicates for dealing with names
-	of things in Java.
-
-	<p>
-	module_cmds.m handles the commands for manipulating interface files of
-	various kinds.
-
-	<p>
-	module_imports.m contains the module_imports type and its access
-	predicates, and the predicates that compute various sorts of
-	direct dependencies (those caused by imports) between modules.
-
-	<p>
-	deps_map.m contains the data structure for recording indirect
-	dependencies between modules, and the predicates for creating it.
-
-	<p>
-	source_file_map.m contains code to read, write and search
-	the mapping between module names and file names.
-
-<li><p> module qualification of types, insts and modes
-
-	<p>
-	module_qual.m -  <br>
-	Adds module qualifiers to all types, insts and modes,
-	checking that a given type, inst or mode exists and that
-	there is only one possible match.  This is done here because
-	it must be done before the `.int' and `.int2' interface files
-	are written. This also checks whether imports are really needed
-	in the interface.
-
-	<p>
- 	Notes on module qualification:
-	<ul>
-	<li> all types, typeclasses, insts and modes occurring in pred, func,
-	  type, typeclass and mode declarations are module qualified by
-	  module_qual.m.
- 	<li> all types, insts and modes occurring in lambda expressions,
- 	  explicit type qualifications, and clause mode annotations
-	  are module qualified in make_hlds.m.
- 	<li> constructors occurring in predicate and function mode declarations
- 	  are module qualified during type checking.
- 	<li> predicate and function calls and constructors within goals
- 	  are module qualified during mode analysis.
-	</ul>
-
-
-<li><p> reading and writing of optimization interfaces
-     (intermod.m and trans_opt.m -- these are part of the
-     hlds.m package, not the parse_tree.m package).
-
-	<p>
-	<module>.opt contains clauses for exported preds suitable for
-	inlining or higher-order specialization. The `.opt' file for the
-	current module is written after type-checking. `.opt' files
-	for imported modules are read here.
-	<module>.trans_opt contains termination analysis information
-	for exported preds (eventually it ought to contain other
-	"transitive" information too, e.g. for optimization, but
-	currently it is only used for termination analysis).
-	`.trans_opt' files for imported modules are read here.
-	The `.trans_opt' file for the current module is written
-	after the end of semantic analysis.
-
-<li><p> expansion of equivalence types (equiv_type.m)
-
-	<p>
-	`with_type` and `with_inst` annotations on predicate
-	and function type and mode declarations are also expanded.
-
-	<p>
-	Expansion of equivalence types is really part of type-checking,
-	but is done on the item_list rather than on the HLDS because it
-	turned out to be much easier to implement that way.
-</ul>
-
-<p>
-That's all the modules in the parse_tree.m package.
-
-<h5> The hlds.m package </h5>
-<p>
-Once the stages listed above are complete, we then convert from the parse_tree
-data structure to a simplified data structure, which no longer attempts
-to maintain a one-to-one correspondence with the source code.
-This simplified data structure is called the High Level Data Structure (HLDS),
-which is defined in the hlds.m package.
-
-<p>
-The last stage of parsing is this conversion to HLDS,
-which is done mostly by the following submodules
-of the make_hlds module in the hlds package.
-<dl>
-
-<dt>
-make_hlds_passes.m
-<dd>
-This submodule calls the others to perform the conversion, in several passes.
-(We cannot do everything in one pass;
-for example, we need to have seen a predicate's declaration
-before we can process its clauses.)
-
-<dt>
-superhomogeneous.m
-<dd>
-Performs the conversion of unifications into superhomogeneous form
-(illustrated in the sketch just after this list).
-
-<dt>
-state_var.m
-<dd>
-Expands away state variable syntax.
-
-<dt>
-field_access.m
-<dd>
-Expands away field access syntax.
-
-<dt>
-goal_expr_to_goal.m
-<dd>
-Converts clauses from parse_tree format to hlds format.
-Eliminates universal quantification
-(using `all [Vs] G' ===> `not (some [Vs] (not G))')
-and implication (using `A => B' ===> `not(A, not B)').
-
-<dt>
-add_clause.m
-<dd>
-Oversees the conversion of clauses from parse_tree format to hlds format.
-Handles their addition to procedures,
-which is nontrivial in the presence of mode-specific clauses.
-
-<dt>
-add_pred.m
-<dd>
-Handles type and mode declarations for predicates.
-
-<dt>
-add_type.m
-<dd>
-Handles the declarations of types.
-
-<dt>
-add_mode.m
-<dd>
-Handles the declarations of insts and modes,
-including checking for circular insts and modes.
-
-<dt>
-add_special_pred.m
-<dd>
-Adds unify, compare, and (if needed) index and init predicates
-to the HLDS as necessary.
-
-<dt>
-add_solver.m
-<dd>
-Adds the casting predicates needed by solver types to the HLDS as necessary.
-
-<dt>
-add_class.m
-<dd>
-Handles typeclass and instance declarations.
-
-<dt>
-qual_info.m
-<dd>
-Handles the abstract data types used for module qualification.
-
-<dt>
-make_hlds_warn.m
-<dd>
-Looks for constructs that merit warnings,
-such as singleton variables and variables with overlapping scopes.
-
-<dt>
-make_hlds_error.m
-<dd>
-Error messages used by more than one submodule of make_hlds.m.
-
-<dt>
-add_pragma.m
-<dd>
-Adds most kinds of pragmas to the HLDS,
-including import/export pragmas, tabling pragmas and foreign code.
-
-</dl>
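-
-<p>
-As an illustration of superhomogeneous form
-(a sketch only, not actual compiler output),
-a compound unification such as
-
-<pre>
-
-	X = f(g(A), B)
-
-</pre>
-
-is flattened into a conjunction of simple unifications,
-each containing at most one function symbol,
-with a fresh variable introduced for each subterm:
-
-<pre>
-
-	V_1 = g(A),
-	X = f(V_1, B)
-
-</pre>
-
-<p>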
-
-Fact table pragmas are handled by fact_table.m
-(which is part of the ll_backend.m package).
-That module also reads the facts from the declared file
-and compiles them into a separate C file
-used by the foreign_proc body of the relevant predicate.
-
-The conversion of the item list to HLDS also involves make_tags.m,
-which chooses the data representation for each discriminated union type
-by assigning tags to each functor.
-
-<p>
-The HLDS data structure itself is spread over the following modules:
-
-<ol>
-<li>
-hlds_args.m defines the parts of the HLDS concerned with predicate
-and function argument lists.
-<li>
-hlds_data.m defines the parts of the HLDS concerned with
-function symbols, types, insts, modes and determinisms;
-<li>
-hlds_goal.m defines the part of the HLDS concerned with the
-structure of goals, including the annotations on goals.
-<li>
-hlds_clauses.m defines the part of the HLDS concerning clauses.
-<li>
-hlds_rtti.m defines the part of the HLDS concerning RTTI.
-<li>
-const_struct.m defines the part of the HLDS concerning constant structures.
-<li>
-hlds_pred.m defines the part of the HLDS concerning predicates and procedures;
-<li>
-pred_table.m defines the tables that index predicates and functions
-on various combinations of (qualified and unqualified) names and arity.
-<li>
-hlds_module.m defines the top-level parts of the HLDS,
-including the type module_info.
-</ol>
-
-<p>
-The module hlds_out.m contains predicates to dump the HLDS to a file.
-These predicates print all the information the compiler has
-about each part of the HLDS.
-The module hlds_desc.m, by contrast, contains predicates
-that describe some parts of the HLDS (e.g. goals) with brief strings,
-suitable for use in progress messages used for debugging.
-
-<p>
-The hlds.m package also contains some utility modules that contain
-various library routines which are used by other modules that manipulate
-the HLDS:
-
-<dl>
-<dt> mark_tail_calls.m
-<dd> Marks directly tail recursive calls as such,
-and marks procedures containing directly tail recursive calls as such.
-
-<dt> hlds_code_util.m
-<dd> Utility routines for use during HLDS generation.
-
-<dt> goal_form.m
-<dd> Contains predicates for determining whether
-HLDS goals match various criteria.
-
-<dt> goal_util.m
-<dd> Contains various miscellaneous utility predicates for manipulating
-HLDS goals, e.g. for renaming variables.
-
-<dt> passes_aux.m
-<dd> Contains code to write progress messages, and higher-order code
-to traverse all the predicates defined in the current module
-and do something with each one.
-
-<dt> hlds_error_util.m:
-<dd> Utility routines for printing nicely formatted error messages
-for symptoms involving HLDS data structures.
-For symptoms involving only structures defined in prog_data,
-use parse_tree.error_util.
-
-<dt> code_model.m:
-<dd> Defines a type for classifying determinisms
-in ways useful to the various backends,
-and utility predicates on that type.
-
-<dt> arg_info.m:
-<dd> Utility routines that the various backends use
-to analyze procedures' argument lists
-and decide on parameter passing conventions.
-
-<dt> hhf.m:
-<dd> Facilities for translating the bodies of predicates
-to hyperhomogeneous form, for constraint based mode analysis.
-
-<dt> inst_graph.m:
-<dd> Defines the inst_graph data type,
-which describes the structures of insts for constraint based mode analysis,
-as well as predicates operating on that type.
-
-<dt> from_ground_term_util.m
-<dd> Contains types and predicates for operating on
-from_ground_term scopes and their contents.
-</dl>
-
-<h4> 2. Semantic analysis and error checking </h4>
-
-<p>
-This is the check_hlds.m package,
-with support from the mode_robdd.m package for constraint based mode analysis.
-
-<p>
-
-Any pass which can report errors or warnings must be part of this stage,
-so that the compiler does the right thing for options such as
-`--halt-at-warn' (which turns warnings into errors) and
-`--error-check-only' (which makes the compiler only compile up to this stage).
-
-<dl>
-
-<dt> implicit quantification
-
-	<dd>
-	quantification.m (XXX which for some reason is part of the hlds.m
-	package rather than the check_hlds.m package)
-	handles implicit quantification and computes
-	the set of non-local variables for each sub-goal.
-	It also expands away bi-implication (unlike the expansion
-	of implication and universal quantification, this expansion
-	cannot be done until after quantification).
-	This pass is called from the `transform' predicate in make_hlds.m.
-	<p>
-
-<dt> checking typeclass instances (check_typeclass.m)
-	<dd>
-	check_typeclass.m both checks that instance declarations satisfy all
-	the appropriate superclass constraints
-	(including functional dependencies)
-	and performs a source-to-source transformation on the
-	methods from the instance declarations.
-	The transformed code is checked for type, mode, uniqueness, purity
-	and determinism correctness by the later passes, which has the effect
-	of checking the correctness of the instance methods themselves
-	(i.e. that the instance methods match those expected by the typeclass
-	declaration).
-	During the transformation,
-	pred_ids and proc_ids are assigned to the methods for each instance.
-
-	<p>
-	While checking that the superclasses of a class are satisfied
-	by the instance declaration, a set of constraint_proofs are built up
-	for the superclass constraints. These are used by polymorphism.m when
-	generating the base_typeclass_info for the instance.
-
-	<p>
-	This module also checks that there are no ambiguous pred/func
-	declarations (that is, it checks that all type variables in constraints
-	are determined by type variables in arguments),
-	checks that there are no cycles in the typeclass hierarchy,
-	and checks that each abstract instance has a corresponding
-	typeclass instance.
-	<p>
-
-<dt> check user defined insts for consistency with types
-	<dd>
-	inst_check.m checks that all user defined bound insts are consistent
-	with at least one type in scope
-	(i.e. that the set of function symbols
-	in the bound list for the inst are a subset of the allowed function
-	symbols for at least one type in scope).
-
-	<p>
-	A warning is issued if it finds any user defined bound insts not
-	consistent with any types in scope.
-	<p>
-
-<dt> improving the names of head variables
-	<dd>
-	headvar_names.m tries to replace names of the form HeadVar__n
-	with actual names given by the programmer.
-	<p>
-	For efficiency, this phase is not a standalone pass,
-	but is instead invoked by the typechecker.
-
-<dt> type checking
-
-	<dd>
-	<ul>
-	<li> typecheck.m handles type checking, overloading resolution &
-	  module name resolution, and almost fully qualifies all predicate
-	  and functor names.  It sets the map(var, type) field in the
-	  pred_info.  However, typecheck.m doesn't figure out the pred_id
-	  for function calls or calls to overloaded predicates; that can't
-	  be done in a single pass of typechecking, and so it is done
-	  later on (in post_typecheck.m, for both preds and function calls)
-	<li> typecheck_info.m defines the main data structures used by
-	  typechecking.
-	<li> typecheck_errors.m handles outputting of type errors.
-	<li> typeclasses.m checks typeclass constraints, and
-	  any redundant constraints that are eliminated are recorded (as
-	  constraint_proofs) in the pred_info for future reference.
-	<li> type_util.m contains utility predicates dealing with types
-	  that are used in a variety of different places within the compiler
-	<li> post_typecheck.m may also be considered to logically be a part
-	  of typechecking, but it is actually called from purity
-	  analysis (see below).  It contains the stuff related to
-	  type checking that can't be done in the main type checking pass.
-	  It also removes assertions from further processing.
-	  post_typecheck.m reports errors for unbound type and inst variables,
-	  for unsatisfied type class constraints and for indistinguishable
-	  predicate or function modes.
-	</ul>
-	<p>
-
-<dt> assertions
-
-	<dd>
-	assertion.m (XXX in the hlds.m package)
-	is the abstract interface to the assertion table.
-	Currently all the compiler does is type check the assertions and
-	record, for each predicate that is used in an assertion, which
-	assertion it is used in.  The setup of the assertion table occurs
-	in post_typecheck.finish_assertion.
-	<p>
-
-<dt> purity analysis
-
-	<dd>
-	purity.m is responsible for purity checking, as well as
-	defining the <CODE>purity</CODE> type and a few public
-	operations on it.  It also calls post_typecheck.m to
-	complete the handling of predicate
-	overloading for cases which typecheck.m is unable to handle,
-	and to check for unbound type variables.
-	Elimination of double negation is also done here; that needs to
-	be done after quantification analysis and before mode analysis.
-	Calls to `private_builtin.unsafe_type_cast/2' are converted
-	into `generic_call(unsafe_cast, ...)' goals here.
-	<p>
-
-<dt> implementation-defined literals
-
-	<dd>
-	implementation_defined_literals.m replaces unifications
-	of the form <CODE>Var = $name</CODE> by unifications to string
-	or integer constants.
-	<p>
-
-<dt> polymorphism transformation
-
-	<dd>
-	polymorphism.m handles introduction of type_info arguments for
-	polymorphic predicates and introduction of typeclass_info arguments
-	for typeclass-constrained predicates.
-	This phase needs to come before mode analysis so that mode analysis
-	can properly reorder code involving existential types.
-	(It also needs to come before simplification so that simplify.m's
-	optimization of goals with no output variables doesn't do the
-	wrong thing for goals whose only output is the type_info for
-	an existentially quantified type parameter.)
-	<p>
-	This phase also
-	converts higher-order predicate terms into lambda expressions,
-	and copies the clauses to the proc_infos in preparation for
-	mode analysis.
-	<p>
-	The polymorphism.m module also exports some utility routines that
-	are used by other modules.  These include some routines for generating
-	code to create type_infos, which are used by simplify.m and magic.m
-	when those modules introduce new calls to polymorphic procedures.
-	<p>
-	When it has finished, polymorphism.m calls clause_to_proc.m to
-	make duplicate copies of the clauses for each different mode of
-	a predicate; all later stages work on procedures, not predicates.
-	<p>
-
-<dt> mode analysis
-
-	<dd>
-	<ul>
-	<li> modes.m is the top analysis module.
-	  It checks that procedures are mode-correct.
-	<li> modecheck_goal.m does most of the work.
-	  It handles the tasks that are common to all kinds of goals,
-	  including annotating each goal with a delta-instmap
-	  that specifies the changes in instantiatedness of each
-	  variable over that goal, and does the analysis of several
-	  kinds of goals.
-	<li> modecheck_conj.m is the sub-module which analyses conjunctions.
-	  It reorders code as necessary.
-	<li> modecheck_unify.m is the sub-module which analyses
-	  unification goals.
-	  It also module qualifies data constructors.
-	<li> modecheck_call.m is the sub-module which analyses calls.
-
-		<p>
-
-	  The following sub-modules are used:
-		<dl>
-		<dt> mode_info.m
-			<dd>
-			The main data structure for mode analysis.
-		<dt> delay_info.m
-			<dd>
-			A sub-component of the mode_info data
-			structure used for storing the information
-			for scheduling: which goals are currently
-			delayed, what variables they are delayed on, etc.
-		<dt> modecheck_util.m
-			<dd> Utility predicates useful during mode analysis.
-		<dt> instmap.m (XXX in the hlds.m package)
-			<dd>
-			Defines the instmap and instmap_delta ADTs
-			which store information on what instantiations
-			a set of variables may be bound to.
-		<dt> inst_match.m
-			<dd>
-			This contains the code for examining insts and
-			checking whether they match.
-		<dt> inst_util.m
-			<dd>
-			This contains the code for creating new insts from
-			old ones: unifying them, merging them and so on.
-		<dt> mode_errors.m
-			<dd>
-			This module contains all the code to
-			generate error messages for mode errors
-		</dl>
-	<li> mode_util.m contains miscellaneous useful predicates dealing
-	  with modes (many of these are used by lots of later stages
-	  of the compiler)
-	<li> mode_debug.m contains utility code for tracing the actions
-	  of the mode checker.
-	<li> delay_partial_inst.m adds a post-processing pass on mode-correct
-	  procedures to avoid creating intermediate, partially instantiated
-	  data structures.
-	</ul>
-	<p>
-
-<dt> constraint based mode analysis
-
-	<dd> This is an experimental alternative
-	to the usual mode analysis algorithm.
-	It works by building a system of boolean constraints
-	about where (parts of) variables can be bound,
-	and then solving those constraints.
-
-	<ul>
-	<li> mode_constraints.m is the module that finds the constraints
-	and adds them to the constraint store.
-	<li> mode_ordering.m is the module that uses solutions of the
-	constraint system to find an ordering for the goals in conjunctions.
-	<li> mode_constraint_robdd.m is the interface to the modules
-	that perform constraint solving using reduced ordered binary decision
-	diagrams (robdds).
-	<li> We have several implementations of solvers using robdds.
-	Each solver is in a module named mode_robdd.X.m, and they all belong
-	to the top-level mode_robdd.m.
-	</ul>
-	<p>
-
-<dt> constraint based mode analysis propagation solver
-
-	<dd> This is a new alternative
-	to the constraint based mode analysis algorithm.
-	It performs conjunct reordering for Mercury
-	programs with a limited syntax (it calls error if
-	it encounters higher order code or a parallel
-	conjunction, or if it is asked to infer modes).
-
-
-	<ul>
-	<li> prop_mode_constraints.m is the interface to the old
-	mode_constraints.m. It builds constraints for an SCC.
-	<li> build_mode_constraints.m is the module that traverses a predicate
-	to build constraints for it.
-	<li> abstract_mode_constraints.m describes data structures for the
-	constraints themselves.
-	<li> ordering_mode_constraints.m solves constraints to determine
-	the producing and consuming goals for program variables, and
-	performs conjunct reordering based on the result.
-	<li> mcsolver.m contains the constraint solver used by
-	ordering_mode_constraints.m.
-	</ul>
-	<p>
-
-<dt> indexing and determinism analysis
-
-	<dd>
-	<ul>
-	<li> switch_detection.m transforms into switches those disjunctions
-	  in which several disjuncts test the same variable against different
-	  function symbols.
-	<li> cse_detection.m looks for disjunctions in which each disjunct tests
-	  the same variable against the same function symbols, and hoists any
-	  such unifications out of the disjunction.
-	  If cse_detection.m modifies the code,
-	  it will re-run mode analysis and switch detection.
-	<li> det_analysis.m annotates each goal with its determinism;
-	  it inserts cuts in the form of "some" goals wherever the determinisms
-	  and delta instantiations of the goals involved make it necessary.
-	  Any errors found during determinism analysis are reported by
-	  det_report.m.
-	  det_util.m contains utility predicates used in several modules.
-	</ul>
-	<p>
-
-<dt> checking of unique modes (unique_modes.m)
-
-	<dd>
-	unique_modes.m checks that non-backtrackable unique modes were
-	not used in a context which might require backtracking.
-	Note that what unique_modes.m does is quite similar to
-	what modes.m does, and unique_modes calls lots of predicates
-	defined in modes.m to do it.
-	<p>
-
-<dt> stratification checking
-
-	<dd>
-	The module stratify.m implements the `--warn-non-stratification'
-	warning, which is an optional warning that checks for loops
-	through negation.
-	<p>
-
-<dt> try goal expansion
-
-	<dd>
-	try_expand.m expands `try' goals into calls to predicates in the
-	`exception' module instead.
-	<p>
-
-<dt> simplification (simplify.m)
-
-	<dd>
-	simplify.m finds and exploits opportunities for simplifying the
-	internal form of the program, both to optimize the code and to
-	massage the code into a form the code generator will accept.
-	It also warns the programmer about any constructs that are so simple
-	that they should not have been included in the program in the first
-	place.  (That's why this pass needs to be part of semantic analysis:
-	because it can report warnings.)
-	simplify.m converts complicated unifications into procedure calls.
-	simplify.m calls common.m which looks for (a) construction unifications
-	that construct a term that is the same as one that already exists,
-	or (b) repeated calls to a predicate with the same inputs, and replaces
-	them with assignment unifications.
-	simplify.m also attempts to partially evaluate calls to builtin
-	procedures if the inputs are all constants (this is const_prop.m
-	in the transform_hlds.m package).
-	simplify.m also calls format_call.m to look for
-	(possibly) incorrect uses of string.format and io.format.
-	<p>
-
-<dt> unused imports (unused_imports.m)
-
-	<dd>
-	unused_imports.m determines which imports of the module
-	are not required for the module to compile.  It also identifies
-	which imports of a module can be moved from the interface to the
-	implementation.
-	<p>
-
-<dt> xml documentation (xml_documentation.m)
-
-	<dd>
-	xml_documentation.m outputs an XML representation of all the
-	declarations in the module.  This XML representation is designed
-	to be transformed via XSL into more human readable documentation.
-	<p>
-
-</dl>
-
-<h4> 3. High-level transformations </h4>
-
-<p>
-This is the transform_hlds.m package.
-
-<p>
-
-The first pass of this stage does tabling transformations (table_gen.m).
-This involves the insertion of several calls to tabling predicates
-defined in mercury_builtin.m and the addition of some scaffolding structure.
-Note that this pass can change the evaluation methods of some procedures to
-eval_table_io, so it should come before any passes that require definitive
-evaluation methods (e.g. inlining).
-
-<p>
-
-The next pass of this stage is a code simplification, namely
-removal of lambda expressions (lambda.m):
-
-<ul>
-<li>
-	lambda.m converts lambda expressions into higher-order predicate
-        terms referring to freshly introduced separate predicates.
-	This pass needs to come after unique_modes.m to ensure that
-	the modes we give to the introduced predicates are correct.
-	It also needs to come after polymorphism.m since polymorphism.m
-	doesn't handle higher-order predicate constants.
-</ul>
-
-(Is there any good reason why lambda.m comes after table_gen.m?)
-
-<p>
-
-The next pass also simplifies the HLDS by expanding out the atomic goals
-implementing Software Transactional Memory (stm_expand.m).
-
-<p>
-
-Expansion of equivalence types (equiv_type_hlds.m)
-
-<ul>
-<li>
-	This pass expands equivalences which are not meant to
-	be visible to the user of imported modules.  This
-	is necessary for the IL back-end and in some cases
-	for `:- pragma export' involving foreign types on
-	the C back-end.
-
-	<p>
-
-	It's also needed by the MLDS->C back-end, for
-	--high-level-data, and for cases involving abstract
-	equivalence types which are defined as "float".
-</ul>
-
-<p>
-
-Exception analysis. (exception_analysis.m)
-
-<ul>
-<li>
-	This pass annotates each module with information about whether
-	the procedures in the module may throw an exception or not.
-</ul>
-
-<p>
-
-The next pass is termination analysis. The various modules involved are:
-
-<ul>
-<li>
-termination.m is the control module. It sets the argument size and
-termination properties of builtin and compiler generated procedures,
-invokes term_pass1.m and term_pass2.m
-and writes .trans_opt files and error messages as appropriate.
-<li>
-term_pass1.m analyzes the argument size properties of user-defined procedures.
-<li>
-term_pass2.m analyzes the termination properties of user-defined procedures.
-<li>
-term_traversal.m contains code common to the two passes.
-<li>
-term_errors.m defines the various kinds of termination errors
-and prints the messages appropriate for each.
-<li>
-term_util.m defines the main types used in termination analysis
-and contains utility predicates.
-<li>
-post_term_analysis.m contains error checking routines and optimizations
-that depend upon the information obtained by termination analysis.
-</ul>
-
-<p>
-
-Trail usage analysis. (trailing_analysis.m)
-
-<ul>
-<li>
-	This pass annotates each module with information about whether
-	the procedures in the module modify the trail or not.  This
-	information can be used to avoid redundant trailing operations.
-</ul>
-
-<p>
-
-Minimal model tabling analysis. (tabling_analysis.m)
-
-<ul>
-<li>
-	This pass annotates each goal in a module with information about
-	whether the goal calls procedures that are evaluated using
-	minimal model tabling.  This information can be used to reduce
-	the overhead of minimal model tabling.
-
-</ul>
-
-<p>
-
-Most of the remaining HLDS-to-HLDS transformations are optimizations:
-
-<ul>
-<li> specialization of higher-order and polymorphic predicates where the
-  value of the higher-order/type_info/typeclass_info arguments are known
-  (higher_order.m)
-
-<li> attempt to introduce accumulators (accumulator.m).  This optimizes
-  procedures whose tail consists of independent associative computations
-  or independent chains of commutative computations into a tail
-  recursive form by the introduction of accumulators.  If lco is turned
-  on it can also transform some procedures so that only construction
-  unifications are after the recursive call.  This pass must come before
-  lco, unused_args (eliminating arguments makes it hard to relate the
-  code back to the assertion) and inlining (can make the associative
-  call disappear).
-  <p>
-  This pass makes use of the goal_store.m module, which is a dictionary-like
-  data structure for storing HLDS goals.
-
-<li> inlining (i.e. unfolding) of simple procedures (inlining.m)
-
-<li> loop_inv.m: loop invariant hoisting.  This transformation moves
-  computations within loops that are the same on every iteration to the outside
-  of the loop so that the invariant computations are only computed once.  The
-  transformation turns a single looping predicate containing invariant
-  computations into two: one that computes the invariants on the first
-  iteration and then loops by calling the second predicate with extra arguments
-  for the invariant values.  This pass should come after inlining, since
-  inlining can expose important opportunities for loop invariant hoisting.
-  Such opportunities might not be visible before inlining because only
-  *part* of the body of a called procedure is loop-invariant.
-
-<li> deforestation and partial evaluation (deforest.m). This optimizes
-  multiple traversals of data structures within a conjunction, and
-  avoids creating intermediate data structures. It also performs
-  loop unrolling where the clause used is known at compile time.
-  deforest.m makes use of the following sub-modules (`pd_' stands for
-  "partial deduction"):
-  <ul>
-  <li> constraint.m transforms goals so that goals which can fail are
-       executed earlier.
-  <li> pd_cost.m contains some predicates to estimate the improvement
-       caused by deforest.m.
-  <li> pd_debug.m produces debugging output.
-  <li> pd_info.m contains a state type for deforestation.
-  <li> pd_term.m contains predicates to check that the deforestation
-       algorithm terminates.
-  <li> pd_util.m contains various utility predicates.
-  </ul>
-
-<li> issue warnings about unused arguments from predicates, and create
-specialized versions without them (unused_args.m); type_infos are often unused.
-
-<li> delay_construct.m pushes construction unifications to the right in
-  semidet conjunctions, in an effort to reduce the probability that it will
-  need to be executed.
-
-<li> unneeded_code.m looks for goals whose results are either not needed
-  at all, or needed in some branches of computation but not others. Provided
-  that the goal in question satisfies some requirements (e.g. it is pure,
-  it cannot fail etc), it either deletes the goal or moves it to the
-  computation branches where its output is needed.
-
-<li> lco.m finds predicates whose implementations would benefit
-  from last call optimization modulo constructor application.
-
-<li> elimination of dead procedures (dead_proc_elim.m). Inlining, higher-order
-  specialization and the elimination of unused args can make procedures dead
-  even if the user doesn't, and automatically constructed unification and
-  comparison predicates are often dead as well.
-
-<li> tupling.m looks for predicates that pass around several arguments,
-  and modifies the code to pass around a single tuple of these arguments
-  instead if this looks like reducing the cost of parameter passing.
-
-<li> untupling.m does the opposite of tupling.m: it replaces tuple arguments
-  with their components. This can be useful both for finding out how much
-  tupling has already been done manually in the source code, and to break up
-  manual tupling in favor of possibly more profitable automatic tupling.
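-  <p>
-  As a sketch of the interface change involved in tupling and untupling
-  (hypothetical predicates; the real transformations also rewrite the
-  call sites and procedure bodies):
-  <pre>
-  % Before tupling: three accumulators are passed separately.
-  :- pred count(list(string)::in, int::in, int::in, int::in,
-      int::out, int::out, int::out) is det.
-
-  % After tupling (conceptually): the accumulators travel as one tuple,
-  % reducing the cost of parameter passing.  untupling.m performs the
-  % reverse rewrite.
-  :- pred count_tupled(list(string)::in,
-      {int, int, int}::in, {int, int, int}::out) is det.
-  </pre>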
-
-<li> dep_par_conj.m transforms parallel conjunctions to add the wait and signal
-  operations required by dependent AND parallelism. To maximize the amount of
-  parallelism available, it tries to push the signals as early as possible
-  in producers and the waits as late as possible in the consumers, creating
-  specialized versions of predicates as needed.
-
-<li> parallel_to_plain_conj.m transforms parallel conjunctions to plain
-  conjunctions, for use in grades that do not support AND-parallelism.
-
-<li> granularity.m tries to ensure that programs do not generate too much
-  parallelism. Its goal is to minimize parallelism's overhead while still
-  gaining all the parallelism the machine can actually exploit.
-
-<li> implicit_parallelism.m is a package whose task is to introduce parallelism
-  into sequential code automatically. Its submodules are
-	<ul>
-	<li> introduce_parallelism.m does the main task of the package.
-	<li> push_goals_together.m performs a transformation that allows
-	     introduce_parallelism.m to do a better job.
-	</ul>
-
-<li> float_regs.m wraps higher-order terms which use float registers
-  if passed in contexts where regular registers would be expected,
-  and vice versa.
-
-</ul>
-
-<p>
-
-The module transform.m contains code intended to be useful
-for high-level optimizations, but it is not yet used.
-
-<p>
-
-The last three HLDS-to-HLDS transformations implement
-term size profiling (size_prof.m and complexity.m) and
-deep profiling (deep_profiling.m, in the ll_backend.m package).
-Both passes insert into procedure bodies, among other things,
-calls to procedures (some of which are impure)
-that record profiling information.
-
-<h4> 4. Intermodule analysis framework </h4>
-
-<p>
-This is the analysis.m package.
-
-<p>
-
-The framework can be used by a few analyses in the transform_hlds.m package.
-It is documented in the analysis/README file.
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> a. LLDS BACK-END </h3>
-
-<p>
-This is the ll_backend.m package.
-
-<h4> 3a. LLDS-specific HLDS -> HLDS transformations </h4>
-
-Before LLDS code generation, there are a few more passes which
-annotate the HLDS with information used for LLDS code generation,
-or perform LLDS-specific transformations on the HLDS:
-
-	<dl>
-		<dt> reducing the number of variables that have to be
-			saved across procedure calls (saved_vars.m)
-			<dd>
-			We do this by putting the code that generates
-			the value of a variable just before the use of
-			that variable, duplicating the variable and the
-			code that produces it if necessary, provided
-			the cost of doing so is smaller than the cost
-			of saving and restoring the variable would be.
-
-		<dt> transforming procedure definitions to reduce the number
-			of variables that need their own stack slots
-			(stack_opt.m)
-			<dd>
-			The main algorithm in stack_opt.m figures out when
-			variable A can be reached from a cell pointed to by
-			variable B, so that storing variable B on the stack
-			obviates the need to store variable A on the stack
-			as well.
-			This algorithm relies on an implementation of
-			the maximal matching algorithm in matching.m.
-		<dt> migration of builtins following branched structures
-		     (follow_code.m)
-			<dd>
-			This transformation improves the effectiveness of
-			follow_vars.m (see below).
-		<dt> simplification again (simplify.m, in the check_hlds.m
-		     package)
-			<dd>
-			We run this pass a second time in case the intervening
-			transformations have created new opportunities for
-			simplification.  It needs to be run immediately
-			before code generation, because it enforces some
-			invariants that the LLDS code generator relies on.
-		<dt> annotation of goals with liveness information (liveness.m)
-			<dd>
-			This records the birth and death of each variable
-			in the HLDS goal_info.
-		<dt> allocation of stack slots
-			<dd>
-			This is done by stack_alloc.m, with the assistance of
-			the following modules:
-
-			<ul>
-			<li> live_vars.m works out which variables need
-			to be saved on the stack when.
-
-			<li> graph_colour.m (in the libs.m package)
-			contains the algorithm that
-			stack_alloc.m calls to convert sets of variables
-			that must be saved on the stack at the same time
-			to an assignment of a stack slot to each such variable.
-			</ul>
-		<dt> allocating the follow vars (follow_vars.m)
-			<dd>
-			Traverses backwards over the HLDS, annotating some
-			goals with information about what locations variables
-			will be needed in next.  This allows us to generate
-			more efficient code by putting variables in the right
-			spot directly.  This module is not called from
-			mercury_compile_llds_back_end.m; it is called from
-			store_alloc.m.
-		<dt> allocating the store map (store_alloc.m)
-			<dd>
-			Annotates each branched goal with variable location
-			information so that we can generate correct code
-			by putting variables in the same spot at the end
-			of each branch.
-		<dt> computing goal paths (goal_path.m
-		     in the check_hlds.m package)
-			<dd>
-			The goal path of a goal defines its position in
-			the procedure body. This transformation attaches
-			its goal path to every goal, for use by the debugger.
-	</dl>
-
-<h4> 4a. Code generation. </h4>
-<dl>
-<dt> code generation
-
-	<dd>
-	Code generation converts HLDS into LLDS.
-	For the LLDS back-end, this is also the point at which we
-	insert code to handle debugging and trailing, and to do
-	heap reclamation on failure.
-	The top level code generation module is proc_gen.m,
-	which looks after the generation of code for procedures
-	(including prologues and epilogues).
-	The predicate for generating code for arbitrary goals is in code_gen.m,
-	but that module handles only sequential conjunctions; it calls
-	other modules to handle other kinds of goals:
-
-		<ul>
-		<li> ite_gen.m (if-then-elses)
-		<li> call_gen.m (predicate calls and also calls to
-			out-of-line unification procedures)
-		<li> disj_gen.m (disjunctions)
-		<li> par_conj_gen.m (parallel conjunctions)
-		<li> unify_gen.m (unifications)
-		<li> switch_gen.m (switches), which has sub-modules
-			<ul>
-			<li> dense_switch.m
-			<li> lookup_switch.m
-			<li> string_switch.m
-			<li> tag_switch.m
-			<li> switch_case.m
-			<li> switch_util.m -- this is in the backend_libs.m
-			     package, since it is also used by MLDS back-end
-			</ul>
-		<li> commit_gen.m (commits)
-		<li> pragma_c_gen.m (embedded C code)
-		</ul>
-
-	<p>
-
-	The code generator also calls middle_rec.m to do middle recursion
-	optimization, which is implemented during code generation.
-
-	<p>
-
-	The code generation modules make use of
-		<dl>
-		<dt> code_info.m
-			<dd>
-			The main data structure for the code generator.
-		<dt> var_locn.m
-			<dd>
-			This defines the var_locn type, which is a
-			sub-component of the code_info data structure;
-			it keeps track of the values and locations of variables.
-			It implements eager code generation.
-		<dt> exprn_aux.m
-			<dd>
-			Various utility predicates.
-		<dt> code_util.m
-			<dd>
-			Some miscellaneous preds used for code generation.
-		<dt> lookup_util.m
-			<dd>
-			Some miscellaneous preds used for lookup switch
-			(and lookup disjunction) generation.
-		<dt> continuation_info.m
-			<dd>
-			For accurate garbage collection, collects
-			information about each live value after calls,
-			and saves information about procedures.
-		<dt> trace_gen.m
-			<dd>
-			Inserts calls to the runtime debugger.
-		<dt> trace_params.m (in the libs.m package, since it
-		     is considered part of option handling)
-			<dd>
-			Holds the parameter settings controlling
-			the handling of execution tracing.
-		</dl>
-
-<dt> code generation for `pragma export' declarations (export.m)
-<dd> This is handled separately from the other parts of code generation.
-     mercury_compile*.m calls `export.produce_header_file' to produce
-     C code fragments which declare/define the C functions which are the
-     interface stubs for procedures exported to C.
-
-<dt> generation of constants for RTTI data structures
-<dd> This could also be considered a part of code generation,
-     but for the LLDS back-end this is currently done as part
-     of the output phase (see below).
-
-</dl>
-
-<p>
-
-The result of code generation is the Low Level Data Structure (llds.m),
-which may also contain some data structures whose types are defined in rtti.m.
-The code for each procedure is generated as a tree of code fragments
-which is then flattened.
-
-<h4> 5a. Low-level optimization (LLDS). </h4>
-
-<p>
-
-Most of the various LLDS-to-LLDS optimizations are invoked from optimize.m.
-They are:
-
-<ul>
-<li> optimization of jumps to jumps (jumpopt.m)
-
-<li> elimination of duplicate code sequences within procedures (dupelim.m)
-
-<li> elimination of duplicate procedure bodies (dupproc.m,
-invoked directly from mercury_compile_llds_back_end.m)
-
-<li> optimization of stack frame allocation/deallocation (frameopt.m)
-
-<li> filling branch delay slots (delay_slot.m)
-
-<li> dead code and dead label removal (labelopt.m)
-
-<li> peephole optimization (peephole.m)
-
-<li> introduction of local C variables (use_local_vars.m)
-
-<li> removal of redundant assignments, i.e. assignments that assign a value
-that the target location already holds (reassign.m)
-
-</ul>
-
-In addition, stdlabel.m performs standardization of labels.
-This is not an optimization itself,
-but it allows other optimizations to be evaluated more easily.
-
-<p>
-
-The module opt_debug.m contains utility routines used for debugging
-these LLDS-to-LLDS optimizations.
-
-<p>
-
-Several of these optimizations (frameopt and use_local_vars) also
-use livemap.m, a module that finds the set of locations live at each label.
-
-<p>
-
-The use_local_vars transformation also introduces
-references to temporary variables in extended basic blocks
-in the LLDS representation of the C code.
-The transformation to insert the block scopes
-and declare the temporary variables is performed by wrap_blocks.m.
-
-<p>
-
-Depending on which optimization flags are enabled,
-optimize.m may invoke many of these passes multiple times.
-
-<p>
-
-Some of the low-level optimization passes use basic_block.m,
-which defines predicates for converting sequences of instructions to
-basic block format and back, as well as opt_util.m, which contains
-miscellaneous predicates for LLDS-to-LLDS optimization.
-
-
-<h4> 6a. Output C code </h4>
-
-<ul>
-<li> type_ctor_info.m
-  (in the backend_libs.m package, since it is shared with the MLDS back-end)
-  generates the type_ctor_gen_info structures that list
-  items of information (including unification, index and compare predicates)
-  associated with each declared type constructor that go into the static
-  type_ctor_info data structure. If the type_ctor_gen_info structure is not
-  eliminated as inaccessible, this module adds the corresponding type_ctor_info
-  structure to the RTTI data structures defined in rtti.m,
-  which are part of the LLDS.
-
-<li> base_typeclass_info.m
-  (in the backend_libs.m package, since it is shared with the MLDS back-end)
-  generates the base_typeclass_info structures that
-  list the methods of a class for each instance declaration. These are added to
-  the RTTI data structures, which are part of the LLDS.
-
-<li> stack_layout.m generates the stack_layout structures for
-  accurate garbage collection. Tables are created from the data
-  collected in continuation_info.m.
-
-  Stack_layout.m uses prog_rep.m to generate bytecode representations
-  of procedure bodies for use by the declarative debugger.
-
-<li> Type_ctor_info structures and stack_layout structures both contain
-  pseudo_type_infos, which are type_infos with holes for type variables;
-  these are generated by pseudo_type_info.m
-  (in the backend_libs.m package, since it is shared with the MLDS back-end).
-
-<li> llds_common.m extracts static terms from the main body of the LLDS, and
-  puts them at the front. If a static term originally appeared several times,
-  it will now appear as a single static term with multiple references to it.
-  [XXX FIXME this module has now been replaced by global_data.m]
-
-<li> transform_llds.m is responsible for doing any source-to-source
-     transformations on the LLDS which are required to make the C output
-     acceptable to various C compilers.  Currently, computed gotos can have
-     their maximum size limited to avoid a fixed limit in lcc.
-
-<li> Final generation of C code is done in llds_out.m, which subcontracts the
-     output of RTTI structures to rtti_out.m and of other static
-     compiler-generated data structures (such as those used by the debugger,
-     the deep profiler, and in the future by the garbage collector)
-     to layout_out.m.
-</ul>
-
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> b. MLDS BACK-END </h3>
-
-<p>
-
-This is the ml_backend.m package.
-
-<p>
-
-The original LLDS code generator generates very low-level code,
-since the LLDS was designed to map easily to RISC architectures.
-We have developed a new back-end that generates much higher-level
-code, suitable for generating Java, high-level C, etc.
-This back-end uses the Medium Level Data Structure (mlds.m) as its
-intermediate representation.
-
-<h4> 3b. pre-passes to annotate/transform the HLDS </h4>
-
-<p>
-Before code generation there is a pass which annotates the HLDS with
-information used for code generation:
-
-<ul>
-<li> mark_static_terms.m (in the hlds.m package) marks
-     construction unifications which can be implemented using static constants
-     rather than heap allocation.
-</ul>
-
-<p>
-For the MLDS back-end, we've tried to keep the code generator simple.
-So we prefer to do things as HLDS to HLDS transformations where possible,
-rather than complicating the HLDS to MLDS code generator.
-Thus we have a pass which transforms the HLDS to handle trailing:
-
-<ul>
-<li> add_trail_ops.m inserts code to manipulate the trail,
-     in particular ensuring that we apply the appropriate
-     trail operations before each choice point, when execution
-     resumes after backtracking, and whenever we do a commit.
-     The trail operations are represented as (and implemented as)
-     calls to impure procedures defined in library/private_builtin.m.
-<li> add_heap_ops.m is very similar to add_trail_ops.m;
-     it inserts code to do heap reclamation on backtracking.
-</ul>
-
-<h4> 4b. MLDS code generation </h4>
-<ul>
-<li> ml_proc_gen.m is the top module of the package that converts HLDS code
-     to MLDS. Its main submodule is ml_code_gen.m, which handles the tasks
-     common to all kinds of goals, as well as the tasks specific to some
-     goals (conjunctions, if-then-elses, negations). For other kinds of goals,
-     ml_code_gen.m invokes some other submodules:
-	<ul>
-	<li> ml_unify_gen.m
-	<li> ml_closure_gen.m
-	<li> ml_call_gen.m
-	<li> ml_foreign_proc_gen.m
-	<li> ml_commit_gen.m
-	<li> ml_disj_gen.m
-	<li> ml_switch_gen.m, which calls upon:
-		<ul>
-		<li> ml_lookup_switch.m
-		<li> ml_string_switch.m
-		<li> ml_tag_switch.m
-		<li> ml_simplify_switch.m
-		<li> switch_util.m (in the backend_libs.m package,
-		     since it is also used by LLDS back-end)
-		</ul>
-	</ul>
-     The main data structure used by the MLDS code generator is defined
-     in ml_gen_info.m, while global data structures (those created at
-     module scope) are handled in ml_global_data.m.
-     The module ml_accurate_gc.m handles provisions for accurate garbage
-     collection, while the modules ml_code_util.m, ml_target_util.m and
-     ml_util.m provide some general utility routines.
-<li> ml_type_gen.m converts HLDS types to MLDS.
-<li> type_ctor_info.m and base_typeclass_info.m generate
-     the RTTI data structures defined in rtti.m and pseudo_type_info.m
-     (those four modules are in the backend_libs.m package, since they
-     are shared with the LLDS back-end)
-     and then rtti_to_mlds.m converts these to MLDS.
-</ul>
-
-<h4> 5b. MLDS transformations </h4>
-<ul>
-<li> ml_tailcall.m annotates the MLDS with information about tailcalls.
-     It also has a pass to implement the `--warn-non-tail-recursion' option.
-<li> ml_optimize.m does MLDS->MLDS optimizations
-<li> ml_elim_nested.m does two MLDS transformations that happen
-     to have a lot in common: (1) eliminating nested functions
-     and (2) adding code to handle accurate garbage collection.
-</ul>
-
-<h4> 6b. MLDS output </h4>
-
-<p>
-There are currently four backends that generate code from MLDS:
-one generates C/C++ code,
-one generates assembler (by interfacing with the GCC back-end),
-one generates Microsoft's Intermediate Language (MSIL or IL),
-and one generates Java.
-
-<ul>
-<li>mlds_to_c.m converts MLDS to C/C++ code.
-</ul>
-
-<p>
-
-The MLDS->asm backend is logically part of the MLDS back-ends,
-but it is in a module of its own (mlds_to_gcc.m), rather than being
-part of the ml_backend package, so that we can distribute a version
-of the Mercury compiler which does not include it.  There is a wrapper
-module called maybe_mlds_to_gcc.m which is generated at configuration time
-so that mlds_to_gcc.m will be linked in iff the GCC back-end is available.
-
-<p>
-
-The MLDS->IL backend is broken into several submodules.
-<ul>
-<li> mlds_to_ilasm.m converts MLDS to IL assembler and writes it to a .il file.
-<li> mlds_to_il.m converts MLDS to IL
-<li> ilds.m contains representations of IL
-<li> ilasm.m contains output routines for writing IL to assembler.
-<li> il_peephole.m performs peephole optimization on IL instructions.
-</ul>
-After the IL assembler code has been emitted, ILASM is invoked to turn the .il
-file into a .dll or .exe.
-
-<p>
-
-The MLDS->Java backend is broken into two submodules.
-<ul>
-<li> mlds_to_java.m converts MLDS to Java and writes it to a .java file.
-<li> java_util.m contains some utility routines.
-</ul>
-After the Java code has been emitted, a Java compiler (normally javac)
-is invoked to turn the .java file into a .class file containing Java bytecodes.
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> c. BYTECODE BACK-END </h3>
-
-<p>
-This is the bytecode_backend.m package.
-
-<p>
-
-The Mercury compiler can translate Mercury programs into bytecode for
-interpretation by a bytecode interpreter.  The intent of this is to
-achieve faster turn-around time during development.  However, the
-bytecode interpreter has not yet been written.
-
-<ul>
-<li> bytecode.m defines the internal representation of bytecodes, and contains
-  the predicates to emit them in two forms. The raw bytecode form is emitted
-  into <filename>.bytecode for interpretation, while a human-readable
-  form is emitted into <filename>.bytedebug for visual inspection.
-
-<li> bytecode_gen.m contains the predicates that translate HLDS into bytecode.
-
-<li> bytecode_data.m contains the predicates that translate ints, strings
-  and floats into bytecode.
-</ul>
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> d. ERLANG BACK-END </h3>
-
-<p>
-This is the erl_backend.m package.
-
-<p>
-
-The Mercury compiler can translate Mercury programs into Erlang.
-The intent of this is to take advantage of the features of the
-Erlang implementation (concurrency, fault tolerance, etc.)
-However, the backend is still incomplete.
-This back-end uses the Erlang Data Structure (elds.m) as its
-intermediate representation.
-
-<h4> 4d. ELDS code generation </h4>
-<ul>
-<li> erl_code_gen.m converts HLDS code to ELDS.
-	  The following sub-modules are used to handle different constructs:
-		<ul>
-		<li> erl_unify_gen.m
-		<li> erl_call_gen.m
-		</ul>
-	  The module erl_code_util.m provides utility routines for
-	  ELDS code generation.
-<li> erl_rtti.m converts RTTI data structures defined in rtti.m into
-     ELDS functions which return the same information when called.
-</ul>
-
-<h4> 6d. ELDS output </h4>
-
-<ul>
-<li>elds_to_erlang.m converts ELDS to Erlang code.
-</ul>
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> SMART RECOMPILATION </h3>
-
-<p>
-This is the recompilation.m package.
-
-<p>
-
-The Mercury compiler can record program dependency information
-to avoid unnecessary recompilations when an imported module's
-interface changes in a way which does not invalidate previously
-compiled code.
-
-<ul>
-<li> recompilation.m contains types used by the other smart
-  recompilation modules.
-
-<li> recompilation_version.m generates version numbers for program items
-  in interface files.
-
-<li> recompilation_usage.m works out which program items were used
-  during a compilation.
-
-<li> recompilation_check.m is called before recompiling a module.
-  It uses the information written by recompilation_version.m and
-  recompilation_usage.m to work out whether the recompilation is
-  actually needed.
-</ul>
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> MISCELLANEOUS </h3>
-
-
-The modules special_pred.m (in the hlds.m package) and unify_proc.m
-(in the check_hlds.m package) contain the code for handling the special
-compiler-generated predicates which are generated for
-each type: unify/2, compare/3, and index/1 (used in the
-implementation of compare/3).
-
-<p>
-The following module is part of the transform_hlds.m package.
-
-	<dl>
-	<dt> dependency_graph.m:
-		<dd>
-		This contains predicates to compute the call graph for a
-		module, and to print it out to a file.
-		(The call graph file is used by the profiler.)
-		The call graph may eventually also be used by det_analysis.m,
-		inlining.m, and other parts of the compiler which could benefit
-		from traversing the predicates in a module in a bottom-up or
-		top-down fashion with respect to the call graph.
-	</dl>
-
-<p>
-The following modules are part of the backend_libs.m package.
-
-	<dl>
-	<dt> arg_pack:
-		<dd>
-		This module defines utility routines to do with argument
-		packing.
-
-	<dt> builtin_ops:
-		<dd>
-		This module defines the types unary_op and binary_op
-		which are used by several of the different back-ends:
-		bytecode.m, llds.m, and mlds.m.
-
-	<dt> c_util:
-		<dd>
-		This module defines utility routines useful for generating
-		C code.  It is used by both llds_out.m and mlds_to_c.m.
-
-	<dt> name_mangle:
-		<dd>
-		This module defines utility routines useful for mangling
-		names to forms acceptable as identifiers in target languages.
-
-	<dt> compile_target_code.m
-		<dd>
-		Invoke C, C#, IL, Java, etc. compilers and linkers to compile
-		and link the generated code.
-
-	</dl>
-
-<p>
-The following modules are part of the libs.m package.
-
-	<dl>
-
-	<dt> file_util.m:
-		<dd>
-		Predicates to deal with files, such as searching for a file
-		in a list of directories.
-
-	<dt> process_util.m:
-		<dd>
-		Predicates to deal with process creation and signal handling.
-		This module is mainly used by make.m and its sub-modules.
-
-	<dt> timestamp.m
-		<dd>
-		Contains an ADT representing timestamps used by smart
-		recompilation and `mmc --make'.
-
-	<dt> graph_color.m
-		<dd>
-		Graph colouring. <br>
-		This is used by the LLDS back-end for register allocation
-
-	<dt> lp.m
-		<dd>
-		Implements the linear programming algorithm for optimizing
-		a set of linear constraints with respect to a linear
-		cost function.  This is used by the termination analyser.
-
-	<dt> lp_rational.m
-		<dd>
-		Implements the linear programming algorithm for optimizing
-		a set of linear constraints with respect to a linear
-		cost function, for rational numbers.
-		This is used by the termination analyser.
-
-	<dt> rat.m
-		<dd>
-		Implements rational numbers.
-
-	<dt> compiler_util.m:
-		<dd>
-		Generic utility predicates, mainly for error handling.
-	</dl>
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> CURRENTLY UNDOCUMENTED </h3>
-
-<ul>
-<li> mmc_analysis.m
-</ul>
-
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-<h3> CURRENTLY USELESS </h3>
-
-	<dl>
-	<dt> atsort.m (in the libs.m package)
-		<dd>
-		Approximate topological sort.
-		This was once used for traversing the call graph,
-		but nowadays we use relation.atsort from library/relation.m.
-
-	</dl>
-
-<hr>
-<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
-
-</body>
-</html>
-
diff --git a/development/developers/developer_intro.html b/development/developers/developer_intro.html
deleted file mode 100644
index 0adefbe..0000000
--- a/development/developers/developer_intro.html
+++ /dev/null
@@ -1,222 +0,0 @@
-<html>
-<head>
-
-<title>The Mercury Project: Developer Introduction </title>
-</head>
-<body bgcolor="#ABCDEF" text="#000000">
-
-<h2>An introduction to the Mercury source code and tools</h2>
-
-<p>
-
-The source code to Mercury is freely available and may be modified by
-anyone.  However, there's a bit of a difference between being legally
-allowed to modify the code, and actually being able to do it!  The
-Mercury system is quite large, and as the compiler for Mercury is
-written in Mercury itself, there are a few tricks worth learning if you
-are going to develop with Mercury.
-<p>
-This document aims to help developers get started with the Mercury
-development environment, by explaining some of the special tools that
-are available for developers. 
-<p>
-Other useful documents are in the 
-<a href="../developer.html">Developers Information</a> section
-of the web site.  In particular you may wish to see how to access the
-Mercury CVS repository and read about the design of the Mercury compiler. 
-<p>
-This document is a work-in-progress; if there is particular information
-you feel is useful, please let us know and we will write something about
-it.
-
-<h2>About grades</h2>
-
-The Mercury system uses the word "grade" to refer to a set of
-compilation options for the system.  Some of them are for benchmarking
-purposes, others enable debugging or profiling, and others enable
-research features.  Many grades are incompatible with each other.
-<p>
-Mixing and matching grades can be the cause of headaches.  It's a good
-idea to `mmake realclean' and build from scratch if you run into weird
-problems after changing grades.  The Mercury system makes a pretty good
-attempt to stop this kind of thing from resulting in a crashing program,
-but you will often get linker errors if you try to build different parts
-of the compiler in incompatible grades.
-
-<h2>Setting the installation path</h2>
-
-If you want to install your own compiler, you will probably want to keep
-it separate from a working stable compiler (or else you might make a mistake
-that makes it impossible to compile the compiler anymore!).
-<p>
-When you run configure, you can set an installation path.  For example:
-<pre>
-./configure --prefix /tmp/mercury/install
-</pre>
-will set the installation path to /tmp/mercury/install.  Make sure you
-set your PATH so that it includes the `bin' subdirectory of your install
-path -- in this example it would be /tmp/mercury/install/bin.  And be
-sure that this is earlier in your path than any other Mercury
-installation (for example, one in /usr/bin or /usr/local/bin).
-<p>
-See the files INSTALL and INSTALL_CVS for more information on
-installation of Mercury.
-
-<h2>Installing fewer grades</h2>
-
-If you make a lot of changes to the compiler, you will find it a bit time
-consuming to install the entire Mercury system to run a few tests.
-<p>
-The first thing to realize is that when you install the compiler, you
-don't have to install all the grades.  You can set the make variable
-LIBGRADES to set the list of "extra" grades to install.  If you set it
-to empty, it will install only the default grade (probably asm_fast.gc).
-<p>
-A good way to do this is to create (or modify an existing) Mmake.params
-file in the top-level of the mercury distribution (in the same directory
-as README and NEWS).  Mmake.params is used to set local workspace
-options and is very useful for overriding default mmake settings.
-Add the line
-<pre>
-LIBGRADES=
-</pre>
-and you won't have to wait for all those grades to be installed.
-You can also set this variable on a once-off basis on the command line.
-<pre>
-mmake install LIBGRADES=
-</pre>
-<p>	
-There are some good default settings for libgrades you can set at
-configuration time, for example
-<pre>
-./configure --disable-most-grades
-./configure --enable-libgrades=...
-</pre>
-Run configure with the --help option to see more options.
-
-<p>
-Again, the INSTALL file in the Mercury distribution has more detailed
-documentation on installing grades.
-
-<h2>Using the local build directory</h2>
-
-If you only need to run your version of mmc (and don't need mmake), you
-don't need to install at all.  There is a script in the tools directory
-of the Mercury distribution called `lmc'.  You can use this script just
-like mmc, but it will use the Mercury compiler, library, runtime, etc that
-you have in an uninstalled workspace.
-<p>
-You need to set the environment variable WORKSPACE to point to the
-workspace you are using.  The easiest way to do this is to create a
-small script which sets WORKSPACE and runs lmc.
-For example if you are using $HOME/mercury as your workspace:
-<pre>
-#!/bin/sh                                                                    
-WORKSPACE=$HOME/mercury
-export WORKSPACE                                   
-$WORKSPACE/tools/lmc "$@"       
-</pre>
-See the tools/lmc file for further documentation on this tool -- it can
-also run the compiler under gdb or compile programs suitable for C level
-debugging.
-
-<p>
-There is also a script in the tools directory called `lml',
-which is similar to `lmc' except that it runs `ml' rather than `mmc'.
-You can use these with mmake:
-<pre>
-mmake MC=lmc ML=lml ...
-</pre>
-However, this will still use the installed version of `mmake',
-`c2init'/`mkinit', `mgnuc', etc.  So it isn't entirely foolproof.
-If you've made changes to the scripts, it may be best to install
-rather than trying to use the local build directory.
-
-<h2>Bootchecking</h2>
-
-If you've made changes to the compiler or library that you think are
-working fine, you should make sure you haven't messed up some other
-part of the compiler.
-<p>
-The bootcheck script in the tools directory of the Mercury compiler
-is just what you need to do this.  It works in a number of
-<em>stages</em>, where each stage is the output of the compiler we built
-in the previous stage.
-<p>
-Stage 1 is to build a working Mercury compiler (just like typing mmake
-at the top level of the mercury distribution).  We build this compiler
-using a known, trusted, stable Mercury compiler.
-<p>
-Stage 2 uses the stage 1 Mercury compiler to build a stage 2 Mercury
-compiler.  This ensures that you can still build the compiler using your
-modifications.
-<p>
-Bootcheck then uses the stage 2 Mercury compiler to build the C files of
-another Mercury compiler, the stage 3 compiler, and compares them with
-the C files of the stage 2 compiler, which were built by the stage
-1 compiler.  If they differ, then the stage 2 compiler does not
-execute the same algorithm as the stage 1 compiler. Since the stage 1
-and 2 compilers were built from the same source, the difference must
-have been introduced by differences in the compilers used to compile
-that source. Since stage 1 was compiled with a trusted compiler,
-the compiler used to generate the stage 2 executable (i.e. the stage
-1 compiler) must be buggy. If this happens, the compiler doesn't
-"bootstrap" -- it cannot reliably compile itself.               
-<p>
-Finally, if you have checked out the "tests" module from CVS, the
-bootcheck will use the stage 2 compiler, library and runtime to run all
-the tests in the testing hierarchy.
-<p>
-Check out the tools/bootcheck script to see further documentation on how
-it works.  You can build only specific stages, just run the tests, omit
-building certain parts of the compiler, and much much more.
-<p>
-Bootchecking can take quite a while -- 1-3 hours is not uncommon.  It's
-a good idea to run the bootcheck in the background and log the results
-to a file.  For example:
-<pre>
-./tools/bootcheck > bootchecklog.Jan21 2>&1 &
-tail -f bootchecklog.Jan21
-</pre>
-
-There is also a script tools/submit_patch, which can be used for
-testing and/or committing patches.  It takes as input a file containing
-a CVS log message and a patch file.  It checks out the Mercury sources,
-applies the patch file, and then tests the patch by running a couple of
-bootchecks in different grades.  If you specified the `--commit' option,
-and the tests pass, it then goes ahead and commits the patch.
-
-<p>
-
-<h2>Debugging the declarative debugger</h2>
-
-The browser directory contains the source code for the declarative debugger
-as well as the features of the procedural debugger implemented in Mercury.
-<p>
-By default this directory is compiled with no tracing enabled, even
-when a .debug or .decldebug grade is specified.  This allows the declarative
-debugger to take advantage of optimisations such as tail recursion and reduces
-the size of the installed libraries.
-<p>
-In order to debug the code in the browser directory add the following line
-to your Mmake.browser.params file in the browser directory:
-<pre>
-EXTRA_MCFLAGS=--no-force-disable-tracing
-</pre>
-<p>
-The `dd_dd' command can then be used from mdb to start the declarative debugger
-with interactive debugging turned on.
-<p>
-Since tracing turns off the tail recursion optimisation, you may also need
-to increase the size of the stack by setting the --detstack-size runtime
-option:
-<pre>
-export MERCURY_OPTIONS="--detstack-size 8128"
-</pre>
-<hr>
-<p>
-Comments? See our <a href = "../../contact.html" >contact</a> page.<br>
-
-</body>
-</html>
-
diff --git a/development/developers/gc_and_c_code.html b/development/developers/gc_and_c_code.html
deleted file mode 100644
index 6227bf6..0000000
--- a/development/developers/gc_and_c_code.html
+++ /dev/null
@@ -1,75 +0,0 @@
-<html>
-<head>
-
-<title>
-	Information On LLDS Accurate Garbage Collection And C Code
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-When handwritten code is called from Mercury, the garbage collection 
-scheduler doesn't know anything about the code, so it cannot replace
-the succip on the stack (if there is one) with the collector's address.
-
-<p>
-
-If the handwritten code calls no other code, then this is fine, the
-scheduler knows it can replace the succip variable and when a
-proceed() occurs execution will return to mercury code which it 
-knows about.
-
-<p>
-
-If handwritten code calls other handwritten code, we have a problem, 
-as succip will be saved on the stack and we don't know where on
-the stack it is stored. So we use a global variable 'saved_succip' into
-which the succip is saved. Care must be taken to save saved_succip on the
-stack so it doesn't get clobbered. <br>
-So
-	<pre>
-	detstackvar(1) = (int) succip;
-	</pre>
-becomes
-	<pre>
-	detstackvar(1) = (int) saved_succip;
-	saved_succip = (int) succip;
-	</pre>
-
-and, when restoring, 
-	<pre>
-	succip = (int) detstackvar(1);
-	</pre>
-becomes
-	<pre>
-	succip = saved_succip;
-	saved_succip = detstackvar(1);
-	</pre>
-
-(With appropriate LVALUE_CASTs).
-
-<p>
-
-In this way, garbage collection always knows where the succip is stored 
-in handwritten code. 
-
-<p>
-
-The garbage collection code must check that the current execution is not 
-still in a handwritten predicate - if it is, it must re-schedule (essentially
-just the same as before).
-
-<p>
-
-
-<hr>
-
-Last update was $Date: 2003/11/05 08:42:10 $ by $Author: fjh $@cs.mu.oz.au. <br>
-</body>
-</html>
-
diff --git a/development/developers/glossary.html b/development/developers/glossary.html
deleted file mode 100644
index 0ba7186..0000000
--- a/development/developers/glossary.html
+++ /dev/null
@@ -1,138 +0,0 @@
-<html>
-<head>
-
-
-<title>
-	Glossary Of Terms Used In Mercury
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-<dl>
-
-<dt> assertion
-	<dd>
-	A particular form of promise which claims to the compiler 
-	that the specified goal will always hold. If useful, the 
-	compiler may use this information to perform optimisations.
-
-<dt> class context 
-	<dd>
-	The typeclass constraints on a predicate or function.
-
-<dt> codeinfo
-	<dd>
-	a structure used by codegen.m
-
-<dt> HLDS 
-	<dd>
-	The "High Level Data Structure".  See hlds.m.
-
-<dt> inst
-	<dd>
-	instantiatedness.  An inst holds three different sorts of
-      information.  It indicates whether a variable is free, partially
-      bound, or ground.  If a variable is bound, it may indicate
-      which functor(s) the variable can be bound to.  Also,
-      an inst records whether a value is unique, or whether
-      it may be aliased.
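-      <p>
-      For example -- shown purely for illustration; listskel is a
-      user-defined inst name, while di and uo are the standard unique
-      modes -- insts can describe both the shape of a term and its
-      uniqueness:
-      <pre>
-      % A list skeleton: the list structure is bound, but the
-      % elements may still be free.
-      :- inst listskel == bound([] ; [free | listskel]).
-
-      % The standard "destructive input" and "unique output" modes
-      % are defined in terms of unique insts.
-      :- mode di == unique >> clobbered.
-      :- mode uo == free >> unique.
-      </pre>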
-
-<dt> liveness
-	<dd>
-	this term is used to mean two quite different things!
-	<ol>
-	<li> There's a notion of liveness used in mode analysis:
-	a variable is live if either it or an alias might be
-	used later on in the computation.
-	<li> There's a different notion of liveness used for code generation:
-	a variable becomes live (is "born") when the register or stack
-	slot holding the variable first acquires a value, and dies when
-	that value will definitely not be needed again within this procedure.
-	This notion is low-level because it could depend on the low-level
-	representation details (in particular, `no_tag' representations
-	ought to affect liveness).
-	</ol>
-
-<dt> LLDS
-	<dd>
-	The "Low Level Data Structure".  See llds.m.
-
-<dt> mode 
-	<dd>
-	this has two meanings:
-	<ol>
-	<li> a mapping from one instantiatedness to another
-		(the mode of a single variable)
-	<li> a mapping from an initial instantiatedness of a predicate's
-		arguments to their final instantiatedness
-		(the mode of a predicate)
-	</ol>
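-	<p>
-	For example (a sketch: these are the standard definitions of the
-	in and out modes, and the usual modes of the library predicate
-	append):
-	<pre>
-	% Meaning (1): the mode of a single variable, mapping one
-	% instantiatedness to another.
-	:- mode in  == ground >> ground.
-	:- mode out == free >> ground.
-
-	% Meaning (2): the mode of a predicate, mapping the initial
-	% instantiatedness of its arguments to their final instantiatedness.
-	:- pred append(list(T), list(T), list(T)).
-	:- mode append(in, in, out) is det.
-	:- mode append(out, out, in) is multi.
-	</pre>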
-	
-<dt> moduleinfo 
-	<dd>
-	Another name for the HLDS.
-
-<dt> NYI 
-	<dd>
-	Not Yet Implemented.
-
-<dt> predinfo
-	<dd>
-	the structure in HLDS which contains information about
-	a predicate.
-
-<dt> proc (procedure)
-	<dd>
-	a particular mode of a predicate.
-
-<dt> procinfo 
-	<dd>
-	the structure in HLDS which contains
-	information about a procedure.
-
-<dt> promise
-	<dd>
-    	A declaration that specifies a law that holds for the 
-	predicates/functions in the declaration. Thus, examples of promises
-	are assertions and promise ex declarations.  More generally, the term 
-	promise is often used for a declaration where extra information is 
-	given to the compiler which it cannot check itself, for example in
-	purity pragmas.
-
-<dt> promise ex
-	<dd>
-	A shorthand for promise_exclusive, promise_exhaustive, and
-	promise_exclusive_exhaustive declarations. These declarations 
-	are used to tell the compiler determinism properties of a 
-	disjunction.
-
-<dt> RTTI
-	<dd>
-	The "RunTime Type Information". See rtti.m. A copy of a paper given
-	on this topic is available 
-	<a href="/web/20121002213751/http://www.cs.mu.oz.au/research/mercury/information/papers/rtti_ppdp.ps.gz">here</a> in zipped Postscript format.
-
-<dt> super-homogenous form (SHF)
-	<dd>
-	 a simplified, flattened form of goals, where
-	each unification is split into its component pieces; in particular,
-	the arguments of each predicate call and functor must be distinct
-	variables.
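-	<p>
-	For example (a sketch using made-up predicates p, q and the
-	function symbols f and g):
-	<pre>
-	% Original clause:
-	p(f(X), Y) :- q(g(X, Y), Y + 1).
-
-	% The same clause in super-homogeneous form: the head and call
-	% arguments are distinct variables, and every unification is a
-	% separate goal.
-	p(H1, H2) :-
-	    H1 = f(X),
-	    H2 = Y,
-	    V1 = g(X, Y),
-	    V2 = Y + 1,
-	    q(V1, V2).
-	</pre>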
-
-<dt> switch
-	<dd>
-	a disjunction which does a case analysis on the toplevel
-	functor of some variable.
-</dl>
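-	<p>
-	For example (a sketch with a made-up type and predicate), the
-	disjunction below is a switch on X, because each disjunct unifies
-	the bound variable X with a different top-level functor:
-	<pre>
-	:- type sign ---> negative ; zero ; positive.
-
-	:- pred sign_name(sign::in, string::out) is det.
-
-	sign_name(X, Name) :-
-	    (
-	        X = negative,
-	        Name = "minus"
-	    ;
-	        X = zero,
-	        Name = "zero"
-	    ;
-	        X = positive,
-	        Name = "plus"
-	    ).
-	</pre>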
-
-<hr>
-
-</body>
-</html>
-
diff --git a/development/developers/release_checklist.html b/development/developers/release_checklist.html
deleted file mode 100644
index 7655bf3..0000000
--- a/development/developers/release_checklist.html
+++ /dev/null
@@ -1,189 +0,0 @@
-<html>
-<head>
-
-
-<title>Release Checklist for the Mercury Project</title>
-</head>
-
-<body bgcolor="#ffffff" text="#000000">
-
-<hr>
-
-This file contains a checklist of the steps that must be
-taken when releasing a new version of Mercury.
-
-<hr>
-
-<ol>
-<li> Items for the next version (1.0) only:
-	<ol>
-	<li>
-	Update w3/include/globals.inc as explained in the XXX comment there.
-        Don't commit your changes to the main branch yet, because
-	otherwise it would be installed on the WWW pages overnight.
-	<li>
-	Make sure that the runtime headers contain no symbols (function names,
-	variable names, type names, struct/enum tags or macros) that do not
-	begin with MR_.
-	</ol>
-
-<li> Make sure configure.in is updated to check for new features.
-
-<li> Update the RELEASE_NOTES, NEWS, WORK_IN_PROGRESS, HISTORY,
-     LIMITATIONS and BUGS files, and the compiler/notes/todo.html file.
-     Don't forget to update the version number in RELEASE_NOTES for major
-     releases.
-     The HISTORY file should include the NEWS files from previous releases
-     (reordered if appropriate -- the HISTORY file is in chronological
-     order whereas the NEWS file is in reverse chronological order).
-
-<li> Update the WWW documentation in the `w3' directory.
-     Note that the sources for these HTML documents are in the files named
-     include/*.inc and *.php3.
-     <ul>
-     <li> Update the RELEASE_INFO file with the name and CVS tag
-          of the new release.
-
-     <li> For minor releases, update release.html with a new entry about
-	  this release (put it at the top of the page), and provide a
-	  new link to download the release. See old-release.html for
-	  examples.
-
-     <li> For major releases, you will need to create some new web pages:<br>
-          <dl>
-          <dt> release-VERSION.html
-          <dd> The release notes for this version.
-
-	  <dt> release-VERSION-bugs.html
-	  <dd> Any outstanding bugs for this release.
-	       This should be the same as the BUGS file.
-
-	  <dt> release-VERSION-contents.html
-	  <dd> The contents of this distribution.
-	       This should be the same as in the RELEASE_NOTES file.
-
-          </dl>
-	  You will need to add these new files to the list in the Makefile.
-	  You will also need to update release.html and
-	  current-release-bugs.html.
-	  Move the old information in release.html to old-release.html.
-	  Modify release.html to refer to the new html files you have
-	  created, and change the links to download the release. 
-
-     <li> Update the CURRENT_RELEASE and BETA_RELEASE variables in
-	  tools/generate_index_html so that the new release is listed
-	  first on the download page.
-
-     <li> Don't commit your changes to the main branch yet, because
-	  otherwise it would be installed on the WWW pages overnight.
-     </ul>
-
-<li> Use `cvs tag' or `cvs rtag' to tag all the files with a
-     `version-x_y_z' tag.  The cvs modules that need to be tagged
-     are `mercury', `clpr', `tests', and `mercury-gcc'.
-
-<li> Edit the tools/test_mercury script in
-     /home/mercury/public/test_mercury/scripts/mercury:
-     set the RELEASE_VERSION and CHECKOUT_OPTS variables
-     as explained in the comments there.
-
-<li> Run tools/run_all_tests_from_cron on earth.
-     (Or just wait 24 hours or so.) <p>
-
-     This should have the effect of checking out a fresh copy, and doing
-
-	<pre>
-	touch Mmake.params &&
-	autoconf &&
-	mercury_cv_low_tag_bits=2 \
-	mercury_cv_bits_per_word=32 \
-	mercury_cv_unboxed_floats=no \
-	sh configure --prefix=$INSTALL_DIR &&
-	mmake MMAKEFLAGS='EXTRA_MCFLAGS="-O5 --opt-space"' tar
-	</pre>
-	
-	<p>
-
-    If it passes all the tests, it should put the resulting tar file in
-    /home/mercury/public/test_mercury/test_dirs/earth/mercury-latest-stable
-    and ftp://ftp.mercury.cs.mu.oz.au/pub/mercury/beta-releases.
-
-<li>  Test it on lots of architectures. <br>
-
-	<p>
-    Make sure you test all the programs in the `samples' and `extras'
-    directories.
-
-<li>  Build binary distributions for those architectures.
-      This step is now automated as part of tools/test_mercury,
-      with the resulting binaries going in
-      /home/mercury/public/test_mercury/test_dirs/$HOST/mercury-latest-{un,}stable.
-
-<li>  Make sure to test the binary distributions!
-
-<li>  Move the gzipped tar files from the /pub/mercury/beta-releases directory
-      to the main /pub/mercury directory on the Mercury ftp site
-      ftp://ftp.mercury.cs.mu.oz.au/pub/mercury.
-      Copy the binary distributions to the same place.
-      <p>
-
-      For the Stonybrook mirror, email Konstantinos Sagonas 
-      (Kostis.Sagonas at cs.kuleuven.ac.be) to tell him to copy them to 
-      ftp://ftp.cs.sunysb.edu/pub/XSB/mercury. <p>
-      Unfortunately this mirror is not automated, so don't worry about it
-      except for major releases or important bug fixes. <p>
-
-      The mirror at ftp://ftp.csd.uu.se/pub/Mercury is also automated.
-      Sometimes the link to Sweden can cause delays.
-      The person to contact regarding this one is Thomas Lindgren 
-      (thomasl at csd.uu.se).
-
-<li> Prepare a new "mercury-VERSION.lsm" file for this Mercury release
-     (use the one already uploaded to
-     ftp://sunsite.unc.edu/pub/Linux/Incoming as a template). The
-     version number, date, file sizes, and file names need to be updated
-     for a new release.
-
-<li> Create new binary packages for Linux packaging systems.
-     The .spec file can be used to create .rpm packages.
-     The command <i>dpkg-buildpackage -rfakeroot</i> on hydra can be
-     used to create .deb packages, although you should probably
-     let (or make) the official maintainer do this so it can be
-     PGP signed and uploaded.
-
-<li> Upload "mercury-VERSION-compiler.tar.gz" and "mercury-VERSION.lsm" to
-     ftp://sunsite.unc.edu/incoming/Linux. They will be moved to
-     /pub/Linux/Incoming fairly quickly, and eventually should be moved
-     to /pub/linux/devel/lang/mercury.
-
-<li> Send "mercury-VERSION.lsm" to the lsm robot at lsm at execpc.com
-     with the subject "add".
-	
-<li> Append "mercury-VERSION.lsm" to a release notice and send it to
-     linux-announce at news.ornl.gov. This will post to comp.os.linux.announce.
-
-<li>  Email mercury-announce at cs.mu.oz.au and cross-post announcement to
-    comp.lang.misc, comp.lang.prolog, comp.lang.functional, comp.object.logic,
-    and for major releases also to comp.compilers and gnu.announce.
-
-<li>  Update the Mercury WWW home page (/local/dept/w3/unsupported/docs/mercury/*)
-      by committing the changes you made earlier.
-
-<li> For major releases, move the commitlog file from its current location
-     (in $CVSROOT/CVSROOT/commitlog) into a file specific to that release,
-     such as "commitlog-0.12".  Create a new, empty commitlog file, making
-     sure it is readable by everyone and writeable by group mercury (the
-     commitlog file is not managed by cvs itself, it is maintained by
-     our own check-in scripts, so you don't need to do anything special to
-     create this file).  Email the local mailing list to say that you have
-     done this.
-
-</ol>
-
-
-<hr>
-
-Last update was $Date: 2005/09/12 09:35:14 $ by $Author: mark $@cs.mu.oz.au. <br>
-</body>
-</html>
-
diff --git a/development/developers/reviews.html b/development/developers/reviews.html
deleted file mode 100644
index f956a00..0000000
--- a/development/developers/reviews.html
+++ /dev/null
@@ -1,284 +0,0 @@
-
-<html>
-<head>
-
-<title>
-	Reviews
-</title>
-</head>
-
-<body
-	bgcolor="#ffffff"
-	text="#000000"
->
-
-<hr>
-
-<h1> Reviews </h1> <p>
-
-This file outlines the policy on reviews for the Mercury system.
-
-<hr>
-
-<h2> Reviewable material </h2>
-
-<p>
-
-All changes to the Mercury repository, including the compiler,
-documentation, www pages, library predicates, runtime system, and tools
-need to be reviewed.
-
-<p>
-
-<h2> Review process </h2>
-
-<ol>
-<li>  Make sure you are working with an up-to-date copy of the
-	    module you are using.
-<li>  If the change is a code change, test the change. See the "Testing"
-	    section of the coding standards. Testing may take time - don't
-	    forget that steps 3, 4 and 5 can be done in parallel.
-<li>  Create diff - use `cvs diff -u'.  New files should be
-	    appended verbatim to the end of the diff, with descriptions
-	    indicating the name of the file.
-<li>  Write log message for this change - use template (see below).
-<li>  Review diff and log message yourself. (see below)
-<li>  Send to mercury-reviews at cs.mu.oz.au, with the subject
-	    "for review: <short description of change>".
-	    Nominate a reviewer at top of diff (see below).
-	    (If this change has been reviewed once before, it might
-	    fall into the "commit before review" category -- see the
-	    section on exceptions).
-<li>  Wait for review (see below).
-<li>  Fix any changes suggested. 
-<li>  Repeat above steps until approval.
-<li> Commit change (see below).
-</ol>
-
-
-<h2> Log Messages </h2>
-
-Use the template that cvs provides.
-
-<pre>
-	Estimated hours taken: _____
-
-	<overview or general description of changes>
-
-	<directory>/<file>:
-		<detailed description of changes>
-</pre>
-
-In estimated hours, include all your time to fix this problem -
-including debugging time.
-
-<p>
-
-The description should state why the changes were made, not just what
-the changes were.  All file modifications related to the same change
-should be committed together, and use the same log message, even over
-multiple directories.  The reason for this is that the log messages can
-be viewed on a file-by-file basis, and it is useful to know that a small
-change of a file in a subdirectory is related to a larger change in
-other subdirectories.
-
-<p>
-
-For very small changes, the <overview or general description> can be
-omitted, but the <detailed description> should stay.
-
-<p>
-
-If adding a new feature, this is a good place to describe the feature,
-how it works, how to turn it on and off, and any present limitations of
-the feature (note that all this should also be documented within the
-change, as well).  If fixing a bug, describe both the bug and the fix.
-
-<p>
-
-<h2> Self-Review </h2>
-
-<p>
-
-You should also review your own code first, and fix any obvious
-mistakes.  Where possible add documentation - if there was something you
-had to understand when making the change, document it - it makes it
-easier to review the change if it is documented, as well as generally
-improving the state of documentation of the compiler.
-
-<p>
-
-<h2> Review </h2>
-
-<p>
-
-We're now posting all diffs to mercury-reviews at cs.mu.oz.au.
-
-<p>
-
-The reasons for posting to mercury-reviews are:
-
-<ul>
-<li>	 To increase everyone's awareness of what changes are taking
-	  place.
-<li>	 Give everyone interested a chance to review your code, not
-	  just the reviewer. Remember, your changes may impact upon
-	  the uncommitted work of others, so they may want to give
-	  input.
-<li>	 Allow other people to read the reviewer's comments - so the same
-	  problems don't have to be explained again and again. 
-<li>	 People can try to see how your changes worked without having
-	  to figure out how to get cvs to generate the right set of
-	  diffs. 
-<li>	 Important decisions are often made or justified in reviews, so
-	  these should be recorded.
-</ul>
-
-You should try to match the reviewer to the code - someone familiar with
-a section of code can review faster and is more likely to catch errors.
-Put a preamble at the start of your diff to nominate who you would like
-to review the diff.
-
-<p>
-
-<h2> Waiting and approval </h2>
-
-<p>
-
-Waiting for approval need not be wasted time.  This is a good time to
-start working on something else, clean up unused workspaces, etc.  In
-particular, you might want to run long running tests that have not yet
-been run on your change (different grades, different architectures,
-optimisation levels, etc).
-
-<p>
-
-The reviewer(s) should reply, indicate any problems that need to be
-corrected, and whether the change can be committed yet. Design issues
-may need to be fully justified before you commit. You may need to fix
-the problems, then go through another iteration of the review process,
-or you may be able to just fix a few small problems, then commit.
-
-<p>
-
-<h2> Committing </h2>
-
-If you have added any new files or directories, then before committing
-you must check the group-id and permissions of the newly created files
-or directories in the CVS repository.  Files should be readable by
-group mercury and directories should be both readable and writable by
-group mercury.  (Setting of permissions will be enforced by the
-pre-commit check script `CVSROOT/check.pl'.)
-
-<p>
-
-Use the log message you prepared for the review when committing.
-
-<p>
-
-<h2> Exceptions: Commit before review </h2>
-
-<p>
-
-The only time changes should be committed before being reviewed is when they
-satisfy all of the following conditions:
-
-<ul>
-<li>	(a) the change is simple 
-	
-<li>	(b) you are absolutely sure the change will not introduce bugs
-
-<li>	(c) you are sure that the change will pass review with only
-	    trivial corrections (spelling errors in comments, etc.)
-
-<li>	(d) there are no new design decisions or changes to previous
-	    design decisions in your change (the status quo should
-	    be the default; you must convince the reviewer(s) of
-	    the validity of your design decisions before the code
-	    is committed).
-
-<li>	(e) you will be around the next day or two to fix the bugs
-	    that you were sure could never happen
-	
-<li>	(f) committing it now will make life significantly easier
-	    for you or someone else in the group
-</ul>
-
-<p>
-
-If the compiler is already broken (i.e. it doesn't pass its nightly
-tests), and your change is a bug fix, then it's not so important to be
-absolutely sure that your change won't introduce bugs.  You should
-still be careful, though.  Make sure you review the diffs yourself.
-
-<p>
-
-Similarly, if the code you are modifying is a presently unused part of
-code - for example a new feature that nobody else is using, that is
-switchable, and is switched off by default, or a new tool, or an `under
-development' webpage that is not linked to by other webpages yet, the
-criteria are a bit looser.  Don't use this one too often - only for
-small changes.  You don't want to go a long way down the wrong track
-with your new feature, before finding there's a much better way.
-
-<p>
-
-If these conditions are satisfied, then there shouldn't be any problem
-with mailing the diff, then committing, then fixing any problems that
-come up afterwards, provided you're pretty sure everything will be okay.
-This is particularly true if others are waiting for your work.
-
-<p>
-
-Usually, a change that has already been reviewed falls into this
-category, provided you have addressed the reviewer's comments, and
-there are no disputes over design decisions. If the reviewer has
-specifically asked for another review, or there were a large number of
-comments at the review, you should not commit before a second review.
-
-<p>
-
-If you are going to commit before the review, use the subject line:<br>
-	    "diff: <short description of change>".
-
-<h2> Exceptions: No review </h2>
-
-<p>
-
-The only time changes should be committed without review by a second
-person is when they satisfy all of the following conditions:
-
-<ul>
-<li>	(a) it is a very small diff that is obviously correct <br>
-	  eg: fix typographic errors <br>
-	      fix syntax errors you accidentally introduced <br>
- 	      fix spelling of people's names <br> <p>
- 
-		These usually don't need to be reviewed by a second
-		person.  Make sure that you review your own changes,
-		though.  Also make sure your log message is more
-		informative than "fixed a typo", try "s/foo/bar" or
-		something so that if you did make a change that people
-		don't approve of, at least it's seen quickly.
-
-<li>	(b) it is not going to be publicly visible <br>
-	  eg: Web pages, documentation, library, man pages. <p>
-	 
-		Changes to publicly visible stuff should always be
-		reviewed. It's just too easy to make spelling errors,
-		write incorrect information, commit libel, etc. This
-		stuff reflects on the whole group, so it shouldn't be
-		ignored.
-</ul>
-
-If your change falls into this category, you should still send the
-diff and log message to mercury-reviews, but use the subject line:<br>
-"trivial diff: <short description of change>".
-
-
-<hr>
-
-Last update was $Date: 2003/01/15 08:20:13 $ by $Author: mjwybrow $@cs.mu.oz.au. <br>
-</body>
-</html>
-
diff --git a/development/developers/todo.html b/development/developers/todo.html
deleted file mode 100644
index 790bcce..0000000
--- a/development/developers/todo.html
+++ /dev/null
@@ -1,385 +0,0 @@
-<html>
-<head>
-
-
-<title>To Do List</title>
-</head>
-
-<body bgcolor="#ffffff" text="#000000">
-
-<hr>
-<!--======================-->
-
-<h1> TODO LIST </h1>
-
-<hr>
-<!--======================-->
-
-<p>
-
-
-For more information on any of these issues, contact
-mercury at csse.unimelb.edu.au.
-
-<p>
-
-<h2> mode analysis </h2>
-
-<p>
-
-<ul>
-<li> fix various bugs in mode inference:
-     need to fix it to work properly in the presence of functions;
-     also need to change normalise_inst so that it handles complicated
-     insts such as `list_skel(any)'.
-
-<li> extend the mode system to allow known aliasing.
-     This is needed to make partially instantiated modes and unique modes work.
-	[supported on the "alias" branch, but there were some serious
-	 performance problems... has not been merged back into the main
-	 branch]
-
-</ul>
-
-<h2> determinism analysis </h2>
-
-<p>
-
-<ul>
-<li> add functionality for promise exclusive declarations:
-     <ul>
-     	<li> add error checking and type checking as for assertions
-	<li> include declaration information in the module_info
-	<li> take into account mutual exclusivity from promise_exclusive
-	     and promise_exclusive_exhaustive declarations during switch
-	     detection
-	<li> take into account exhaustiveness from promise_exhaustive and 
-	     promise_exclusive_exhaustive declarations during
-	     determinism analysis
-     </ul>
-</ul>
-     
-
-<h2> unique modes </h2>
-
-<ul>
-<li> handle nested unique modes
-
-<li> we will probably need to extend unique modes a bit,
-     in as-yet-unknown ways; need more experience here
-
-</ul>
-
-<h2> module system </h2>
-
-<ul>
-<li> check that the interface for a module is type-correct
-  independently of any declarations or imports in the implementation
-  section
-
-<li> there are some problems with nested modules (see the language
-  reference manual)
-
-</ul>
-
-<h2> C interface </h2>
-
-<ul>
-<li> exporting things for manipulating Mercury types from C
-
-<li> need to deal with memory management issues
-
-</ul>
-
-<h2> code generation </h2>
-
-<ul>
-<li> take advantage of unique modes to do compile-time garbage collection
-  and structure reuse.
-
-</ul>
-
-<h2> back-ends </h2>
-
-<h3> low-level (LLDS) back-end </h3>
-<ul>
-<li> support accurate garbage collection
-</ul>
-
-<h3> high-level C back-end </h3>
-<ul>
-<li> finish off support for accurate garbage collection;
-     see the comments in compiler/ml_elim_nested.m
-<li> see also the comments in compiler/ml_code_gen.m
-</ul>
-
-<h2> native code back-end </h2>
-<ul>
-<li> support on platforms other than Linux/x86.
-<li> commit GCC tail-call improvements to GCC CVS repository
-<li> support `--gc accurate'
-<li> support `--gc none'
-</ul>
-
-<h3> .NET back-end </h3>
-<ul>
-<li> finish off standard library implementation
-<li> see also the TODO list in compiler/mlds_to_il.m
-</ul>
-
-<h2> debugger </h2>
-
-<ul>
-<li> support back-ends other than LLDS
-<li> allow interactive queries to refer to values generated by
-     the program being debugged
-<li> trace semidet unifications
-</ul>
-
-<h2> Unicode </h2>
-
-<ul>
-<li> allow alternative <em>external</em> encodings, particularly iso-8859-1
-<li> consistent and robust handling of invalid strings
-     (overlong sequences, unpaired surrogates, etc.)
-<li> add analogue of wcwidth and make some formatting procedures use it
-<li> io.putback_char depends on multiple pushback in ungetc for
-     code points > 127
-</ul>
-
-<hr>
-<!--======================-->
-
-<h1> WISH LIST </h1>
-
-<h2> type-system </h2>
-
-<ul>
-
-<li> allow construct.construct/3 to work for existential types
-
-<li> remove limitation that higher-order terms are monomorphic.
-     i.e. allow universal quantifiers at the top level of
-     higher-order types, e.g. <samp>:- pred foo(all [T] pred(T)).</samp>.
-
-<li> constructor classes
-
-<li> allow a module exporting an abstract type to specify that other modules
-     should not be allowed to test two values of that type for equality (similar
-     to Ada's limited private types). This would be useful for e.g. sets
-     represented as unordered lists with possible duplicates.
-  	[this is a subset of the functionality of type classes]
-
-<li> subtypes?
-
-<li> optimisation of type representation and manipulation (possibly
-     profiler-guided)
-
-<li> folding/unfolding of types
-</ul>
-
-<h2> mode analysis </h2>
-
-<ul>
-<li> split construct/deconstruct unifications into their atomic
-     "micro-unification" pieces when necessary.
-     (When is it necessary?)
-
-<li> extend polymorphic modes,
-     e.g. to handle uniqueness polymorphism (some research issues?)
-
-<li> handle abstract insts in the same way abstract types are handled
-     (a research issue - is this possible at all?)
-
-<li> implement `willbe(Inst)' insts, for parallelism
-
-<li> mode segments & high-level transformation of circularly moded programs.
-</ul>
-
-<h2> determinism analysis: </h2>
-
-<ul>
-<li> propagate information about bindings from the condition of an if-then-else
-     to the else so that
-<pre>
-	(if X = [] then .... else X = [A|As], ...)
-</pre>
-     is considered det.
-
-<li> turn chains of if-then-elses into switches where possible.
-	[done by fjh, but not committed; zs not convinced that
-	this is a good idea]
-
-</ul>
-
-<h2> higher-order preds: </h2>
-
-<ul>
-<li> implement single-use higher-order predicate modes.
-     Single-use higher-order predicates would be allowed to bind curried
-     arguments, and to have unique modes for curried arguments.
- 
-<li> allow taking the address of a predicate with multiple modes
-     [we do allow this in cases where the mode can be determined from
-     the inst of the higher-order arguments]
-
-
-<li> improve support for higher-order programming, eg. by providing
-     operators in the standard library which do things like:
-     <ul>
-     <li>compose functions
-     <li>take a predicate with one output argument and treat it like a function.
-     ie. <tt>:- func (pred(T)) = T.</tt>
-     </ul>
-</ul>
-
-<h2> module system: </h2>
-
-<ul>
-<li> produce warnings for implementation imports that are not needed
-
-<li> produce warnings for imports that are in the wrong place
-  (in the interface instead of the implementation, and vice versa)
-  	[vice versa done by stayl]
-</ul>
-
-<h2> source-level transformations </h2>
-
-<ul>
-<li> more work on module system, separate compilation, and the multiple
-     specialisation problem
-
-<li> transform non-tail-recursive predicates into tail-recursive form
-     using accumulators.  (This is already done, but not enabled by
-     default since it can make some programs run much more slowly.
-     More work is needed to only enable this optimization in cases
-     when it will improve performance rather than pessimize it.)
-
-<li> improvements to deforestation / partial deduction
-
-</ul>
-
-<h2> code generation: </h2>
-
-<ul>
-<li> allow floating point fields of structures without boxing
-	(need multi-word fields)
-
-<li> stack allocation of structures
-
-</ul>
-
-<h2> LLDS back-end: </h2>
-
-<ul>
-<li> inter-procedural register allocation 
-
-<li> other specializations, e.g. if argument is known to be bound to
-     f(X,Y), then just pass X and Y in registers
-
-<li> reduce the overhead of higher-order predicate calls (avoid copying
-     the real registers into the fake_reg array and back)
-
-<li> trim stack frames before making recursive calls, to minimize stack usage
-     and unnecessary garbage retention
-     (this would probably be a pessimization much of the time - zs).
-
-<li> target C--
-</ul>
-
-<h2> native code back-end </h2>
-
-<ul>
-<li> consider supporting exception handling in a manner
-     that is compatible with C++ and Java
-<li> inline more of the standard library primitives that are
-     currently implemented in C
-</ul>
-
-<h2> garbage collection </h2>
-<ul>
-<li> implement liveness-accurate GC
-<li> implement incremental GC
-<li> implement generational GC
-<li> implement parallel GC
-<li> implement real-time GC
-</ul>
-  
-<h2> compilation speed </h2>
-
-<ul>
-<li> improve efficiency of the expansion of equivalence types (currently O(N^2))
-     (e.g. this is particularly bad when compiling live_vars.m).
-
-<li> improve efficiency of the module import handling (currently O(N^2))
-
-<li> use "store" rather than "map" for the major compiler data structures
-</ul>
-
-
-<h2> better diagnostics </h2>
-
-<ul>
-<li> optional warning for any implicit quantifiers whose scope is not
-     the entire clause (the "John Lloyd" option :-).
-
-<li> give a better error message for the use of if-then without else.
-
-<li> give a better error message for the use of `<=' instead of `=<'
-     (but how?)
-
-<li> give a better error message for type errors involving higher-order pred
-     constants (requested by Bart Demoen)
-
-<li> give better error messages for syntax errors in lambda expressions
-</ul>
-
-<h2> general </h2>
-
-<ul>
-<li> coroutining and parallel versions of Mercury
-
-<li> implement streams (need coroutining at least)
-
-<li> implement a very fast turn-around bytecode compiler/interpreter/debugger,
-     similar to Gofer
-     [not-so-fast bytecode compiler done, but bytecode interpreter
-     not implemented]
-
-<li> support for easier formal specification translation (eg a Z library,
-     or Z to Mercury).
-
-<li> implement a source visualisation tool
-
-<li> distributed Mercury
-
-<li> improved development environment
-
-<li> additional software engineering tools
-	<ul>
-  	<li> coverage analysis
-	<li> automatic testing
-	</ul>
-
-<li> literate Mercury
-
-<li> implement a GUI library (eg Hugs - Fudgets)
-
-<li> profile-guided optimisations
-	<ul>
-	<li> use profiling information to direct the linker for optimal
-	     code placement (Alpha has a tool for this).
-	</ul>
-
-<li> use of attribute grammar technology
-	(including visit sequence optimization)
-	to implement code with circular modes
-</ul>
-
-<hr>
-<!--======================-->
-
-Last update was $Date: 2012/02/13 00:11:54 $ by $Author: wangp $@cs.mu.oz.au. <br>
-</body>
-</html>
-
diff --git a/development/developers/work_in_progress.html b/development/developers/work_in_progress.html
deleted file mode 100644
index 6f2aa2a..0000000
--- a/development/developers/work_in_progress.html
+++ /dev/null
@@ -1,106 +0,0 @@
-<html>
-<head>
-
-
-<title>
-	Work In Progress
-</title>
-</head>
-
-<body bgcolor="#ffffff" text="#000000">
-
-<hr>
-<!---------------------------------------------------------------------------->
-
-The compiler contains some code for the following features,
-which are not yet completed, but which we hope to complete
-at some time in the future:
-<p>
-
-<ul>
-<li> There is a 
-  <a href="http://www.cs.mu.oz.au/mercury/dotnet.html">`--target il'</a>
-  option, which generates MSIL code for Microsoft's new
-  <a href="http://msdn.microsoft.com/net/">.NET Common Language Runtime</a>.
-  We're still working on this.
-
-<li> Thread-safe engine (the `.par' grades).
-
-<li> Independent AND-parallelism (the `&' parallel conjunction operator).
-     See Tom Conway's PhD thesis.
-
-<li>
-We have incomplete support for a new, more expressive design for representing
-information about type classes and type class instances at runtime. When
-complete, the new design would allow runtime tests of type class membership,
-it would allow the tabling of predicates with type class constraints,
-and it would allow the debugger to print type_class_infos.
-
-<li> We have added support for dynamic link libraries (DLLs) on Windows.
-  This is not yet enabled by default because it has not yet been tested
-  properly.
-
-<li> There is a new garbage collector that does accurate garbage
-  collection (hlc.agc grade).  See the comments in
-  compiler/ml_elim_nested.m and the paper on our web page for more details.
-
-<li> There is a `--generate-bytecode' option, for a new back-end
-  that generates bytecode.  The bytecode generator is basically
-  complete, but we don't have a bytecode interpreter.
-</ul>
-<p>
-
-We also have some code that goes at least some part of the way towards
-implementing the features below.   However, for these features, the
-code has not yet been committed and thus is not part of the standard
-distribution.
-
-<p>
-
-<ul>
-<li> A new implementation of the mode system using constraints.
-  This is on the "mode-constraints" branch of our CVS repository.
-
-<li> Support for aliasing in the mode system.
-  This is on the "alias" branch of our CVS repository.
-
-<li> Support for automatic structure reuse (reusing old data
-  structures that are no longer live, rather than allocating
-  new memory on the heap) and compile-time garbage collection.
-  This is on the "reuse" branch of our CVS repository.
-
-<li> Better support for inter-module analysis and optimization.
-
-<li> Support for GCC 3.3 in the native code back-end.
-  This is on the "gcc_3_3" branch of our CVS repository.
-
-</ul>
-
-<hr>
-<h2>
-Work Not In Progress
-</h2>
-
-The compiler also contains some code for the following features,
-but work on them has stopped, since finishing them off would be
-quite a bit more work, and our current priorities lie elsewhere.
-Still, these could make interesting and worthwhile projects
-if someone has the time for them.
-<p>
-
-<ul>
-<li> A SOAP interface.
-
-<li> A bytecode interpreter, for use with the `--generate-bytecode' option.
-
-<li> Sequence quantification (see the
-  <a href="http://www.cs.mu.oz.au/research/mercury/information/reports/minutes_15_12_00.html">description</a> from the meeting minutes).
-</ul>
-
-<hr>
-
-Last update was $Date: 2010/07/13 05:48:04 $ by $Author: juliensf $@cs.mu.oz.au. <br>
-</body>
-</html>
-
diff --git a/development/include/developer.inc b/development/include/developer.inc
index 768e6e1..c16e991 100644
--- a/development/include/developer.inc
+++ b/development/include/developer.inc
@@ -5,6 +5,17 @@ These pages may contain out-of-date information.
 We hope to update or replace this information in the future.
 </p>
 
+<?php
+
+/*
+Note that this information is not stored in this repository.  It is stored
+in compiler/notes/*.html.  On the webserver we maintain a checkout of the
+main source repository as well as the www repository and configure Apache to
+look in the main source repository to find these documents.
+*/
+
+?>
+
 <div class="developer">
 <ul class="nonindentlist">
 <li> 	
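
The comment added to developer.inc above says that the developer documents
now live in compiler/notes/ in the main source repository and that the
webserver is configured to look there.  As a purely illustrative sketch of
that arrangement - the checkout locations and the Alias directive below are
assumptions, not the actual server configuration - the idea is that the
webserver keeps a checkout of the main source repository next to the www
checkout and maps the developer-documentation URLs onto compiler/notes/ in
that checkout:

    <?php
    // Hypothetical illustration only: the paths and the Alias directive
    // mentioned here are assumptions, not taken from the real server setup.
    //
    // Assumed layout on the webserver:
    //   /srv/mercury/www       - checkout of this (www) repository
    //   /srv/mercury/mercury   - checkout of the main source repository
    //
    // Assumed Apache mapping (done in the server configuration, not in PHP):
    //   Alias /developers/ /srv/mercury/mercury/compiler/notes/
    //
    // This snippet only checks that the assumed source checkout is present,
    // so a missing checkout shows up in the error log rather than as 404s.
    $notes_dir = '/srv/mercury/mercury/compiler/notes';
    if (!is_dir($notes_dir)) {
        error_log("developer documentation checkout not found at $notes_dir");
    }
    ?>
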
-- 
1.8.5.3



