[m-rev.] www diff 3/4: Restore missing developer documents
Paul Bone
paul at bone.id.au
Thu Aug 15 00:17:29 AEST 2013
Restore missing developer documents
development/developers/allocation.html:
development/developers/bootstrapping.html:
development/developers/coding_standards.html:
development/developers/compiler_design.html:
development/developers/gc_and_c_code.html:
development/developers/glossary.html:
development/developers/release_checklist.html:
development/developers/reviews.html:
development/developers/todo.html:
development/developers/work_in_progress.html:
Recover the missing files.
development/include/developer.inc:
Update links.
---
development/developers/allocation.html | 557 +++++++
development/developers/bootstrapping.html | 50 +
development/developers/coding_standards.html | 539 +++++++
development/developers/compiler_design.html | 1913 +++++++++++++++++++++++++
development/developers/gc_and_c_code.html | 77 +
development/developers/glossary.html | 140 ++
development/developers/release_checklist.html | 192 +++
development/developers/reviews.html | 534 +++++++
development/developers/todo.html | 385 +++++
development/developers/work_in_progress.html | 108 ++
development/include/developer.inc | 22 +-
11 files changed, 4506 insertions(+), 11 deletions(-)
create mode 100644 development/developers/allocation.html
create mode 100644 development/developers/bootstrapping.html
create mode 100644 development/developers/coding_standards.html
create mode 100644 development/developers/compiler_design.html
create mode 100644 development/developers/gc_and_c_code.html
create mode 100644 development/developers/glossary.html
create mode 100644 development/developers/release_checklist.html
create mode 100644 development/developers/reviews.html
create mode 100644 development/developers/todo.html
create mode 100644 development/developers/work_in_progress.html
diff --git a/development/developers/allocation.html b/development/developers/allocation.html
new file mode 100644
index 0000000..6975f80
--- /dev/null
+++ b/development/developers/allocation.html
@@ -0,0 +1,557 @@
+<html>
+<head>
+
+<title>
+ The Storage Allocation Scheme
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+<hr>
+<!-------------------------->
+
+This document describes
+the storage allocation system used by the LLDS code generator.
+
+<hr>
+<!-------------------------->
+
+<h2> FORWARD LIVENESS </h2>
+
+<p>
+
+Each goal has four sets of variables associated with it to give information
+about changes in liveness on forward execution. (Backward execution is a
+different matter; see a later part of this document.) These four sets are
+
+<ul>
+<li> the pre-birth set
+<li> the pre-death set
+<li> the post-birth set
+<li> the post-death set
+</ul>
+
+<p>
+
+The goal that contains the first value-giving occurrence of a variable
+on a particular computation path will have that variable in its pre-birth set;
+the goal that contains the last value-using occurrence of a variable on
+a particular computation path will have that variable in its post-death set.
+
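+For example, in the following illustrative clause (q/2 and r/2 are
+hypothetical predicates), the call to q contains the first value-giving
+occurrence of B and the call to r contains the last value-using occurrence:
+
+<pre>
+    p(A, C) :-
+        q(A, B),    % B is in the pre-birth set of this call
+        r(B, C).    % B is in the post-death set of this call
+</pre>
+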
+<p>
+
+The different arms of a disjunction or a switch are different computation
+paths. The condition and then parts of an if-then-else on the one hand
+and the else part of that if-then-else on the other hand are also different
+computation paths.
+
+<p>
+
+An occurrence is value-giving if it requires the code generator to associate
+some value with the variable. At the moment, the only value-giving occurrences
+are those that bind the variable. In the future, occurrences that don't bind
+the variable but give the address where it should later be put may also be
+considered value-giving occurrences.
+
+<p>
+
+An occurrence is value-using if it requires access to some value the code
+generator associates with the variable. At the moment we consider all
+occurrences to be value-using; this is a conservative approximation.
+
+<p>
+
+Mode correctness requires that all branches of a branched control structure
+define the same set of nonlocal variables; the exceptions are branches that
+cannot succeed, as indicated by the instmap at the end of the branch being
+unreachable. Such branches are considered by mode analysis to "produce"
+any variable they are required to produce by parallel branches.
+To make it easier to write code that tracks the liveness of variables,
+we implement this fiction by filling the post-birth sets of goals representing
+such non-succeeding branches with the set of variables that must "magically"
+become live at the unreachable point at the end of the branch in order to
+match the set of live variables at the ends of the other branches.
+(Variables that have become live in the ordinary way before the unreachable
+point will not be included.) The post-birth sets of all other goals will be
+empty.
+
+<p>
+
+This guarantees that the set of variables born in each branch of a branched
+control structure will be the same, modulo variables local to each branch.
+
+<p>
+
+We can optimize the treatment of variables that are live inside a branched
+control structure but not after, because it is possible for the variable
+to be used in one branch without also being used in some other branches.
+Each variable that is live before the branched structure but not after
+must die in the branched structure. Branches in which the variable is used
+will include the variable in the post-death set of one of their subgoals.
+As far as branches in which the variable is not used are concerned, the
+variable becomes dead to forward execution as soon as control enters the
+branch. In such circumstances, we therefore include the variable in the
+pre-death set of the goal representing the branch. (See below for the method
+we use for making sure that the values of such "dead" variables are still
+available to later branches into which we may backtrack and which may need
+them.)
+
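+As an illustration (test/1 and use/2 are hypothetical predicates, and X is
+live before the if-then-else but not after it):
+
+<pre>
+    ( test(A) ->
+        use(X, B)   % X is in the post-death set of this call
+    ;
+        B = 0       % X is in the pre-death set of the goal
+                    % representing this branch
+    )
+</pre>
+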
+<p>
+
+This guarantees that the set of variables that die in each branch of a branched
+control structure will be the same, modulo variables local to each branch.
+
+<p>
+
+It is an invariant that in each goal_info, a variable will be included
+in zero, one or two of these four sets; and that if it is included in
+two sets, then these must be the pre-birth and post-death sets. (This
+latter will occur for singleton variables.)
+
+<p>
+
+<hr>
+<!------------->
+<hr>
+<!------------->
+
+<h2> STORE MAPS </h2>
+
+<p>
+
+There are four kinds of situations in which the code generator must
+associate specific locations with every live variable, either to put
+those variables in those locations or to update its data structures
+to say that those variables are "magically" in those locations.
+
+<p>
+
+<ol>
+<li> At the ends of branched control structures, i.e. if-then-elses, switches
+ and disjunctions. All branches of a branched structure must agree exactly
+ on these locations.
+
+<li> At the start and end of the procedure.
+
+<li> At points at which execution may resume after a failure, i.e. at the
+ start of the else parts of if-then-elses, at the start of the second and
+ later disjuncts in disjunctions, and after negated goals.
+
+<li> Just before and just after calls and higher-order calls (but not
+ pragma_c_codes).
+</ol>
+
+<hr>
+<!------------->
+
+<h3> Ends of branched control structures </h3>
+
+<p>
+
+We handle these by including a store_map field in the goal_infos of
+if_then_else, switch and disj goals.
+This field, like most other goal_info fields
+we will talk about in the rest of this document,
+is a subfield of the code_gen_info field of the goal_info.
+Through most of the compilation process,
+the code_gen_info field contains no information;
+its individual subfields are filled in
+during the various pre-passes of the LLDS code generator.
+The store map subfield
+is meaningful only from the follow_vars pass onwards.
+
+<p>
+
+The follow_vars pass fills this field of goals representing branched control
+structures with advisory information, saying where things that will be used
+in code following the branched structure should be.
+This advisory information may include duplicates (two variables
+mapped to the same location), it may miss some variables that are live at
+the end of the branched structure, and it may include variables that are
+not live at that point.
+
+<p>
+
+The store_map pass uses the advisory information left by the follow_vars pass
+to fill in these fields with definitive information. The definitive store maps
+guarantee that no two variables are allocated the same location, and they
+cover exactly the set of variables forward live at the end of the branched
+structure, plus the variables that are in the resume set of any enclosing
+resume point (see below).
+
+<p>
+
+The passes of the backend following store_map must not do anything to
+invalidate this invariant, which means that they must not rearrange the code
+or touch the field. The code generator will use these fields to know what
+variables to put where when flushing the expression cache at the end of
+each branch in a branched structure.
+
+<p>
+
+<hr>
+<!-------------------------->
+
+<h3> Starts and ends of procedures </h3>
+
+<p>
+
+We handle these using the mechanisms we use for the ends of branched
+structures, except the map of where things are at the start and where
+they should be at the end are computed by the code generator from the
+arg_info list.
+
+<p>
+
+<hr>
+<!-------------------------->
+
+
+<h3> Resumption points </h3>
+
+<p>
+
+We handle these through the resume_point subfield of the code_gen_info field
+in goal infos. During the liveness pass, we fill in this field for every goal
+that establishes a point at which execution may resume after backtracking.
+This means
+the conditions of if-then-elses (the resumption point is the start of
+the else part), every disjunct in a disjunction except the last (the
+resumption point is the start of the next disjunct), and goals inside
+negations (the resumption point is the start of the code following the
+negated goal). The value of this field will give the set of variables
+whose values may be needed when execution resumes at that point.
+Note that for the purposes of handling resumption points, it does not
+matter whether any part of an if-then-else, disjunction or negation
+can succeed more than once.
+
+<p>
+
+The resume_point field does not assign a location to these variables.
+The reason is that as an optimization, each conceptual resumption point
+is associated with either one or two labels, and if there are two labels,
+these will differ in where they expect these variables to be. The
+failure continuation stack entry created by the code generator
+that describes the resumption point will associate a resume map with
+each label, with each resume map assigning a location to each variable
+included in the resume vars set.
+
+<p>
+
+The usual case has two labels. The resume map of the first label maps each
+variable to its stack slot, while the resume map of the second label maps
+each variable to the location it was occupying on entry to the goal.
+The code emitted at the resumption point will have, in order, the first
+label, code that moves each variable from its location according to the
+first resume map to its location according to the second resume map
+(this will be a null operation if the two maps agree on the location
+of a variable), and then the second label. The idea is that any failure
+that occurs while all these
+variables are guaranteed to still be in their original locations can be
+implemented as a jump directly to the second label, while failures at
+other points (including those from code to the right of the disjunct itself,
+as well as failures from semidet or nondet calls inside the disjunct)
+will jump (directly or indirectly via a redo() or fail()) to the first
+label. The section on backward liveness below discusses how we make sure
+that at these points all the variables in the resume_point set are actually
+in their stack slots.
+
+<p>
+
+We can omit the first label and the code following it up to but not including
+the second label if we can guarantee that the first label will never be
+jumped to, directly or indirectly. We can give this guarantee for negated
+goals, conditions in if-then-elses and disjuncts in disjunctions that cannot
+succeed more than once if the goal concerned cannot flush any variable to
+the stack (which means it contains only inline builtins). We cannot give
+this guarantee for disjuncts in disjunctions that can succeed more than once
+even if the goal concerned contains only inline builtins, since in that case
+we may backtrack to the next disjunct after leaving the current disjunct.
+
+<p>
+
+We can omit the second label if we can guarantee that it will never be
+jumped to, directly or indirectly. We can give this guarantee if the goal
+concerned has no failure points before a construct (such as a call)
+that requires all the resumption point variables to be stored on the stack.
+
+<p>
+
+The resume_locs part of the resume_point field will say which labels
+will be needed.
+
+<p>
+
+It is an invariant that in a disjunction, the resume_point field of one
+disjunct must contain all the variables included in the resume_point fields
+of later disjuncts.
+
+<p>
+
+When one control structure that establishes a resumption point occurs inside
+another one, all the variables included in the relevant resume_point of the
+outer construct must appear in *all* the resume_point fields associated
+with the inner construct. This is necessary to make sure that in establishing
+the inner resumption point, we do not destroy the values of the variables
+needed to restart forward execution at the resumption point established
+by the outer construct. (See the section on resumption liveness below.)
+
+<p>
+
+When one control structure which establishes a resumption point occurs after
+but not inside another one, there is no such requirement; see the section
+on backward liveness below.
+
+<p>
+
+
+<hr>
+<!-------------------------->
+
+<p>
+
+<h3> Calls and higher order calls </h3>
+
+<p>
+
+We handle these by flushing all variables that are live after the call
+except those produced by the call. This is equivalent to the set of
+variables that are live immediately after the call, minus the pre-birth
+and post-birth sets of the call, which in turn is equivalent to the set
+of variables live before the call minus the pre-death and post-death
+sets of the call.
+
+<p>
+
+The stack allocation code and the code generator figure out the set of
+variables that need to be flushed at each call independently, but based
+on the same algorithm. Not attaching the set of variables to be saved
+to each call reduces the space requirement of the compiler.
+
+<p>
+
+The same applies to higher order calls.
+
+<p>
+
+
+<hr>
+<!-------------------------->
+<hr>
+<!-------------------------->
+
+<p>
+
+<h2> BACKWARD LIVENESS </h2>
+
+<p>
+
+There are three kinds of goals that can introduce nondeterminism: nondet
+disjunctions, nondet calls and nondet higher order calls. All code that
+executes after one of these constructs must take care not to destroy the
+variables that are needed to resume in those constructs. (We are *not*
+talking here about preserving variables needed for later disjuncts;
+that is discussed in the next section.)
+
+<p>
+
+The variables needed to resume after nondet calls and higher order calls
+are the variables saved across the call in the normal fashion. The variables
+needed to resume after nondet disjunctions are the variables included in
+any of the resume_point sets associated with the disjuncts of the disjunction.
+
+<p>
+
+The achievement of this objective is in two parts. First, the code generator
+makes sure that each of these variables is flushed to its stack slot before
+control leaves the construct that introduces nondeterminism. For calls and
+higher order calls this is done as part of the call mechanism. For nondet
+disjunctions, the code generator emits code at the end of every disjunct
+to copy every variable in the resume_point set for that disjunct into its
+stack slot, if it isn't there already. (The mechanism whereby these variables
+survive to this point is discussed in the next section.)
+
+<p>
+
+Second, the stack slot allocation pass makes sure that each of the variables
+needed to resume in a construct that introduces nondeterminism is allocated
+a stack slot that is not reused in any following code from which one can
+backtrack to that construct. Normally, this is all following code, but if
+the construct that introduced the nondeterminism is inside a cut (a some
+that changes determinism), then it means only the following code inside
+the cut.
+
+<p>
+
+
+<hr>
+<!-------------------------->
+<hr>
+<!-------------------------->
+
+<p>
+
+<h2> RESUMPTION LIVENESS </h2>
+
+<p>
+
+Variables whose values are needed when execution resumes at a resumption point
+may become dead in the goal that establishes the resumption point. Some points
+of failure that may cause backtracking to the resumption point may occur
+after some of these variables have become dead with respect to forward
+liveness.
+However, when generating the failure code the code generator must know
+the current locations of these variables so it can pick the correct label
+to branch to (and possibly generate some code to shuffle the variables
+to the locations expected at the picked label).
+
+<p>
+
+When entering a goal that establishes a resumption point, the code generator
+pushes the set of variables that are needed at that resumption point onto
+a resumption point variables stack inside code_info. When we make a variable
+dead, we consult the top entry on this stack. If the variable being made dead
+is in that set, we do not forget about it; we just insert it into a set of
+zombie variables.
+
+<p>
+
+To allow a test of membership in the top element of this stack to function
+as a test of membership of *any* element of this stack, we enforce the
+invariant that each entry on this stack includes all the other entries
+below it as subsets.
+
+<p>
+
+At the end of the goal that established the resumption point, after popping
+the resumption point stack, the code generator will attempt to kill all the
+zombie variables again (after saving them on the stack if we can backtrack
+to the resumption point from the following code, which is possible only for
+nondet disjunctions). Any zombie variables that occur in the next entry of
+the resumption point stack will stay zombies; any that don't occur there
+will finally die (i.e. the code generator will forget about them, and
+release the space they occupy.)
+
+<p>
+
+The sets of zombie variables and forward live variables are always
+disjoint, since a variable is not made a zombie until it is no longer
+forward live.
+
+<p>
+
+It is an invariant that at any point in the code generator, the code
+generator's "set of known variables" is the union of "set of zombie
+variables" maintained by the code generator and the set of forward
+live variables as defined in the forward liveness section above.
+
+<p>
+
+
+<hr>
+<!-------------------------->
+<hr>
+<!-------------------------->
+
+<p>
+
+<h2> FOLLOW VARS </h2>
+
+
+<p>
+
+When the code generator emits code to materialize the value of a variable,
+it ought to put it directly into the location where it is required to be next.
+
+<p>
+
+The code generator maintains a field in the code_info structure that records
+advisory information about this. The information comes from the follow_vars
+pass, which fills in the follow_vars field in the goal info structure of some
+goals. Whenever the code generator starts processing a goal, it sets the field
+in the code_info structure from the field of the goal info structure of that
+goal, if that field is filled in.
+
+<p>
+
+The follow_vars pass will fill in this field for the following goals:
+
+<ul>
+<li> the goal representing the entire procedure definition
+<li> each arm of a switch
+<li> each disjunct of a disjunction
+<li> the condition, then-part and else-part of an if-then-else
+<li> the first goal following any non-builtin goal in a conjunction
+ (the builtin goals are non-complicated unifications and calls to
+ inline builtin predicates and functions)
+</ul>
+
+<p>
+
+The semantics of a filled in follow_vars field:
+<ul>
+<li> If it maps a variable to a real location, that variable should be put
+ in that location.
+
+<li> If it maps a variable to register r(-1), that variable should be put
+ in a currently free register.
+
+<li> If it does not map a variable to anything, that variable should be put
+ in its stack slot, if that stack slot is free; otherwise it should be put
+ in a currently free register.
+</ul>
+
+<p>
+
+The follow_vars field should map a variable to a real location if the
+following code will require that variable to be in exactly that location.
+For example, if the variable is an input argument of a call, it will
+need to be in the register holding that argument; if the variable is not
+an input argument but will need to be saved across the call, it will need
+to be in its stack slot.
+
+<p>
+
+The follow_vars field should map a variable to register r(-1) if the
+variable is an input to a builtin that does not require its inputs to
+be anywhere in particular. In that case, we would prefer that the
+variable be in a register, since this should make the code generated
+for the builtin somewhat faster.
+
+<p>
+
+When the code generator materializes a variable in a way that requires
+several accesses to the materialized location (e.g. filling in the fields
+of a structure), it should put the variable into a register even if
+the follow_vars field says otherwise.
+
+<p>
+
+Since there may be many variables that should be in their stack slots,
+and we don't want to represent all of these explicitly, the follow_vars
+field may omit any mention of these variables. This also makes it easier
+to merge follow_vars fields at the starts of branched control structures.
+If some branches want a variable in a register, their wishes should take
+precedence over the wishes of the branches that want the variable to be
+in its stack slot or that do not use the variable at all.
+
+<p>
+
+When the code generator picks a random free register, it should try to avoid
+registers that are needed for variables in the follow_vars map.
+
+<p>
+
+When a variable that is currently in its stack slot is supposed to be put
+in any currently free register for speed of future access, the code generator
+should refuse to use any virtual machine registers that are not real machine
+registers. Instead, it should keep the variable in its stack slot.
+
+<p>
+
+<hr>
+</body>
+</html>
+
diff --git a/development/developers/bootstrapping.html b/development/developers/bootstrapping.html
new file mode 100644
index 0000000..7b8b6a9
--- /dev/null
+++ b/development/developers/bootstrapping.html
@@ -0,0 +1,50 @@
+
+<html>
+<head>
+
+
+<title>
+ Bootstrapping
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+
+<hr>
+
+<h2> Changes that don't bootstrap </h2>
+
+<p>
+
+Sometimes changes need to be made to the Mercury system that mean
+previous versions of the compiler will no longer successfully compile
+the new version.
+<p>
+
+Whenever anyone makes a change which prevents bootstrapping with a
+previous version of the compiler, they should add a cvs tag to all the
+files in the relevant directories <em>before committing</em>, and
+mention this tag in the log message. The tag should be of the form
+bootstrap_YYYYMMDD_<short_description_of_change>.
+<p>
+
+The rationale for the cvs tag is that it allows machines to be
+bootstrapped (if they didn't manage to do it in a daily build)
+by doing `cvs update -r<tag>' on the relevant build directory.
+After that compiler has been installed, a `cvs update -A' will remove
+the cvs sticky tags.
+<p>
+
+Optionally, a test should be added to the configuration script so
+that people installing from CVS don't use an outdated compiler to
+bootstrap. In practice this may be difficult to achieve in some cases.
+
+<hr>
+
+</body>
+</html>
+
diff --git a/development/developers/coding_standards.html b/development/developers/coding_standards.html
new file mode 100644
index 0000000..e1c5565
--- /dev/null
+++ b/development/developers/coding_standards.html
@@ -0,0 +1,539 @@
+
+<html>
+<head>
+
+
+<title>
+ Mercury Coding Standard for the Mercury Project
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+<hr>
+<!-------------------------->
+
+<h1>
+Mercury Coding Standard for the Mercury Project</h1>
+<hr>
+
+<!-------------------------->
+
+<h2> Documentation </h2>
+
+<p>
+
+Each module should contain header comments
+which state the module's name, main author(s), and purpose,
+and give an overview of what the module does,
+the major algorithms and data structures it uses, etc.
+
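+For example, a module might start with header comments along these lines
+(the module name, author and description here are only placeholders):
+
+<pre>
+%---------------------------------------------------------------------------%
+% widget.m
+% Main author: someone.
+%
+% This module defines the widget ADT, and predicates for creating,
+% inspecting and updating widgets.  Widgets are stored in a map from
+% widget ids to widget records.
+%---------------------------------------------------------------------------%
+</pre>
+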
+<p>
+
+Everything that is exported from a module should have sufficient documentation
+that it can be understood without reference
+to the module's implementation section.
+
+<p>
+
+Each procedure that is implemented using foreign code
+should have sufficient documentation about its interface
+that it can be implemented just by referring to that documentation,
+without reference to the module's implementation section.
+
+<p>
+
+Each predicate other than trivial access predicates
+should have a short comment describing what the predicate is supposed to do,
+and what the meaning of the arguments is.
+Ideally this description should also note any conditions
+under which the predicate can fail or throw an exception.
+
+<p>
+
+There should be a comment for each field of a structure saying
+what the field represents.
+
+<p>
+
+Any user-visible changes such as new compiler options or new features
+should be documented in appropriate section of the Mercury documentation
+(usually the Mercury User's Guide and/or the Mercury Reference Manual).
+Any major new features should be documented in the NEWS file,
+as should even small changes to the library interface,
+or anything else that might cause anyone's existing code to break.
+
+<p>
+
+Any new compiler modules or other major design changes
+should be documented in `compiler/notes/compiler_design.html'.
+
+<p>
+
+Any feature which is incompletely implemented
+should be mentioned in `compiler/notes/work_in_progress.html'.
+
+<h2> Naming </h2>
+
+<p>
+
+Variables should always be given meaningful names,
+unless they are irrelevant to the code in question.
+For example, it is OK to use single-character names
+in an access predicate which just sets a single field of a structure,
+such as
+
+<pre>
+
+ bar_set_foo(Foo, bar(A, B, C, _, E), bar(A, B, C, Foo, E)).
+
+</pre>
+
+Variables which represent different states or different versions
+of the same entity should be named Foo0, Foo1, Foo2, ..., Foo.
+
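+For example (increment/2 is a hypothetical predicate):
+
+<pre>
+    increment_twice(Counter0, Counter) :-
+        increment(Counter0, Counter1),
+        increment(Counter1, Counter).
+</pre>
+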
+<p>
+
+Predicates which get or set a field of a structure or ADT
+should be named bar_get_foo and bar_set_foo respectively,
+where bar is the name of the structure or ADT and foo is the name of the field.
+
+<h2> Coding </h2>
+
+<p>
+
+Your code should make as much reuse of existing code as possible.
+"cut-and-paste" style reuse is highly discouraged.
+
+<p>
+
+Your code should be efficient.
+Performance is a quite serious issue for the Mercury compiler.
+
+<p>
+
+No fixed limits please!
+(If you really must have a fixed limit,
+include detailed documentation explaining why it was so hard to avoid.)
+
+<p>
+
+Only use DCG notation for parsing, not for threading implicit arguments.
+
+Use state variables for threading the IO state etc.
+The conventional IO state variable name is <code>!IO</code>.
+
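+For example, a minimal main predicate that threads the IO state using
+the conventional state variable looks like this:
+
+<pre>
+    main(!IO) :-
+        io.write_string("Hello, ", !IO),
+        io.write_string("world.\n", !IO).
+</pre>
+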
+<h2> Error handling </h2>
+
+<p>
+
+Code should check for both erroneous inputs from the user
+and also invalid data being passed from other parts of the Mercury compiler.
+You should also always check to make sure that
+the routines that you call have succeeded;
+make sure you don't silently ignore failures.
+(This last point almost goes without saying in Mercury,
+but is particularly important to bear in mind
+if you are writing any C code or shell scripts,
+or if you are interfacing with the OS.)
+
+<p>
+
+Calls to error/1 should always indicate an internal software error,
+not merely incorrect inputs from the user,
+or failure of some library routine or system call.
+In the compiler, use unexpected/2 or sorry/2 from compiler_util.m
+rather than error/1. Use expect/3 from compiler_util rather than
+require/2.
+
+<p>
+
+Error messages should follow a consistent format.
+For compiler error messages, each line should start
+with the source file name and line number in "%s:%03d: " format.
+Compiler error messages should be complete sentences;
+they should start with a capital letter and end in a full stop.
+For error messages that are spread over more than one line
+(as are most of them),
+the second and subsequent lines should be indented two spaces.
+If the `--verbose-errors' option was set,
+you should print out additional text explaining in detail
+what the error message means and what the likely causes are.
+The preferred method of printing error messages
+is via the predicates in error_util.m;
+use prog_out__write_context and io__write_strings
+only if there is no way to add the capability you require to error_util.m.
+
+<p>
+
+Error messages from the runtime system should begin with the text
+"Mercury Runtime:", preferably by using the MR_fatal_error() routine.
+
+<p>
+
+If a system call or C library function that sets errno fails,
+the error message should be printed with perror()
+or should contain strerror(errno).
+If it was a function manipulating some file,
+the error message should include the filename.
+
+<h2> Layout </h2>
+
+<p>
+
+Each module should be indented consistently,
+with either 4 or 8 spaces per level of indentation.
+The indentation should be consistently done,
+either only with tabs or only with spaces.
+A tab character should always mean 8 spaces;
+if a module is indented using 4 spaces per level of indentation,
+this should be indicated by four spaces,
+not by a tab with tab stops set to 4.
+
+<p>
+
+Files that use 8 spaces per level of indentation
+don't need any special setup.
+Files that use 4 spaces per level of indentation
+should have something like this at the top,
+even before the copyright line:
+<pre>
+ % vim: ft=mercury ts=4 sw=4 et
+</pre>
+
+<p>
+
+No line should extend beyond 79 characters.
+The reason we don't allow 80 character lines is that
+these lines wrap around in diffs,
+since diff adds an extra character at the start of each line.
+
+<p>
+
+Since "empty" lines that have spaces or tabs on them
+prevent the proper functioning of paragraph-oriented commands in vi,
+lines shouldn't have trailing white space.
+They can be removed with a vi macro such as the following.
+(Each pair of square brackets contains a space and a tab.)
+
+<pre>
+ map ;x :g/[ ][ ]*$/s///^M
+</pre>
+
+<p>
+
+String literals that don't fit on a single line should be split
+by writing them as two or more strings concatenated using the "++" operator;
+the compiler will evaluate this at compile time,
+if --optimize-constant-propagation is enabled (i.e. at -O3 or higher).
+
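+For example:
+
+<pre>
+    Msg = "this is a long message that would not fit " ++
+        "within the line length limit",
+</pre>
+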
+<p>
+
+Predicates that have only one mode should use predmode declarations
+rather than having a separate mode declaration.
+
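+For example, write (write_widget/3 and the type widget are made-up names)
+
+<pre>
+    :- pred write_widget(widget::in, io::di, io::uo) is det.
+</pre>
+
+rather than
+
+<pre>
+    :- pred write_widget(widget, io, io).
+    :- mode write_widget(in, di, uo) is det.
+</pre>
+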
+<p>
+If-then-elses should always be parenthesized,
+except that an if-then-else that occurs as the else
+part of another if-then-else doesn't need to be parenthesized.
+The condition of an if-then-else can either be on the same
+line as the opening parenthesis and the `->',
+
+<pre>
+
+ ( test1 ->
+ goal1
+ ; test2 ->
+ goal2
+ ;
+ goal
+ )
+
+</pre>
+
+or, if the test is complicated, it can be on a line of its own:
+
+<pre>
+
+ (
+ very_long_test_that_does_not_fit_on_one_line(VeryLongArgument1,
+ VeryLongArgument2)
+ ->
+ goal1
+ ;
+ test2a,
+        test2b
+ ->
+ goal2
+ ;
+        test3 % would fit on one line, but separate for consistency
+ ->
+ goal3
+ ;
+ goal
+ ).
+
+</pre>
+
+<p>
+
+Disjunctions should always be parenthesized.
+The semicolon of a disjunction should never be at the
+end of a line -- put it at the start of the next line instead.
+
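+For example:
+
+<pre>
+    (
+        X = f(A),
+        goal1
+    ;
+        X = g(B),
+        goal2
+    )
+</pre>
+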
+<p>
+
+Predicates and functions implemented via foreign code should be formatted
+like this:
+
+<pre>
+:- pragma foreign_proc("C",
+ int__to_float(IntVal::in, FloatVal::out),
+ [will_not_call_mercury, promise_pure],
+"
+ FloatVal = IntVal;
+").
+</pre>
+
+The predicate name and arguments should be on a line on their own,
+as should the list of annotations.
+The foreign code should also be on lines of its own;
+it shouldn't share lines with the double quote marks surrounding it.
+
+<p>
+
+Type definitions should be formatted in one of the following styles:
+
+<pre>
+ :- type my_type
+ ---> my_type(
+ some_other_type % comment explaining it
+ ).
+
+ :- type my_struct --->
+ my_struct(
+ field1, % comment explaining it
+ ...
+ ).
+
+ :- type some_other_type == int.
+
+ :- type foo
+ ---> bar(
+ int, % comment explaining it
+ float % comment explaining it
+ )
+ ; baz
+ ; quux.
+
+</pre>
+
+<p>
+
+If an individual clause is long, it should be broken into sections,
+and each section should have a "block comment" describing what it does;
+blank lines should be used to show the separation into sections.
+Comments should precede the code to which they apply, rather than following it.
+
+<pre>
+ %
+ % This is a block comment; it applies to the code in the next
+ % section (up to the next blank line).
+ %
+ blah,
+ blah,
+ blahblah,
+ blah,
+</pre>
+
+If a particular line or two needs explanation, a "line" comment
+
+<pre>
+ % This is a "line" comment; it applies to the next line or two
+ % of code
+ blahblah
+</pre>
+
+or an "inline" comment
+
+<pre>
+ blahblah % This is an "inline" comment
+</pre>
+
+should be used.
+
+<h2> Structuring </h2>
+
+Code should generally be arranged so that
+procedures (or types, etc.) are listed in top-down order, not bottom-up.
+
+<p>
+
+Code should be grouped into bunches of related predicates, functions, etc.,
+and sections of code that are conceptually separate
+should be separated with dashed lines:
+
+<pre>
+
+%---------------------------------------------------------------------------%
+
+</pre>
+
+Ideally such sections should be identified
+by "section heading" comments identifying the contents of the section,
+optionally followed by a more detailed description.
+These should be laid out like this:
+
+<pre>
+
+%---------------------------------------------------------------------------%
+%
+% Section title
+%
+
+% Detailed description of the contents of the section and/or
+% general comments about the contents of the section.
+% This part may go on for several lines.
+%
+% It can even contain several paragraphs.
+
+The actual code starts here.
+
+</pre>
+
+For example
+
+<pre>
+
+%---------------------------------------------------------------------------%
+%
+% Exception handling
+%
+
+% This section contains all the code that deals with throwing or catching
+% exceptions, including saving and restoring the virtual machine registers
+% if necessary.
+%
+% Note that we need to take care to ensure that this code is thread-safe!
+
+:- type foo ---> ...
+
+</pre>
+
+Double-dashed lines, i.e.
+
+<pre>
+
+%---------------------------------------------------------------------------%
+%---------------------------------------------------------------------------%
+
+</pre>
+
+can also be used to indicate divisions into major sections.
+Note that these dividing lines should not exceed the 79 character limit
+(see above).
+
+<h2> Module imports </h2>
+
+Each group of :- import_module items should list only one module per line,
+since this makes it much easier to read diffs
+that change the set of imported modules.
+In the compiler, when e.g. an interface section imports modules
+from both the compiler and the standard library,
+there should be two groups of imports,
+the imports from the compiler first and then the ones from the library.
+For the purposes of this rule,
+consider the modules of mdbcomp to belong to the compiler.
+
+<p>
+
+Each group of import_module items should be sorted,
+since this makes it easier to detect duplicate imports and missing imports.
+It also groups together the imported modules from the same package.
+There should be no blank lines between
+the imports of modules from different packages,
+since this makes it harder to resort the group with a single editor command.
+
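+For example, the import section of a compiler module might look like this
+(the module names are merely illustrative):
+
+<pre>
+    :- import_module hlds.
+    :- import_module hlds.hlds_goal.
+    :- import_module hlds.hlds_pred.
+    :- import_module parse_tree.
+    :- import_module parse_tree.prog_data.
+
+    :- import_module list.
+    :- import_module map.
+    :- import_module maybe.
+</pre>
+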
+<h2> Standard library predicates </h2>
+
+The descriptive comment for any predicate or function
+that occurs in the interface of a standard library module
+must be positioned above the predicate/function declaration.
+It should be formatted like the following example:
+
+<pre>
+
+ % Description of predicate foo.
+ %
+ :- pred foo(...
+ :- mode foo(...
+</pre>
+
+A group of related predicate, mode and function declarations
+may be grouped together under a single description
+provided that it is formatted as above.
+If there is a function declaration in such a grouping
+then it should be listed before the others.
+
+For example:
+
+<pre>
+
+ % Insert a new key and corresponding value into a map.
+ % Fail if the key already exists.
+ %
+ :- func map.insert(map(K, V), K, V) = map(K, V).
+ :- pred map.insert(map(K, V)::in, K::in, V::in, map(K, V)::out) is det.
+
+</pre>
+
+The reason for using this particular style is that
+the reference manual for the standard library
+is automatically generated from the module interfaces,
+and we want to maintain a uniform appearance as much as is possible.
+
+<h2> Testing </h2>
+
+<p>
+
+Every change should be tested before being committed.
+The level of testing required depends on the nature of the change.
+If this change fixes an existing bug,
+and is unlikely to introduce any new bugs,
+then just compiling it and running some tests by hand is sufficient.
+If the change might break the compiler,
+you should run a bootstrap check (using the `bootcheck' script)
+before committing.
+If the change means that old versions of the compiler
+will not be able to compile the new version of the compiler,
+you must notify all the other Mercury developers.
+
+<p>
+
+In addition to testing before a change is committed,
+you need to make sure that the code will not get broken in the future
+by adding tests to the test suite.
+Every time you add a new feature,
+you should add some test cases for that new feature to the test suite.
+Every time you fix a bug, you should add a regression test to the test suite.
+
+<h2> Committing changes </h2>
+
+<p>
+
+Before committing a change, you should get someone else to review your changes.
+
+<p>
+
+The file <a href="/web/20121002213713/http://www.mercury.csse.unimelb.edu.au/information/doc-latest/reviews.html">compiler/notes/reviews.html</a>
+contains more information on review policy.
+
+<hr>
+<!-------------------------->
+
+</body>
+</html>
+
diff --git a/development/developers/compiler_design.html b/development/developers/compiler_design.html
new file mode 100644
index 0000000..cd52c3d
--- /dev/null
+++ b/development/developers/compiler_design.html
@@ -0,0 +1,1913 @@
+<html>
+<head>
+
+
+<title>
+ Notes On The Design Of The Mercury Compiler
+</title>
+</head>
+
+<body bgcolor="#ffffff" text="#000000">
+
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<p>
+This file contains an overview of the design of the compiler.
+
+<p>
+See also <a href="/web/20121002213721/http://www.mercury.csse.unimelb.edu.au/information/doc-latest/overall_design.html">overall_design.html</a>
+for an overview of how the different sub-systems (compiler,
+library, runtime, etc.) fit together.
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h2> OUTLINE </h2>
+
+<p>
+
+The main job of the compiler is to translate Mercury into C, although it
+can also translate (subsets of) Mercury to some other languages:
+Mercury bytecode (for a planned bytecode interpreter), MSIL (for the
+Microsoft .NET platform) and Erlang.
+
+<p>
+
+The top-level of the compiler is in the file mercury_compile.m,
+which is a sub-module of the top_level.m package.
+The basic design is that compilation is broken into the following
+stages:
+
+<ul>
+<li> 1. parsing (source files -> HLDS)
+<li> 2. semantic analysis and error checking (HLDS -> annotated HLDS)
+<li> 3. high-level transformations (annotated HLDS -> annotated HLDS)
+<li> 4. code generation (annotated HLDS -> target representation)
+<li> 5. low-level optimizations
+ (target representation -> target representation)
+<li> 6. output code (target representation -> target code)
+</ul>
+
+
+<p>
+Note that in reality the separation is not quite as simple as that.
+Although parsing is listed as step 1 and semantic analysis is listed
+as step 2, the last stage of parsing actually includes some semantic checks.
+And although optimization is listed as steps 3 and 5, it also occurs in
+steps 2, 4, and 6. For example, elimination of assignments to dead
+variables is done in mode analysis; middle-recursion optimization and
+the use of static constants for ground terms is done during code generation;
+and a few low-level optimizations are done in llds_out.m
+as we are spitting out the C code.
+
+<p>
+
+In addition, the compiler is actually a multi-targeted compiler
+with several different back-ends.
+
+<p>
+
+mercury_compile.m itself supervises the parsing (step 1),
+but it subcontracts the supervision of the later steps to other modules.
+Semantic analysis (step 2) is looked after by mercury_compile_front_end.m;
+high level transformations (step 3) by mercury_compile_middle_passes.m;
+and code generation, optimization and output (steps 4, 5 and 6)
+by mercury_compile_llds_backend.m, mercury_compile_mlds_backend.m
+and mercury_compile_erl_backend.m
+for the LLDS, MLDS and Erlang backends respectively.
+
+<p>
+
+The modules in the compiler are structured by being grouped into
+"packages". A "package" is just a meta-module,
+i.e. a module that contains other modules as sub-modules.
+(The sub-modules are almost always stored in separate files,
+which are named only for their final module name.)
+We have a package for the top-level, a package for each main pass, and
+finally there are also some packages for library modules that are used
+by more than one pass.
+<p>
+
+Taking all this into account, the structure looks like this:
+
+<ul type=disc>
+<li> At the top of the dependency graph is the top_level.m package,
+ which currently contains only the mercury_compile*.m modules,
+ which invoke all the different passes in the compiler.
+<li> The next level down is all of the different passes of the compiler.
+ In general, we try to stick by the principle that later passes can
+ depend on data structures defined in earlier passes, but not vice
+ versa.
+ <ul type=disc>
+ <li> front-end
+ <ul type=disc>
+ <li> 1. parsing (source files -> HLDS)
+ <br> Packages: parse_tree.m and hlds.m
+ <li> 2. semantic analysis and error checking
+ (HLDS -> annotated HLDS)
+ <br> Package: check_hlds.m
+ <li> 3. high-level transformations
+ (annotated HLDS -> annotated HLDS)
+ <br> Packages: transform_hlds.m and analysis.m
+ </ul>
+ <li> back-ends
+ <ul type=disc>
+ <li> a. LLDS back-end
+ <br> Package: ll_backend.m
+ <ul type=disc>
+ <li> 3a. LLDS-back-end-specific HLDS->HLDS transformations
+ <li> 4a. code generation (annotated HLDS -> LLDS)
+ <li> 5a. low-level optimizations (LLDS -> LLDS)
+ <li> 6a. output code (LLDS -> C)
+ </ul>
+ <li> b. MLDS back-end
+ <br> Package: ml_backend.m
+ <ul type=disc>
+ <li> 4b. code generation (annotated HLDS -> MLDS)
+ <li> 5b. MLDS transformations (MLDS -> MLDS)
+ <li> 6b. output code
+ (MLDS -> C or MLDS -> MSIL or MLDS -> Java, etc.)
+ </ul>
+ <li> c. bytecode back-end
+ <br> Package: bytecode_backend.m
+ <ul type=disc>
+ <li> 4c. code generation (annotated HLDS -> bytecode)
+ </ul>
+ <li> d. Erlang back-end
+ <br> Package: erl_backend.m
+ <ul type=disc>
+ <li> 4d. code generation (annotated HLDS -> ELDS)
+ <li> 6d. output code
+ (ELDS -> Erlang)
+ </ul>
+ <li> There's also a package backend_libs.m which contains
+ modules which are shared between several different back-ends.
+ </ul>
+ </ul>
+<li> Finally, at the bottom of the dependency graph there is the package
+ libs.m. libs.m contains the option handling code, and also library
+ modules which are not sufficiently general or sufficiently useful to
+ go in the Mercury standard library.
+</ul>
+
+<p>
+
+In addition to the packages mentioned above, there are also packages
+for the build system: make.m contains the support for the `--make' option,
+and recompilation.m contains the support for the `--smart-recompilation'
+option.
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h2> DETAILED DESIGN </h2>
+
+<p>
+This section describes the role of each module in the compiler.
+For more information about the design of a particular module,
+see the documentation at the start of that module's source code.
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+<p>
+
+The action is co-ordinated from mercury_compile.m or make.m (if `--make'
+was specified on the command line).
+
+
+<h3> Option handling </h3>
+
+<p>
+
+Option handling is part of the libs.m package.
+
+<p>
+
+The command-line options are defined in the module options.m.
+mercury_compile.m calls library/getopt.m, passing the predicates
+defined in options.m as arguments, to parse them. It then invokes
+handle_options.m to postprocess the option set. The results are
+represented using the type globals, defined in globals.m.
+The globals structure is available in the HLDS representation,
+but it is passed around as a separate argument both before the HLDS is built
+and after it is no longer needed.
+
+<h3> Build system </h3>
+
+<p>
+
+Support for `--make' is in the make.m package,
+which contains the following modules:
+
+<dl>
+
+<dt> make.m
+ <dd>
+ Categorizes targets passed on the command line and passes
+ them to the appropriate module to be built.
+
+<dt> make.program_target.m
+ <dd>
+ Handles whole program `mmc --make' targets, including
+ executables, libraries and cleanup.
+
+<dt> make.module_target.m
+ <dd>
+ Handles targets built by a compilation action associated
+ with a single module, for example making interface files,
+
+<dt> make.dependencies.m
+ <dd>
+ Compute dependencies between targets and between modules.
+
+<dt> make.module_dep_file.m
+ <dd>
+ Record the dependency information for each module between
+ compilations.
+
+<dt> make.util.m
+ <dd>
+ Utility predicates.
+
+<dt> options_file.m
+ <dd>
+ Read the options files specified by the `--options-file'
+ option. Also used by mercury_compile.m to collect the value
+ of DEFAULT_MCFLAGS, which contains the auto-configured flags
+ passed to the compiler.
+
+</dl>
+
+The build process also invokes routines in compile_target_code.m,
+which is part of the backend_libs.m package (see below).
+
+<p>
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> FRONT END </h3>
+<h4> 1. Parsing </h4>
+<h5> The parse_tree.m package </h5>
+
+<p>
+The first part of parsing is in the parse_tree.m package,
+which contains the modules listed below
+(except for the library/*.m modules,
+which are in the standard library).
+This part produces the parse_tree.m data structure,
+which is intended to match up as closely as possible
+with the source code, so that it is suitable for tasks
+such as pretty-printing.
+
+<p>
+
+<ul>
+
+<li> <p> lexical analysis (library/lexer.m)
+
+<li> <p> stage 1 parsing - convert strings to terms. <p>
+
+ library/parser.m contains the code to do this, while
+ library/term.m and library/varset.m contain the term and varset
+ data structures that result, and predicates for manipulating them.
+
+<li> <p> stage 2 parsing - convert terms to `items'
+ (declarations, clauses, etc.)
+
+ <p>
+ The result of this stage is a parse tree that has a one-to-one
+ correspondence with the source code. The parse tree data structure
+ definition is in prog_data.m and prog_item.m, while the code to create
+ it is in prog_io.m and its submodules prog_io_dcg.m (which handles
+ clauses using Definite Clause Grammar notation), prog_io_goal.m (which
+ handles goals), prog_io_pragma.m (which handles pragma declarations),
+ prog_io_typeclass.m (which handles typeclass and instance
+ declarations), prog_io_type_defn.m (which handles type definitions),
+ prog_io_mutable.m (which handles initialize, finalize
+ and mutable declarations), prog_io_sym_name.m (which handles parsing
+ symbol names and specifiers) and prog_io_util.m (which defines
+    types and predicates needed by the other prog_io*.m modules).
+ builtin_lib_types.m contains definitions about types, type constructors
+ and function symbols that the Mercury implementation needs to know
+ about.
+
+ <p>
+
+ The modules prog_out.m and mercury_to_mercury.m contain predicates
+ for printing the parse tree.
+ prog_util.m contains some utility predicates
+ for manipulating the parse tree,
+ prog_mode contains utility predicates
+ for manipulating insts and modes,
+ prog_type contains utility predicates
+ for manipulating types,
+ prog_type_subst contains predicates
+ for performing substitutions on types,
+ prog_foreign contains utility predicates
+ for manipulating foreign code,
+ prog_mutable contains utility predicates
+ for manipulating mutable variables,
+ prog_event contains utility predicates for working with events,
+ while error_util.m contains predicates
+    for printing nicely formatted error messages.
+
+<li><p> imports and exports are handled at this point (modules.m)
+
+ <p>
+ read_module.m has code to read in modules in the form of .m,
+ .int, .opt etc files.
+
+ <p>
+ modules.m has the code to write out `.int', `.int2', `.int3',
+ `.d' and `.dep' files.
+
+ <p>
+ write_deps_file.m writes out Makefile fragments.
+
+ <p>
+ file_names.m does conversions between module names and file names.
+ It uses java_names.m, which contains predicates for dealing with names
+ of things in Java.
+
+ <p>
+ module_cmds.m handles the commands for manipulating interface files of
+ various kinds.
+
+ <p>
+ module_imports.m contains the module_imports type and its access
+ predicates, and the predicates that compute various sorts of
+ direct dependencies (those caused by imports) between modules.
+
+ <p>
+ deps_map.m contains the data structure for recording indirect
+ dependencies between modules, and the predicates for creating it.
+
+ <p>
+ source_file_map.m contains code to read, write and search
+ the mapping between module names and file names.
+
+<li><p> module qualification of types, insts and modes
+
+ <p>
+ module_qual.m - <br>
+ Adds module qualifiers to all types insts and modes,
+ checking that a given type, inst or mode exists and that
+    there is only one possible match. This is done here because
+ it must be done before the `.int' and `.int2' interface files
+ are written. This also checks whether imports are really needed
+ in the interface.
+
+ <p>
+ Notes on module qualification:
+ <ul>
+ <li> all types, typeclasses, insts and modes occurring in pred, func,
+ type, typeclass and mode declarations are module qualified by
+ module_qual.m.
+ <li> all types, insts and modes occurring in lambda expressions,
+ explicit type qualifications, and clause mode annotations
+ are module qualified in make_hlds.m.
+ <li> constructors occurring in predicate and function mode declarations
+ are module qualified during type checking.
+ <li> predicate and function calls and constructors within goals
+ are module qualified during mode analysis.
+ </ul>
+
+
+<li><p> reading and writing of optimization interfaces
+ (intermod.m and trans_opt.m -- these are part of the
+ hlds.m package, not the parse_tree.m package).
+
+ <p>
+ <module>.opt contains clauses for exported preds suitable for
+ inlining or higher-order specialization. The `.opt' file for the
+ current module is written after type-checking. `.opt' files
+ for imported modules are read here.
+    <module>.trans_opt contains termination analysis information
+ for exported preds (eventually it ought to contain other
+ "transitive" information too, e.g. for optimization, but
+ currently it is only used for termination analysis).
+ `.trans_opt' files for imported modules are read here.
+ The `.trans_opt' file for the current module is written
+ after the end of semantic analysis.
+
+<li><p> expansion of equivalence types (equiv_type.m)
+
+ <p>
+ `with_type` and `with_inst` annotations on predicate
+ and function type and mode declarations are also expanded.
+
+ <p>
+ Expansion of equivalence types is really part of type-checking,
+ but is done on the item_list rather than on the HLDS because it
+ turned out to be much easier to implement that way.
+</ul>
+
+<p>
+That's all the modules in the parse_tree.m package.
+
+<h5> The hlds.m package </h5>
+<p>
+Once the stages listed above are complete, we then convert from the parse_tree
+data structure to a simplified data structure, which no longer attempts
+to maintain a one-to-one correspondence with the source code.
+This simplified data structure is called the High Level Data Structure (HLDS),
+which is defined in the hlds.m package.
+
+<p>
+The last stage of parsing is this conversion to HLDS,
+which is done mostly by the following submodules
+of the make_hlds module in the hlds package.
+<dl>
+
+<dt>
+make_hlds_passes.m
+<dd>
+This submodule calls the others to perform the conversion, in several passes.
+(We cannot do everything in one pass;
+for example, we need to have seen a predicate's declaration
+before we can process its clauses.)
+
+<dt>
+superhomogeneous.m
+<dd>
+Performs the conversion of unifications into superhomogeneous form;
+an illustrative example appears just after this list.
+
+<dt>
+state_var.m
+<dd>
+Expands away state variable syntax.
+
+<dt>
+field_access.m
+<dd>
+Expands away field access syntax.
+
+<dt>
+goal_expr_to_goal.m
+<dd>
+Converts clauses from parse_tree format to hlds format.
+Eliminates universal quantification
+(using `all [Vs] G' ===> `not (some [Vs] (not G))')
+and implication (using `A => B' ===> `not(A, not B)').
+
+<dt>
+add_clause.m
+<dd>
+Oversees the conversion of clauses from parse_tree format to hlds format.
+Handles their addition to procedures,
+which is nontrivial in the presence of mode-specific clauses.
+
+<dt>
+add_pred.m
+<dd>
+Handles type and mode declarations for predicates.
+
+<dt>
+add_type.m
+<dd>
+Handles the declarations of types.
+
+<dt>
+add_mode.m
+<dd>
+Handles the declarations of insts and modes,
+including checking for circular insts and modes.
+
+<dt>
+add_special_pred.m
+<dd>
+Adds unify, compare, and (if needed) index and init predicates
+to the HLDS as necessary.
+
+<dt>
+add_solver.m
+<dd>
+Adds the casting predicates needed by solver types to the HLDS as necessary.
+
+<dt>
+add_class.m
+<dd>
+Handles typeclass and instance declarations.
+
+<dt>
+qual_info.m
+<dd>
+Handles the abstract data types used for module qualification.
+
+<dt>
+make_hlds_warn.m
+<dd>
+Looks for constructs that merit warnings,
+such as singleton variables and variables with overlapping scopes.
+
+<dt>
+make_hlds_error.m
+<dd>
+Error messages used by more than one submodule of make_hlds.m.
+
+<dt>
+add_pragma.m
+<dd>
+Adds most kinds of pragmas to the HLDS,
+including import/export pragmas, tabling pragmas and foreign code.
+
+</dl>
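+
+As a rough illustration of superhomogeneous form (p/1, f and g are made-up
+names), a clause such as
+
+<pre>
+    p(f(X, g(Y))) :- body.
+</pre>
+
+is conceptually rewritten so that each unification contains
+at most one function symbol:
+
+<pre>
+    p(HeadVar) :-
+        HeadVar = f(X, V),
+        V = g(Y),
+        body.
+</pre>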
+
+Fact table pragmas are handled by fact_table.m
+(which is part of the ll_backend.m package).
+That module also reads the facts from the declared file
+and compiles them into a separate C file
+used by the foreign_proc body of the relevant predicate.
+
+The conversion of the item list to HLDS also involves make_tags.m,
+which chooses the data representation for each discriminated union type
+by assigning tags to each functor.
+
+<p>
+The HLDS data structure itself is spread over the following modules:
+
+<ol>
+<li>
+hlds_args.m defines the parts of the HLDS concerned with predicate
+and function argument lists.
+<li>
+hlds_data.m defines the parts of the HLDS concerned with
+function symbols, types, insts, modes and determinisms;
+<li>
+hlds_goal.m defines the part of the HLDS concerned with the
+structure of goals, including the annotations on goals.
+<li>
+hlds_clauses.m defines the part of the HLDS concerning clauses.
+<li>
+hlds_rtti.m defines the part of the HLDS concerning RTTI.
+<li>
+const_struct.m defines the part of the HLDS concerning constant structures.
+<li>
+hlds_pred.m defines the part of the HLDS concerning predicates and procedures;
+<li>
+pred_table.m defines the tables that index predicates and functions
+on various combinations of (qualified and unqualified) names and arity.
+<li>
+hlds_module.m defines the top-level parts of the HLDS,
+including the type module_info.
+</ol>
+
+<p>
+The module hlds_out.m contains predicates to dump the HLDS to a file.
+These predicates print all the information the compiler has
+about each part of the HLDS.
+The module hlds_desc.m, by contrast, contains predicates
+that describe some parts of the HLDS (e.g. goals) with brief strings,
+suitable for use in progress messages used for debugging.
+
+<p>
+The hlds.m package also contains some utility modules providing
+library routines that are used by other modules that manipulate
+the HLDS:
+
+<dl>
+<dt> mark_tail_calls.m
+<dd> Marks directly tail recursive calls as such,
+and marks procedures containing directly tail recursive calls as such.
+
+<dt> hlds_code_util.m
+<dd> Utility routines for use during HLDS generation.
+
+<dt> goal_form.m
+<dd> Contains predicates for determining whether
+HLDS goals match various criteria.
+
+<dt> goal_util.m
+<dd> Contains various miscellaneous utility predicates for manipulating
+HLDS goals, e.g. for renaming variables.
+
+<dt> passes_aux.m
+<dd> Contains code to write progress messages, and higher-order code
+to traverse all the predicates defined in the current module
+and do something with each one.
+
+<dt> hlds_error_util.m:
+<dd> Utility routines for printing nicely formatted error messages
+for symptoms involving HLDS data structures.
+For symptoms involving only structures defined in prog_data,
+use parse_tree.error_util.
+
+<dt> code_model.m:
+<dd> Defines a type for classifying determinisms
+in ways useful to the various backends,
+and utility predicates on that type.
+
+<dt> arg_info.m:
+<dd> Utility routines that the various backends use
+to analyze procedures' argument lists
+and decide on parameter passing conventions.
+
+<dt> hhf.m:
+<dd> Facilities for translating the bodies of predicates
+to hyperhomogeneous form, for constraint based mode analysis.
+
+<dt> inst_graph.m:
+<dd> Defines the inst_graph data type,
+which describes the structures of insts for constraint based mode analysis,
+as well as predicates operating on that type.
+
+<dt> from_ground_term_util.m
+<dd> Contains types and predicates for operating on
+from_ground_term scopes and their contents.
+</dl>
+
+<h4> 2. Semantic analysis and error checking </h4>
+
+<p>
+This is the check_hlds.m package,
+with support from the mode_robdd.m package for constraint based mode analysis.
+
+<p>
+
+Any pass which can report errors or warnings must be part of this stage,
+so that the compiler does the right thing for options such as
+`--halt-at-warn' (which turns warnings into errors) and
+`--error-check-only' (which makes the compiler only compile up to this stage).
+
+<dl>
+
+<dt> implicit quantification
+
+ <dd>
+ quantification.m (XXX which for some reason is part of the hlds.m
+ package rather than the check_hlds.m package)
+ handles implicit quantification and computes
+ the set of non-local variables for each sub-goal.
+ It also expands away bi-implication (unlike the expansion
+ of implication and universal quantification, this expansion
+ cannot be done until after quantification).
+ This pass is called from the `transform' predicate in make_hlds.m.
+ <p>
+
+<dt> checking typeclass instances (check_typeclass.m)
+ <dd>
+ check_typeclass.m both checks that instance declarations satisfy all
+ the appropriate superclass constraints
+ (including functional dependencies)
+ and performs a source-to-source transformation on the
+ methods from the instance declarations.
+ The transformed code is checked for type, mode, uniqueness, purity
+ and determinism correctness by the later passes, which has the effect
+ of checking the correctness of the instance methods themselves
+    (i.e. that the instance methods match those expected by the typeclass
+ declaration).
+ During the transformation,
+ pred_ids and proc_ids are assigned to the methods for each instance.
+
+ <p>
+ While checking that the superclasses of a class are satisfied
+    by the instance declaration, a set of constraint_proofs is built up
+ for the superclass constraints. These are used by polymorphism.m when
+ generating the base_typeclass_info for the instance.
+
+ <p>
+ This module also checks that there are no ambiguous pred/func
+ declarations (that is, it checks that all type variables in constraints
+ are determined by type variables in arguments),
+ checks that there are no cycles in the typeclass hierarchy,
+ and checks that each abstract instance has a corresponding
+ typeclass instance.
+ <p>
+
+<dt> check user defined insts for consistency with types
+ <dd>
+ inst_check.m checks that all user defined bound insts are consistent
+ with at least one type in scope
+ (i.e. that the set of function symbols
+    in the bound list for the inst is a subset of the allowed function
+ symbols for at least one type in scope).
+
+ <p>
+ A warning is issued if it finds any user defined bound insts not
+ consistent with any types in scope.
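+
+    <p>
+    A hypothetical illustration of the consistency being checked:
+    <pre>
+    :- type fruit
+        --->    apple
+        ;       orange
+        ;       lemon.
+
+    % Consistent: {orange, lemon} is a subset of fruit's function symbols.
+    :- inst citrus == bound(orange ; lemon).
+
+    % Would draw a warning if no type in scope has a function
+    % symbol banana/0.
+    :- inst dubious == bound(banana).
+    </pre>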
+ <p>
+
+<dt> improving the names of head variables
+ <dd>
+ headvar_names.m tries to replace names of the form HeadVar__n
+ with actual names given by the programmer.
+ <p>
+    For efficiency, this phase is not a standalone pass,
+ but is instead invoked by the typechecker.
+
+<dt> type checking
+
+ <dd>
+ <ul>
+ <li> typecheck.m handles type checking, overloading resolution &
+ module name resolution, and almost fully qualifies all predicate
+ and functor names. It sets the map(var, type) field in the
+ pred_info. However, typecheck.m doesn't figure out the pred_id
+ for function calls or calls to overloaded predicates; that can't
+ be done in a single pass of typechecking, and so it is done
+    later on (in post_typecheck.m, for both preds and function calls).
+ <li> typecheck_info.m defines the main data structures used by
+ typechecking.
+ <li> typecheck_errors.m handles outputting of type errors.
+ <li> typeclasses.m checks typeclass constraints, and
+ any redundant constraints that are eliminated are recorded (as
+ constraint_proofs) in the pred_info for future reference.
+ <li> type_util.m contains utility predicates dealing with types
+ that are used in a variety of different places within the compiler
+ <li> post_typecheck.m may also be considered to logically be a part
+ of typechecking, but it is actually called from purity
+ analysis (see below). It contains the stuff related to
+ type checking that can't be done in the main type checking pass.
+ It also removes assertions from further processing.
+ post_typecheck.m reports errors for unbound type and inst variables,
+ for unsatisfied type class constraints and for indistinguishable
+ predicate or function modes.
+ </ul>
+ <p>
+
+<dt> assertions
+
+ <dd>
+ assertion.m (XXX in the hlds.m package)
+ is the abstract interface to the assertion table.
+ Currently all the compiler does is type check the assertions and
+ record for each predicate that is used in an assertion, which
+    assertion it is used in. The setting up of the assertion table occurs
+ in post_typecheck.finish_assertion.
+ <p>
+
+<dt> purity analysis
+
+ <dd>
+ purity.m is responsible for purity checking, as well as
+ defining the <CODE>purity</CODE> type and a few public
+ operations on it. It also calls post_typecheck.m to
+ complete the handling of predicate
+ overloading for cases which typecheck.m is unable to handle,
+ and to check for unbound type variables.
+ Elimination of double negation is also done here; that needs to
+ be done after quantification analysis and before mode analysis.
+ Calls to `private_builtin.unsafe_type_cast/2' are converted
+ into `generic_call(unsafe_cast, ...)' goals here.
+ <p>
+
+<dt> implementation-defined literals
+
+ <dd>
+ implementation_defined_literals.m replaces unifications
+ of the form <CODE>Var = $name</CODE> by unifications to string
+ or integer constants.
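+    <p>
+    For example (assuming the usual $pred and $line literals):
+    <pre>
+    Msg = $pred,   % becomes a unification with a string constant
+                   % naming the enclosing predicate
+    Line = $line   % becomes a unification with an integer constant
+                   % giving the line number of the literal
+    </pre>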
+ <p>
+
+<dt> polymorphism transformation
+
+ <dd>
+ polymorphism.m handles introduction of type_info arguments for
+ polymorphic predicates and introduction of typeclass_info arguments
+ for typeclass-constrained predicates.
+ This phase needs to come before mode analysis so that mode analysis
+ can properly reorder code involving existential types.
+ (It also needs to come before simplification so that simplify.m's
+ optimization of goals with no output variables doesn't do the
+ wrong thing for goals whose only output is the type_info for
+ an existentially quantified type parameter.)
+ <p>
+ This phase also
+ converts higher-order predicate terms into lambda expressions,
+ and copies the clauses to the proc_infos in preparation for
+ mode analysis.
+ <p>
+ The polymorphism.m module also exports some utility routines that
+ are used by other modules. These include some routines for generating
+ code to create type_infos, which are used by simplify.m and magic.m
+ when those modules introduce new calls to polymorphic procedures.
+ <p>
+ When it has finished, polymorphism.m calls clause_to_proc.m to
+ make duplicate copies of the clauses for each different mode of
+ a predicate; all later stages work on procedures, not predicates.
+ <p>
+
+<dt> mode analysis
+
+ <dd>
+ <ul>
+ <li> modes.m is the top analysis module.
+ It checks that procedures are mode-correct.
+ <li> modecheck_goal.m does most of the work.
+ It handles the tasks that are common to all kinds of goals,
+ including annotating each goal with a delta-instmap
+ that specifies the changes in instantiatedness of each
+ variable over that goal, and does the analysis of several
+ kinds of goals.
+    <li> modecheck_conj.m is the sub-module which analyses conjunctions.
+         It reorders code as necessary.
+ <li> modecheck_unify.m is the sub-module which analyses
+ unification goals.
+ It also module qualifies data constructors.
+ <li> modecheck_call.m is the sub-module which analyses calls.
+
+ <p>
+
+ The following sub-modules are used:
+ <dl>
+ <dt> mode_info.m
+ <dd>
+ The main data structure for mode analysis.
+ <dt> delay_info.m
+ <dd>
+ A sub-component of the mode_info data
+ structure used for storing the information
+ for scheduling: which goals are currently
+ delayed, what variables they are delayed on, etc.
+ <dt> modecheck_util.m
+ <dd> Utility predicates useful during mode analysis.
+ <dt> instmap.m (XXX in the hlds.m package)
+ <dd>
+ Defines the instmap and instmap_delta ADTs
+ which store information on what instantiations
+ a set of variables may be bound to.
+ <dt> inst_match.m
+ <dd>
+ This contains the code for examining insts and
+ checking whether they match.
+ <dt> inst_util.m
+ <dd>
+ This contains the code for creating new insts from
+ old ones: unifying them, merging them and so on.
+ <dt> mode_errors.m
+ <dd>
+ This module contains all the code to
+                        generate error messages for mode errors.
+ </dl>
+ <li> mode_util.m contains miscellaneous useful predicates dealing
+ with modes (many of these are used by lots of later stages
+ of the compiler)
+ <li> mode_debug.m contains utility code for tracing the actions
+ of the mode checker.
+ <li> delay_partial_inst.m adds a post-processing pass on mode-correct
+ procedures to avoid creating intermediate, partially instantiated
+ data structures.
+ </ul>
+ <p>
+
+<dt> constraint based mode analysis
+
+ <dd> This is an experimental alternative
+ to the usual mode analysis algorithm.
+ It works by building a system of boolean constraints
+ about where (parts of) variables can be bound,
+ and then solving those constraints.
+
+ <ul>
+ <li> mode_constraints.m is the module that finds the constraints
+ and adds them to the constraint store.
+ <li> mode_ordering.m is the module that uses solutions of the
+ constraint system to find an ordering for the goals in conjunctions.
+ <li> mode_constraint_robdd.m is the interface to the modules
+ that perform constraint solving using reduced ordered binary decision
+ diagrams (robdds).
+ <li> We have several implementations of solvers using robdds.
+ Each solver is in a module named mode_robdd.X.m, and they all belong
+ to the top-level mode_robdd.m.
+ </ul>
+ <p>
+
+<dt> constraint based mode analysis propagation solver
+
+ <dd> This is a new alternative
+ for the constraint based mode analysis algorithm.
+        It will perform conjunct reordering for Mercury
+ programs of a limited syntax (it calls error if
+ it encounters higher order code or a parallel
+ conjunction, or is asked to infer modes).
+
+
+ <ul>
+ <li> prop_mode_constraints.m is the interface to the old
+ mode_constraints.m. It builds constraints for an SCC.
+ <li> build_mode_constraints.m is the module that traverses a predicate
+ to build constraints for it.
+ <li> abstract_mode_constraints.m describes data structures for the
+ constraints themselves.
+ <li> ordering_mode_constraints.m solves constraints to determine
+ the producing and consuming goals for program variables, and
+ performs conjunct reordering based on the result.
+ <li> mcsolver.m contains the constraint solver used by
+ ordering_mode_constraints.m.
+ </ul>
+ <p>
+
+<dt> indexing and determinism analysis
+
+ <dd>
+ <ul>
+    <li> switch_detection.m transforms into switches those disjunctions
+    in which several disjuncts test the same variable against different
+    function symbols; a sketch of this transformation follows this list.
+ <li> cse_detection.m looks for disjunctions in which each disjunct tests
+ the same variable against the same function symbols, and hoists any
+ such unifications out of the disjunction.
+ If cse_detection.m modifies the code,
+ it will re-run mode analysis and switch detection.
+ <li> det_analysis.m annotates each goal with its determinism;
+ it inserts cuts in the form of "some" goals wherever the determinisms
+ and delta instantiations of the goals involved make it necessary.
+ Any errors found during determinism analysis are reported by
+ det_report.m.
+ det_util.m contains utility predicates used in several modules.
+ </ul>
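+
+    <p>
+    As a rough, made-up sketch of the disjunction-to-switch transformation
+    performed by switch_detection.m:
+    <pre>
+    % A disjunction in which each disjunct unifies X with a
+    % different function symbol, e.g.
+    (
+        X = [],
+        Length = 0
+    ;
+        X = [_ | Tail],
+        length(Tail, TailLength),
+        Length = TailLength + 1
+    )
+    % is recognised as a switch on X, which det_analysis.m can then
+    % see covers all of X's function symbols and so cannot fail.
+    </pre>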
+ <p>
+
+<dt> checking of unique modes (unique_modes.m)
+
+ <dd>
+ unique_modes.m checks that non-backtrackable unique modes were
+ not used in a context which might require backtracking.
+ Note that what unique_modes.m does is quite similar to
+ what modes.m does, and unique_modes calls lots of predicates
+ defined in modes.m to do it.
+ <p>
+
+<dt> stratification checking
+
+ <dd>
+ The module stratify.m implements the `--warn-non-stratification'
+ warning, which is an optional warning that checks for loops
+ through negation.
+ <p>
+
+<dt> try goal expansion
+
+ <dd>
+ try_expand.m expands `try' goals into calls to predicates in the
+ `exception' module instead.
+ <p>
+
+<dt> simplification (simplify.m)
+
+ <dd>
+ simplify.m finds and exploits opportunities for simplifying the
+ internal form of the program, both to optimize the code and to
+ massage the code into a form the code generator will accept.
+ It also warns the programmer about any constructs that are so simple
+ that they should not have been included in the program in the first
+ place. (That's why this pass needs to be part of semantic analysis:
+ because it can report warnings.)
+ simplify.m converts complicated unifications into procedure calls.
+ simplify.m calls common.m which looks for (a) construction unifications
+ that construct a term that is the same as one that already exists,
+ or (b) repeated calls to a predicate with the same inputs, and replaces
+ them with assignment unifications.
+ simplify.m also attempts to partially evaluate calls to builtin
+ procedures if the inputs are all constants (this is const_prop.m
+ in the transform_hlds.m package).
+ simplify.m also calls format_call.m to look for
+    (possibly) incorrect uses of string.format and io.format.
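+    <p>
+    A made-up sketch of case (a) above, using an invented constructor
+    node/2:
+    <pre>
+        X = node(Left, Right),
+        ...
+        Y = node(Left, Right)   % constructs the same term again
+
+        % is replaced (when it is safe to do so) by
+
+        X = node(Left, Right),
+        ...
+        Y = X
+    </pre>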
+ <p>
+
+<dt> unused imports (unused_imports.m)
+
+ <dd>
+ unused_imports.m determines which imports of the module
+ are not required for the module to compile. It also identifies
+ which imports of a module can be moved from the interface to the
+ implementation.
+ <p>
+
+<dt> xml documentation (xml_documentation.m)
+
+ <dd>
+    xml_documentation.m outputs an XML representation of all the
+ declarations in the module. This XML representation is designed
+ to be transformed via XSL into more human readable documentation.
+ <p>
+
+</dl>
+
+<h4> 3. High-level transformations </h4>
+
+<p>
+This is the transform_hlds.m package.
+
+<p>
+
+The first pass of this stage does tabling transformations (table_gen.m).
+This involves the insertion of several calls to tabling predicates
+defined in mercury_builtin.m and the addition of some scaffolding structure.
+Note that this pass can change the evaluation methods of some procedures to
+eval_table_io, so it should come before any passes that require definitive
+evaluation methods (e.g. inlining).
+
+<p>
+
+The next pass of this stage is a code simplification, namely
+removal of lambda expressions (lambda.m):
+
+<ul>
+<li>
+ lambda.m converts lambda expressions into higher-order predicate
+ terms referring to freshly introduced separate predicates.
+ This pass needs to come after unique_modes.m to ensure that
+ the modes we give to the introduced predicates are correct.
+ It also needs to come after polymorphism.m since polymorphism.m
+ doesn't handle higher-order predicate constants.
+</ul>
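+
+<p>
+As a hypothetical sketch of this transformation (the predicate names
+and types below are invented for illustration):
+<pre>
+    % A clause containing a lambda expression:
+    p(C, P) :-
+        P = (pred(X::in, Y::out) is det :- Y = X + C).
+
+    % After lambda.m: the lambda body becomes a fresh predicate,
+    % and the lambda expression becomes a closure over that
+    % predicate and the captured variable C:
+    p(C, P) :-
+        P = p_lambda_1(C).
+
+    :- pred p_lambda_1(int::in, int::in, int::out) is det.
+    p_lambda_1(C, X, Y) :-
+        Y = X + C.
+</pre>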
+
+(Is there any good reason why lambda.m comes after table_gen.m?)
+
+<p>
+
+The next pass also simplifies the HLDS by expanding out the atomic goals
+implementing Software Transactional Memory (stm_expand.m).
+
+<p>
+
+Expansion of equivalence types (equiv_type_hlds.m)
+
+<ul>
+<li>
+ This pass expands equivalences which are not meant to
+ be visible to the user of imported modules. This
+ is necessary for the IL back-end and in some cases
+ for `:- pragma export' involving foreign types on
+ the C back-end.
+
+ <p>
+
+ It's also needed by the MLDS->C back-end, for
+ --high-level-data, and for cases involving abstract
+ equivalence types which are defined as "float".
+</ul>
+
+<p>
+
+Exception analysis. (exception_analysis.m)
+
+<ul>
+<li>
+ This pass annotates each module with information about whether
+ the procedures in the module may throw an exception or not.
+</ul>
+
+<p>
+
+The next pass is termination analysis. The various modules involved are:
+
+<ul>
+<li>
+termination.m is the control module. It sets the argument size and
+termination properties of builtin and compiler generated procedures,
+invokes term_pass1.m and term_pass2.m
+and writes .trans_opt files and error messages as appropriate.
+<li>
+term_pass1.m analyzes the argument size properties of user-defined procedures.
+<li>
+term_pass2.m analyzes the termination properties of user-defined procedures.
+<li>
+term_traversal.m contains code common to the two passes.
+<li>
+term_errors.m defines the various kinds of termination errors
+and prints the messages appropriate for each.
+<li>
+term_util.m defines the main types used in termination analysis
+and contains utility predicates.
+<li>
+post_term_analysis.m contains error checking routines and optimizations
+that depend upon the information obtained by termination analysis.
+</ul>
+
+<p>
+
+Trail usage analysis. (trailing_analysis.m)
+
+<ul>
+<li>
+ This pass annotates each module with information about whether
+ the procedures in the module modify the trail or not. This
+ information can be used to avoid redundant trailing operations.
+</ul>
+
+<p>
+
+Minimal model tabling analysis. (tabling_analysis.m)
+
+<ul>
+<li>
+ This pass annotates each goal in a module with information about
+ whether the goal calls procedures that are evaluated using
+ minimal model tabling. This information can be used to reduce
+ the overhead of minimal model tabling.
+
+</ul>
+
+<p>
+
+Most of the remaining HLDS-to-HLDS transformations are optimizations:
+
+<ul>
+<li> specialization of higher-order and polymorphic predicates where the
+    values of the higher-order/type_info/typeclass_info arguments are known
+ (higher_order.m)
+
+<li> attempt to introduce accumulators (accumulator.m). This optimizes
+ procedures whose tail consists of independent associative computations
+ or independent chains of commutative computations into a tail
+ recursive form by the introduction of accumulators. If lco is turned
+ on it can also transform some procedures so that only construction
+ unifications are after the recursive call. This pass must come before
+ lco, unused_args (eliminating arguments makes it hard to relate the
+ code back to the assertion) and inlining (can make the associative
+ call disappear).
+ <p>
+ This pass makes use of the goal_store.m module, which is a dictionary-like
+ data structure for storing HLDS goals.
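+    <p>
+    A hypothetical sketch of the transformation:
+    <pre>
+        % A predicate whose recursive clause performs an associative
+        % computation (+) after the recursive call:
+        sum([], 0).
+        sum([X | Xs], Sum) :-
+            sum(Xs, Sum0),
+            Sum = Sum0 + X.
+
+        % can be rewritten to thread an accumulator, making the
+        % recursive call a tail call:
+        sum(Xs, Sum) :-
+            sum_acc(Xs, 0, Sum).
+
+        sum_acc([], Acc, Acc).
+        sum_acc([X | Xs], Acc0, Sum) :-
+            Acc1 = Acc0 + X,
+            sum_acc(Xs, Acc1, Sum).
+    </pre>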
+
+<li> inlining (i.e. unfolding) of simple procedures (inlining.m)
+
+<li> loop_inv.m: loop invariant hoisting. This transformation moves
+ computations within loops that are the same on every iteration to the outside
+ of the loop so that the invariant computations are only computed once. The
+ transformation turns a single looping predicate containing invariant
+ computations into two: one that computes the invariants on the first
+ iteration and then loops by calling the second predicate with extra arguments
+ for the invariant values. This pass should come after inlining, since
+ inlining can expose important opportunities for loop invariant hoisting.
+ Such opportunities might not be visible before inlining because only
+ *part* of the body of a called procedure is loop-invariant.
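+    <p>
+    A made-up sketch of the shape of this transformation
+    (expensive_lookup/2 and the data are invented):
+    <pre>
+        % Before: expensive_lookup/2 recomputes the same value
+        % on every iteration.
+        process([], _Table, []).
+        process([X | Xs], Table, [Y | Ys]) :-
+            expensive_lookup(Table, Limit),
+            Y = min(X, Limit),
+            process(Xs, Table, Ys).
+
+        % After hoisting (roughly): the first iteration computes the
+        % invariant, and an auxiliary predicate carries it as an
+        % extra argument.
+        process([], _Table, []).
+        process([X | Xs], Table, [Y | Ys]) :-
+            expensive_lookup(Table, Limit),
+            Y = min(X, Limit),
+            process_inv(Xs, Table, Limit, Ys).
+
+        process_inv([], _Table, _Limit, []).
+        process_inv([X | Xs], Table, Limit, [Y | Ys]) :-
+            Y = min(X, Limit),
+            process_inv(Xs, Table, Limit, Ys).
+    </pre>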
+
+<li> deforestation and partial evaluation (deforest.m). This optimizes
+ multiple traversals of data structures within a conjunction, and
+ avoids creating intermediate data structures. It also performs
+ loop unrolling where the clause used is known at compile time.
+ deforest.m makes use of the following sub-modules (`pd_' stands for
+ "partial deduction"):
+ <ul>
+ <li> constraint.m transforms goals so that goals which can fail are
+ executed earlier.
+ <li> pd_cost.m contains some predicates to estimate the improvement
+ caused by deforest.m.
+ <li> pd_debug.m produces debugging output.
+ <li> pd_info.m contains a state type for deforestation.
+ <li> pd_term.m contains predicates to check that the deforestation
+ algorithm terminates.
+ <li> pd_util.m contains various utility predicates.
+ </ul>
+
+<li> issue warnings about unused arguments from predicates, and create
+specialized versions without them (unused_args.m); type_infos are often unused.
+
+<li> delay_construct.m pushes construction unifications to the right in
+    semidet conjunctions, in an effort to reduce the probability that they
+    will need to be executed.
+
+<li> unneeded_code.m looks for goals whose results are either not needed
+ at all, or needed in some branches of computation but not others. Provided
+ that the goal in question satisfies some requirements (e.g. it is pure,
+ it cannot fail etc), it either deletes the goal or moves it to the
+ computation branches where its output is needed.
+
+<li> lco.m finds predicates whose implementations would benefit
+ from last call optimization modulo constructor application.
+
+<li> elimination of dead procedures (dead_proc_elim.m). Inlining, higher-order
+ specialization and the elimination of unused args can make procedures dead
+    even when the user's own code doesn't, and automatically constructed unification and
+ comparison predicates are often dead as well.
+
+<li> tupling.m looks for predicates that pass around several arguments,
+ and modifies the code to pass around a single tuple of these arguments
+ instead if this looks like reducing the cost of parameter passing.
+
+<li> untupling.m does the opposite of tupling.m: it replaces tuple arguments
+ with their components. This can be useful both for finding out how much
+ tupling has already been done manually in the source code, and to break up
+ manual tupling in favor of possibly more profitable automatic tupling.
+
+<li> dep_par_conj.m transforms parallel conjunctions to add the wait and signal
+ operations required by dependent AND parallelism. To maximize the amount of
+ parallelism available, it tries to push the signals as early as possible
+ in producers and the waits as late as possible in the consumers, creating
+ specialized versions of predicates as needed.
+
+<li> parallel_to_plain_conj.m transforms parallel conjunctions to plain
+ conjunctions, for use in grades that do not support AND-parallelism.
+
+<li> granularity.m tries to ensure that programs do not generate too much
+ parallelism. Its goal is to minimize parallelism's overhead while still
+ gaining all the parallelism the machine can actually exploit.
+
+<li> implicit_parallelism.m is a package whose task is to introduce parallelism
+ into sequential code automatically. Its submodules are
+ <ul>
+ <li> introduce_parallelism.m does the main task of the package.
+ <li> push_goals_together.m performs a transformation that allows
+ introduce_parallelism.m to do a better job.
+ </ul>
+
+<li> float_regs.m wraps higher-order terms which use float registers
+ if passed in contexts where regular registers would be expected,
+ and vice versa.
+
+</ul>
+
+<p>
+
+The module transform.m contains stuff that is supposed to be useful
+for high-level optimizations (but which is not yet used).
+
+<p>
+
+The last three HLDS-to-HLDS transformations implement
+term size profiling (size_prof.m and complexity.m) and
+deep profiling (deep_profiling.m, in the ll_backend.m package).
+Both passes insert into procedure bodies, among other things,
+calls to procedures (some of which are impure)
+that record profiling information.
+
+<h4> 4. Intermodule analysis framework </h4>
+
+<p>
+This is the analysis.m package.
+
+<p>
+
+The framework can be used by a few analyses in the transform_hlds.m package.
+It is documented in the analysis/README file.
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> a. LLDS BACK-END </h3>
+
+<p>
+This is the ll_backend.m package.
+
+<h4> 3a. LLDS-specific HLDS -> HLDS transformations </h4>
+
+Before LLDS code generation, there are a few more passes which
+annotate the HLDS with information used for LLDS code generation,
+or perform LLDS-specific transformations on the HLDS:
+
+ <dl>
+ <dt> reducing the number of variables that have to be
+ saved across procedure calls (saved_vars.m)
+ <dd>
+ We do this by putting the code that generates
+ the value of a variable just before the use of
+ that variable, duplicating the variable and the
+ code that produces it if necessary, provided
+ the cost of doing so is smaller than the cost
+ of saving and restoring the variable would be.
+
+ <dt> transforming procedure definitions to reduce the number
+ of variables that need their own stack slots
+ (stack_opt.m)
+ <dd>
+ The main algorithm in stack_opt.m figures out when
+ variable A can be reached from a cell pointed to by
+ variable B, so that storing variable B on the stack
+ obviates the need to store variable A on the stack
+ as well.
+ This algorithm relies on an implementation of
+ the maximal matching algorithm in matching.m.
+ <dt> migration of builtins following branched structures
+ (follow_code.m)
+ <dd>
+			This transformation improves the effectiveness of
+			follow_vars.m (see below).
+ <dt> simplification again (simplify.m, in the check_hlds.m
+ package)
+ <dd>
+ We run this pass a second time in case the intervening
+ transformations have created new opportunities for
+ simplification. It needs to be run immediately
+ before code generation, because it enforces some
+ invariants that the LLDS code generator relies on.
+ <dt> annotation of goals with liveness information (liveness.m)
+ <dd>
+ This records the birth and death of each variable
+ in the HLDS goal_info.
+ <dt> allocation of stack slots
+ <dd>
+ This is done by stack_alloc.m, with the assistance of
+ the following modules:
+
+ <ul>
+ <li> live_vars.m works out which variables need
+ to be saved on the stack when.
+
+ <li> graph_colour.m (in the libs.m package)
+ contains the algorithm that
+ stack_alloc.m calls to convert sets of variables
+ that must be saved on the stack at the same time
+ to an assignment of a stack slot to each such variable.
+ </ul>
+ <dt> allocating the follow vars (follow_vars.m)
+ <dd>
+ Traverses backwards over the HLDS, annotating some
+ goals with information about what locations variables
+ will be needed in next. This allows us to generate
+ more efficient code by putting variables in the right
+ spot directly. This module is not called from
+ mercury_compile_llds_back_end.m; it is called from
+ store_alloc.m.
+ <dt> allocating the store map (store_alloc.m)
+ <dd>
+ Annotates each branched goal with variable location
+ information so that we can generate correct code
+ by putting variables in the same spot at the end
+ of each branch.
+ <dt> computing goal paths (goal_path.m
+ in the check_hlds.m package)
+ <dd>
+ The goal path of a goal defines its position in
+ the procedure body. This transformation attaches
+ its goal path to every goal, for use by the debugger.
+ </dl>
+
+<h4> 4a. Code generation. </h4>
+<dl>
+<dt> code generation
+
+ <dd>
+ Code generation converts HLDS into LLDS.
+ For the LLDS back-end, this is also the point at which we
+ insert code to handle debugging and trailing, and to do
+ heap reclamation on failure.
+ The top level code generation module is proc_gen.m,
+ which looks after the generation of code for procedures
+ (including prologues and epilogues).
+ The predicate for generating code for arbitrary goals is in code_gen.m,
+ but that module handles only sequential conjunctions; it calls
+ other modules to handle other kinds of goals:
+
+ <ul>
+ <li> ite_gen.m (if-then-elses)
+ <li> call_gen.m (predicate calls and also calls to
+ out-of-line unification procedures)
+ <li> disj_gen.m (disjunctions)
+ <li> par_conj_gen.m (parallel conjunctions)
+ <li> unify_gen.m (unifications)
+ <li> switch_gen.m (switches), which has sub-modules
+ <ul>
+ <li> dense_switch.m
+ <li> lookup_switch.m
+ <li> string_switch.m
+ <li> tag_switch.m
+ <li> switch_case.m
+ <li> switch_util.m -- this is in the backend_libs.m
+ package, since it is also used by MLDS back-end
+ </ul>
+ <li> commit_gen.m (commits)
+ <li> pragma_c_gen.m (embedded C code)
+ </ul>
+
+ <p>
+
+ The code generator also calls middle_rec.m to do middle recursion
+ optimization, which is implemented during code generation.
+
+ <p>
+
+ The code generation modules make use of
+ <dl>
+ <dt> code_info.m
+ <dd>
+ The main data structure for the code generator.
+ <dt> var_locn.m
+ <dd>
+ This defines the var_locn type, which is a
+ sub-component of the code_info data structure;
+ it keeps track of the values and locations of variables.
+ It implements eager code generation.
+ <dt> exprn_aux.m
+ <dd>
+ Various utility predicates.
+ <dt> code_util.m
+ <dd>
+ Some miscellaneous preds used for code generation.
+ <dt> lookup_util.m
+ <dd>
+ Some miscellaneous preds used for lookup switch
+ (and lookup disjunction) generation.
+ <dt> continuation_info.m
+ <dd>
+ For accurate garbage collection, collects
+ information about each live value after calls,
+ and saves information about procedures.
+ <dt> trace_gen.m
+ <dd>
+ Inserts calls to the runtime debugger.
+ <dt> trace_params.m (in the libs.m package, since it
+ is considered part of option handling)
+ <dd>
+ Holds the parameter settings controlling
+ the handling of execution tracing.
+ </dl>
+
+<dt> code generation for `pragma export' declarations (export.m)
+<dd> This is handled separately from the other parts of code generation.
+ mercury_compile*.m calls `export.produce_header_file' to produce
+ C code fragments which declare/define the C functions which are the
+ interface stubs for procedures exported to C.
+
+<dt> generation of constants for RTTI data structures
+<dd> This could also be considered a part of code generation,
+ but for the LLDS back-end this is currently done as part
+ of the output phase (see below).
+
+</dl>
+
+<p>
+
+The result of code generation is the Low Level Data Structure (llds.m),
+which may also contain some data structures whose types are defined in rtti.m.
+The code for each procedure is generated as a tree of code fragments
+which is then flattened.
+
+<h4> 5a. Low-level optimization (LLDS). </h4>
+
+<p>
+
+Most of the various LLDS-to-LLDS optimizations are invoked from optimize.m.
+They are:
+
+<ul>
+<li> optimization of jumps to jumps (jumpopt.m)
+
+<li> elimination of duplicate code sequences within procedures (dupelim.m)
+
+<li> elimination of duplicate procedure bodies (dupproc.m,
+invoked directly from mercury_compile_llds_back_end.m)
+
+<li> optimization of stack frame allocation/deallocation (frameopt.m)
+
+<li> filling branch delay slots (delay_slot.m)
+
+<li> dead code and dead label removal (labelopt.m)
+
+<li> peephole optimization (peephole.m)
+
+<li> introduction of local C variables (use_local_vars.m)
+
+<li> removal of redundant assignments, i.e. assignments that assign a value
+that the target location already holds (reassign.m)
+
+</ul>
+
+In addition, stdlabel.m performs standardization of labels.
+This is not an optimization itself,
+but it allows other optimizations to be evaluated more easily.
+
+<p>
+
+The module opt_debug.m contains utility routines used for debugging
+these LLDS-to-LLDS optimizations.
+
+<p>
+
+Several of these optimizations (frameopt and use_local_vars) also
+use livemap.m, a module that finds the set of locations live at each label.
+
+<p>
+
+The use_local_vars transformation also introduces
+references to temporary variables in extended basic blocks
+in the LLDS representation of the C code.
+The transformation to insert the block scopes
+and declare the temporary variables is performed by wrap_blocks.m.
+
+<p>
+
+Depending on which optimization flags are enabled,
+optimize.m may invoke many of these passes multiple times.
+
+<p>
+
+Some of the low-level optimization passes use basic_block.m,
+which defines predicates for converting sequences of instructions to
+basic block format and back, as well as opt_util.m, which contains
+miscellaneous predicates for LLDS-to-LLDS optimization.
+
+
+<h4> 6a. Output C code </h4>
+
+<ul>
+<li> type_ctor_info.m
+ (in the backend_libs.m package, since it is shared with the MLDS back-end)
+ generates the type_ctor_gen_info structures that list
+ items of information (including unification, index and compare predicates)
+ associated with each declared type constructor that go into the static
+ type_ctor_info data structure. If the type_ctor_gen_info structure is not
+ eliminated as inaccessible, this module adds the corresponding type_ctor_info
+ structure to the RTTI data structures defined in rtti.m,
+ which are part of the LLDS.
+
+<li> base_typeclass_info.m
+ (in the backend_libs.m package, since it is shared with the MLDS back-end)
+ generates the base_typeclass_info structures that
+ list the methods of a class for each instance declaration. These are added to
+ the RTTI data structures, which are part of the LLDS.
+
+<li> stack_layout.m generates the stack_layout structures for
+ accurate garbage collection. Tables are created from the data
+ collected in continuation_info.m.
+
+ Stack_layout.m uses prog_rep.m to generate bytecode representations
+ of procedure bodies for use by the declarative debugger.
+
+<li> Type_ctor_info structures and stack_layout structures both contain
+ pseudo_type_infos, which are type_infos with holes for type variables;
+ these are generated by pseudo_type_info.m
+ (in the backend_libs.m package, since it is shared with the MLDS back-end).
+
+<li> llds_common.m extracts static terms from the main body of the LLDS, and
+ puts them at the front. If a static term originally appeared several times,
+ it will now appear as a single static term with multiple references to it.
+ [XXX FIXME this module has now been replaced by global_data.m]
+
+<li> transform_llds.m is responsible for doing any source to source
+ transformations on the llds which are required to make the C output
+ acceptable to various C compilers. Currently computed gotos can have
+ their maximum size limited to avoid a fixed limit in lcc.
+
+<li> Final generation of C code is done in llds_out.m, which subcontracts the
+ output of RTTI structures to rtti_out.m and of other static
+ compiler-generated data structures (such as those used by the debugger,
+ the deep profiler, and in the future by the garbage collector)
+ to layout_out.m.
+</ul>
+
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> b. MLDS BACK-END </h3>
+
+<p>
+
+This is the ml_backend.m package.
+
+<p>
+
+The original LLDS code generator generates very low-level code,
+since the LLDS was designed to map easily to RISC architectures.
+We have developed a new back-end that generates much higher-level
+code, suitable for generating Java, high-level C, etc.
+This back-end uses the Medium Level Data Structure (mlds.m) as its
+intermediate representation.
+
+<h4> 3b. pre-passes to annotate/transform the HLDS </h4>
+
+<p>
+Before code generation there is a pass which annotates the HLDS with
+information used for code generation:
+
+<ul>
+<li> mark_static_terms.m (in the hlds.m package) marks
+ construction unifications which can be implemented using static constants
+ rather than heap allocation.
+</ul>
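+
+<p>
+A made-up example of the distinction:
+<pre>
+    X = [1, 2, 3]
+    % All the arguments are constants, so the term can be emitted
+    % as a static constant in the generated code.
+
+    Y = [N | Ns]
+    % N and Ns are only known at run time, so this construction
+    % must allocate on the heap.
+</pre>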
+
+<p>
+For the MLDS back-end, we've tried to keep the code generator simple.
+So we prefer to do things as HLDS to HLDS transformations where possible,
+rather than complicating the HLDS to MLDS code generator.
+Thus we have a pass which transforms the HLDS to handle trailing:
+
+<ul>
+<li> add_trail_ops.m inserts code to manipulate the trail,
+ in particular ensuring that we apply the appropriate
+ trail operations before each choice point, when execution
+ resumes after backtracking, and whenever we do a commit.
+ The trail operations are represented as (and implemented as)
+ calls to impure procedures defined in library/private_builtin.m.
+<li> add_heap_ops.m is very similar to add_trail_ops.m;
+ it inserts code to do heap reclamation on backtracking.
+</ul>
+
+<h4> 4b. MLDS code generation </h4>
+<ul>
+<li> ml_proc_gen.m is the top module of the package that converts HLDS code
+ to MLDS. Its main submodule is ml_code_gen.m, which handles the tasks
+ common to all kinds of goals, as well as the tasks specific to some
+ goals (conjunctions, if-then-elses, negations). For other kinds of goals,
+ ml_code_gen.m invokes some other submodules:
+ <ul>
+ <li> ml_unify_gen.m
+ <li> ml_closure_gen.m
+ <li> ml_call_gen.m
+ <li> ml_foreign_proc_gen.m
+ <li> ml_commit_gen.m
+ <li> ml_disj_gen.m
+ <li> ml_switch_gen.m, which calls upon:
+ <ul>
+ <li> ml_lookup_switch.m
+ <li> ml_string_switch.m
+ <li> ml_tag_switch.m
+ <li> ml_simplify_switch.m
+ <li> switch_util.m (in the backend_libs.m package,
+ since it is also used by LLDS back-end)
+ </ul>
+ </ul>
+ The main data structure used by the MLDS code generator is defined
+ in ml_gen_info.m, while global data structures (those created at
+ module scope) are handled in ml_global_data.m.
+ The module ml_accurate_gc.m handles provisions for accurate garbage
+ collection, while the modules ml_code_util.m, ml_target_util.m and
+ ml_util.m provide some general utility routines.
+<li> ml_type_gen.m converts HLDS types to MLDS.
+<li> type_ctor_info.m and base_typeclass_info.m generate
+ the RTTI data structures defined in rtti.m and pseudo_type_info.m
+ (those four modules are in the backend_libs.m package, since they
+ are shared with the LLDS back-end)
+ and then rtti_to_mlds.m converts these to MLDS.
+</ul>
+
+<h4> 5b. MLDS transformations </h4>
+<ul>
+<li> ml_tailcall.m annotates the MLDS with information about tailcalls.
+ It also has a pass to implement the `--warn-non-tail-recursion' option.
+<li> ml_optimize.m does MLDS->MLDS optimizations
+<li> ml_elim_nested.m does two MLDS transformations that happen
+ to have a lot in common: (1) eliminating nested functions
+ and (2) adding code to handle accurate garbage collection.
+</ul>
+
+<h4> 6b. MLDS output </h4>
+
+<p>
+There are currently four backends that generate code from MLDS:
+one generates C/C++ code,
+one generates assembler (by interfacing with the GCC back-end),
+one generates Microsoft's Intermediate Language (MSIL or IL),
+and one generates Java.
+
+<ul>
+<li>mlds_to_c.m converts MLDS to C/C++ code.
+</ul>
+
+<p>
+
+The MLDS->asm backend is logically part of the MLDS back-ends,
+but it is in a module of its own (mlds_to_gcc.m), rather than being
+part of the ml_backend package, so that we can distribute a version
+of the Mercury compiler which does not include it. There is a wrapper
+module called maybe_mlds_to_gcc.m which is generated at configuration time
+so that mlds_to_gcc.m will be linked in iff the GCC back-end is available.
+
+<p>
+
+The MLDS->IL backend is broken into several submodules.
+<ul>
+<li> mlds_to_ilasm.m converts MLDS to IL assembler and writes it to a .il file.
+<li> mlds_to_il.m converts MLDS to IL
+<li> ilds.m contains representations of IL
+<li> ilasm.m contains output routines for writing IL to assembler.
+<li> il_peephole.m performs peephole optimization on IL instructions.
+</ul>
+After IL assembler has been emitted, ILASM is invoked to turn the .il
+file into a .dll or .exe.
+
+<p>
+
+The MLDS->Java backend is broken into two submodules.
+<ul>
+<li> mlds_to_java.m converts MLDS to Java and writes it to a .java file.
+<li> java_util.m contains some utility routines.
+</ul>
+After the Java code has been emitted, a Java compiler (normally javac)
+is invoked to turn the .java file into a .class file containing Java bytecodes.
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> c. BYTECODE BACK-END </h3>
+
+<p>
+This is the bytecode_backend.m package.
+
+<p>
+
+The Mercury compiler can translate Mercury programs into bytecode for
+interpretation by a bytecode interpreter. The intent of this is to
+achieve faster turn-around time during development. However, the
+bytecode interpreter has not yet been written.
+
+<ul>
+<li> bytecode.m defines the internal representation of bytecodes, and contains
+ the predicates to emit them in two forms. The raw bytecode form is emitted
+ into <filename>.bytecode for interpretation, while a human-readable
+    into &lt;filename&gt;.bytecode for interpretation, while a human-readable
+    form is emitted into &lt;filename&gt;.bytedebug for visual inspection.
+<li> bytecode_gen.m contains the predicates that translate HLDS into bytecode.
+
+<li> bytecode_data.m contains the predicates that translate ints, strings
+ and floats into bytecode.
+</ul>
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> d. ERLANG BACK-END </h3>
+
+<p>
+This is the erl_backend.m package.
+
+<p>
+
+The Mercury compiler can translate Mercury programs into Erlang.
+The intent of this is to take advantage of the features of the
+Erlang implementation (concurrency, fault tolerance, etc.)
+However, the backend is still incomplete.
+This back-end uses the Erlang Data Structure (elds.m) as its
+intermediate representation.
+
+<h4> 4d. ELDS code generation </h4>
+<ul>
+<li> erl_code_gen.m converts HLDS code to ELDS.
+ The following sub-modules are used to handle different constructs:
+ <ul>
+ <li> erl_unify_gen.m
+ <li> erl_call_gen.m
+ </ul>
+ The module erl_code_util.m provides utility routines for
+ ELDS code generation.
+<li> erl_rtti.m converts RTTI data structures defined in rtti.m into
+ ELDS functions which return the same information when called.
+</ul>
+
+<h4> 6d. ELDS output </h4>
+
+<ul>
+<li>elds_to_erlang.m converts ELDS to Erlang code.
+</ul>
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> SMART RECOMPILATION </h3>
+
+<p>
+This is the recompilation.m package.
+
+<p>
+
+The Mercury compiler can record program dependency information
+to avoid unnecessary recompilations when an imported module's
+interface changes in a way which does not invalidate previously
+compiled code.
+
+<ul>
+<li> recompilation.m contains types used by the other smart
+ recompilation modules.
+
+<li> recompilation_version.m generates version numbers for program items
+ in interface files.
+
+<li> recompilation_usage.m works out which program items were used
+ during a compilation.
+
+<li> recompilation_check.m is called before recompiling a module.
+ It uses the information written by recompilation_version.m and
+ recompilation_usage.m to work out whether the recompilation is
+ actually needed.
+</ul>
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> MISCELLANEOUS </h3>
+
+
+The modules special_pred.m (in the hlds.m package) and unify_proc.m
+(in the check_hlds.m package) contain stuff for handling the special
+compiler-generated predicates which are generated for
+each type: unify/2, compare/3, and index/1 (used in the
+implementation of compare/3).
+
+<p>
+The following module is part of the transform_hlds.m package.
+
+ <dl>
+ <dt> dependency_graph.m:
+ <dd>
+ This contains predicates to compute the call graph for a
+ module, and to print it out to a file.
+ (The call graph file is used by the profiler.)
+ The call graph may eventually also be used by det_analysis.m,
+ inlining.m, and other parts of the compiler which could benefit
+ from traversing the predicates in a module in a bottom-up or
+ top-down fashion with respect to the call graph.
+ </dl>
+
+<p>
+The following modules are part of the backend_libs.m package.
+
+ <dl>
+ <dt> arg_pack:
+ <dd>
+ This module defines utility routines to do with argument
+ packing.
+
+ <dt> builtin_ops:
+ <dd>
+ This module defines the types unary_op and binary_op
+ which are used by several of the different back-ends:
+ bytecode.m, llds.m, and mlds.m.
+
+ <dt> c_util:
+ <dd>
+ This module defines utility routines useful for generating
+ C code. It is used by both llds_out.m and mlds_to_c.m.
+
+ <dt> name_mangle:
+ <dd>
+ This module defines utility routines useful for mangling
+ names to forms acceptable as identifiers in target languages.
+
+ <dt> compile_target_code.m
+ <dd>
+ Invoke C, C#, IL, Java, etc. compilers and linkers to compile
+ and link the generated code.
+
+ </dl>
+
+<p>
+The following modules are part of the libs.m package.
+
+ <dl>
+
+ <dt> file_util.m:
+ <dd>
+ Predicates to deal with files, such as searching for a file
+ in a list of directories.
+
+ <dt> process_util.m:
+ <dd>
+ Predicates to deal with process creation and signal handling.
+ This module is mainly used by make.m and its sub-modules.
+
+ <dt> timestamp.m
+ <dd>
+ Contains an ADT representing timestamps used by smart
+ recompilation and `mmc --make'.
+
+    <dt> graph_colour.m
+ <dd>
+ Graph colouring. <br>
+ This is used by the LLDS back-end for register allocation
+
+ <dt> lp.m
+ <dd>
+ Implements the linear programming algorithm for optimizing
+ a set of linear constraints with respect to a linear
+    cost function. This is used by the termination analyser.
+
+ <dt> lp_rational.m
+ <dd>
+ Implements the linear programming algorithm for optimizing
+ a set of linear constraints with respect to a linear
+ cost function, for rational numbers.
+    This is used by the termination analyser.
+
+ <dt> rat.m
+ <dd>
+ Implements rational numbers.
+
+ <dt> compiler_util.m:
+ <dd>
+ Generic utility predicates, mainly for error handling.
+ </dl>
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> CURRENTLY UNDOCUMENTED </h3>
+
+<ul>
+<li> mmc_analysis.m
+</ul>
+
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+<h3> CURRENTLY USELESS </h3>
+
+ <dl>
+ <dt> atsort.m (in the libs.m package)
+ <dd>
+ Approximate topological sort.
+ This was once used for traversing the call graph,
+ but nowadays we use relation.atsort from library/relation.m.
+
+ </dl>
+
+<hr>
+<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
+
+</body>
+</html>
+
diff --git a/development/developers/gc_and_c_code.html b/development/developers/gc_and_c_code.html
new file mode 100644
index 0000000..d7990e0
--- /dev/null
+++ b/development/developers/gc_and_c_code.html
@@ -0,0 +1,77 @@
+<html>
+<head>
+
+<title>
+ Information On LLDS Accurate Garbage Collection And C Code
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+<hr>
+<!-------------------------->
+
+When handwritten code is called from Mercury, the garbage collection
+scheduler doesn't know anything about the code, so it cannot replace
+the succip on the stack (if there is one) with the collector's address.
+
+<p>
+
+If the handwritten code calls no other code, then this is fine: the
+scheduler knows it can replace the succip variable, and when a
+proceed() occurs execution will return to Mercury code which it
+knows about.
+
+<p>
+
+If handwritten code calls other handwritten code, we have a problem,
+as succip will be saved on the stack and we don't know where on
+the stack it is stored. So we use a global variable 'saved_succip' into
+which succip is saved. Care must be taken to save saved_succip on the
+stack so it doesn't get clobbered. <br>
+So
+ <pre>
+ detstackvar(1) = (int) succip;
+ </pre>
+becomes
+ <pre>
+ detstackvar(1) = (int) saved_succip;
+ saved_succip = (int) succip;
+ </pre>
+
+and, when restoring,
+ <pre>
+ succip = (int) detstackvar(1);
+ </pre>
+becomes
+ <pre>
+ succip = saved_succip;
+ saved_succip = detstackvar(1);
+ </pre>
+
+(With appropriate LVALUE_CASTs).
+
+<p>
+
+In this way, garbage collection always knows where the succip is stored
+in handwritten code.
+
+<p>
+
+The garbage collection code must check that the current execution is not
+still in a handwritten predicate - if it is, it must re-schedule (essentially
+just the same as before).
+
+<p>
+
+
+<hr>
+<!-------------------------->
+
+Last update was $Date: 2003/11/05 08:42:10 $ by $Author: fjh $@cs.mu.oz.au. <br>
+</body>
+</html>
+
diff --git a/development/developers/glossary.html b/development/developers/glossary.html
new file mode 100644
index 0000000..0b39094
--- /dev/null
+++ b/development/developers/glossary.html
@@ -0,0 +1,140 @@
+<html>
+<head>
+
+
+<title>
+ Glossary Of Terms Used In Mercury
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+<hr>
+<!-------------------------->
+
+<dl>
+
+<dt> assertion
+ <dd>
+ A particular form of promise which claims to the compiler
+ that the specified goal will always hold. If useful, the
+ compiler may use this information to perform optimisations.
+
+<dt> class context
+ <dd>
+ The typeclass constraints on a predicate or function.
+
+<dt> codeinfo
+ <dd>
+ a structure used by codegen.m
+
+<dt> HLDS
+ <dd>
+ The "High Level Data Structure". See hlds.m.
+
+<dt> inst
+ <dd>
+ instantiatedness. An inst holds three different sorts of
+ information. It indicates whether a variable is free, partially
+ bound, or ground. If a variable is bound, it may indicate
+ which functor(s) the variable can be bound to. Also,
+ an inst records whether a value is unique, or whether
+ it may be aliased.
+
+<dt> liveness
+ <dd>
+ this term is used to mean two quite different things!
+ <ol>
+ <li> There's a notion of liveness used in mode analysis:
+ a variable is live if either it or an alias might be
+ used later on in the computation.
+ <li> There's a different notion of liveness used for code generation:
+ a variable becomes live (is "born") when the register or stack
+ slot holding the variable first acquires a value, and dies when
+ that value will definitely not be needed again within this procedure.
+ This notion is low-level because it could depend on the low-level
+ representation details (in particular, `no_tag' representations
+ ought to affect liveness).
+ </ol>
+
+<dt> LLDS
+ <dd>
+ The "Low Level Data Structure". See llds.m.
+
+<dt> mode
+ <dd>
+ this has two meanings:
+ <ol>
+ <li> a mapping from one instantiatedness to another
+ (the mode of a single variable)
+ <li> a mapping from an initial instantiatedness of a predicate's
+ arguments to their final instantiatedness
+ (the mode of a predicate)
+ </ol>
+
+<dt> moduleinfo
+ <dd>
+ Another name for the HLDS.
+
+<dt> NYI
+ <dd>
+ Not Yet Implemented.
+
+<dt> predinfo
+ <dd>
+ the structure in HLDS which contains information about
+ a predicate.
+
+<dt> proc (procedure)
+ <dd>
+ a particular mode of a predicate.
+
+<dt> procinfo
+ <dd>
+ the structure in HLDS which contains
+ information about a procedure.
+
+<dt> promise
+ <dd>
+ A declaration that specifies a law that holds for the
+ predicates/functions in the declaration. Thus, examples of promises
+ are assertions and promise ex declarations. More generally, the term
+ promise is often used for a declaration where extra information is
+ given to the compiler which it cannot check itself, for example in
+ purity pragmas.
+
+<dt> promise ex
+ <dd>
+ A shorthand for promise_exclusive, promise_exhaustive, and
+ promise_exclusive_exhaustive declarations. These declarations
+ are used to tell the compiler determinism properties of a
+ disjunction.
+
+<dt> RTTI
+ <dd>
+ The "RunTime Type Information". See rtti.m. A copy of a paper given
+ on this topic is available
+    <a href="http://www.cs.mu.oz.au/research/mercury/information/papers/rtti_ppdp.ps.gz">here</a> in zipped Postscript format.
+
+<dt> super-homogeneous form (SHF)
+ <dd>
+ a simplified, flattened form of goals, where
+ each unification is split into its component pieces; in particular,
+ the arguments of each predicate call and functor must be distinct
+ variables.
+
+<dt> switch
+ <dd>
+ a disjunction which does a case analysis on the toplevel
+ functor of some variable.
+</dl>
+
+<hr>
+<!-------------------------->
+
+</body>
+</html>
+
diff --git a/development/developers/release_checklist.html b/development/developers/release_checklist.html
new file mode 100644
index 0000000..7b94cab
--- /dev/null
+++ b/development/developers/release_checklist.html
@@ -0,0 +1,192 @@
+<html>
+<head>
+
+
+<title>Release Checklist for the Mercury Project</title>
+</head>
+
+<body bgcolor="#ffffff" text="#000000">
+
+<hr>
+<!-------------------------->
+
+This file contains a checklist of the steps that must be
+taken when releasing a new version of Mercury.
+
+<hr>
+<!-------------------------->
+
+<ol>
+<li> Items for the next version (1.0) only:
+ <ol>
+ <li>
+ Update w3/include/globals.inc as explained in the XXX comment there.
+ Don't commit your changes to the main branch yet, because
+ otherwise it would be installed on the WWW pages overnight.
+ <li>
+ Make sure that the runtime headers contain no symbols (function names,
+ variable names, type names, struct/enum tags or macros) that do not
+ begin with MR_.
+ </ol>
+
+<li> Make sure configure.in is updated to check for new features.
+
+<li> Update the RELEASE_NOTES, NEWS, WORK_IN_PROGRESS, HISTORY,
+ LIMITATIONS and BUGS files, and the compiler/notes/todo.html file.
+ Don't forget to update the version number in RELEASE_NOTES for major
+ releases.
+ The HISTORY file should include the NEWS files from previous releases
+    (reordered if appropriate -- the HISTORY file is in chronological
+    order whereas the NEWS file is in reverse chronological order).
+
+<li> Update the WWW documentation in the `w3' directory.
+ Note that the sources for these HTML documents are in the files named
+ include/*.inc and *.php3.
+ <ul>
+ <li> Update the RELEASE_INFO file with the name and CVS tag
+ of the new release.
+
+ <li> For minor releases, update release.html with a new entry about
+ this release (put it at the top of the page), and provide a
+ new link to download the release. See old-release.html for
+ examples.
+
+ <li> For major releases, you will need to create some new web pages:<br>
+ <dl>
+ <dt> release-VERSION.html
+ <dd> The release notes for this version.
+
+ <dt> release-VERSION-bugs.html
+ <dd> Any outstanding bugs for this release.
+ This should be the same as the BUGS file.
+
+ <dt> release-VERSION-contents.html
+ <dd> The contents of this distribution.
+ This should be the same as in the RELEASE_NOTES file.
+
+ </dl>
+ You will need to add these new files to the list in the Makefile.
+ You will also need to update release.html and
+ current-release-bugs.html.
+ Move the old information in release.html to old-release.html.
+ Modify release.html to refer to the new html files you have
+ created, and change the links to download the release.
+
+ <li> Update the CURRENT_RELEASE and BETA_RELEASE variables in
+ tools/generate_index_html so that the new release is listed
+ first on the download page.
+
+ <li> Don't commit your changes to the main branch yet, because
+ otherwise it would be installed on the WWW pages overnight.
+ </ul>
+
+<li> Use `cvs tag' or `cvs rtag' to tag all the files with a
+ `version-x_y_z' tag. The cvs modules that need to be tagged
+ are `mercury', `clpr', `tests', and `mercury-gcc'.
+
+<li> Edit the tools/test_mercury script in
+ /home/mercury/public/test_mercury/scripts/mercury:
+ set the RELEASE_VERSION and CHECKOUT_OPTS variables
+ as explained in the comments there.
+
+<li> Run tools/run_all_tests_from_cron on earth.
+ (Or just wait 24 hours or so.) <p>
+
+ This should have the effect of checking out a fresh copy, and doing
+
+ <pre>
+ touch Mmake.params &&
+ autoconf &&
+ mercury_cv_low_tag_bits=2 \
+ mercury_cv_bits_per_word=32 \
+ mercury_cv_unboxed_floats=no \
+ sh configure --prefix=$INSTALL_DIR &&
+ mmake MMAKEFLAGS='EXTRA_MCFLAGS="-O5 --opt-space"' tar
+ </pre>
+
+ <p>
+
+ If it passes all the tests, it should put the resulting tar file in
+ /home/mercury/public/test_mercury/test_dirs/earth/mercury-latest-stable
+ and ftp://ftp.mercury.cs.mu.oz.au/pub/mercury/beta-releases.
+
+<li> Test it on lots of architectures. <br>
+
+ <p>
+ Make sure you test all the programs in the `samples' and `extras'
+ directories.
+
+<li> Build binary distributions for those architectures.
+ This step is now automated as part of tools/test_mercury,
+ with the resulting binaries going in
+ /home/mercury/public/test_mercury/test_dirs/$HOST/mercury-latest-{un,}stable.
+
+<li> Make sure to test the binary distributions!
+
+<li> Move the gzipped tar files from the /pub/mercury/beta-releases directory
+ to the main /pub/mercury directory on the Mercury ftp site
+ ftp://ftp.mercury.cs.mu.oz.au/pub/mercury.
+ Copy the binary distributions to the same place.
+ <p>
+
+ For the Stonybrook mirror, email Konstantinos Sagonas
+ (Kostis.Sagonas at cs.kuleuven.ac.be) to tell him to copy them to
+ ftp://ftp.cs.sunysb.edu/pub/XSB/mercury. <p>
+ Unfortunately this mirror is not automated, so don't worry about it
+ except for major releases or important bug fixes. <p>
+
+ The mirror at ftp://ftp.csd.uu.se/pub/Mercury is also automated.
+ Sometimes the link to Sweden can cause delays.
+ The person to contact regarding this one is Thomas Lindgren
+ (thomasl at csd.uu.se).
+
+<li> Prepare a new "mercury-VERSION.lsm" file for this Mercury release
+ (use the one already uploaded to
+ ftp://sunsite.unc.edu/pub/Linux/Incoming as a template). The
+ version number, date, file sizes, and file names need to be updated
+ for a new release.
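+	<p>
+	For reference, an LSM entry is a short plain-text record roughly
+	along these lines (all field values below are placeholders; the
+	previously uploaded file remains the authoritative template):
+	<pre>
+	Begin3
+	Title:          Mercury
+	Version:        X.Y.Z
+	Entered-date:   YYYY-MM-DD
+	Description:    One or two lines describing the release (placeholder).
+	Author:         (contact address of the Mercury team)
+	Primary-site:   ftp.mercury.cs.mu.oz.au /pub/mercury
+	Copying-policy: GPL/LGPL
+	End
+	</pre>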
+
+<li> Create new binary packages for Linux packaging systems.
+ The .spec file can be used to create .rpm packages.
+ The command <i>dpkg-buildpackage -rfakeroot</i> on hydra can be
+ used to create .deb packages, although you should probably
+ let (or make) the official maintainer do this so it can be
+ PGP signed and uploaded.
+
+<li> Upload "mercury-VERSION-compiler.tar.gz" and "mercury-VERSION.lsm" to
+ ftp://sunsite.unc.edu/incoming/Linux. They will be moved to
+ /pub/Linux/Incoming fairly quickly, and eventually should be moved
+ to /pub/linux/devel/lang/mercury.
+
+<li> Send "mercury-VERSION.lsm" to the lsm robot at lsm at execpc.com
+ with the subject "add".
+
+<li> Append "mercury-VERSION.lsm" to a release notice and send it to
+ linux-announce at news.ornl.gov. This will post to comp.os.linux.announce.
+
+<li> Email mercury-announce at cs.mu.oz.au and cross-post announcement to
+ comp.lang.misc, comp.lang.prolog, comp.lang.functional, comp.object.logic,
+ and for major releases also to comp.compilers and gnu.announce.
+
+<li> Update the Mercury WWW home page (/local/dept/w3/unsupported/docs/mercury/*)
+	by committing the changes you made earlier.
+
+<li> For major releases, move the commitlog file from its current location
+ (in $CVSROOT/CVSROOT/commitlog) into a file specific to that release,
+ such as "commitlog-0.12". Create a new, empty commitlog file, making
+ sure it is readable by everyone and writeable by group mercury (the
+	commitlog file is not managed by cvs itself; it is maintained by
+ our own check-in scripts, so you don't need to do anything special to
+ create this file). Email the local mailing list to say that you have
+ done this.
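+	<p>
+	A sketch of the commands involved (the release number and details
+	below are illustrative only):
+	<pre>
+	# Illustrative; use the actual release number.
+	cd $CVSROOT/CVSROOT
+	mv commitlog commitlog-0.12
+	touch commitlog
+	chgrp mercury commitlog
+	chmod a+r,g+w commitlog
+	</pre>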
+
+</ol>
+
+
+<hr>
+<!-------------------------->
+
+Last update was $Date: 2005/09/12 09:35:14 $ by $Author: mark $@cs.mu.oz.au. <br>
+</body>
+</html>
+
diff --git a/development/developers/reviews.html b/development/developers/reviews.html
new file mode 100644
index 0000000..40ba4ce
--- /dev/null
+++ b/development/developers/reviews.html
@@ -0,0 +1,534 @@
+
+<html>
+<head>
+
+
+
+<title>
+ Reviews
+</title>
+</head>
+
+<body
+ bgcolor="#ffffff"
+ text="#000000"
+>
+
+
+<hr>
+<!----------------------->
+
+<h1> Reviews </h1> <p>
+
+This file outlines the policy on reviews for the Mercury system.
+
+<hr>
+<!----------------------->
+
+<h2> Reviewable material </h2>
+
+<p>
+
+All changes to the Mercury repository, including the compiler,
+documentation, www pages, library predicates, runtime system, and tools
+need to be reviewed.
+
+<p>
+
+<h2> Review process </h2>
+
+<ol>
+<li> Make sure you are working with an up-to-date copy of the
+ module you are using.
+<li> If the change is a code change, test it. See the "Testing" section
+	of the coding standards. Testing may take time - don't forget
+	that steps 3, 4 and 5 can be done in parallel.
+<li> Create diff - use `cvs diff -u'. New files should be
+ appended verbatim to the end of the diff, with descriptions
+ indicating the name of the file.
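+	For example (the file names here are hypothetical):
+	<pre>
+	cvs diff -u > change.diff
+	# Append each new file verbatim, preceded by a line naming it.
+	# compiler/new_pass.m is a hypothetical new file.
+	echo "New file: compiler/new_pass.m" >> change.diff
+	cat compiler/new_pass.m >> change.diff
+	</pre>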
+<li> Write log message for this change - use template (see below).
+<li> Review diff and log message yourself. (see below)
+<li> Send to mercury-reviews at cs.mu.oz.au, with the subject
+ "for review: <short description of change>".
+ Nominate a reviewer at top of diff (see below).
+ (If this change has been reviewed once before, it might
+ fall into the "commit before review" category -- see the
+ section on exceptions).
+<li> Wait for review (see below).
+<li> Fix any changes suggested.
+<li> Repeat above steps until approval.
+<li> Commit change (see below).
+</ol>
+
+
+<h2> Log Messages </h2>
+
+Use the template that cvs provides.
+
+<pre>
+ Estimated hours taken: _____
+
+ <overview or general description of changes>
+
+ <directory>/<file>:
+ <detailed description of changes>
+</pre>
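+
+<p>
+
+For instance, a filled-in message might look like this (the change
+described here is invented, purely for illustration):
+
+<pre>
+    Estimated hours taken: 2
+
+    Fix a hypothetical off-by-one error in string padding.
+
+    library/string.m:
+        Correct the loop bound in the (hypothetical) pad_left helper.
+
+    tests/hard_coded/pad_test.m:
+        Add a regression test for the above.
+</pre>
+
+<p>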
+
+In the estimated hours, include all the time you spent on this change -
+including debugging time.
+
+<p>
+
+The description should state why the changes were made, not just what
+the changes were. All file modifications related to the same change
+should be committed together, and use the same log message, even over
+multiple directories. The reason for this is that the log messages can
+be viewed on a file-by-file basis, and it is useful to know that a small
+change of a file in a subdirectory is related to a larger change in
+other subdirectories.
+
+<p>
+
+For very small changes, the <overview or general description> can be
+omitted, but the <detailed description> should stay.
+
+<p>
+
+If adding a new feature, this is a good place to describe the feature,
+how it works, how to turn it on and off, and any present limitations of
+the feature (note that all this should also be documented within the
+change, as well). If fixing a bug, describe both the bug and the fix.
+
+<p>
+
+<h2> Self-Review </h2>
+
+<p>
+
+You should also review your own code first, and fix any obvious
+mistakes. Where possible add documentation - if there was something you
+had to understand when making the change, document it - it makes it
+easier to review the change if it is documented, as well as generally
+improving the state of documentation of the compiler.
+
+<p>
+
+<h2> Review </h2>
+
+<p>
+
+We're now posting all diffs to mercury-reviews at cs.mu.oz.au.
+
+<p>
+
+The reasons for posting to mercury-reviews are:
+
+<ul>
+<li> To increase everyone's awareness of what changes are taking
+ place.
+<li> Give everyone interested a chance to review your code, not
+ just the reviewer. Remember, your changes may impact upon
+ the uncommitted work of others, so they may want to give
+ input.
+<li> Allow other people to read the reviewer's comments - so the same
+ problems don't have to be explained again and again.
+<li> People can try to see how your changes worked without having
+ to figure out how to get cvs to generate the right set of
+ diffs.
+<li> Important decisions are often made or justified in reviews, so
+ these should be recorded.
+</ul>
+
+You should try to match the reviewer to the code - someone familiar with
+a section of code can review faster and is more likely to catch errors.
+Put a preamble at the start of your diff to nominate who you would like
+to review the diff.
+
+<p>
+
+<h2> Waiting and approval </h2>
+
+<p>
+
+Waiting for approval need not be wasted time. This is a good time to
+start working on something else, clean up unused workspaces, etc. In
+particular, you might want to run long-running tests that have not yet
+been run on your change (different grades, different architectures,
+optimisation levels, etc).
+
+<p>
+
+The reviewer(s) should reply, indicate any problems that need to be
+corrected, and whether the change can be committed yet. Design issues
+may need to be fully justified before you commit. You may need to fix
+the problems, then go through another iteration of the review process,
+or you may be able to just fix a few small problems, then commit.
+
+<p>
+
+<h2> Committing </h2>
+
+If you have added any new files or directories, then before committing
+you must check the group-id and permissions of the newly created files
+or directories in the CVS repository. Files should be readable by
+group mercury and directories should be both readable and writable by
+group mercury. (Setting of permissions will be enforced by the
+pre-commit check script `CVSROOT/check.pl'.)
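+
+<p>
+
+A sketch of how to check and fix this by hand (the path below is
+hypothetical):
+
+<pre>
+    # Illustrative paths; adjust to the actual new files/directories.
+    ls -lR $CVSROOT/mercury/compiler/new_dir
+    chgrp -R mercury $CVSROOT/mercury/compiler/new_dir
+    chmod g+r  $CVSROOT/mercury/compiler/new_dir/*,v
+    chmod g+rw $CVSROOT/mercury/compiler/new_dir
+</pre>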
+
+<p>
+
+Use the log message you prepared for the review when committing.
+
+<p>
+
+<h2> Exceptions: Commit before review </h2>
+
+<p>
+
+The only time changes should be committed before being reviewed is when they
+satisfy all of the following conditions:
+
+<ul>
+<li> (a) the change is simple
+
+<li> (b) you are absolutely sure the change will not introduce bugs
+
+<li> (c) you are sure that the change will pass review with only
+ trivial corrections (spelling errors in comments, etc.)
+
+<li> (d) there are no new design decisions or changes to previous
+ design decisions in your change (the status quo should
+ be the default; you must convince the reviewer(s) of
+ the validity of your design decisions before the code
+ is committed).
+
+<li> (e) you will be around the next day or two to fix the bugs
+ that you were sure could never happen
+
+<li> (f) committing it now will make life significantly easier
+ for you or someone else in the group
+</ul>
+
+<p>
+
+If the compiler is already broken (i.e. it doesn't pass its nightly
+tests), and your change is a bug fix, then it's not so important to be
+absolutely sure that your change won't introduce bugs. You should
+still be careful, though. Make sure you review the diffs yourself.
+
+<p>
+
+Similarly, if the code you are modifying is a presently unused part of
+the code - for example a new feature that nobody else is using, that is
+switchable, and is switched off by default, or a new tool, or an `under
+development' webpage that is not linked to by other webpages yet - the
+criteria are a bit looser. Don't use this one too often - only for
+small changes. You don't want to go a long way down the wrong track
+with your new feature before finding there's a much better way.
+
+<p>
+
+If these conditions are satisfied, then there shouldn't be any problem
+with mailing the diff, then committing, then fixing any problems that
+come up afterwards, provided you're pretty sure everything will be okay.
+This is particularly true if others are waiting for your work.
+
+<p>
+
+Usually, a change that has already been reviewed falls into this
+category, provided you have addressed the reviewer's comments, and
+there are no disputes over design decisions. If the reviewer has
+specifically asked for another review, or there were a large number of
+comments at the review, you should not commit before a second review.
+
+<p>
+
+If you are going to commit before the review, use the subject line:<br>
+ "diff: <short description of change>".
+
+<h2> Exceptions: No review </h2>
+
+<p>
+
+The only time changes should be committed without review by a second
+person is when they satisfy all of the following conditions:
+
+<ul>
+<li> (a) it is a very small diff that is obviously correct <br>
+ eg: fix typographic errors <br>
+	    fix syntax errors you accidentally introduced <br>
+ fix spelling of people's names <br> <p>
+
+ These usually don't need to be reviewed by a second
+ person. Make sure that you review your own changes,
+ though. Also make sure your log message is more
+		informative than "fixed a typo"; try "s/foo/bar" or
+ something so that if you did make a change that people
+ don't approve of, at least it's seen quickly.
+
+<li> (b) it is not going to be publicly visible <br>
+ eg: Web pages, documentation, library, man pages. <p>
+
+	Changes to publicly visible stuff should always be
+ reviewed. It's just too easy to make spelling errors,
+ write incorrect information, commit libel, etc. This
+ stuff reflects on the whole group, so it shouldn't be
+ ignored.
+</ul>
+
+If your change falls into this category, you should still send the
+diff and log message to mercury-reviews, but use the subject line:<br>
+"trivial diff: <short description of change>".
+
+
+<hr>
+<!-------------------------->
+
+Last update was $Date: 2003/01/15 08:20:13 $ by $Author: mjwybrow $@cs.mu.oz.au. <br>
+</body>
+</html>
+
diff --git a/development/developers/todo.html b/development/developers/todo.html
new file mode 100644
index 0000000..790bcce
--- /dev/null
+++ b/development/developers/todo.html
@@ -0,0 +1,385 @@
+<html>
+<head>
+
+
+<title>To Do List</title>
+</head>
+
+<body bgcolor="#ffffff" text="#000000">
+
+<hr>
+<!--======================-->
+
+<h1> TODO LIST </h1>
+
+<hr>
+<!--======================-->
+
+<p>
+
+
+For more information on any of these issues, contact
+mercury at csse.unimelb.edu.au.
+
+<p>
+
+<h2> mode analysis </h2>
+
+<p>
+
+<ul>
+<li> fix various bugs in mode inference:
+ need to fix it to work properly in the presence of functions;
+ also need to change normalise_inst so that it handles complicated
+ insts such as `list_skel(any)'.
+
+<li> extend the mode system to allow known aliasing.
+ This is needed to make partially instantiated modes and unique modes work.
+ [supported on the "alias" branch, but there were some serious
+ performance problems... has not been merged back into the main
+ branch]
+
+</ul>
+
+<h2> determinism analysis </h2>
+
+<p>
+
+<ul>
+<li> add functionality for promise exclusive declarations:
+ <ul>
+ <li> add error checking and type checking as for assertions
+ <li> include declaration information in the module_info
+ <li> take into account mutual exclusivity from promise_exclusive
+ and promise_exclusive_exhaustive declarations during switch
+ detection
+ <li> take into account exhaustiveness from promise_exhaustive and
+ promise_exclusive_exhaustive declarations during
+ determinism analysis
+ </ul>
+</ul>
+
+
+<h2> unique modes </h2>
+
+<ul>
+<li> handle nested unique modes
+
+<li> we will probably need to extend unique modes a bit,
+ in as-yet-unknown ways; need more experience here
+
+</ul>
+
+<h2> module system </h2>
+
+<ul>
+<li> check that the interface for a module is type-correct
+ independently of any declarations or imports in the implementation
+ section
+
+<li> there are some problems with nested modules (see the language
+ reference manual)
+
+</ul>
+
+<h2> C interface </h2>
+
+<ul>
+<li> exporting things for manipulating Mercury types from C
+
+<li> need to deal with memory management issues
+
+</ul>
+
+<h2> code generation </h2>
+
+<ul>
+<li> take advantage of unique modes to do compile-time garbage collection
+ and structure reuse.
+
+</ul>
+
+<h2> back-ends </h2>
+
+<h3> low-level (LLDS) back-end </h3>
+<ul>
+<li> support accurate garbage collection
+</ul>
+
+<h3> high-level C back-end </h3>
+<ul>
+<li> finish off support for accurate garbage collection;
+ see the comments in compiler/ml_elim_nested.m
+<li> see also the comments in compiler/ml_code_gen.m
+</ul>
+
+<h3> native code back-end </h3>
+<ul>
+<li> support on platforms other than Linux/x86.
+<li> commit GCC tail-call improvements to GCC CVS repository
+<li> support `--gc accurate'
+<li> support `--gc none'
+</ul>
+
+<h3> .NET back-end </h3>
+<ul>
+<li> finish off standard library implementation
+<li> see also the TODO list in compiler/mlds_to_il.m
+</ul>
+
+<h2> debugger </h2>
+
+<ul>
+<li> support back-ends other than LLDS
+<li> allow interactive queries to refer to values generated by
+ the program being debugged
+<li> trace semidet unifications
+</ul>
+
+<h2> Unicode </h2>
+
+<ul>
+<li> allow alternative <em>external</em> encodings, particularly iso-8859-1
+<li> consistent and robust handling of invalid strings
+ (overlong sequences, unpaired surrogates, etc.)
+<li> add analogue of wcwidth and make some formatting procedures use it
+<li> io.putback_char depends on multiple pushback in ungetc for
+ code points > 127
+</ul>
+
+<hr>
+<!--======================-->
+
+<h1> WISH LIST </h1>
+
+<h2> type-system </h2>
+
+<ul>
+
+<li> allow construct.construct/3 to work for existential types
+
+<li> remove limitation that higher-order terms are monomorphic.
+ i.e. allow universal quantifiers at the top level of
+ higher-order types, e.g. <samp>:- pred foo(all [T] pred(T)).</samp>.
+
+<li> constructor classes
+
+<li> allow a module exporting an abstract type to specify that other modules
+ should not be allowed to test two values of that type for equality (similar
+ to Ada's limited private types). This would be useful for e.g. sets
+ represented as unordered lists with possible duplicates.
+ [this is a subset of the functionality of type classes]
+
+<li> subtypes?
+
+<li> optimisation of type representation and manipulation (possibly
+ profiler guided)
+
+<li> fold/unfolding of types
+</ul>
+
+<h2> mode analysis </h2>
+
+<ul>
+<li> split construct/deconstruct unifications into their atomic
+ "micro-unification" pieces when necessary.
+ (When is it necessary?)
+
+<li> extend polymorphic modes,
+ e.g. to handle uniqueness polymorphism (some research issues?)
+
+<li> handle abstract insts in the same way abstract types are handled
+ (a research issue - is this possible at all?)
+
+<li> implement `willbe(Inst)' insts, for parallelism
+
+<li> mode segments & high-level transformation of circularly moded programs.
+</ul>
+
+<h2> determinism analysis: </h2>
+
+<ul>
+<li> propagate information about bindings from the condition of an if-then-else
+ to the else so that
+<pre>
+ (if X = [] then .... else X = [A|As], ...)
+</pre>
+ is considered det.
+
+<li> turn chains of if-then-elses into switches where possible.
+ [done by fjh, but not committed; zs not convinced that
+ this is a good idea]
+
+</ul>
+
+<h2> higher-order preds: </h2>
+
+<ul>
+<li> implement single-use higher-order predicate modes.
+ Single-use higher-order predicates would be allowed to bind curried
+ arguments, and to have unique modes for curried arguments.
+
+<li> allow taking the address of a predicate with multiple modes
+	[we do allow this in cases where the mode can be determined from
+	the inst of the higher-order arguments]
+
+
+<li> improve support for higher-order programming, eg. by providing
+ operators in the standard library which do things like:
+ <ul>
+ <li>compose functions
+ <li>take a predicate with one output argument and treat it like a function.
+ ie. <tt>:- func (pred(T)) = T.</tt>
+ </ul>
+</ul>
+
+<h2> module system: </h2>
+
+<ul>
+<li> produce warnings for implementation imports that are not needed
+
+<li> produce warnings for imports that are in the wrong place
+ (in the interface instead of the implementation, and vice versa)
+ [vice versa done by stayl]
+</ul>
+
+<h2> source-level transformations </h2>
+
+<ul>
+<li> more work on module system, separate compilation, and the multiple
+ specialisation problem
+
+<li> transform non-tail-recursive predicates into tail-recursive form
+ using accumulators. (This is already done, but not enabled by
+ default since it can make some programs run much more slowly.
+ More work is needed to only enable this optimization in cases
+ when it will improve performance rather than pessimize it.)
+
+<li> improvements to deforestation / partial deduction
+
+</ul>
+
+<h2> code generation: </h2>
+
+<ul>
+<li> allow floating point fields of structures without boxing
+ (need multi-word fields)
+
+<li> stack allocation of structures
+
+</ul>
+
+<h2> LLDS back-end: </h2>
+
+<ul>
+<li> inter-procedural register allocation
+
+<li> other specializations, e.g. if argument is known to be bound to
+ f(X,Y), then just pass X and Y in registers
+
+<li> reduce the overhead of higher-order predicate calls (avoid copying
+ the real registers into the fake_reg array and back)
+
+<li> trim stack frames before making recursive calls, to minimize stack usage
+ (this would probably be a pessimization much of the time - zs)
+ and to minimize unnecessary garbage retention.
+
+<li> target C--
+</ul>
+
+<h2> native code back-end </h2>
+
+<ul>
+<li> consider supporting exception handling in a manner
+ that is compatible with C++ and Java
+<li> inline more of the standard library primitives that are
+ currently implemented in C
+</ul>
+
+<h2> garbage collection </h2>
+<ul>
+<li> implement liveness-accurate GC
+<li> implement incremental GC
+<li> implement generational GC
+<li> implement parallel GC
+<li> implement real-time GC
+</ul>
+
+<h2> compilation speed </h2>
+
+<ul>
+<li> improve efficiency of the expansion of equivalence types (currently O(N^2))
+ (e.g. this is particularly bad when compiling live_vars.m).
+
+<li> improve efficiency of the module import handling (currently O(N^2))
+
+<li> use "store" rather than "map" for the major compiler data structures
+</ul>
+
+
+<h2> better diagnostics </h2>
+
+<ul>
+<li> optional warning for any implicit quantifiers whose scope is not
+ the entire clause (the "John Lloyd" option :-).
+
+<li> give a better error message for the use of if-then without else.
+
+<li> give a better error message for the use of `<=' instead of `=<'
+ (but how?)
+
+<li> give a better error message for type errors involving higher-order pred
+ constants (requested by Bart Demoen)
+
+<li> give better error messages for syntax errors in lambda expressions
+</ul>
+
+<h2> general </h2>
+
+<ul>
+<li> coroutining and parallel versions of Mercury
+
+<li> implement streams (need coroutining at least)
+
+<li> implement a very fast turn-around bytecode compiler/interpreter/debugger,
+ similar to Gofer
+ [not-so-fast bytecode compiler done, but bytecode interpreter
+ not implemented]
+
+<li> support for easier formal specification translation (eg a Z library,
+ or Z to Mercury).
+
+<li> implement a source visualisation tool
+
+<li> distributed Mercury
+
+<li> improved development environment
+
+<li> additional software engineering tools
+ <ul>
+ <li> coverage analysis
+ <li> automatic testing
+ </ul>
+
+<li> literate Mercury
+
+<li> implement a GUI library (eg Hugs - Fudgets)
+
+<li> profiling guided optimisations
+ <ul>
+ <li> use profiling information to direct linker for optimal
+ code placement (Alpha has a tool for this).
+ </ul>
+
+<li> use of attribute grammar technology
+ (including visit sequence optimization)
+ to implement code with circular modes
+</ul>
+
+<hr>
+<!--======================-->
+
+Last update was $Date: 2012/02/13 00:11:54 $ by $Author: wangp $@cs.mu.oz.au. <br>
+</body>
+</html>
+
diff --git a/development/developers/work_in_progress.html b/development/developers/work_in_progress.html
new file mode 100644
index 0000000..03325aa
--- /dev/null
+++ b/development/developers/work_in_progress.html
@@ -0,0 +1,108 @@
+<html>
+<head>
+
+
+<title>
+ Work In Progress
+</title>
+</head>
+
+<body bgcolor="#ffffff" text="#000000">
+
+<hr>
+<!---------------------------------------------------------------------------->
+
+The compiler contains some code for the following features,
+which are not yet completed, but which we hope to complete
+at some time in the future:
+<p>
+
+<ul>
+<li> There is a
+	<a href="http://www.cs.mu.oz.au/mercury/dotnet.html">`--target il'</a>
+ option, which generates MSIL code for Microsoft's new
+	<a href="http://msdn.microsoft.com/net/">.NET Common Language Runtime</a>.
+ We're still working on this.
+
+<li> Thread-safe engine (the `.par' grades).
+
+<li> Independent AND-parallelism (the `&' parallel conjunction operator).
+ See Tom Conway's PhD thesis.
+
+<li>
+We have incomplete support for a new, more expressive design for representing
+information about type classes and type class instances at runtime. When
+complete, the new design would allow runtime tests of type class membership,
+it would allow the tabling of predicates with type class constraints,
+and it would allow the debugger to print type_class_infos.
+
+<li> We have added support for dynamic link libraries (DLLs) on Windows.
+ This is not yet enabled by default because it has not yet been tested
+ properly.
+
+<li> There is a new garbage collector that does accurate garbage
+ collection (hlc.agc grade). See the comments in
+ compiler/ml_elim_nested.m and the paper on our web page for more details.
+
+<li> There is a `--generate-bytecode' option, for a new back-end
+ that generates bytecode. The bytecode generator is basically
+ complete, but we don't have a bytecode interpreter.
+</ul>
+<p>
+
+We also have some code that goes at least some part of the way towards
+implementing the features below. However, for these features, the
+code has not yet been committed and thus is not part of the standard
+distribution.
+
+<p>
+
+<ul>
+<li> A new implementation of the mode system using constraints.
+ This is on the "mode-constraints" branch of our CVS repository.
+
+<li> Support for aliasing in the mode system.
+ This is on the "alias" branch of our CVS repository.
+
+<li> Support for automatic structure reuse (reusing old data
+ structures that are no longer live, rather than allocating
+	new memory on the heap) and compile-time garbage collection.
+ This is on the "reuse" branch of our CVS repository.
+
+<li> Better support for inter-module analysis and optimization.
+
+<li> Support for GCC 3.3 in the native code back-end.
+ This is on the "gcc_3_3" branch of our CVS repository.
+
+</ul>
+
+<hr>
+<!-------------------------->
+<h2>
+Work Not In Progress
+</h2>
+
+The compiler also contains some code for the following features,
+but work on them has stopped, since finishing them off would be
+quite a bit more work, and our current priorities lie elsewhere.
+Still, these could make interesting and worthwhile projects
+if someone has the time for it.
+<p>
+
+<ul>
+<li> A SOAP interface.
+
+<li> A bytecode interpreter, for use with the `--generate-bytecode' option.
+
+<li> Sequence quantification (see the
+	<a href="http://www.cs.mu.oz.au/research/mercury/information/reports/minutes_15_12_00.html">description</a> from the meeting minutes).
+</ul>
+
+<hr>
+<!-------------------------->
+
+Last update was $Date: 2010/07/13 05:48:04 $ by $Author: juliensf $@cs.mu.oz.au. <br>
+</body>
+</html>
+
diff --git a/development/include/developer.inc b/development/include/developer.inc
index 8cffccd..7aede8f 100644
--- a/development/include/developer.inc
+++ b/development/include/developer.inc
@@ -29,50 +29,50 @@ We hope to update or replace this information in the future.
</li>
<li>
-<h3><a href="doc-latest/reviews.html">Reviews</a></h3>
+<h3><a href="developers/reviews.html">Reviews</a></h3>
<p>Outlines reviewing procedure. </p>
</li>
<li>
-<h3><a href="doc-latest/coding_standards.html">Mercury Coding Standards</a> </h3>
+<h3><a href="developers/coding_standards.html">Mercury Coding Standards</a> </h3>
<p>Standard for Mercury code. </p>
</li>
<li>
-<h3><a href="doc-latest/compiler_design.html">Compiler Design</a> </h3>
+<h3><a href="developers/compiler_design.html">Compiler Design</a> </h3>
<p>Details of the compiler design. </p>
</li>
<li>
-<h3><a href="doc-latest/allocation.html">Allocation</a> </h3>
+<h3><a href="developers/allocation.html">Allocation</a> </h3>
<p> Details of the allocation scheme currently being implemented. </p>
</li>
-<li> <h3><a href="doc-latest/release_checklist.html">Release checklist</a> </h3>
+<li> <h3><a href="developers/release_checklist.html">Release checklist</a> </h3>
<p>The release procedure. </p>
-<li> <h3><a href="doc-latest/gc_and_c_code.html">Garbage collection and C code</a> </h3>
+<li> <h3><a href="developers/gc_and_c_code.html">Garbage collection and C code</a> </h3>
</li>
<br/><br/>
<li>
<h3>
-<a href="doc-latest/glossary.html">Glossary</a> </h3>
+<a href="developers/glossary.html">Glossary</a> </h3>
<p>Terms used in the Mercury implementation. </p>
</li>
<li>
-<h3><a href="doc-latest/todo.html">To do list</a> </h3>
+<h3><a href="developers/todo.html">To do list</a> </h3>
<p>Things still to do in the Mercury project. </p>
</li>
<li>
-<h3><a href="doc-latest/work_in_progress.html">Work in progress</a> </h3>
+<h3><a href="developers/work_in_progress.html">Work in progress</a> </h3>
<p>Things currently being done on the Mercury project. </p>
</li>
<li>
-<h3><a href="doc-latest/bootstrapping.html">Bootstrapping</a> </h3>
+<h3><a href="developers/bootstrapping.html">Bootstrapping</a> </h3>
<p>What to do when a change requires bootstrapping. </p>
</li>
@@ -89,4 +89,4 @@ We hope to update or replace this information in the future.
</ul>
<p>
-</div>
\ No newline at end of file
+</div>
--
1.7.10.4