[m-rev.] For review: Improvements to Stack Segments code.

Paul Bone pbone at csse.unimelb.edu.au
Tue Apr 5 20:28:26 AEST 2011


I've tidied this change up and committed it.

----

Make improvements to stack segments code.

The main benefits of these changes are:

    Stack segments (and other memory zones) are cached when they are released
    and can be re-used.

    Some thread-safety fixes have been added.

    All stack segments on all stacks are now the same size:
        Small contexts (which had small stacks) aren't used with stack
        segments.

        The first segment on any stack is the same size as any other segment.

    The first segment on any stack no longer has a redzone.

    Hard zones on all memory zones have been set to the minimum of one page
    rather than one MR_unit, which is usually two pages.

The caching of stack segments gives the following benchmark results.  The
benefit is negligible under normal circumstances, but becomes important when
small segment sizes are used.  Small segment sizes are common in
asm_fast.gc.par.stseg configurations as they reduce the memory required for
suspended contexts.

Non-segmented stack (32MB)
    asm_fast.gc                      average of 5 with ignore=1     18.16 (1.00)

With 512KB (normal) segments:
    asm_fast.gc.stseg and NO caching average of 5 with ignore=1     19.20 (1.06)
    asm_fast.gc.stseg WITH caching   average of 5 with ignore=1     19.16 (1.06)

With 4KB segments:
    asm_fast.gc.stseg and NO caching average of 5 with ignore=1     20.66 (1.14)
    asm_fast.gc.stseg WITH caching   average of 5 with ignore=1     19.66 (1.08)

Other changes include corrections in code comments, clearer function names and
a documentation fix.

runtime/mercury_memory_zones.h:
runtime/mercury_memory_zones.c:
    Rewrite much of the code that manages the zone lists.  The old code did
    not re-use previously allocated but saved zones.  The changes ensure that
    MR_create_or_reuse_zone (formerly MR_create_zone) checks for a free zone
    of at least the required size before allocating a new one.  When zones are
    released, they are put on the free list.
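
    A minimal sketch of this create-or-reuse pattern is below; it uses
    hypothetical types and names rather than the real MR_MemoryZone code, and
    omits the size-bucketed free list and locking used in the patch.

        #include <stddef.h>
        #include <stdlib.h>

        typedef struct zone {
            size_t          zone_size;
            struct zone     *zone_next;
        } zone;

        static zone *free_list = NULL;

        static zone *
        create_or_reuse_zone(size_t size)
        {
            zone    **link;
            zone    *z;

            /* First look for a cached zone that is big enough. */
            for (link = &free_list; *link != NULL; link = &(*link)->zone_next) {
                if ((*link)->zone_size >= size) {
                    z = *link;
                    *link = z->zone_next;
                    return z;
                }
            }

            /* Nothing suitable is cached; allocate a new zone. */
            z = malloc(sizeof(zone));
            z->zone_size = size;
            return z;
        }

        static void
        release_zone(zone *z)
        {
            /* Cache the released zone so a later request can reuse it. */
            z->zone_next = free_list;
            free_list = z;
        }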

    As noted above, MR_create_zone is now MR_create_or_reuse_zone.

    MR_unget_zone is now MR_release_zone.

    MR_construct_zone has been removed; it was only ever called by
    MR_create_or_reuse_zone, which now contains its code.

    To avoid unnecessary synchronisation in parallel code, some zones are not
    added to the used list.  The only zones put on the used list are those that
    benefit from being there: those with a non-default signal handler or a
    redzone.

    Updates to used_memory_zones now use a pthread mutex so that only one
    thread may update the list at a time.  This lock is shared with the
    free_memory_zones structure.

    Updates to used_memory_zones now use memory barriers to guarantee that
    concurrent reads always see a consistent, though possibly incomplete,
    data structure.  This is necessary because the list is read from a signal
    handler, which cannot safely call pthread_mutex_lock().
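
    The publication pattern is roughly the sketch below; the names are
    hypothetical, and a pthread mutex plus a GCC full barrier stand in for the
    Mercury lock and fence macros used in the patch.

        #include <pthread.h>
        #include <stddef.h>

        typedef struct node {
            struct node * volatile  next;
            int                     payload;
        } node;

        static node * volatile  list_head = NULL;
        static pthread_mutex_t  list_lock = PTHREAD_MUTEX_INITIALIZER;

        static void
        publish_node(node *n)
        {
            pthread_mutex_lock(&list_lock);     /* excludes other writers */
            n->next = list_head;                /* complete the node ...  */
            __sync_synchronize();               /* ... then order writes  */
            list_head = n;                      /* finally publish it     */
            pthread_mutex_unlock(&list_lock);
        }

        /*
        ** A signal handler may walk list_head without taking the lock; it
        ** might miss a node that is currently being added, but it never
        ** sees a half-constructed list.
        */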

    Rename MR_get_used_memory_zones() to MR_get_used_memory_zones_readonly()
    and document that the zone lists may be incomplete.

    Make the MR_zone_next field of the MR_MemoryZone_Struct structure volatile.

    Remove MAX_ZONES; it wasn't used anywhere.

    Insert some calls to MR_debug_log_message to help with debugging.

    Use the correct printf integer length modifier for MR_Unsigned values.

    Rename MR_context_id_counter to zone_id_counter, protect it with a lock in
    HLC thread-safe grades, and use atomic operations in LLC thread-safe
    grades.

    The offset at which we start using a memory zone is allocated in sequence
    from a table.  This table was protected by Mercury's global lock; it is now
    updated with a CAS operation, which prevents deadlocks in parallel grades
    that use trail segments.
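
    The lock-free update is essentially a CAS retry loop, as in the sketch
    below; the GCC __sync builtin stands in for the Mercury CAS primitive.

        #define CACHE_SLICES    8

        static volatile long    offset_counter = 0;

        static long
        next_slice(void)
        {
            long    old_val;
            long    new_val;

            do {
                old_val = offset_counter;
                new_val = (old_val + 1) % (CACHE_SLICES - 1);
            } while (!__sync_bool_compare_and_swap(&offset_counter,
                        old_val, new_val));

            return new_val;
        }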

runtime/mercury_stacks.c:
    Conform to changes in mercury_memory_zones.c.

    Use MR_debug_log_message for printf-style debugging rather than printf.

runtime/mercury_wrapper.h:
runtime/mercury_wrapper.c:
    Remove support for the smaller sized stacks in grades with stack segments.

    Disable redzones when using stack segments.  The MR_(non)detstack_zone_size
    variables affect the first segment on every stack, regardless of the type
    of context that owns that stack.

    Conform to changes in runtime/mercury_memory_zones.h.

runtime/mercury_context.h:
runtime/mercury_context.c:
    Remove an extra declaration of MR_init_context_maybe_generator.

    Small contexts are problematic: it is unclear to the programmer which
    computations will be executed on smaller contexts, and therefore whether
    their stacks would overflow.

    Conform to changes in runtime/mercury_memory_zones.h.
    Conform to changes in runtime/mercury_wrapper.h.

runtime/mercury_memory.c:
    Adjust the definition of MR_unit.  It is now guaranteed to be a multiple of
    the page size, which is required by its use in mercury_memory_zones.c.
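
    A sketch of the rounding involved (hypothetical helper; the runtime uses
    its own MR_round_up macro): with 4K pages, a 6K cache size would give a
    unit of 8K.

        #include <unistd.h>

        static size_t
        round_up_to_page(size_t size)
        {
            size_t  page_size = (size_t) sysconf(_SC_PAGESIZE);

            /* Round size up to the next multiple of the page size. */
            return ((size + page_size - 1) / page_size) * page_size;
        }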

    Conform to changes in mercury_wrapper.h.

runtime/mercury_engine.c:
runtime/mercury_memory_handlers.c:
runtime/mercury_trail.c:
    Conform to changes in runtime/mercury_memory_zones.h.

runtime/mercury_memory_handlers.c:
    Use the correct printf integer length modifier for MR_Unsigned values.

runtime/mercury_misc.c:
    In MR_fatal_error, print out the meaning of errno if it is nonzero.
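
    A sketch of the intended behaviour (assumed shape only, not the exact
    MR_fatal_error code):

        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        static void
        fatal_error(const char *message)
        {
            if (errno != 0) {
                fprintf(stderr, "fatal error: %s (%s)\n",
                    message, strerror(errno));
            } else {
                fprintf(stderr, "fatal error: %s\n", message);
            }
            exit(1);
        }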

    Use the correct printf integer length modifier for MR_Unsigned values.

runtime/mercury_atomic_ops.h:
    Define MR_THREADSAFE_VOLATILE to expand to volatile when MR_THREAD_SAFE is
    defined, and to nothing otherwise.
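
    The same conditional-volatile idea in isolation (THREAD_SAFE here is a
    stand-in for the real configuration macro):

        #ifdef THREAD_SAFE
          #define MAYBE_VOLATILE    volatile
        #else
          #define MAYBE_VOLATILE    /* nothing */
        #endif

        /*
        ** The counter is only declared volatile in thread-safe builds, so
        ** single-threaded builds don't pay for the extra memory accesses.
        */
        static MAYBE_VOLATILE unsigned  counter = 0;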

    Make the memory fence macros and atomic operations available in all
    thread-safe grades, not just low-level C grades.

doc/user_guide.texi:
    Correct the default detstack size.

Index: doc/user_guide.texi
===================================================================
RCS file: /home/mercury1/repository/mercury/doc/user_guide.texi,v
retrieving revision 1.623
diff -u -p -b -r1.623 user_guide.texi
--- doc/user_guide.texi	7 Mar 2011 03:59:34 -0000	1.623
+++ doc/user_guide.texi	5 Apr 2011 09:01:56 -0000
@@ -431,9 +431,10 @@ The most useful of these are the options
 (For the full list of available options, see @ref{Environment}.)
 
 @c XXX FIXME This is wrong for the case when --high-level-code is enabled.
+@c Note: The definition for these defaults is in runtime/mercury_wrapper.c
 The det stack and the nondet stack
 are allocated fixed sizes at program start-up.
-The default size is 1024k times the word size (in bytes) for the det stack
+The default size is 4096k times the word size (in bytes) for the det stack
 and 64k times the word size (in bytes) for the nondet stack,
 but these can be overridden with the
 @samp{--detstack-size} and @samp{--nondetstack-size} options,
Index: runtime/mercury_atomic_ops.h
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_atomic_ops.h,v
retrieving revision 1.16
diff -u -p -b -r1.16 mercury_atomic_ops.h
--- runtime/mercury_atomic_ops.h	21 Jun 2010 23:52:27 -0000	1.16
+++ runtime/mercury_atomic_ops.h	5 Apr 2011 09:01:56 -0000
@@ -19,7 +19,16 @@
 
 /*---------------------------------------------------------------------------*/
 
-#if defined(MR_LL_PARALLEL_CONJ)
+/*
+** Use this to make some storage volatile only when using a threadsafe grade.
+*/
+#ifdef MR_THREAD_SAFE
+#define MR_THREADSAFE_VOLATILE volatile
+#else
+#define MR_THREADSAFE_VOLATILE
+#endif
+
+#if defined(MR_THREAD_SAFE)
 
 /*
  * Intel and AMD support a pause instruction that is roughly equivalent
@@ -550,9 +559,13 @@ MR_atomic_dec_and_is_zero_uint(volatile 
     }
 #endif
 
+#endif /* MR_THREAD_SAFE */
+
 /*---------------------------------------------------------------------------*/
 /*---------------------------------------------------------------------------*/
 
+#ifdef MR_THREAD_SAFE
+
 /*
 ** Memory fence operations.
 */
@@ -605,8 +618,12 @@ MR_atomic_dec_and_is_zero_uint(volatile 
 
 #endif
 
+#endif /* MR_THREAD_SAFE */
+
 /*---------------------------------------------------------------------------*/
 
+#ifdef MR_LL_PARALLEL_CONJ
+
 /*
 ** Roll our own cheap user-space mutual exclusion locks.  Blocking without
 ** spinning is not supported.  Storage for these locks should be volatile.
Index: runtime/mercury_context.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_context.c,v
retrieving revision 1.87
diff -u -p -b -r1.87 mercury_context.c
--- runtime/mercury_context.c	25 Mar 2011 03:13:41 -0000	1.87
+++ runtime/mercury_context.c	5 Apr 2011 09:01:56 -0000
@@ -144,7 +144,9 @@ allocate_context_id(void);
 ** MR_MemoryZones.
 */
 static MR_Context       *free_context_list = NULL;
+#ifndef MR_STACK_SEGMENTS
 static MR_Context       *free_small_context_list = NULL;
+#endif
 #ifdef  MR_THREAD_SAFE
   static MercuryLock    free_context_list_lock;
 #endif
@@ -164,10 +166,6 @@ static MR_Integer       MR_victim_counte
 
 /*---------------------------------------------------------------------------*/
 
-static void
-MR_init_context_maybe_generator(MR_Context *c, const char *id,
-    MR_GeneratorPtr gen);
-
 /*
 ** Write out the profiling data that we collect during execution.
 */
@@ -509,22 +507,24 @@ MR_init_context_maybe_generator(MR_Conte
             detstack_size  = MR_detstack_size;
             nondetstack_size = MR_nondetstack_size;
             break;
+#ifndef MR_STACK_SEGMENTS
         case MR_CONTEXT_SIZE_SMALL:
             detstack_name  = "small_detstack";
             nondetstack_name = "small_nondetstack";
             detstack_size  = MR_small_detstack_size;
             nondetstack_size = MR_small_nondetstack_size;
             break;
+#endif
     }
 
     if (c->MR_ctxt_detstack_zone == NULL) {
         if (gen != NULL) {
-            c->MR_ctxt_detstack_zone = MR_create_zone("gen_detstack",
-                    0, MR_gen_detstack_size, MR_next_offset(),
+            c->MR_ctxt_detstack_zone = MR_create_or_reuse_zone("gen_detstack",
+                    MR_gen_detstack_size, MR_next_offset(),
                     MR_gen_detstack_zone_size, MR_default_handler);
         } else {
-            c->MR_ctxt_detstack_zone = MR_create_zone(detstack_name,
-                    0, detstack_size, MR_next_offset(),
+            c->MR_ctxt_detstack_zone = MR_create_or_reuse_zone(detstack_name,
+                    detstack_size, MR_next_offset(),
                     MR_detstack_zone_size, MR_default_handler);
         }
 
@@ -541,12 +541,12 @@ MR_init_context_maybe_generator(MR_Conte
 
     if (c->MR_ctxt_nondetstack_zone == NULL) {
         if (gen != NULL) {
-            c->MR_ctxt_nondetstack_zone = MR_create_zone("gen_nondetstack",
-                    0, MR_gen_nondetstack_size, MR_next_offset(),
+            c->MR_ctxt_nondetstack_zone = MR_create_or_reuse_zone("gen_nondetstack",
+                    MR_gen_nondetstack_size, MR_next_offset(),
                     MR_gen_nondetstack_zone_size, MR_default_handler);
         } else {
-            c->MR_ctxt_nondetstack_zone = MR_create_zone(nondetstack_name,
-                    0, nondetstack_size, MR_next_offset(),
+            c->MR_ctxt_nondetstack_zone = MR_create_or_reuse_zone(nondetstack_name,
+                    nondetstack_size, MR_next_offset(),
                     MR_nondetstack_zone_size, MR_default_handler);
         }
 
@@ -584,21 +584,21 @@ MR_init_context_maybe_generator(MR_Conte
     }
 
     if (c->MR_ctxt_genstack_zone == NULL) {
-        c->MR_ctxt_genstack_zone = MR_create_zone("genstack", 0,
+        c->MR_ctxt_genstack_zone = MR_create_or_reuse_zone("genstack",
             MR_genstack_size, MR_next_offset(),
             MR_genstack_zone_size, MR_default_handler);
     }
     c->MR_ctxt_gen_next = 0;
 
     if (c->MR_ctxt_cutstack_zone == NULL) {
-        c->MR_ctxt_cutstack_zone = MR_create_zone("cutstack", 0,
+        c->MR_ctxt_cutstack_zone = MR_create_or_reuse_zone("cutstack",
             MR_cutstack_size, MR_next_offset(),
             MR_cutstack_zone_size, MR_default_handler);
     }
     c->MR_ctxt_cut_next = 0;
 
     if (c->MR_ctxt_pnegstack_zone == NULL) {
-        c->MR_ctxt_pnegstack_zone = MR_create_zone("pnegstack", 0,
+        c->MR_ctxt_pnegstack_zone = MR_create_or_reuse_zone("pnegstack",
             MR_pnegstack_size, MR_next_offset(),
             MR_pnegstack_zone_size, MR_default_handler);
     }
@@ -624,7 +624,7 @@ MR_init_context_maybe_generator(MR_Conte
     }
 
     if (c->MR_ctxt_trail_zone == NULL) {
-        c->MR_ctxt_trail_zone = MR_create_zone("trail", 0,
+        c->MR_ctxt_trail_zone = MR_create_or_reuse_zone("trail",
             MR_trail_size, MR_next_offset(),
             MR_trail_zone_size, MR_default_handler);
     }
@@ -661,7 +661,7 @@ MR_init_context_maybe_generator(MR_Conte
 MR_Context *
 MR_create_context(const char *id, MR_ContextSize ctxt_size, MR_Generator *gen)
 {
-    MR_Context  *c;
+    MR_Context  *c = NULL;
 
 #ifdef MR_LL_PARALLEL_CONJ
     MR_atomic_inc_int(&MR_num_outstanding_contexts);
@@ -674,6 +674,7 @@ MR_create_context(const char *id, MR_Con
     ** so we can return a regular context in place of a small context
     ** if one is already available.
     */
+#ifndef MR_STACK_SEGMENTS
     if (ctxt_size == MR_CONTEXT_SIZE_SMALL && free_small_context_list) {
         c = free_small_context_list;
         free_small_context_list = c->MR_ctxt_next;
@@ -682,7 +683,9 @@ MR_create_context(const char *id, MR_Con
             MR_profile_parallel_small_context_reused++;
         }
 #endif
-    } else if (free_context_list != NULL) {
+    }
+#endif
+    if (c == NULL && free_context_list != NULL) {
         c = free_context_list;
         free_context_list = c->MR_ctxt_next;
 #ifdef MR_PROFILE_PARALLEL_EXECUTION_SUPPORT
@@ -690,8 +693,6 @@ MR_create_context(const char *id, MR_Con
             MR_profile_parallel_regular_context_reused++;
         }
 #endif
-    } else {
-        c = NULL;
     }
     MR_UNLOCK(&free_context_list_lock, "create_context i");
 
@@ -773,6 +774,7 @@ MR_destroy_context(MR_Context *c)
             }
 #endif
             break;
+#ifndef MR_STACK_SEGMENTS
         case MR_CONTEXT_SIZE_SMALL:
             c->MR_ctxt_next = free_small_context_list;
             free_small_context_list = c;
@@ -782,6 +784,7 @@ MR_destroy_context(MR_Context *c)
             }
 #endif
             break;
+#endif
     }
     MR_UNLOCK(&free_context_list_lock, "destroy_context");
 }
@@ -1284,7 +1287,7 @@ ReadySpark:
     /* Grab a new context if we haven't got one then begin execution. */
     if (MR_ENGINE(MR_eng_this_context) == NULL) {
         MR_ENGINE(MR_eng_this_context) = MR_create_context("from spark",
-            MR_CONTEXT_SIZE_SMALL, NULL);
+            MR_CONTEXT_SIZE_FOR_SPARK, NULL);
     #ifdef MR_THREADSCOPE
         MR_threadscope_post_create_context_for_spark(
             MR_ENGINE(MR_eng_this_context));
Index: runtime/mercury_context.h
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_context.h,v
retrieving revision 1.63
diff -u -p -b -r1.63 mercury_context.h
--- runtime/mercury_context.h	25 Mar 2011 03:13:41 -0000	1.63
+++ runtime/mercury_context.h	5 Apr 2011 09:01:56 -0000
@@ -206,9 +206,20 @@ typedef struct MR_Context_Struct        
 
 typedef enum {
     MR_CONTEXT_SIZE_REGULAR,
+/*
+** Stack segment grades don't need differently sized contexts.
+*/
+#ifndef MR_STACK_SEGMENTS
     MR_CONTEXT_SIZE_SMALL
+#endif
 } MR_ContextSize;
 
+#ifdef MR_STACK_SEGMENTS
+#define MR_CONTEXT_SIZE_FOR_SPARK MR_CONTEXT_SIZE_REGULAR
+#else
+#define MR_CONTEXT_SIZE_FOR_SPARK MR_CONTEXT_SIZE_SMALL
+#endif
+
 #ifdef MR_THREAD_SAFE
 typedef struct MR_SavedOwner_Struct     MR_SavedOwner;
 
Index: runtime/mercury_engine.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_engine.c,v
retrieving revision 1.63
diff -u -p -b -r1.63 mercury_engine.c
--- runtime/mercury_engine.c	13 Dec 2010 05:59:42 -0000	1.63
+++ runtime/mercury_engine.c	5 Apr 2011 09:01:56 -0000
@@ -123,28 +123,28 @@ MR_init_engine(MercuryEngine *eng)
     */
 
 #ifndef MR_CONSERVATIVE_GC
-    eng->MR_eng_heap_zone = MR_create_zone("heap", 1,
+    eng->MR_eng_heap_zone = MR_create_or_reuse_zone("heap",
         MR_heap_size, MR_next_offset(), MR_heap_zone_size, MR_default_handler);
     eng->MR_eng_hp = eng->MR_eng_heap_zone->MR_zone_min;
 
   #ifdef MR_NATIVE_GC
-    eng->MR_eng_heap_zone2 = MR_create_zone("heap2", 1,
+    eng->MR_eng_heap_zone2 = MR_create_or_reuse_zone("heap2",
         MR_heap_size, MR_next_offset(), MR_heap_zone_size, MR_default_handler);
 
     #ifdef MR_DEBUG_AGC_PRINT_VARS
-    eng->MR_eng_debug_heap_zone = MR_create_zone("debug_heap", 1,
+    eng->MR_eng_debug_heap_zone = MR_create_or_reuse_zone("debug_heap",
         MR_debug_heap_size, MR_next_offset(),
         MR_debug_heap_zone_size, MR_default_handler);
     #endif
   #endif /* MR_NATIVE_GC */
 
   #ifdef MR_MIGHT_RECLAIM_HP_ON_FAILURE
-    eng->MR_eng_solutions_heap_zone = MR_create_zone("solutions_heap", 1,
+    eng->MR_eng_solutions_heap_zone = MR_create_or_reuse_zone("solutions_heap",
         MR_solutions_heap_size, MR_next_offset(),
         MR_solutions_heap_zone_size, MR_default_handler);
     eng->MR_eng_sol_hp = eng->MR_eng_solutions_heap_zone->MR_zone_min;
 
-    eng->MR_eng_global_heap_zone = MR_create_zone("global_heap", 1,
+    eng->MR_eng_global_heap_zone = MR_create_or_reuse_zone("global_heap",
         MR_global_heap_size, MR_next_offset(),
         MR_global_heap_zone_size, MR_default_handler);
     eng->MR_eng_global_hp = eng->MR_eng_global_heap_zone->MR_zone_min;
Index: runtime/mercury_memory.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_memory.c,v
retrieving revision 1.41
diff -u -p -b -r1.41 mercury_memory.c
--- runtime/mercury_memory.c	14 Dec 2010 14:14:10 -0000	1.41
+++ runtime/mercury_memory.c	5 Apr 2011 09:01:56 -0000
@@ -141,11 +141,12 @@ MR_init_memory(void)
 
     /*
     ** Convert all the sizes are from kilobytes to bytes and
-    ** make sure they are multiples of the page and cache sizes.
+    ** make sure they are multiples of the page size and at least as big as the
+    ** cache size.
     */
 
     MR_page_size = getpagesize();
-    MR_unit = MR_max(MR_page_size, MR_pcache_size);
+    MR_unit = MR_round_up(MR_max(MR_page_size, MR_pcache_size), MR_page_size);
 
 #ifdef MR_CONSERVATIVE_GC
     MR_heap_size                = 0;
@@ -170,10 +171,14 @@ MR_init_memory(void)
     MR_heap_margin_size  = MR_heap_margin_size * 1024;
 #endif
     MR_kilobytes_to_bytes_and_round_up(MR_detstack_size);
+#ifndef MR_STACK_SEGMENTS
     MR_kilobytes_to_bytes_and_round_up(MR_small_detstack_size);
+#endif
     MR_kilobytes_to_bytes_and_round_up(MR_detstack_zone_size);
     MR_kilobytes_to_bytes_and_round_up(MR_nondetstack_size);
+#ifndef MR_STACK_SEGMENTS
     MR_kilobytes_to_bytes_and_round_up(MR_small_nondetstack_size);
+#endif
     MR_kilobytes_to_bytes_and_round_up(MR_nondetstack_zone_size);
 #ifdef  MR_USE_MINIMAL_MODEL_STACK_COPY
     MR_kilobytes_to_bytes_and_round_up(MR_genstack_size);
Index: runtime/mercury_memory_handlers.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_memory_handlers.c,v
retrieving revision 1.34
diff -u -p -b -r1.34 mercury_memory_handlers.c
--- runtime/mercury_memory_handlers.c	26 May 2010 07:45:48 -0000	1.34
+++ runtime/mercury_memory_handlers.c	5 Apr 2011 09:01:56 -0000
@@ -132,7 +132,7 @@ MR_try_munprotect(void *addr, void *cont
 
     fault_addr = (MR_Word *) addr;
 
-    zone = MR_get_used_memory_zones();
+    zone = MR_get_used_memory_zones_readonly();
 
     if (MR_memdebug) {
         fprintf(stderr, "caught fault at %p\n", (void *)addr);
@@ -141,7 +141,8 @@ MR_try_munprotect(void *addr, void *cont
     while(zone != NULL) {
   #ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
         if (MR_memdebug) {
-            fprintf(stderr, "checking %s#%d: %p - %p\n",
+            fprintf(stderr, "checking %s#%" MR_INTEGER_LENGTH_MODIFIER
+                    "d: %p - %p\n",
                 zone->MR_zone_name, zone->MR_zone_id,
                 (void *) zone->MR_zone_redzone,
                 (void *) zone->MR_zone_top);
@@ -151,7 +152,8 @@ MR_try_munprotect(void *addr, void *cont
             && fault_addr <= zone->MR_zone_top)
         {
             if (MR_memdebug) {
-                fprintf(stderr, "address is in %s#%d redzone\n",
+                fprintf(stderr, "address is in %s#% "
+                        MR_INTEGER_LENGTH_MODIFIER "d redzone\n",
                     zone->MR_zone_name, zone->MR_zone_id);
             }
 
@@ -219,7 +221,8 @@ MR_default_handler(MR_Word *fault_addr, 
         zone_size = (char *) new_zone - (char *) zone->MR_zone_redzone;
 
         if (MR_memdebug) {
-            fprintf(stderr, "trying to unprotect %s#%d from %p to %p (%x)\n",
+            fprintf(stderr, "trying to unprotect %s#%"
+                MR_INTEGER_LENGTH_MODIFIER "d from %p to %p (%x)\n",
             zone->MR_zone_name, zone->MR_zone_id,
             (void *) zone->MR_zone_redzone, (void *) new_zone,
             (int) zone_size);
@@ -228,7 +231,8 @@ MR_default_handler(MR_Word *fault_addr, 
             PROT_READ|PROT_WRITE) < 0)
         {
             char buf[2560];
-            sprintf(buf, "Mercury runtime: cannot unprotect %s#%d zone",
+            sprintf(buf, "Mercury runtime: cannot unprotect %s#%"
+                    MR_INTEGER_LENGTH_MODIFIER "d zone",
                 zone->MR_zone_name, zone->MR_zone_id);
             perror(buf);
             exit(1);
@@ -237,7 +241,8 @@ MR_default_handler(MR_Word *fault_addr, 
         zone->MR_zone_redzone = new_zone;
 
         if (MR_memdebug) {
-            fprintf(stderr, "successful: %s#%d redzone now %p to %p\n",
+            fprintf(stderr, "successful: %s#%" MR_INTEGER_LENGTH_MODIFIER
+                    "d redzone now %p to %p\n",
                 zone->MR_zone_name, zone->MR_zone_id,
                 (void *) zone->MR_zone_redzone, (void *) zone->MR_zone_top);
         }
@@ -250,7 +255,8 @@ MR_default_handler(MR_Word *fault_addr, 
     } else {
         char buf[2560];
         if (MR_memdebug) {
-            fprintf(stderr, "can't unprotect last page of %s#%d\n",
+            fprintf(stderr, "can't unprotect last page of %s#%"
+                    MR_INTEGER_LENGTH_MODIFIER "d\n",
                 zone->MR_zone_name, zone->MR_zone_id);
             fflush(stdout);
         }
@@ -259,7 +265,8 @@ MR_default_handler(MR_Word *fault_addr, 
         fprintf(stderr, "sp = %p, maxfr = %p\n", MR_sp, MR_maxfr);
         MR_debug_memory_zone(stderr, zone);
 #endif
-        sprintf(buf, "\nMercury runtime: memory zone %s#%d overflowed\n",
+        sprintf(buf, "\nMercury runtime: memory zone %s#%"
+                MR_INTEGER_LENGTH_MODIFIER "d overflowed\n",
             zone->MR_zone_name, zone->MR_zone_id);
         MR_fatal_abort(context, buf, MR_TRUE);
     }
@@ -708,7 +715,8 @@ MR_explain_exception_record(EXCEPTION_RE
             zone = MR_get_used_memory_zones();
             while(zone != NULL) {
                 fprintf(stderr,
-                        "\n***    Checking zone %s#%d: "
+                        "\n***    Checking zone %s#%"
+                        MR_INTEGER_LENGTH_MODIFIER "d: "
                         "0x%08lx - 0x%08lx - 0x%08lx",
                         zone->MR_zone_name, zone->MR_zone_id,
                         (unsigned long) zone->MR_zone_bottom,
@@ -721,13 +729,15 @@ MR_explain_exception_record(EXCEPTION_RE
                     fprintf(stderr,
                         "\n***     Address is within"
                         " redzone of "
-                        "%s#%d (!!zone overflowed!!)\n",
+                        "%s#%" MR_INTEGER_LENGTH_MODIFIER
+                        "d (!!zone overflowed!!)\n",
                         zone->MR_zone_name, zone->MR_zone_id);
                 } else if ((zone->MR_zone_bottom <= address) &&
                         (address <= zone->MR_zone_top))
                 {
                     fprintf(stderr, "\n***     Address is"
-                            " within zone %s#%d\n",
+                            " within zone %s#%" MR_INTEGER_LENGTH_MODIFIER
+                            "d\n",
                             zone->MR_zone_name, zone->MR_zone_id);
                 }
                 /*
Index: runtime/mercury_memory_zones.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_memory_zones.c,v
retrieving revision 1.36
diff -u -p -b -r1.36 mercury_memory_zones.c
--- runtime/mercury_memory_zones.c	13 Feb 2010 07:29:10 -0000	1.36
+++ runtime/mercury_memory_zones.c	5 Apr 2011 09:01:56 -0000
@@ -63,8 +63,15 @@
 
 #ifdef MR_THREAD_SAFE
   #include "mercury_thread.h"
+  #include "mercury_atomic_ops.h"
 #endif
 
+#include "mercury_memory_handlers.h"
+
+/*
+** XXX: Why is this included here and not above with the other system
+** includes?
+*/
 #ifdef MR_WIN32_VIRTUAL_ALLOC
   #include <windows.h>
 #endif
@@ -339,19 +346,23 @@ MR_dealloc_zone_memory(void *base, size_
 
 /*---------------------------------------------------------------------------*/
 
-#define MAX_ZONES   16
+static void             MR_init_offsets(void);
 
-/* Enables a workaround in MR_unget_zone(). */
-#define MEMORY_ZONE_FREELIST_WORKAROUND 1
+static MR_MemoryZone*   MR_get_free_zone(size_t size);
+static void             MR_add_zone_to_used_list(MR_MemoryZone *zone);
+static void             MR_remove_zone_from_used_list(MR_MemoryZone *zone);
+static void             MR_return_zone_to_free_list(MR_MemoryZone *zone);
+static void             MR_free_zone(MR_MemoryZone *zone);
+static size_t           get_zone_alloc_size(MR_MemoryZone *zone);
+static void             MR_maybe_gc_zones(void);
 
-static MR_MemoryZone    *used_memory_zones = NULL;
-static MR_MemoryZone    *free_memory_zones = NULL;
-#ifdef  MR_THREAD_SAFE
-  static MercuryLock    free_memory_zones_lock;
+#ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
+static void
+MR_configure_redzone_size(MR_MemoryZone *zone, size_t new_redsize);
 #endif
 
-static void             MR_init_offsets(void);
-static MR_MemoryZone    *MR_get_zone(void);
+static MR_MemoryZone *
+MR_create_new_zone(size_t desired_size, size_t redzone_size);
 
     /*
     ** We manage the handing out of offsets through the cache by
@@ -364,14 +375,56 @@ static MR_MemoryZone    *MR_get_zone(voi
 #define CACHE_SLICES    8
 
 static  size_t          *offset_vector;
-static  int             offset_counter;
+static MR_THREADSAFE_VOLATILE MR_Integer
+    offset_counter;
 extern  size_t          next_offset(void);
 
+static MR_THREADSAFE_VOLATILE MR_Unsigned zone_id_counter = 0;
+
+#if ! defined(MR_LL_PARALLEL_CONJ) && defined(MR_THREAD_SAFE)
+static MercuryLock  zone_id_counter_lock;
+#endif
+
+/*
+** This list contains used zones that need a signal handler, for example those
+** that have redzones.  Other used zones may exist that are not on this list
+** because:
+**
+**   1) They don't have a redzone.
+**
+**   2) Putting them on this list in a threadsafe grade requires extra
+**      synchronisation.
+**
+** Zones are removed from this list when they're returned to the free list.
+** We only attempt to remove the zones that we would have added.
+*/
+static MR_MemoryZone * MR_THREADSAFE_VOLATILE
+used_memory_zones = NULL;
+
+#ifdef  MR_THREAD_SAFE
+  /*
+  ** You must take this lock to write to either of the zone lists, or to read
+  ** the complete zone lists.  Reading a zone list without taking the lock is
+  ** also supported _iff_ partial information is okay.  Any code that writes
+  ** the list must guarantee that memory writes occur in the correct order so
+  ** that the list is always well formed from a reader's perspective.
+  **
+  ** This is necessary so that signal handlers can read the list without taking
+  ** a lock.  They may not take a lock because pthread_mutex_lock cannot be
+  ** used safely within a signal handler.
+  */
+  static MercuryLock    memory_zones_lock;
+#endif
+
+
 void
 MR_init_zones()
 {
 #ifdef  MR_THREAD_SAFE
-    pthread_mutex_init(&free_memory_zones_lock, MR_MUTEX_ATTR);
+    pthread_mutex_init(&memory_zones_lock, MR_MUTEX_ATTR);
+#if ! defined(MR_LL_PARALLEL_CONJ)
+    pthread_mutex_init(&zone_id_counter_lock, MR_MUTEX_ATTR);
+#endif
 #endif
 
     MR_init_offsets();
@@ -396,55 +449,45 @@ MR_init_offsets()
     }
 }
 
-static MR_MemoryZone *
-MR_get_zone(void)
+static void
+MR_add_zone_to_used_list(MR_MemoryZone *zone)
 {
-    MR_MemoryZone   *zone;
+    MR_LOCK(&memory_zones_lock, "MR_add_zone_to_used_list");
 
+    zone->MR_zone_next = used_memory_zones;
     /*
-    ** Unlink the first zone on the free-list, link it onto the used-list
-    ** and return it.
+    ** This change must occur before we replace the head of the list.
     */
-    MR_LOCK(&free_memory_zones_lock, "get_zone");
-    if (free_memory_zones == NULL) {
-        zone = MR_GC_NEW(MR_MemoryZone);
-    } else {
-        zone = free_memory_zones;
-        free_memory_zones = free_memory_zones->MR_zone_next;
-    }
-
-    zone->MR_zone_next = used_memory_zones;
+#ifdef MR_THREAD_SAFE
+    MR_CPU_SFENCE;
+#endif
     used_memory_zones = zone;
-    MR_UNLOCK(&free_memory_zones_lock, "get_zone");
 
-    return zone;
+    MR_UNLOCK(&memory_zones_lock, "MR_add_zone_to_used_list");
 }
 
-void
-MR_unget_zone(MR_MemoryZone *zone)
+static void
+MR_free_zone(MR_MemoryZone *zone)
 {
-#ifdef  MEMORY_ZONE_FREELIST_WORKAROUND
-    /*
-    ** XXX MR_construct_zone() does not yet reuse previously allocated memory
-    ** zones properly and simply leaks memory when it tries to do so.  As a
-    ** workaround, we never put memory zones on the free list and deallocate
-    ** them immediately here.
-    */
-  #ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
+#ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
     size_t          redsize;
     int             res;
 
     redsize = zone->MR_zone_redzone_size;
     res = MR_protect_pages((char *) zone->MR_zone_redzone,
-        redsize + MR_unit, NORMAL_PROT);
-    assert(res == 0);
-  #endif
+        redsize + MR_page_size, NORMAL_PROT);
+    if (res) {
+        MR_fatal_error("Could not unprotect memory pages in MR_free_zone");
+    }
+#endif
 
     MR_dealloc_zone_memory(zone->MR_zone_bottom,
         ((char *) zone->MR_zone_top) - ((char *) zone->MR_zone_bottom));
+}
 
-#else   /* !MEMORY_ZONE_FREELIST_WORKAROUND */
-
+static void
+MR_remove_zone_from_used_list(MR_MemoryZone *zone)
+{
     MR_MemoryZone   *prev;
     MR_MemoryZone   *tmp;
 
@@ -453,11 +496,12 @@ MR_unget_zone(MR_MemoryZone *zone)
     ** then link it onto the start of the free-list.
     */
 
-    MR_LOCK(&free_memory_zones_lock, "unget_zone");
-    for(prev = NULL, tmp = used_memory_zones; tmp != NULL && tmp != zone;
-        prev = tmp, tmp = tmp->MR_zone_next)
-    {
-        /* VOID */
+    MR_LOCK(&memory_zones_lock, "MR_remove_zone_from_used_list");
+    prev = NULL;
+    tmp = used_memory_zones;
+    while (tmp != NULL && tmp != zone) {
+        prev = tmp;
+        tmp = tmp->MR_zone_next;
     }
 
     if (tmp == NULL) {
@@ -469,12 +513,17 @@ MR_unget_zone(MR_MemoryZone *zone)
     } else {
         prev->MR_zone_next = tmp->MR_zone_next;
     }
+    MR_UNLOCK(&memory_zones_lock, "MR_remove_zone_from_used_list");
+}
 
-    zone->MR_zone_next = free_memory_zones;
-    free_memory_zones = zone;
-    MR_UNLOCK(&free_memory_zones_lock, "unget_zone");
-
-#endif  /* MEMORY_ZONE_FREELIST_WORKAROUND */
+static size_t
+get_zone_alloc_size(MR_MemoryZone *zone)
+{
+#ifdef  MR_PROTECTPAGE
+    return (size_t)((char *)zone->MR_zone_hardmax - (char *)zone->MR_zone_min);
+#else
+    return (size_t)((char *)zone->MR_zone_top - (char *)zone->MR_zone_min);
+#endif
 }
 
 /*
@@ -491,81 +540,139 @@ MR_unget_zone(MR_MemoryZone *zone)
 size_t
 MR_next_offset(void)
 {
-    size_t offset;
+    MR_Integer old_counter;
+    MR_Integer new_counter;
+
+    old_counter = offset_counter;
+    new_counter = (old_counter + 1) % (CACHE_SLICES - 1);
+#if defined(MR_THREAD_SAFE)
+    /*
+    ** The critical section here is really small, a CAS will work well.
+    */
+    while (!MR_compare_and_swap_int(&offset_counter, old_counter,
+            new_counter)) {
+        MR_ATOMIC_PAUSE;
+        old_counter = offset_counter;
+        new_counter = (old_counter + 1) % (CACHE_SLICES - 1);
+    }
+#else
+    offset_counter = new_counter;
+#endif
 
-    MR_OBTAIN_GLOBAL_LOCK("MR_next_offset");
-    offset = offset_vector[offset_counter];
-    offset_counter = (offset_counter + 1) % (CACHE_SLICES - 1);
-    MR_RELEASE_GLOBAL_LOCK("MR_next_offset");
-    return offset;
+    return offset_vector[new_counter];
 }
 
 MR_MemoryZone *
-MR_create_zone(const char *name, int id, size_t size, size_t offset,
-    size_t redsize, MR_ZoneHandler handler)
+MR_create_or_reuse_zone(const char *name, size_t size, size_t offset,
+    size_t redzone_size, MR_ZoneHandler handler)
 {
     MR_Word     *base;
     size_t      total_size;
+    MR_MemoryZone   *zone;
+    MR_bool         is_new_zone;
 
+    zone = MR_get_free_zone(size + redzone_size);
+    if (zone != NULL) {
+#ifdef MR_DEBUG_STACK_SEGMENTS
+        MR_debug_log_message("re-using existing zone");
+#endif
+        is_new_zone = MR_FALSE;
+        zone->MR_zone_desired_size = size;
+    } else {
+#ifdef  MR_DEBUG_STACK_SEGMENTS
+        MR_debug_log_message("allocating new zone");
+#endif
+        is_new_zone = MR_TRUE;
+        zone = MR_create_new_zone(size, redzone_size);
+    }
+
+    zone->MR_zone_name = name;
+#ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
+    zone->MR_zone_handler = handler;
+    if (!is_new_zone && (redzone_size != zone->MR_zone_redzone_size)) {
     /*
-    ** total allocation is:
-    **  unit        (roundup to page boundary)
-    **  size        (including redzone)
-    **  unit        (an extra page for protection if mprotect is being used)
+        ** The redzone must be reconfigured.
     */
-#ifdef  MR_PROTECTPAGE
-    total_size = size + 2 * MR_unit;
+        MR_configure_redzone_size(zone, redzone_size);
+        MR_reset_redzone(zone);
+    }
 #else
-    total_size = size + MR_unit;
+    if (!is_new_zone) {
+        zone->MR_zone_redzone_size = redzone_size;
+    }
 #endif
 
-    base = (MR_Word *) MR_alloc_zone_memory(total_size);
-    if (base == NULL) {
-        char buf[2560];
-        sprintf(buf, "unable allocate memory zone: %s#%d", name, id);
-        MR_fatal_error(buf);
+    if (redzone_size > 0 || (handler != MR_null_handler)) {
+        /*
+        ** Any zone with a redzone or a non-default handler must be
+        ** added to the used zones list.
+        */
+        MR_add_zone_to_used_list(zone);
     }
 
-    return MR_construct_zone(name, id, base, size, offset, redsize, handler);
+    return zone;
 }
 
-MR_MemoryZone *
-MR_construct_zone(const char *name, int id, MR_Word *base,
-    size_t size, size_t offset, size_t redsize, MR_ZoneHandler handler)
+static MR_MemoryZone *
+MR_create_new_zone(size_t desired_size, size_t redzone_size)
 {
+    size_t          offset;
     MR_MemoryZone   *zone;
+    MR_Word         *base;
     size_t          total_size;
 
+    offset = MR_next_offset();
+    /*
+    ** Ignore the offset if it is at least half the desired size of the zone.
+    ** This should only happen for very small zones.
+    */
+    if ((offset * 2) > desired_size) {
+        offset = 0;
+    }
+
+    /*
+    ** The redzone must be page aligned and its size must be a multiple of
+    ** the page size.
+    */
+    redzone_size = MR_round_up(redzone_size, MR_page_size);
+    /*
+    ** Include an extra page size for the hardzone.
+    */
+    total_size = desired_size + redzone_size + MR_page_size;
+    /*
+    ** The total size must also be rounded to a page boundary, so that it can
+    ** be allocated from mmap if we're using accurate GC.
+    */
+    total_size = MR_round_up(total_size, MR_page_size);
+
+    base = (MR_Word *) MR_alloc_zone_memory(total_size);
     if (base == NULL) {
-        MR_fatal_error("MR_construct_zone called with NULL pointer");
+        MR_fatal_error("Unable to allocate memory for zone");
     }
 
-    zone = MR_get_zone();
+    zone = MR_GC_NEW(MR_MemoryZone);
 
-    zone->MR_zone_name = name;
-    zone->MR_zone_id = id;
-    zone->MR_zone_desired_size = size;
-    zone->MR_zone_redzone_size = redsize;
+    /* The name is initialized by our caller */
+    zone->MR_zone_name = NULL;
+#ifdef MR_LL_PARALLEL_CONJ
+    zone->MR_zone_id = MR_atomic_add_and_fetch_uint(&zone_id_counter, 1);
+#elif defined(MR_THREAD_SAFE)
+    MR_LOCK(&zone_id_counter_lock, "MR_create_new_zone");
+    zone->MR_zone_id = ++zone_id_counter;
+    MR_UNLOCK(&zone_id_counter_lock, "MR_create_new_zone");
+#else
+    zone->MR_zone_id = ++zone_id_counter;
+#endif
+    zone->MR_zone_desired_size = desired_size;
+    zone->MR_zone_redzone_size = redzone_size;
 
 #ifdef  MR_CHECK_OVERFLOW_VIA_MPROTECT
-    zone->MR_zone_handler = handler;
+    /* Our caller will set the handler */
+    zone->MR_zone_handler = NULL;
 #endif /* MR_CHECK_OVERFLOW_VIA_MPROTECT */
 
-    /*
-    ** XXX If `zone' is pulled off the free-list (rather than newly allocated)
-    ** then zone->MR_zone_bottom will be pointing to a previously allocated
-    ** memory region.  Setting it to point to `base' therefore causes a memory
-    ** leak.  `base' should not be allocated in the first place if `zone' is
-    ** to be reused.  See also the workaround in MR_unget_zone().
-    */
     zone->MR_zone_bottom = base;
 
-#ifdef  MR_PROTECTPAGE
-    total_size = size + MR_unit;
-#else
-    total_size = size;
-#endif  /* MR_PROTECTPAGE */
-
     zone->MR_zone_top = (MR_Word *) ((char *) base + total_size);
     zone->MR_zone_min = (MR_Word *) ((char *) base + offset);
 #ifdef  MR_LOWLEVEL_DEBUG
@@ -593,6 +700,11 @@ MR_extend_zone(MR_MemoryZone *zone, size
         MR_fatal_error("MR_extend_zone called with NULL pointer");
     }
 
+    /*
+    ** XXX: This value is strange for new_total_size, it's a page bigger than
+    ** it needs to be.  However, this allows for a hardzone in some cases.  We
+    ** should fix this in the future.
+    */
 #ifdef  MR_PROTECTPAGE
     new_total_size = new_size + 2 * MR_unit;
 #else
@@ -610,7 +722,8 @@ MR_extend_zone(MR_MemoryZone *zone, size
         NORMAL_PROT);
     if (res < 0) {
         char buf[2560];
-        sprintf(buf, "unable to reset %s#%d total area\nbase=%p, redzone=%p",
+        sprintf(buf, "unable to reset %s#%" MR_INTEGER_LENGTH_MODIFIER
+                "d total area\nbase=%p, redzone=%p",
             zone->MR_zone_name, zone->MR_zone_id,
             zone->MR_zone_bottom, zone->MR_zone_top);
         MR_fatal_error(buf);
@@ -620,7 +733,8 @@ MR_extend_zone(MR_MemoryZone *zone, size
     new_base = MR_realloc_zone_memory(old_base, copy_size, new_size);
     if (new_base == NULL) {
         char buf[2560];
-        sprintf(buf, "unable reallocate memory zone: %s#%d",
+        sprintf(buf, "unable reallocate memory zone: %s#%"
+                MR_INTEGER_LENGTH_MODIFIER "d",
             zone->MR_zone_name, zone->MR_zone_id);
         MR_fatal_error(buf);
     }
@@ -645,6 +759,43 @@ MR_extend_zone(MR_MemoryZone *zone, size
     return base_incr;
 }
 
+void
+MR_release_zone(MR_MemoryZone *zone) {
+#ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
+    if (zone->MR_zone_redzone_size || (zone->MR_zone_handler != MR_null_handler)) {
+        MR_remove_zone_from_used_list(zone);
+    }
+#endif
+    MR_return_zone_to_free_list(zone);
+
+    MR_maybe_gc_zones();
+}
+
+#ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
+static void
+MR_configure_redzone_size(MR_MemoryZone *zone, size_t redsize)
+{
+    size_t size = zone->MR_zone_desired_size;
+
+    zone->MR_zone_redzone = (MR_Word *)
+        MR_round_up((MR_Unsigned) zone->MR_zone_bottom + size - redsize,
+            MR_page_size);
+    zone->MR_zone_redzone_base = zone->MR_zone_redzone;
+
+    /*
+    ** When using small memory zones, the offset given by MR_next_offset()
+    ** might have us starting in the middle of the redzone.  Don't do that.
+    */
+    if (zone->MR_zone_min >= zone->MR_zone_redzone) {
+        zone->MR_zone_min = zone->MR_zone_bottom;
+    }
+
+    MR_assert(zone->MR_zone_redzone < zone->MR_zone_top);
+    MR_assert(((MR_Unsigned)zone->MR_zone_redzone + redsize) <
+        (MR_Unsigned)zone->MR_zone_top);
+}
+#endif
+
 static void
 MR_setup_redzones(MR_MemoryZone *zone)
 {
@@ -661,24 +812,14 @@ MR_setup_redzones(MR_MemoryZone *zone)
     ** setup the redzone
     */
 #ifdef MR_CHECK_OVERFLOW_VIA_MPROTECT
-    zone->MR_zone_redzone = (MR_Word *)
-        MR_round_up((MR_Unsigned) zone->MR_zone_bottom + size - redsize,
-            MR_unit);
-    zone->MR_zone_redzone_base = zone->MR_zone_redzone;
+    MR_configure_redzone_size(zone, redsize);
 
-    /*
-    ** When using small memory zones, the offset given by MR_next_offset()
-    ** might have us starting in the middle of the redzone.  Don't do that.
-    */
-    if (zone->MR_zone_min >= zone->MR_zone_redzone) {
-        zone->MR_zone_min = zone->MR_zone_bottom;
-    }
-
-    res = MR_protect_pages((char *) zone->MR_zone_redzone, redsize + MR_unit,
+    res = MR_protect_pages((char *) zone->MR_zone_redzone, redsize + MR_page_size,
         REDZONE_PROT);
     if (res < 0) {
         char buf[2560];
-        sprintf(buf, "unable to set %s#%d redzone\nbase=%p, redzone=%p",
+        sprintf(buf, "unable to set %s#%" MR_INTEGER_LENGTH_MODIFIER
+                "d redzone\nbase=%p, redzone=%p",
             zone->MR_zone_name, zone->MR_zone_id,
             zone->MR_zone_bottom, zone->MR_zone_redzone);
         MR_fatal_error(buf);
@@ -689,13 +830,13 @@ MR_setup_redzones(MR_MemoryZone *zone)
     ** setup the hardzone
     */
 #if defined(MR_PROTECTPAGE)
-    zone->MR_zone_hardmax = (MR_Word *)
-        MR_round_up((MR_Unsigned) zone->MR_zone_top - MR_unit, MR_unit);
-    res = MR_protect_pages((char *) zone->MR_zone_hardmax, MR_unit,
+    zone->MR_zone_hardmax = (MR_Word *)((MR_Unsigned)zone->MR_zone_top - MR_page_size);
+    res = MR_protect_pages((char *) zone->MR_zone_hardmax, MR_page_size,
         REDZONE_PROT);
     if (res < 0) {
         char buf[2560];
-        sprintf(buf, "unable to set %s#%d hardmax\nbase=%p, hardmax=%p top=%p",
+        sprintf(buf, "unable to set %s#%" MR_INTEGER_LENGTH_MODIFIER
+                "d hardmax\nbase=%p, hardmax=%p top=%p",
             zone->MR_zone_name, zone->MR_zone_id,
             zone->MR_zone_bottom, zone->MR_zone_hardmax, zone->MR_zone_top);
         MR_fatal_error(buf);
@@ -733,7 +874,8 @@ MR_reset_redzone(MR_MemoryZone *zone)
         NORMAL_PROT);
     if (res < 0) {
         char buf[2560];
-        sprintf(buf, "unable to reset %s#%d normal area\nbase=%p, redzone=%p",
+        sprintf(buf, "unable to reset %s#%" MR_INTEGER_LENGTH_MODIFIER
+                "d normal area\nbase=%p, redzone=%p",
             zone->MR_zone_name, zone->MR_zone_id,
             zone->MR_zone_bottom, zone->MR_zone_redzone);
         MR_fatal_error(buf);
@@ -744,7 +886,8 @@ MR_reset_redzone(MR_MemoryZone *zone)
         REDZONE_PROT);
     if (res < 0) {
         char buf[2560];
-        sprintf(buf, "unable to reset %s#%d redzone\nbase=%p, redzone=%p",
+        sprintf(buf, "unable to reset %s#%" MR_INTEGER_LENGTH_MODIFIER
+                "d redzone\nbase=%p, redzone=%p",
             zone->MR_zone_name, zone->MR_zone_id,
             zone->MR_zone_bottom, zone->MR_zone_redzone);
         MR_fatal_error(buf);
@@ -753,7 +896,7 @@ MR_reset_redzone(MR_MemoryZone *zone)
 }
 
 MR_MemoryZone *
-MR_get_used_memory_zones(void)
+MR_get_used_memory_zones_readonly(void)
 {
     return used_memory_zones;
 }
@@ -764,6 +907,347 @@ MR_in_zone(const MR_Word *ptr, const MR_
     return (zone->MR_zone_bottom <= ptr && ptr < zone->MR_zone_top);
 }
 
+/****************************************************************************
+**
+** Caching of memory zones.
+*/
+
+/* Define this macro to test the performance without caching */
+#ifdef MR_DO_NOT_CACHE_FREE_MEMORY_ZONES
+
+static MR_MemoryZone *
+MR_get_free_zone(size_t size)
+{
+    return NULL;
+}
+
+static void
+MR_return_zone_to_free_list(MR_MemoryZone *zone)
+{
+    MR_free_zone(zone);
+}
+
+static void
+MR_maybe_gc_zones(void)
+{
+    return;
+}
+
+#else /* ! MR_DO_NOT_CACHE_FREE_MEMORY_ZONES */
+
+/*
+** Currently we use high and low water marks to manage the cache of free
+** zones: collection begins if either the number of zones or the total number
+** of pages is above its high water mark, and stops when both are below their
+** low water marks.
+**
+** TODO: Test for optimal values of these settings; however, there's probably
+** not much to be gained here that can't be more easily gained somewhere else
+** first.
+*/
+
+/*
+** Collection of old zones.
+**
+** MR_gc_zones() - Collect zones until MR_should_stop_gc_memory_zones() returns
+** true.
+**
+** MR_should_gc_memory_zones() - True if either the number of zones or the
+** number of pages is above its high water mark.
+**
+** MR_should_stop_gc_memory_zones() - True if both the number of zones and
+** the number of pages are below their low water marks.
+*/
+static void             MR_gc_zones(void);
+static MR_bool          MR_should_gc_memory_zones(void);
+static MR_bool          MR_should_stop_gc_memory_zones(void);
+
+/*
+** TODO: These should be controllable via MERCURY_OPTIONS
+*/
+/* 16 zones per thread */
+#define MR_FREE_MEMORY_ZONES_NUM_HIGH   (16*MR_num_threads)
+/* 4 zones per thread */
+#define MR_FREE_MEMORY_ZONES_NUM_LOW    (4*MR_num_threads)
+/* 16MB per thread */
+#define MR_FREE_MEMORY_ZONES_PAGES_HIGH (((16*1024*1024)/MR_page_size)*MR_num_threads)
+/* 4MB per thread */
+#define MR_FREE_MEMORY_ZONES_PAGES_LOW  (((4*1024*1024)/MR_page_size)*MR_num_threads)
+
+static MR_MemoryZonesFree * MR_THREADSAFE_VOLATILE
+    free_memory_zones = NULL;
+
+/*
+** This value is used to maintain a position within the list of free zones.  If
+** it is null then no position is maintained.  Otherwise it points to a
+** position within the list, in this case it represents a sub-list of free
+** zones.  This sub list always contains the least-recently-used zones.
+**
+** Some actions can invalidate this pointer, in these cases it should be set to
+** NULL.
+**
+** When this value is non-null, it can be used to quickly find the least
+** recently used zones.  This is used by the garbage collection loop.
+*/
+static MR_MemoryZonesFree * MR_THREADSAFE_VOLATILE
+    lru_free_memory_zones = NULL;
+
+/*
+** The number of free zones cached.
+*/
+static MR_THREADSAFE_VOLATILE MR_Unsigned free_memory_zones_num = 0;
+
+/*
+** The number pages used bo the cached free zones.
+*/
+static MR_THREADSAFE_VOLATILE MR_Unsigned free_memory_zones_pages = 0;
+
+/*
+** The next token to be given to a memory zone as it is added to the free list.
+** The tokens are used to detect the least recently used zones when freeing
+** them.  Zones with larger tokens have been used more recently than zones with
+** smaller tokens.
+*/
+static MR_THREADSAFE_VOLATILE MR_Unsigned lru_memory_zone_token = 0;
+
+static MR_MemoryZone *
+MR_get_free_zone(size_t size)
+{
+    MR_MemoryZone       *zone;
+    MR_MemoryZonesFree  *zones_list;
+    MR_MemoryZonesFree  *zones_list_prev;
+
+    /*
+    ** Unlink the first zone on the free-list, link it onto the used-list
+    ** and return it.
+    */
+    MR_LOCK(&memory_zones_lock, "MR_get_free_zone");
+
+    zones_list = free_memory_zones;
+    zones_list_prev = NULL;
+    while (zones_list != NULL) {
+        if (zones_list->MR_zonesfree_size <= size) {
+            /*
+            ** A zone on this list will fit our needs.
+            */
+            break;
+        }
+        zones_list_prev = zones_list;
+        zones_list = zones_list->MR_zonesfree_major_next;
+    }
+
+    if (zones_list != NULL) {
+        zone = zones_list->MR_zonesfree_minor_head;
+        if (zone->MR_zone_next != NULL) {
+            zones_list->MR_zonesfree_minor_head = zone->MR_zone_next;
+        } else {
+            /*
+            ** This inner list is now empty, we should remove it from the
+            ** outer list.
+            */
+            if (zones_list_prev != NULL) {
+                zones_list_prev->MR_zonesfree_major_next = zones_list->MR_zonesfree_major_next;
+            } else {
+                free_memory_zones = zones_list->MR_zonesfree_major_next;
+            }
+            if (zones_list->MR_zonesfree_major_next != NULL) {
+                zones_list->MR_zonesfree_major_next->MR_zonesfree_major_prev = zones_list_prev;
+            }
+            if (lru_free_memory_zones == zones_list) {
+                /*
+                ** This zone list had the least recently used zone on it,
+                ** invalidate the lru_free_memory_zones pointer.  The garbage
+                ** collection loop will re-initialise this.
+                */
+                lru_free_memory_zones = NULL;
+            }
+        }
+    } else {
+        zone = NULL;
+    }
+
+    if (zone != NULL) {
+        free_memory_zones_num--;
+        free_memory_zones_pages -= get_zone_alloc_size(zone) / MR_page_size;
+    }
+
+    MR_UNLOCK(&memory_zones_lock, "MR_get_free_zone");
+
+    return zone;
+}
+
+static void
+MR_return_zone_to_free_list(MR_MemoryZone *zone)
+{
+    /* The current list in iterations over the list of free lists */
+    MR_MemoryZonesFree      *cur_list;
+    size_t                  size;
+
+    size = get_zone_alloc_size(zone);
+
+#ifdef MR_CONSERVATIVE_GC
+    /* Make sure the GC doesn't find any pointers in this zone */
+    MR_clear_zone_for_GC(zone, zone->MR_zone_min);
+#endif
+
+    MR_LOCK(&memory_zones_lock, "MR_return_zone_to_free_list");
+
+    free_memory_zones_num++;
+    free_memory_zones_pages += size / MR_page_size;
+
+    zone->MR_zone_lru_token = lru_memory_zone_token++;
+
+    cur_list = free_memory_zones;
+    while (cur_list) {
+        if (cur_list->MR_zonesfree_size == size)
+        {
+            /*
+            ** We found the correct zone list.
+            */
+            break;
+        }
+        /*
+        ** Test to see if we can exit the loop early.
+        */
+        else if (cur_list->MR_zonesfree_size > size)
+        {
+            /*
+            ** Set this to null to represent our failure to find a zone list of
+            ** the right size.
+            */
+            cur_list = NULL;
+            break;
+        }
+        cur_list = cur_list->MR_zonesfree_major_next;
+    }
+
+    if (cur_list == NULL) {
+        MR_MemoryZonesFree *new_list;
+        MR_MemoryZonesFree *prev_list;
+
+        new_list = MR_GC_NEW(MR_MemoryZonesFree);
+        new_list->MR_zonesfree_size = size;
+        new_list->MR_zonesfree_minor_head = NULL;
+        new_list->MR_zonesfree_minor_tail = NULL;
+        cur_list = free_memory_zones;
+        prev_list = NULL;
+        while (cur_list) {
+            if (cur_list->MR_zonesfree_size > size) {
+                /*
+                ** We've just passed the position where this list item belongs.
+                */
+                break;
+            }
+            prev_list = cur_list;
+            cur_list = cur_list->MR_zonesfree_major_next;
+        }
+        /*
+        ** Insert it between prev_list and cur_list.
+        */
+        new_list->MR_zonesfree_major_next = cur_list;
+        new_list->MR_zonesfree_major_prev = prev_list;
+        if (prev_list) {
+            prev_list->MR_zonesfree_major_next = new_list;
+        } else {
+            free_memory_zones = new_list;
+        }
+        if (cur_list) {
+            cur_list->MR_zonesfree_major_prev = new_list;
+        }
+
+        /*
+        ** Reset cur_list so that it is pointing at the correct outer list
+        ** item regardless of whether this branch was executed or not.
+        */
+        cur_list = new_list;
+    }
+
+    zone->MR_zone_next = cur_list->MR_zonesfree_minor_head;
+    cur_list->MR_zonesfree_minor_head = zone;
+    if (!cur_list->MR_zonesfree_minor_tail) {
+        /*
+        ** This is the first zone on this list, so set up the tail pointer.
+        */
+        cur_list->MR_zonesfree_minor_tail = zone;
+    }
+
+    MR_UNLOCK(&memory_zones_lock, "MR_return_zone_to_free_list");
+}
+
+static MR_bool
+MR_should_gc_memory_zones(void)
+{
+    return (free_memory_zones_num > MR_FREE_MEMORY_ZONES_NUM_HIGH) ||
+        (free_memory_zones_pages > MR_FREE_MEMORY_ZONES_PAGES_HIGH);
+}
+
+static MR_bool
+MR_should_stop_gc_memory_zones(void)
+{
+    return (free_memory_zones_num < MR_FREE_MEMORY_ZONES_NUM_LOW) &&
+        (free_memory_zones_pages < MR_FREE_MEMORY_ZONES_PAGES_LOW);
+}
+
+static void
+MR_maybe_gc_zones(void) {
+    if (MR_should_gc_memory_zones()) {
+        MR_gc_zones();
+    }
+}
+
+static void
+MR_gc_zones(void)
+{
+    do {
+        MR_LOCK(&memory_zones_lock, "MR_gc_zones");
+        MR_MemoryZonesFree  *cur_list;
+        MR_Unsigned         oldest_lru_token, cur_lru_token;
+
+        if (NULL == lru_free_memory_zones) {
+            /*
+            ** There is no cached LRU information, find the free list with the
+            ** oldest zone on it.
+            */
+            cur_list = free_memory_zones;
+
+            while(cur_list != NULL) {
+                cur_lru_token = cur_list->MR_zonesfree_minor_tail->MR_zone_lru_token;
+                if (!lru_free_memory_zones) {
+                    oldest_lru_token = cur_lru_token;
+                    lru_free_memory_zones = cur_list;
+                } else if (cur_lru_token < oldest_lru_token) {
+                    /*
+                    ** The current zone has an older token.
+                    */
+                    oldest_lru_token = cur_lru_token;
+                    lru_free_memory_zones = cur_list;
+                }
+
+                cur_list = cur_list->MR_zonesfree_major_next;
+            }
+        }
+
+        if (NULL == lru_free_memory_zones) {
+            /*
+            ** There's no memory to collect; perhaps there was a race before
+            ** we locked memory_zones_lock.
+            */
+            MR_UNLOCK(&memory_zones_lock, "MR_gc_zones");
+            return;
+        }
+
+
+        MR_UNLOCK(&memory_zones_lock, "MR_gc_zones");
+    } while (MR_should_stop_gc_memory_zones());
+}
+
+#endif /* ! MR_DO_NOT_CACHE_FREE_MEMORY_ZONES */
+
+/****************************************************************************
+**
+** Debugging code.
+*/
+
 void
 MR_debug_memory(FILE *fp)
 {
@@ -782,53 +1266,49 @@ MR_debug_memory(FILE *fp)
         (void *) MR_fake_reg, (long) MR_fake_reg & (MR_unit-1));
     fprintf(fp, "\n");
 
+    MR_LOCK(&memory_zones_lock, "MR_debug_memory");
     for (zone = used_memory_zones; zone; zone = zone->MR_zone_next) {
         MR_debug_memory_zone(fp, zone);
     }
+    MR_UNLOCK(&memory_zones_lock, "MR_debug_memory");
 }
 
 void
 MR_debug_memory_zone(FILE *fp, MR_MemoryZone *zone)
 {
-    fprintf(fp, "%-16s#%d-dessize   = %lu\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-dessize   = %lu\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (unsigned long) zone->MR_zone_desired_size);
-    fprintf(fp, "%-16s#%d-base  = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-base  = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_bottom);
-    fprintf(fp, "%-16s#%d-min       = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-min       = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_min);
-    fprintf(fp, "%-16s#%d-top       = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-top       = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_top);
-    fprintf(fp, "%-16s#%d-end       = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-end       = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_end);
 #ifdef  MR_CHECK_OVERFLOW_VIA_MPROTECT
-    fprintf(fp, "%-16s#%d-redsize   = %lu\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-redsize   = %lu\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (unsigned long) zone->MR_zone_redzone_size);
-    fprintf(fp, "%-16s#%d-redzone   = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-redzone   = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_redzone);
-    fprintf(fp, "%-16s#%d-redzone_base  = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-redzone_base  = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_redzone_base);
 #endif  /* MR_CHECK_OVERFLOW_VIA_MPROTECT */
 #ifdef  MR_PROTECTPAGE
-    fprintf(fp, "%-16s#%d-hardmax       = %p\n",
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-hardmax       = %p\n",
         zone->MR_zone_name, zone->MR_zone_id,
         (void *) zone->MR_zone_hardmax);
-    fprintf(fp, "%-16s#%d-size      = %lu\n",
-        zone->MR_zone_name, zone->MR_zone_id,
-        (unsigned long) ((char *) zone->MR_zone_hardmax
-             - (char *) zone->MR_zone_min));
-#else
-    fprintf(fp, "%-16s#%d-size      = %lu\n",
+#endif
+    fprintf(fp, "%-16s#%" MR_INTEGER_LENGTH_MODIFIER "d-size      = %lu\n",
         zone->MR_zone_name, zone->MR_zone_id,
-        (unsigned long) ((char *) zone->MR_zone_top
-             - (char *) zone->MR_zone_min));
-#endif  /* MR_PROTECTPAGE */
+        (unsigned long) get_zone_alloc_size(zone));
     fprintf(fp, "\n");
 }
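
The LRU bookkeeping above is easier to see in isolation, so here is a
minimal standalone sketch of the scheme.  It is an illustration only: Zone,
SizeList, release_zone and find_lru_list are made-up names, and it assumes
that MR_release_zone stamps each freed zone with a token taken from a
monotonically increasing counter and pushes the zone onto the head of its
size class, so that the tail of each per-size list is always that class's
least-recently-used zone.  Under that assumption MR_gc_zones only needs to
compare the tail tokens, which is what the loop above does.

    #include <stddef.h>

    /* Simplified stand-ins for MR_MemoryZone / MR_MemoryZonesFree. */
    typedef struct Zone {
        struct Zone     *next;
        unsigned long   lru_token;
    } Zone;

    typedef struct SizeList {
        size_t          size;   /* every zone in this class has this size */
        struct SizeList *next;  /* next (larger) size class               */
        Zone            *head;  /* most recently released zone            */
        Zone            *tail;  /* least recently released zone           */
    } SizeList;

    static unsigned long lru_counter = 0;

    /* On release: stamp the zone and push it onto the head of its class. */
    static void
    release_zone(SizeList *list, Zone *zone)
    {
        zone->lru_token = ++lru_counter;
        zone->next = list->head;
        list->head = zone;
        if (list->tail == NULL) {
            list->tail = zone;
        }
    }

    /*
    ** The class whose tail carries the smallest token holds the
    ** least-recently-used zone overall, so that zone is the first
    ** candidate to hand back to the collector/OS.
    */
    static SizeList *
    find_lru_list(SizeList *classes)
    {
        SizeList    *lru = NULL;

        for ( ; classes != NULL; classes = classes->next) {
            if (classes->tail != NULL
                && (lru == NULL
                    || classes->tail->lru_token < lru->tail->lru_token))
            {
                lru = classes;
            }
        }
        return lru;
    }
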
Index: runtime/mercury_memory_zones.h
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_memory_zones.h,v
retrieving revision 1.20
diff -u -p -b -r1.20 mercury_memory_zones.h
--- runtime/mercury_memory_zones.h	5 Sep 2008 11:19:33 -0000	1.20
+++ runtime/mercury_memory_zones.h	5 Apr 2011 09:01:56 -0000
@@ -1,4 +1,7 @@
 /*
+** vim:sw=8 ts=8
+*/
+/*
 ** Copyright (C) 1998-2002, 2004-2006 The University of Melbourne.
 ** This file may only be copied under the terms of the GNU Library General
 ** Public License - see the file COPYING.LIB in the Mercury distribution.
@@ -24,6 +27,7 @@
 
 #include "mercury_types.h"		/* for MR_Word */
 #include "mercury_std.h"		/* for MR_bool */
+#include "mercury_atomic_ops.h"		/* for MR_THREADSAFE_VOLATILE */
 
 typedef struct MR_MemoryZone_Struct	MR_MemoryZone;
 
@@ -46,6 +50,12 @@ typedef MR_bool	MR_ZoneHandler(MR_Word *
 ** id		An integer which together with the name should uniquely
 **		identify the allocated area.
 **
+** lru_token	This field is filled with a token each time a zone is freed.
+**		It is used to track the least-recently-used zones in the free
+**		list so that they can be handed back to the garbage
+**		collector/OS.  Zones with larger tokens have been used more
+**		recently than zones with smaller tokens.
+**
 ** desired_size The desired size of the zone in kilobytes. The actual size
 ** 		may be larger due to roundups.
 **
@@ -98,9 +108,12 @@ typedef MR_bool	MR_ZoneHandler(MR_Word *
 */
 
 struct MR_MemoryZone_Struct {
-	MR_MemoryZone		*MR_zone_next;
+	MR_MemoryZone * MR_THREADSAFE_VOLATILE	MR_zone_next;
 	const char		*MR_zone_name;
-	int			MR_zone_id;
+	MR_Unsigned				MR_zone_id;
+#ifndef MR_DO_NOT_CACHE_FREE_MEMORY_ZONES
+	MR_Unsigned				MR_zone_lru_token;
+#endif
 	size_t			MR_zone_desired_size;
 	size_t			MR_zone_redzone_size;
 	MR_Word			*MR_zone_bottom;
@@ -134,6 +147,26 @@ struct MR_MemoryZones_Struct {
     MR_MemoryZones      *MR_zones_tail;
 };
 
+#ifndef MR_DO_NOT_CACHE_FREE_MEMORY_ZONES
+/*
+** Free memory zones are arranged in a list of lists.  The outer list (below)
+** associates a zone size with each inner list.  It is kept sorted from the
+** smallest size to the largest, so that a traversal of this list returns the
+** zones that are the 'best fit' as the 'first fit'.  The inner lists (linked
+** through the MR_zone_next field of the zones) contain zones that are all
+** the same size.
+*/
+typedef struct MR_MemoryZonesFree_Struct MR_MemoryZonesFree;
+
+struct MR_MemoryZonesFree_Struct {
+    size_t              MR_zonesfree_size;
+    MR_MemoryZonesFree  *MR_zonesfree_major_next;
+    MR_MemoryZonesFree  *MR_zonesfree_major_prev;
+    MR_MemoryZone       *MR_zonesfree_minor_head;
+    MR_MemoryZone       *MR_zonesfree_minor_tail;
+};
+#endif
+
 	/*
 	** MR_zone_end specifies the end of the area accessible without
 	** a page fault. It is used by MR_clear_zone_for_GC().
@@ -198,28 +231,16 @@ extern	void		MR_init_zones(void);
 ** If it fails to allocate or protect the zone, then it exits.
 ** If MR_CHECK_OVERFLOW_VIA_MPROTECT is unavailable, then the last two
 ** arguments are ignored.
+**
+** This may re-use previously allocated memory but will re-configure the
+** name, redzone and handler.
 */
 
-extern	MR_MemoryZone	*MR_create_zone(const char *name, int id,
+extern	MR_MemoryZone	*MR_create_or_reuse_zone(const char *name,
 				size_t size, size_t offset, size_t redsize,
 				MR_ZoneHandler *handler);
 
-/*
-** MR_construct_zone(Name, Id, Base, Size, Offset, RedZoneSize, FaultHandler)
-** has the same behaviour as MR_create_zone, except instead of allocating
-** the memory, it takes a pointer to a region of memory that must be at
-** least Size + MR_unit[*] bytes, or if MR_PROTECTPAGE is defined, then it
-** must be at least Size + 2 * MR_unit[*] bytes.
-** If it fails to protect the redzone then it exits.
-** If MR_CHECK_OVERFLOW_VIA_MPROTECT is unavailable, then the last two
-** arguments are ignored.
-**
-** [*] MR_unit is a global variable containing the page size in bytes
-*/
-
-extern	MR_MemoryZone	*MR_construct_zone(const char *name, int Id,
-				MR_Word *base, size_t size, size_t offset,
-				size_t redsize, MR_ZoneHandler *handler);
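+/*
+** Return the given zone to the list of free zones, from where it may be
+** re-used by a later call to MR_create_or_reuse_zone().
+*/
+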
+extern void 		MR_release_zone(MR_MemoryZone *zone);
 
 /*
 ** MR_extend_zone(Zone, NewSize) extends Zone to increase its size to NewSize,
@@ -232,18 +253,20 @@ extern	MR_Integer	MR_extend_zone(MR_Memo
 
 /*
 ** MR_reset_redzone(Zone) resets the redzone on the given MR_MemoryZone to the
-** original zone specified in the call to {create,construct}_zone() if
+** original zone specified in the call to create_or_reuse_zone() if
 ** MR_CHECK_OVERFLOW_VIA_MPROTECT is defined.  Otherwise it does nothing.
 */
 
 extern	void		MR_reset_redzone(MR_MemoryZone *zone);
 
 /*
-** MR_get_used_memory_zones() returns a pointer to the linked list of
-** used memory zones.
+** MR_get_used_memory_zones_readonly() returns a pointer to the linked list of
+** used memory zones.  The list should be considered read only and may not be
+** complete.  This is suitable for use where locking is impossible but
+** incomplete data is okay.
 */
 
-extern	MR_MemoryZone	*MR_get_used_memory_zones(void);
+extern	MR_MemoryZone	*MR_get_used_memory_zones_readonly(void);
 
 /*
 ** Returns true iff ptr is the given zone.
@@ -267,12 +290,6 @@ extern	void		MR_debug_memory(FILE *fp);
 extern	void		MR_debug_memory_zone(FILE *fp, MR_MemoryZone *zone);
 
 /*
-** Return the given zone to the list of free zones.
-*/
-
-extern	void		MR_unget_zone(MR_MemoryZone *zone);
-
-/*
 ** MR_next_offset() returns sucessive offsets across the primary cache. Useful
 ** when calling {create,construct}_zone().
 */
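
Because the outer list in the new MR_MemoryZonesFree structure is kept
sorted by size, "best fit" falls out of a plain first-fit scan.  Below is a
minimal standalone sketch of the kind of lookup MR_create_or_reuse_zone can
perform against this structure; SizeList, Zone and take_free_zone are
simplified stand-ins rather than the runtime's own names, and the real
function additionally re-configures the zone's name, redzone and handler.

    #include <stddef.h>

    typedef struct Zone {
        struct Zone     *next;
    } Zone;

    typedef struct SizeList {
        size_t          size;   /* size of every zone in this class */
        struct SizeList *next;  /* kept sorted: smallest class first */
        Zone            *head;
        Zone            *tail;
    } SizeList;

    /*
    ** Return the first free zone from a class of at least the wanted size,
    ** i.e. the smallest adequate zone, or NULL if the caller must allocate
    ** a fresh zone.
    */
    static Zone *
    take_free_zone(SizeList *classes, size_t wanted)
    {
        SizeList    *cur;
        Zone        *zone;

        for (cur = classes; cur != NULL; cur = cur->next) {
            if (cur->size >= wanted && cur->head != NULL) {
                zone = cur->head;
                cur->head = zone->next;
                if (cur->head == NULL) {
                    cur->tail = NULL;
                }
                return zone;
            }
        }
        return NULL;
    }
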
Index: runtime/mercury_misc.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_misc.c,v
retrieving revision 1.30
diff -u -p -b -r1.30 mercury_misc.c
--- runtime/mercury_misc.c	11 Oct 2010 00:39:20 -0000	1.30
+++ runtime/mercury_misc.c	5 Apr 2011 09:01:56 -0000
@@ -45,7 +45,7 @@ MR_print_warning(const char *prog, const
 {
     fflush(stdout);     /* in case stdout and stderr are the same */
 
-    fprintf(stderr, "%s:", prog);
+    fprintf(stderr, "%s: ", prog);
     vfprintf(stderr, fmt, args);
     fprintf(stderr, "\n");
 
@@ -72,7 +72,7 @@ MR_do_perror(const char *prog, const cha
     saved_errno = errno;
     fflush(stdout);     /* in case stdout and stderr are the same */
 
-    fprintf(stderr, "%s:", prog);
+    fprintf(stderr, "%s: ", prog);
     errno = saved_errno;
     perror(message);
 }
@@ -86,9 +86,13 @@ void
 MR_fatal_error(const char *fmt, ...)
 {
     va_list args;
+    int error = errno;
 
     fflush(stdout);     /* in case stdout and stderr are the same */
 
+    if (error != 0) {
+        fprintf(stderr, "Errno = %d: %s\n", error, strerror(error));
+    }
     fprintf(stderr, "Mercury runtime: ");
     va_start(args, fmt);
     vfprintf(stderr, fmt, args);
Index: runtime/mercury_stacks.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_stacks.c,v
retrieving revision 1.20
diff -u -p -b -r1.20 mercury_stacks.c
--- runtime/mercury_stacks.c	25 Jan 2010 06:08:06 -0000	1.20
+++ runtime/mercury_stacks.c	5 Apr 2011 09:01:56 -0000
@@ -40,6 +40,8 @@ ENDINIT
 #include "mercury_imp.h"
 #include "mercury_runtime_util.h"
 #include "mercury_memory_handlers.h"    /* for MR_default_handler */
+#include "mercury_context.h"
+
 #include <stdio.h>
 
 /***************************************************************************/
@@ -212,18 +214,15 @@ MR_Word *MR_new_detstack_segment(MR_Word
     old_sp = sp;
 
     /* We perform explicit overflow checks so redzones just waste space. */
-    new_zone = MR_create_zone("detstack_segment", 0, MR_detstack_size, 0,
+    new_zone = MR_create_or_reuse_zone("detstack_segment", MR_detstack_size, 0,
         0, MR_default_handler);
 
     list = MR_GC_malloc_uncollectable(sizeof(MR_MemoryZones));
 
 #ifdef  MR_DEBUG_STACK_SEGMENTS
-    printf("create new det segment: old zone: %p, old sp %p\n",
-        MR_CONTEXT(MR_ctxt_detstack_zone), old_sp);
-    printf("old sp: ");
-    MR_printdetstack(stdout, old_sp);
-    printf(", old succip: ");
-    MR_printlabel(stdout, MR_succip);
+    MR_debug_log_message(
+        "create new det segment: old zone: %p, old sp %p, old succip %p",
+        MR_CONTEXT(MR_ctxt_detstack_zone), old_sp, MR_succip);
 #endif
 
     list->MR_zones_head = MR_CONTEXT(MR_ctxt_detstack_zone);
@@ -241,12 +240,10 @@ MR_Word *MR_new_detstack_segment(MR_Word
     MR_incr_sp_leaf(n);
 
 #ifdef  MR_DEBUG_STACK_SEGMENTS
-    printf("create new det segment: new zone: %p, new sp %p\n",
-        MR_CONTEXT(MR_ctxt_detstack_zone), MR_sp);
-    printf("new sp: ");
-    MR_printdetstack(stdout, MR_sp);
-    printf(", new succip: ");
-    MR_printlabel(stdout, MR_ENTRY(MR_pop_detstack_segment));
+    MR_debug_log_message(
+        "create new det segment: new zone: %p, new sp %p new succip: %p",
+        MR_CONTEXT(MR_ctxt_detstack_zone), MR_sp,
+        MR_ENTRY(MR_pop_detstack_segment));
 #endif
 
     return MR_sp;
@@ -274,14 +271,14 @@ MR_nondetstack_segment_extend_slow_path(
     {
         MR_maxfr_word = (MR_Word) new_maxfr;
         if (new_zone != NULL) {
-            MR_unget_zone(new_zone);
+            MR_release_zone(new_zone);
         }
         return;
     }
 
     if (new_zone == NULL) {
         /* We perform explicit overflow checks so redzones just waste space. */
-        new_zone = MR_create_zone("nondetstack_segment", 0,
+        new_zone = MR_create_or_reuse_zone("nondetstack_segment",
             MR_nondetstack_size, 0, 0, MR_default_handler);
     }
 
@@ -335,7 +332,7 @@ MR_rewind_nondetstack_segments(MR_Word *
         if (reusable_zone == NULL) {
             reusable_zone = zone;
         } else {
-            MR_unget_zone(zone);
+            MR_release_zone(zone);
         }
 
         list = MR_CONTEXT(MR_ctxt_prev_nondetstack_zones);
@@ -367,15 +364,12 @@ MR_define_entry(MR_pop_detstack_segment)
     orig_succip = (MR_Code *) MR_stackvar(2);
 
 #ifdef  MR_DEBUG_STACK_SEGMENTS
-    printf("restore old det segment: old zone %p, old sp %p\n",
-        MR_CONTEXT(MR_ctxt_detstack_zone), MR_sp);
-    printf("old sp: ");
-    MR_printdetstack(stdout, MR_sp);
-    printf(", old succip: ");
-    MR_printlabel(stdout, MR_succip);
+    MR_debug_log_message(
+        "restore old det segment: old zone %p, old sp %p old succip: %p",
+        MR_CONTEXT(MR_ctxt_detstack_zone), MR_sp, MR_succip);
 #endif
 
-    MR_unget_zone(MR_CONTEXT(MR_ctxt_detstack_zone));
+    MR_release_zone(MR_CONTEXT(MR_ctxt_detstack_zone));
 
     list = MR_CONTEXT(MR_ctxt_prev_detstack_zones);
     MR_CONTEXT(MR_ctxt_detstack_zone) = list->MR_zones_head;
@@ -384,12 +378,9 @@ MR_define_entry(MR_pop_detstack_segment)
     MR_GC_free(list);
 
 #ifdef  MR_DEBUG_STACK_SEGMENTS
-    printf("restore old det segment: new zone %p, new sp %p\n",
-        MR_CONTEXT(MR_ctxt_detstack_zone), orig_sp);
-    printf("new sp: ");
-    MR_printdetstack(stdout, orig_sp);
-    printf(", new succip: ");
-    MR_printlabel(stdout, orig_succip);
+    MR_debug_log_message(
+        "restore old det segment: new zone %p, new sp %p new succip: %p",
+        MR_CONTEXT(MR_ctxt_detstack_zone), orig_sp, orig_succip);
 #endif
 
     MR_sp_word = (MR_Word) orig_sp;
Index: runtime/mercury_trail.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_trail.c,v
retrieving revision 1.19
diff -u -p -b -r1.19 mercury_trail.c
--- runtime/mercury_trail.c	26 Sep 2008 00:24:46 -0000	1.19
+++ runtime/mercury_trail.c	5 Apr 2011 09:01:56 -0000
@@ -225,7 +225,7 @@ MR_new_trail_segment(void)
     /*
     ** We perform explicit overflow checks so redzones just waste space. 
     */
-    new_zone = MR_create_zone("trail_segment", 0, MR_trail_size, 0,
+    new_zone = MR_create_or_reuse_zone("trail_segment", MR_trail_size, 0,
         0, MR_default_handler);
     
     list = MR_GC_malloc_uncollectable(sizeof(MR_MemoryZones));
@@ -258,7 +258,7 @@ MR_pop_trail_segment(void)
         MR_TRAIL_ZONE, MR_trail_ptr);
 #endif
 
-    MR_unget_zone(MR_TRAIL_ZONE);
+    MR_release_zone(MR_TRAIL_ZONE);
 
     list = MR_PREV_TRAIL_ZONES;
     MR_TRAIL_ZONE = list->MR_zones_head;
Index: runtime/mercury_wrapper.c
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_wrapper.c,v
retrieving revision 1.217
diff -u -p -b -r1.217 mercury_wrapper.c
--- runtime/mercury_wrapper.c	25 Mar 2011 03:13:42 -0000	1.217
+++ runtime/mercury_wrapper.c	5 Apr 2011 09:01:56 -0000
@@ -77,7 +77,8 @@ ENDINIT
 
 /*
 ** Sizes of data areas (including redzones), in kilobytes
-** (but we later multiply by 1024 to convert to bytes).
+** (but we later multiply by 1024 to convert to bytes, make sure they are at
+** least as big as the primary cache size, and round up to the page size).
 **
 ** Note that it is OK to allocate a large heap, since we will only touch
 ** the part of it that we use; we're really only allocating address space,
@@ -103,8 +104,6 @@ ENDINIT
 #ifdef MR_STACK_SEGMENTS
 size_t      MR_detstack_size =            64 * sizeof(MR_Word);
 size_t      MR_nondetstack_size =         16 * sizeof(MR_Word);
-size_t      MR_small_detstack_size =       8 * sizeof(MR_Word);
-size_t      MR_small_nondetstack_size =    8 * sizeof(MR_Word);
 #else
 size_t      MR_detstack_size =          4096 * sizeof(MR_Word);
 size_t      MR_nondetstack_size =         64 * sizeof(MR_Word);
@@ -144,8 +143,16 @@ size_t      MR_gen_nondetstack_size =   
 #else
   size_t        MR_heap_zone_size =        4 * sizeof(MR_Word);
 #endif
+#ifdef MR_STACK_SEGMENTS
+/*
+** We don't use redzones with stack segments.
+*/
+size_t      MR_detstack_zone_size =        0;
+size_t      MR_nondetstack_zone_size =     0;
+#else
 size_t      MR_detstack_zone_size =        4 * sizeof(MR_Word);
 size_t      MR_nondetstack_zone_size =     4 * sizeof(MR_Word);
+#endif
 size_t      MR_solutions_heap_zone_size =  4 * sizeof(MR_Word);
 size_t      MR_global_heap_zone_size =     4 * sizeof(MR_Word);
 size_t      MR_trail_zone_size =           4 * sizeof(MR_Word);
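
To make the size comment above concrete: the values are in kilobytes, so on
a platform with 8-byte MR_Words (an assumption; MR_Word is 4 bytes on 32-bit
platforms) the segmented-stack default MR_detstack_size = 64 * sizeof(MR_Word)
is 64 * 8 = 512 KB.  Multiplying by 1024 gives 524288 bytes, which is already
larger than a typical primary cache and a multiple of a typical 4 KB page, so
the later adjustments leave that default unchanged.
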
@@ -636,7 +643,7 @@ mercury_runtime_init(int argc, char **ar
     MR_init_memory();
   #ifdef MR_USE_TRAIL
     /* initialize the trail */
-    MR_trail_zone = MR_create_zone("trail", 0,
+    MR_trail_zone = MR_create_or_reuse_zone("trail",
         MR_trail_size, MR_next_offset(),
         MR_trail_zone_size, MR_default_handler);
     MR_trail_ptr = (MR_TrailEntry *) MR_trail_zone->min;
@@ -1505,7 +1512,9 @@ MR_process_options(int argc, char **argv
                     MR_usage();
                 }
 
+#ifndef MR_STACK_SEGMENTS
                 MR_small_detstack_size = size;
+#endif
                 break;
 
             case MR_SMALL_DETSTACK_SIZE_KWORDS:
@@ -1513,7 +1522,9 @@ MR_process_options(int argc, char **argv
                     MR_usage();
                 }
 
+#ifndef MR_STACK_SEGMENTS
                 MR_small_detstack_size = size * sizeof(MR_Word);
+#endif
                 break;
 
             case MR_SMALL_NONDETSTACK_SIZE:
@@ -1521,15 +1532,18 @@ MR_process_options(int argc, char **argv
                     MR_usage();
                 }
 
+#ifndef MR_STACK_SEGMENTS
                 MR_small_nondetstack_size = size;
+#endif
                 break;
 
             case MR_SMALL_NONDETSTACK_SIZE_KWORDS:
                 if (sscanf(MR_optarg, "%lu", &size) != 1) {
                     MR_usage();
                 }
-
+#ifndef MR_STACK_SEGMENTS
                 MR_small_nondetstack_size = size * sizeof(MR_Word);
+#endif
                 break;
 
             case MR_SOLUTIONS_HEAP_SIZE:
@@ -2345,7 +2359,8 @@ MR_process_options(int argc, char **argv
         exit(1);
     }
 
-#if !defined(MR_HIGHLEVEL_CODE) && defined(MR_THREAD_SAFE)
+#if !defined(MR_HIGHLEVEL_CODE) && defined(MR_THREAD_SAFE) && \
+    !defined(MR_STACK_SEGMENTS)
     if (MR_small_detstack_size > MR_detstack_size) {
         printf("The small detstack size must be smaller than the "
             "regular detstack size.\n");
Index: runtime/mercury_wrapper.h
===================================================================
RCS file: /home/mercury1/repository/mercury/runtime/mercury_wrapper.h,v
retrieving revision 1.85
diff -u -p -b -r1.85 mercury_wrapper.h