[m-rev.] for review: memory attribution profiling

Julien Fischer juliensf at csse.unimelb.edu.au
Thu Apr 28 14:54:50 AEST 2011


Hi Peter,

The following is a review of the changes to the user's guide.  I have
some comments regarding the rest of the diff which I will post
separately, but feel free to deal with those post-commit if you wish.

On Thu, 21 Apr 2011, Peter Wang wrote:

> Branches: main
>
> Implement a new form of memory profiling, which tells the user what memory
> is being retained during a program run.  This is done by allocating an extra
> word before each cell, which is used to "attribute" the cell to an
> allocation site.  The attribution, or "allocation id", is an address to an
> MR_AllocSiteInfo structure generated by the Mercury compiler, giving the
> procedure, filename and line number of the allocation, and the type
> constructor and arity of the cell that it allocates.
>
> The user must manually instrument the program with calls to
> `benchmarking.report_memory_attribution', which forces a GC and summarises
> the live objects on the heap using the attributions.  The mprof tool is
> extended with a new mode to parse and present that data.
>
> Objects which are unattributed (e.g. by hand-written C code which hasn't
> been updated) are still accounted for, but show up in profiles as "unknown".
>
> Currently this profiling mode only works in conjunction with the Boehm
> garbage collector, though in principle it can work with any memory allocator
> for which we can access a list of the live objects.  Since term size
> profiling relies on the same technique of using an extra word per memory
> cell, the two profiling modes are incompatible.

...

> diff --git a/doc/user_guide.texi b/doc/user_guide.texi
> index a89d68f..5f7de70 100644
> --- a/doc/user_guide.texi
> +++ b/doc/user_guide.texi
> @@ -5576,8 +5576,11 @@ then a progress message will be displayed as each file is read.
> * Creating profiles::               How to create profile data.
> * Using mprof for time profiling::  How to analyze the time performance of a
>                                     program with mprof.
> -* Using mprof for memory profiling::How to analyze the memory performance of a
> +* Using mprof for profiling memory allocation::
> +                                    How to analyze the memory performance of a
>                                     program with mprof.
> +* Using mprof -s for profiling memory retention::

I suggest removing "-s" from the section heading.  The body of the text
can mention that detail, as with the -m option.

> +                                    How to analyze what memory is on the heap.
> * Using mdprof::                    How to analyze the time and/or memory
>                                     performance of a program with mdprof.
> * Using threadscope::               How to analyse the parallel
> @@ -5965,15 +5968,14 @@ time represent the proportion of the current procedure's self and descendent
> time due to that parent.  These times are obtained using the assumption that
> each call contributes equally to the total time of the current procedure.
>
> - at node Using mprof for memory profiling
> - at section Using mprof for memory profiling
> + at node Using mprof for profiling memory allocation
> + at section Using mprof for profiling memory allocation
> @pindex mprof
> @cindex Memory profiling
> @cindex Allocation profiling
> - at cindex Heap profiling
> @cindex Profiling memory allocation
>
> -To create a memory profile, you can invoke @samp{mprof}
> +To create a profile of memory allocations, you can invoke @samp{mprof}
> with the @samp{-m} (@samp{--profile memory-words}) option.
> This will profile the amount of memory allocated, measured in units of words.
> (A word is 4 bytes on a 32-bit architecture,
> @@ -5992,13 +5994,72 @@ With memory profiling, just as with time profiling,
> you can use the @samp{-c} (@samp{--call-graph}) option to display
> call graph profiles in addition to flat profiles.
>
> -Note that Mercury's memory profiler will only tell you about allocation,
> +The options so far will only tell you about allocation,

I suggest:

     When invoked with the @samp{-m} option, mprof only reports
     allocations, not deallocations (garbage collection).

> not about deallocation (garbage collection).
> It can tell you how much memory was allocated by each procedure,
> but it won't tell you how long the memory was live for,
> or how much of that memory was garbage-collected.
> This is also true for @samp{mdprof}.
>
> +The @samp{mprof -s} tool described in the next section can tell

Replace with:

     The memory retention profiling tool described in the next section ...

> +you which memory cells remain on the heap.
> +
> + at node Using mprof -s for profiling memory retention
> + at section Using mprof -s for profiling memory retention
> + at pindex mprof -s
> + at cindex Memory attribution
> + at cindex Memory retention
> + at cindex Heap profiling
> +
> +When a program is built with memory profiling and uses the Boehm

s/memory profiling/memory profiling enabled/

> +garbage collector, i.e. a grade with @samp{.memprof.gc} modifiers,
> +each memory cell is ``attributed'' with information about where it
> +was originated, and its type constructor.  This information can be

I suggest:

    ... with information about its origin and type.

> +collated to tell you what kinds of objects are being retained when
> +the program executes.
> +
> +To you this, you must instrument the program by adding calls to

s/To you this/To do this/

> + at code{benchmarking.report_memory_attribution/1} or
> + at code{benchmarking.report_memory_attribution/3}
> +at points of interest, passing an appropriate label for
> +your reference.

I find that last bit unclear.  I suggest something like:

   The first argument of the report_memory_attribution predicates
   is a string that is used to label the memory retention data
   corresponding to that call in the profiling output.
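
It may also help to show a bare call before the trace-goal example
below, e.g. something like this (do_phase_1/do_phase_2 are just
placeholder names, not from the diff):

    do_phase_1(!Data, !IO),
    benchmarking.report_memory_attribution("after phase 1", !IO),
    do_phase_2(!Data, !IO),
    benchmarking.report_memory_attribution("after phase 2", !IO)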

> For example, if a program operates in distinct phases
> +you may want to add a call in between the phases.
> +The @samp{report_memory_attribution} predicates do nothing in other grades,
> +so are safe to leave in the program.

Shift the above sentence to after the example - it breaks the flow here.

> You may want to call them from
> +within @samp{trace} goals:
> +
> + at example
> +trace [run_time(env("SNAPSHOTS")), io(!IO)] (
> +    benchmarking.report_memory_attribution("Phase 2", !IO)
> +)
> + at end example
> +
> +Next, build the program in a @samp{.memprof.gc} grade.
> +After the program has finished executing, it will generate a file
> +called @samp{Prof.Snapshots} in the current directory.
> +Run @samp{mprof -s} to view the profile.
> +You will see the memory cells which were on the heap at each time
> +that @samp{report_memory_attribution} was called: the origin of the cells, and
> +their type constructors.
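
A concrete command sequence might also help here, e.g. (the grade and
program name are only illustrative; any grade with the memprof and gc
components should do):

    mmc --make --grade asm_fast.gc.memprof my_program
    ./my_program
    mprof -s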
> +
> +Passing the additional option @samp{-g type} will group the profile first by
> +type constructors, then by procedure.  The @samp{-H} option hides the secondary
> +level of information.  Memory allocated by the Mercury runtime system itself

I think you mean to say "Memory cells allocated by the ..." there.

> +are normally excluded from the profile; they can be viewed by passing the
> + at samp{-r} option.
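
Perhaps also show a couple of sample invocations for these options,
e.g.:

    mprof -s -g type
    mprof -s -r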
> +
> +Mercury values which are dead may in fact be still reachable from the various

Begin that sentence with: "Note that".

> +execution stacks. This is particularly noticeable on the high-level C back-end,
> +as the C compiler does not take conservative garbage collection into account;

Replace the semicolon with the word "and".

> +the values of Mercury variables may linger on the C stack for much longer than

s/the values of Mercury variables/Mercury values/

Delete "much".

Julien.