[mercury-users] Debugging 'memory leaks' (fwd)

Peter Wang novalazy at gmail.com
Fri Mar 28 17:26:27 AEDT 2008


On 2008-03-27, Julien Fischer <juliensf at csse.unimelb.edu.au> wrote:
> Hi, all.
>
> My Mercury code is kind of greedy and does not seem to de-allocate all
> it should. Parts of the code are in C, but I always use
> MR_GC_NEW/MR_GC_NEW_ARRAY to allocate. The main predicate looks like this:
>
> main(!IO) :-
>   do_something_big(!IO),
>   print_memusage(!IO),
>   gc.garbage_collect(!IO),
>   print_memusage(!IO).
>
> The predicate do_something_big(io::di, io::uo) contains a 'cached'
> call: from some hardwired input files it collects some statistics and
> saves them to a cache (and, say, prints them to stdout). The next time
> I run the program, it just loads the cache and prints the statistics,
> so stdout looks identical but the collection step was not performed.
>
> There are two things I don't like:
>
> - Memory usage as reported by the first print_memusage is drastically
> different in the two runs (cached vs. not-yet-cached): do_something_big
> allocates some temporary structures when computing the statistics but
> does not seem to release them, even though nothing is passed back to
> main (except for the altered I/O state).
> - print_memusage reports exactly the same amount of memory in use
> before and after the garbage collection in both runs, i.e. whatever was
> left in memory still looks reachable to the garbage collector (or
> gc.garbage_collect is just a no-op).

How does print_memusage get its statistics?
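If it reports numbers from the operating system (e.g. the process's
virtual memory size), they will not shrink after a collection, because
by default the collector keeps freed pages for reuse rather than
returning them to the OS. A collector-level measurement may be more
informative. Here is a minimal sketch of such a predicate, assuming a
Boehm GC grade; the module name is illustrative, GC_get_heap_size and
GC_get_free_bytes are part of the Boehm GC public C API, and you may
need to add the collector's include directory to the C include path:

    :- module memusage.
    :- interface.
    :- import_module io.

    :- pred print_memusage(io::di, io::uo) is det.

    :- implementation.

    :- pragma foreign_decl("C", "
        #include <stdio.h>
        #include <gc.h>
    ").

    :- pragma foreign_proc("C",
        print_memusage(IO0::di, IO::uo),
        [will_not_call_mercury, promise_pure],
    "
        /* Report the collector's view of the heap rather than
        ** the process's OS-level memory usage. */
        printf(\"heap size = %lu, free bytes = %lu\\n\",
            (unsigned long) GC_get_heap_size(),
            (unsigned long) GC_get_free_bytes());
        IO = IO0;
    ").
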

You might try running with the GC_PRINT_STATS environment variable set;
perhaps the output will help.
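
For example, assuming a POSIX-style shell and a program called ./main:

    GC_PRINT_STATS=1 ./main

The collector then prints a summary of each collection (heap size,
bytes reclaimed, and so on), which should show whether collections are
actually happening and how much memory survives them.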

Or you could see if using munmap helps. Search for --boehm-gc-munmap in
the user's guide, or --munmap in slightly older versions.
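
(If I remember correctly, runtime options such as this are passed via
the MERCURY_OPTIONS environment variable rather than at compile time,
e.g.

    MERCURY_OPTIONS="--boehm-gc-munmap" ./main

but check the runtime options section of the user's guide for the exact
spelling in your version.)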

Peter
