[mercury-users] Parallelizing the garbage collector?
Ondrej Bojar
obo at cuni.cz
Wed Mar 22 07:31:53 AEDT 2006
Hi.
I'm still hoping to use threads in my computations, but as I posted
recently and Peter Ross suggested, the Boehm garbage collector wastes far
more time than the parallel processing gains. After reading some of
the documentation, I believe the GC has to stop all the threads
too often to perform a collection.
The Boehm GC supports some extra flags to improve performance:
-DPARALLEL_MARK for parallel marking and -DTHREAD_LOCAL_ALLOC for
per-thread allocation (separate free lists for individual threads).
(Parallel marking should help straight away; thread-local allocation
would need to actually be used by mmc.)
I tried adding these options to CFLAGS_FOR_THREADS in the rotd's
configure for my platform (Linux, x86_64), but an mmc built with
these extra flags fails to link my programs:
/home/bojar/opt/mercury-compiler-rotd-2006-03-18-x64-gcthread/lib/mercury/lib/libpar_gc.a(alloc.o)(.text+0x5a5):
In function `GC_set_fl_marks':
: undefined reference to `GC_compare_and_exchange'
/home/bojar/opt/mercury-compiler-rotd-2006-03-18-x64-gcthread/lib/mercury/lib/libpar_gc.a(mark.o)(.text+0xeb):
In function `GC_set_mark_bit':
: undefined reference to `GC_compare_and_exchange'
...
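
For reference, the change I made was roughly the following. This is only
a sketch of my local edit to the per-platform CFLAGS_FOR_THREADS setting
in the rotd's configure; the surrounding code in the real script may
differ from what I show here:

    # in the Linux x86_64 branch of configure (sketch, not an exact quote)
    CFLAGS_FOR_THREADS="$CFLAGS_FOR_THREADS -DPARALLEL_MARK -DTHREAD_LOCAL_ALLOC"
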
As far as I understand, boehm_gc/configure (which would support the
option --enable-parallel-mark) is not called from the generic configure,
so I cannot check whether it would work better.
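
For concreteness, what I had in mind is roughly the following. It is
untested, and I do not know whether the Mercury build would even pick up
a collector library built this way:

    cd boehm_gc
    ./configure --enable-parallel-mark   # the option mentioned above
    make
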
Would you recommend any workaround?
Or is there any chance of a par.agc grade? (I have not read much about
accurate GC, but I expect it does not need to stop the world to reclaim
unused blocks.)
Thanks, Ondrej.
--
Ondrej Bojar (mailto:obo at cuni.cz)
http://www.cuni.cz/~obo