[mercury-users] Structure reuse

Michael Day mikeday at yeslogic.com
Tue Jul 15 11:30:33 AEST 2003

> I believe the heap is checked for sufficient space on each allocation.
> If there is insufficient space then garbage collection occurs.  GC
> traverses the entire set of live data, so one does not want to do this 
> too frequently (e.g. after every few allocations.)  Having garbage lying
> around should not affect your program's performance, other than possibly
> causing more CPU cache misses.  It's the size of the working set that is
> important, not the size of the heap.

Hmm, what happens to garbage on the heap that has not yet been reclaimed,
when the heap does not fit into real memory? Will it be swapped out? When
the garbage collector runs, will it be swapped back in to be traversed?

Basically, will the current behaviour lead to unnecessary page faults and
thrashing as well as CPU cache misses, when the heap contains a lot of
garbage that could have been reclaimed earlier?

> Have you considered using a store and referencing subtrees via mutvars?

Yes, but then I'm programming in C++ again, albeit with slightly
more awkward syntax :)

Seriously, I've looked at that approach, but it does bulk out the code 

> One can't backtrack over these structures, but since you're interested
> in destructive update that shouldn't be an issue.

...as it also stops you backtracking in a read-only fashion and using 
semidet predicates to match portions of the tree for updating.
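For concreteness, the store/mutvar approach under discussion looks roughly like this. This is an untested sketch: the `mtree` type and the structure of `main` are mine, invented for illustration; only `store.new_mutvar`, `store.get_mutvar` and `store.set_mutvar` are from the standard `store` module (using the io state as the store):

```
:- module mutable_tree.
:- interface.
:- import_module io.
:- pred main(io::di, io::uo) is det.

:- implementation.
:- import_module store.

    % A binary tree whose subtrees are referenced via mutvars,
    % so they can be updated destructively within a store S.
:- type mtree(S)
    --->    leaf
    ;       node(int, mutvar(mtree(S), S), mutvar(mtree(S), S)).

main(!IO) :-
    % Allocate mutable subtree references in the I/O store.
    store.new_mutvar(leaf, Left, !IO),
    store.new_mutvar(leaf, Right, !IO),
    _Tree = node(1, Left, Right),

    % Destructive update: replace the left subtree in place.
    store.new_mutvar(leaf, L2, !IO),
    store.new_mutvar(leaf, R2, !IO),
    store.set_mutvar(Left, node(2, L2, R2), !IO),

    % Reading a subtree back now needs an explicit dereference.
    store.get_mutvar(Left, Sub, !IO),
    (
        Sub = node(N, _, _),
        io.write_int(N, !IO),
        io.nl(!IO)
    ;
        Sub = leaf,
        io.write_string("leaf\n", !IO)
    ).
```

Note how every traversal step becomes a `get_mutvar` threaded through the store, which is where the bulk comes from, and why you can no longer pattern-match your way down the tree with ordinary semidet unification.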

Once you make all your predicates update the io state and return a bool
instead of failing, Mercury becomes much more tiresome. After all,
declarative languages make manipulating trees fun! I just want it to be
efficient as well as fun :)
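By contrast, this is the semidet style I'd like to keep (again an untested sketch; `tree` and `contains` are hypothetical names, not anything from the standard library):

```
:- type tree
    --->    leaf
    ;       node(int, tree, tree).

    % Succeeds iff N occurs in the tree. Failure is just failure:
    % no store threading, no bool plumbing, and callers can use it
    % directly in conditions and pattern matches.
:- pred contains(tree::in, int::in) is semidet.

contains(node(N0, L, R), N) :-
    ( N = N0 ->
        true
    ; contains(L, N) ->
        true
    ;
        contains(R, N)
    ).
```

With mutvars in the picture, a predicate like this would instead have to dereference each subtree through the store, and its natural semidet failure would turn into an explicit bool result threaded alongside the state.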


YesLogic Prince prints XML!

mercury-users mailing list
post:  mercury-users at cs.mu.oz.au
administrative address: owner-mercury-users at cs.mu.oz.au
unsubscribe: Address: mercury-users-request at cs.mu.oz.au Message: unsubscribe
subscribe:   Address: mercury-users-request at cs.mu.oz.au Message: subscribe
