[mercury-users] term_io__read_term fills detstack

Ondrej Bojar obo at cuni.cz
Fri Jan 20 02:31:27 AEDT 2006


(Sorry if you receive multiple copies of this mail, the first copy got 
stuck somewhere.)

Hi.

I would like to parallelize some computation over several machines with
a shared network file system, using something as simple as "ssh machine
my_subtask". In fact, I have already implemented a library that does
this. (The library must be called once at the very beginning of main/2
so that the program can also run in "slave" mode. To run a predicate in
the background, one then calls the library twice: first
start(pred(S::in, T::out) is det, S::in, process_handle(S, T)::out),
and later finish(process_handle(S, T)::in, T::out). The modes should
really be unique, and there are other hacks.)
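
In sketch form, the interface looks roughly like this (the predicate and
type names here are only illustrative; the io states on start/finish,
the unique modes and the other hacks are left out):

    % Must be called once at the very beginning of main/2 so that the
    % executable can also run in "slave" mode when invoked over ssh.
    % (init_parallel is a made-up name.)
:- pred init_parallel(io::di, io::uo) is det.

    % An opaque handle for a computation running in the background.
:- type process_handle(S, T).

    % Start running P(Input, Output) on another machine.
:- pred start(pred(S, T)::in(pred(in, out) is det), S::in,
    process_handle(S, T)::out) is det.

    % Wait for the remote run to finish and collect its output.
:- pred finish(process_handle(S, T)::in, T::out) is det.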

The problem arises as soon as either the input or the output of the
predicate gets a bit bigger. My library uses io__write to dump the
input into the pipe to 'ssh machine "this_prog_name
hey_slave_run_my_predicate"', and term_io__read_term to read the data
back in the slave. (And similarly for the results in the other
direction.)
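
To make this concrete, a stripped-down version of the data exchange
looks roughly like the module below (the module name and the payload
type are just for the example; where the real library talks to the ssh
pipe, this toy version uses stdout/stdin):

:- module pass_term.
:- interface.

:- import_module io.

:- pred main(io::di, io::uo) is det.

:- implementation.

:- import_module list.
:- import_module string.
:- import_module term.
:- import_module term_io.

    % A stand-in for the real input of the parallelised predicate.
:- type payload
    --->    payload(list(int), string).

main(!IO) :-
    io.command_line_arguments(Args, !IO),
    ( if Args = ["dump"] then
        % Master side: write the value as a single Mercury term,
        % terminated by a full stop, so that term_io.read_term can
        % parse it at the other end.
        Data = payload([1, 2, 3], "hello"),
        io.write(Data, !IO),
        io.write_string(".\n", !IO)
    else
        % Slave side: parse one term from stdin and convert it back
        % into a value of the expected type.
        term_io.read_term(ReadResult, !IO),
        process_read_term(ReadResult, !IO)
    ).

:- pred process_read_term(read_term::in, io::di, io::uo) is det.

process_read_term(eof, !IO) :-
    io.write_string("unexpected end of input\n", !IO).
process_read_term(error(Msg, Line), !IO) :-
    io.format("syntax error at line %d: %s\n", [i(Line), s(Msg)], !IO).
process_read_term(term(_VarSet, Term), !IO) :-
    ( if term_to_payload(Term, Payload) then
        io.write_string("slave received: ", !IO),
        io.write(Payload, !IO),
        io.nl(!IO)
    else
        io.write_string("term is not a payload\n", !IO)
    ).

    % term_to_type does the generic conversion; this wrapper only pins
    % down the expected type.
:- pred term_to_payload(term(T)::in, payload::out) is semidet.

term_to_payload(Term, Payload) :-
    term.term_to_type(Term, Payload).

Running './pass_term dump | ./pass_term' shows the round trip with this
toy payload.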

With larger data, however, term_io__read_term tends to fill up the
detstack very quickly. I know how to run the program with a bigger
detstack, but that is not a real solution.

Is there another way of passing large chunks of structured data of any
type? Is it possible to implement term_io__read_term differently?

I know I could serialize and deserialize my data structures by hand, but
this is not exactly what I want to implement.
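
For concreteness, hand-written (de)serialization of the payload type
from the sketch above might look like the two predicates below (the
maybe module would need to be imported as well); the point is that I
would have to write and maintain such a pair for every type I want to
pass around:

    % Hand-rolled text format: the ints on one line, the string on the
    % next (so the string must not contain a newline).
:- pred write_payload(payload::in, io::di, io::uo) is det.

write_payload(payload(Ints, Str), !IO) :-
    IntStrs = list.map(string.int_to_string, Ints),
    io.write_string(string.join_list(" ", IntStrs), !IO),
    io.nl(!IO),
    io.write_string(Str, !IO),
    io.nl(!IO).

    % Returns `no' on a short read; malformed numbers abort, since
    % det_to_int is used for brevity.
:- pred read_payload(maybe(payload)::out, io::di, io::uo) is det.

read_payload(MaybePayload, !IO) :-
    io.read_line_as_string(IntsLineRes, !IO),
    io.read_line_as_string(StrLineRes, !IO),
    ( if
        IntsLineRes = ok(IntsLine),
        StrLineRes = ok(StrLine)
    then
        Ints = list.map(string.det_to_int, string.words(IntsLine)),
        MaybePayload = yes(payload(Ints, string.chomp(StrLine)))
    else
        MaybePayload = no
    ).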

Or is there some quite different way to achieve my goal?

Thanks for any hints, Ondrej.

-- 
Ondrej Bojar (mailto:obo at cuni.cz / bojar at ufal.mff.cuni.cz)
http://www.cuni.cz/~obo