[m-dev.] thread.spawn_native

Paul Bone paul at bone.id.au
Wed Jun 11 09:44:34 AEST 2014


On Tue, Jun 10, 2014 at 06:09:06PM +1000, Peter Wang wrote:
> On Tue, 10 Jun 2014 17:06:21 +1000, Paul Bone <paul at bone.id.au> wrote:
> > On Tue, Jun 10, 2014 at 04:34:47PM +1000, Peter Wang wrote:
> > > 
> > > The proposal is unclear to me.
> > > 
> > > Let's say the annotation is called `may_not_migrate'.  If a Mercury
> > > context calls a `may_not_migrate' foreign proc, then it is permanently
> > > tied to some Mercury engine (OS thread) from then on?  Presumably the
> > > foreign proc is executed directly in that OS thread, and not rerouted to
> > > an IO thread as in the case of blocking foreign procs.  Otherwise, it
> > > would not solve the API issue.
> > > 
> > > Where does the Mercury engine come from?  If the number of Mercury
> > > engines is fixed, doesn't it mean that a blocking `may_not_migrate'
> > > foreign proc will prevent progress of other Mercury threads?  Or can
> > > `may_not_migrate' foreign procs not block?
> > > 
> > > Can two Mercury contexts be bound to the same Mercury engine?
> > > Presumably so, if the number of Mercury engines is fixed.
> > > But then one context could stomp on the thread-local state expected
> > > by another context.
> > 
> > Good point.  Previously I hadn't thought about how these two features would
> > interact.  Because there's no clear way to find out ahead of time whether any
> > of the foreign calls will block, any may_not_migrate calls must be done on an
> > IO worker thread (a pthread that is not part of the normal set of Mercury
> > engines).
> 
> That's how I initially interpreted your proposal.
> 
> What happens for a may_not_migrate, may_call_mercury foreign proc?
> Presumably the foreign code in the IO worker will need to arrange
> for the Mercury proc to be executed by one of the fixed number Mercury
> engines, and wait for the result.  If the Mercury proc calls
> another `may_not_migrate' foreign proc, it deadlocks?

Urgh.  This is getting messy.

Yeah, I'd say that the Mercury engine would have to execute the Mercury
code, and the IO worker may have to wait around without executing any other
C code.  Effectively, it's not an IO worker anymore but a slave to that
particular Mercury context's foreign calls.

Another "solution" is to make may_not_migrate and may_call_mercury
incompatible.
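
To make the problem concrete, the troublesome combination would look
something like this (just a sketch: `may_not_migrate' is only the proposed
attribute, do_callback and its exported C name are made up, and the module
is assumed to import int and io):

    :- pred do_callback(int::in, int::out) is det.
    do_callback(X, X + 1).      % stands in for arbitrary Mercury code
    :- pragma foreign_export("C", do_callback(in, out), "SKETCH_do_callback").

    :- pred tricky(int::in, int::out, io::di, io::uo) is det.
    :- pragma foreign_proc("C",
        tricky(In::in, Out::out, _IO0::di, _IO::uo),
        [promise_pure, may_call_mercury, may_not_migrate],
    "
        /* This body runs on an IO worker, but SKETCH_do_callback must run
           on a Mercury engine, so the worker has to hand the call over and
           wait.  If do_callback then reaches another may_not_migrate
           foreign proc, the two can end up waiting on each other. */
        SKETCH_do_callback(In, &Out);
    ").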

> > Yes, my intention is that multiple contexts could be bound to the same
> > Mercury engine.  I believe the management of any thread-local state that's
> > used by foreign code is the responsibility of the foreign code.  Mercury
> > cannot protect some foreign code from stepping on some other foreign code's
> > state.
> 
> That severely reduces the value of a may_not_migrate annotation.
> I don't know how you could practically use any API that depends on
> OS-thread-state then, except one thread at a time.

I'm worried that we don't understand each other's meaning.  Let's say that we
have two C libraries, liba and libb.  Both use thread-local storage and, once
initialised, must always be called by the same pthread (without extra
work/reinitialisation).  However, each library's thread-local storage is
handled correctly, so a won't clobber b and b won't clobber a (if they did,
they'd be terrible libraries and would probably clobber Mercury's state too).

I see may_not_migrate as a solution to this problem: once a call is made to
liba, all following calls to liba are made by the same OS thread.  Ditto for
libb.  But this still allows one thread to call a, then b, then a again, or
whatever (which is necessary, e.g., in a non-parallel context/grade).

I don't see how allowing the same OS thread to call both libraries creates a
problem, which seems to be what you're saying it does.
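
In code, what I have in mind is roughly this (a sketch only: `may_not_migrate'
is the proposed attribute, liba_do_step and libb_do_step stand in for two real
libraries that each keep per-OS-thread state, and the module is assumed to
import int and io):

    :- pred liba_step(int::in, int::out, io::di, io::uo) is det.
    :- pragma foreign_proc("C",
        liba_step(In::in, Out::out, _IO0::di, _IO::uo),
        [will_not_call_mercury, promise_pure, may_not_migrate],
    "
        /* liba keeps thread-local state, so after the first call every
           later call must come from the same OS thread. */
        Out = liba_do_step(In);
    ").

    :- pred libb_step(int::in, int::out, io::di, io::uo) is det.
    :- pragma foreign_proc("C",
        libb_step(In::in, Out::out, _IO0::di, _IO::uo),
        [will_not_call_mercury, promise_pure, may_not_migrate],
    "
        /* Likewise for libb, with its own, independent thread-local state. */
        Out = libb_do_step(In);
    ").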


> > > > 
> > > > What if your explicit concurrency thread executes a parallel conjunction? or
> > > > is executed by a parallel conjunction.  We at least have to accept the
> > > > possibility.
> > > 
> > > Right, any explicit concurrency thread (of which the main thread is one)
> > > should be able to make goals available for execution by the parallel
> > > execution workers.  And the problem is that a parallel conjunct could
> > > (indirectly) call a foreign proc which has a requirement to be executed
> > > on a particular Mercury engine (OS thread).
> > 
> > There is nothing special about the engines that are used to execute sparks
> > and contexts created by parallel conjunctions, nor is there anything special
> > about the sparks or contexts themselves.  So "parallel execution worker"
> > doesn't really mean anything; it's just any Mercury engine.
> 
> As I said:
> 
>     I don't see conceptually why threads for parallel execution should
>     overlap with explicit concurrency threads.
> 
> That is, there could be separate classes of Mercury engines if
> necessary.  A fixed number of "parallel execution workers" created at
> startup, but also other Mercury engines created explicitly by
> thread.spawn_native.

That's a separate question.  I was describing how it works now and providing
some clarification.

Regarding our proposals, I think that introducing different types of Mercury
engines is problematic.  If an explicitly forked context (thread.spawn, or
even thread.spawn_native) executes a parallel conjunction, then it makes the
engine it's running on (in the spawn_native case, the explicitly created
engine) a parallel execution worker, because of how parallel conjunctions are
executed (among other things).
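
Concretely, the kind of program I mean looks like this (a sketch; the module
name, expensive_a, and expensive_b are made up placeholders for real work):

    :- module spawn_par_sketch.
    :- interface.
    :- import_module io.
    :- pred main(io::di, io::uo) is cc_multi.
    :- implementation.
    :- import_module int, list, string, thread.

    main(!IO) :-
        thread.spawn(
            ( pred(IO0::di, IO::uo) is cc_multi :-
                % The parallel conjunction below can hand a conjunct to any
                % other Mercury engine, and the engine running this spawned
                % context joins in that scheduling like any other.
                ( expensive_a(A)
                & expensive_b(B)
                ),
                io.format("a = %d, b = %d\n", [i(A), i(B)], IO0, IO)
            ), !IO).

    :- pred expensive_a(int::out) is det.
    expensive_a(42).        % placeholder for real work

    :- pred expensive_b(int::out) is det.
    expensive_b(7).         % placeholder for real work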

I don't really want to change how parallel conjunctions are executed because
I think it works well.


> > > It seems solvable.  When the parallel worker hits a foreign proc (say,
> > > with the `may_not_migrate' annotation) it should be able to make the
> > > work available to be picked up by the original Mercury engine.  I think
> > > that would be as if the parallel conjunct never left the original
> > > Mercury engine?
> > 
> > I think that idea works.  There is already support for handing work to
> > specific engines: it's used when a foreign call calls some Mercury code and
> > then returns back through the C code.  The runtime arranges for that return
> > to be done on the engine that originally made the foreign call, so that it
> > can use the same C stack frame.
> > 
> > However, if all may_not_migrate calls are made on IO workers, then no
> > migration is necessary.
> 
> Rather, migration is always necessary for may_not_migrate ;)

Sorry, no context migration.  The "work" does migrate.


> > > Blocking foreign procs should not be executed in a parallel worker
> > > either.
> > 
> > As above, this is true for any Mercury Engine.
> 
> Not in a model where thread.spawn_native exists.  It creates a Mercury
> engine dedicated to executing code of a particular Mercury context --
> including the blocking foreign procs.
> 

I'm advocating for solutions without thread.spawn_native.  The combination of
may_not_migrate and nonblocking IO is the alternative proposal.


-- 
Paul Bone


