[m-dev.] I want to move and rename the dependency_graph module
Paul Bone
paul at bone.id.au
Wed Feb 15 22:42:59 AEDT 2017
On Wed, Feb 15, 2017 at 05:36:09PM +1100, Zoltan Somogyi wrote:
>
>
> On Wed, 15 Feb 2017 17:13:24 +1100, Paul Bone <paul at bone.id.au> wrote:
> > Yeah, I thought so too. Didn't we talk about this at lunch with Andy King
> > once?
>
> I don't remember.
>
> > He mentioned supercompilation in passing and I hadn't heard of it.
> > Basically you both said that it's something that searches the space of
> > possible optimisations (or is it something else) and then picks the best?
> > You both said that it only works for very small programs. That made sense
> > to me.
>
> The basic idea is: for each N, there are only a finite number of machine language
> programs consisting of N instructions. To supercompile a function in e.g. C, do this:
>
> for N = 0 up to some limit:
>     for every possible machine language program with exactly N instructions,
>             with some exceptions:
>         try to prove that its semantics is exactly the same as the C function
>             you are trying to compile
>         if the proof attempt is successful
>             exit
>
> The "some exceptions" part meant that for instructions that contain immediate values,
> the supercompiler will typically explore only a fixed small set of immediates, such as
> 0, 1, -1, 2, -2, up to (say) 16, instead of all 2^16 possible values in a 16 bit field.
> Even then, this is feasible only for VERY small values of N.
Yes, that space would be huge. The space of regular compilation decisions
for a program of the same size would be _much_ smaller. I can see that the
benefit of supercompilation is that the compiler implementers don't need to
write any optimisations; the compiler gropes in the dark for an optimal
program.
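
To make that loop concrete, here is a minimal sketch in Python. Everything
in it is my own simplification: a made-up three-instruction accumulator
machine, a small immediate set, and equivalence checked by testing on
sample inputs rather than by an actual proof.

    from itertools import product

    # Toy "machine": a program is a sequence of (op, immediate) pairs that
    # transform a single accumulator. The instruction set is invented for
    # this sketch; neg ignores its immediate.
    OPS = {
        "add": lambda acc, imm: acc + imm,
        "mul": lambda acc, imm: acc * imm,
        "neg": lambda acc, imm: -acc,
    }
    IMMEDIATES = [0, 1, -1, 2, -2]      # the "fixed small set of immediates"
    TEST_INPUTS = range(-8, 9)          # testing stands in for a real proof

    def run(program, acc):
        for op, imm in program:
            acc = OPS[op](acc, imm)
        return acc

    def equivalent(program, target):
        # A real supercompiler would prove semantic equivalence; we only test.
        return all(run(program, x) == target(x) for x in TEST_INPUTS)

    def supercompile(target, limit):
        instructions = [(op, imm) for op in OPS for imm in IMMEDIATES]
        for n in range(limit + 1):
            # Enumerate every possible program with exactly n instructions.
            for program in product(instructions, repeat=n):
                if equivalent(program, target):
                    return program
        return None

    # Find the shortest program computing f(x) = 2x + 1.
    print(supercompile(lambda x: 2 * x + 1, limit=3))

Even this toy explores 15^N candidates at each length N, which shows why
the approach is only feasible for very small N.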
> > I have some similar code in the deep profiler that might be able to help.
> > The benefit of that is that it can weight each SCC by the number of calls
> > made into and within the SCC.
>
> I don't think we are looking for weighted information.
I mostly agree. It depends on what question we're asking, though.

At YesLogic we're concerned with the risk that Prince may crash if given
unusual input. Decreasing the probability of any crash is good, even if
we can't guarantee that we'll never run out of stack space. So knowing how
frequently an SCC is entered, and its "average depth", gives us an idea of
the typical risk of that SCC. But that doesn't help if a user gives
strange input: such inputs are probably rarer, but they do happen.
Average depth = (Calls Within SCC + Calls Into SCC) / Calls Into SCC
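
Plugging in some hypothetical counts: an SCC entered 1,000 times with
9,000 calls made within it has an average depth of
(9,000 + 1,000) / 1,000 = 10 calls per entry. As a trivial Python helper
(the names here are my own):

    def average_depth(calls_within_scc, calls_into_scc):
        # Each entry contributes the entering call itself plus its share
        # of the calls made inside the SCC.
        return (calls_within_scc + calls_into_scc) / calls_into_scc

    print(average_depth(9000, 1000))    # 10.0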
We also like improving performance. Prince already performs very well, but
there are some users with rather high demands. Weighted information can
definitely be used to guide performance improvements, but that's a
secondary priority.
--
Paul Bone
http://paul.bone.id.au