[m-dev.] New release?

Julien Fischer jfischer at opturion.com
Mon Oct 26 10:49:06 AEDT 2015

Hi Zoltan,

On Thu, 22 Oct 2015, Zoltan Somogyi wrote:

> On Wed, 21 Oct 2015 10:49:16 +1100 (AEDT), "Zoltan Somogyi" <zoltan.somogyi at runbox.com> wrote:
>> Actually, I was thinking about grade options earlier today, prompted by the same
>> concern. I think I will have a proposal worked out in a day or two.
> My proposal is attached. At the moment, it is a description
> of an approach, with some details missing, because it makes sense
> to work out those details only if there is agreement on the basic
> approach. I intend this proposal to start a discussion on its approach.
> If someone wants to turn the proposal into a wiki, go ahead.

Some comments:

Incidentally, I suggest you add the revised document to compiler/notes.


> At the moment, we have three pieces of code for computing the grade from
> options. The Mercury version of this code is used by the compiler, and is
> now in compiler/compute_grade.m. The C version, used by e.g. mgnuc and ml,
> is in scripts/*_grade.sh-subr. The algorithm above is too complex to implement
> in sh, so I propose that it be put into a small library containing nothing
> but the code that takes a list of grade-related options, and returns
> the selected solution of the constraint problem they represent. The code
> of compute_grade.m would invoke this small library, and so would a new
> standalone Mercury program, named e.g. compute_grade_for_sh.m, that
> would be invoked from a new sh script, named e.g. compute_grade.sh-subr.

Won't this be a problem when building from the source distribution since the
small Mercury program would need to be compiled from the pre-generated .c files
and doing so requires both the mgnuc and ml scripts?  Also, in the source
distribution we ideally want the canonical_grade script to be available so that
if the user supplies a list of grades to configure (e.g. via
--enable-libgrades), we can sanity check them when configure is run.
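To make the proposed split concrete, here is a purely illustrative sketch
(in Python for brevity; the real library would of course be Mercury, and every
name below is invented) of the kind of interface such a small grade-computation
library could expose: take a list of grade-related options, return the selected
value for each solver variable, filling in defaults for anything unconstrained:

```python
# Hypothetical sketch of the proposed grade-computation interface.
# All option names, variables and defaults are invented for illustration.

DEFAULTS = {"CL": "llc", "DL": "lld", "TAR": "c", "GC": "bdw"}

# Each grade-related option forces one solver variable to one value.
OPTION_SETTINGS = {
    "--high-level-code": ("CL", "hlc"),
    "--target-java": ("TAR", "j"),
    "--gc-none": ("GC", "no"),
}

def compute_grade(options):
    """Return a {variable: value} solution, or raise on a conflict."""
    solution = {}
    for opt in options:
        var, val = OPTION_SETTINGS[opt]
        if solution.get(var, val) != val:
            raise ValueError("conflicting settings for " + var)
        solution[var] = val
    # Variables not constrained by any option take their defaults.
    for var, val in DEFAULTS.items():
        solution.setdefault(var, val)
    return solution
```

The standalone program (compute_grade_for_sh.m in the proposal) would then
just print such a solution in a form the sh scripts can parse.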

> The latter would replace the existing scripts/*_grade.sh-subr, and probably
> also scripts/canonical_grade. The third copy is in runtime/mercury_grade.h,
> and it will continue to need double maintenance with the Mercury code.
> If the use of a Mercury program by e.g. canonical_grade would be a problem,
> we could try continuing to use sh code to compute the grade. The algorithm
> above should *just about* be doable in sh, even though it would not be
> a pretty sight.

mmc (when invoked by itself) and mmc --make will require the new grade-related
machinery, since they are what we intend end users of the system to use.
I don't think we need to do anything with mgnuc / ml (mmake), since at this
point they are primarily tools for building the Mercury system itself.

> If there is broad agreement that something like this is the way to go,
> then it will make sense to make concrete proposals for the concrete names
> of the options that give specific values to specific solver variables.

The proposal is fine by me, with the reservation regarding mgnuc / ml etc.

> --------------------------------------------------------
> NOTE: This includes only the user-visible solver variables. There will be
> some non-user-visible ones as well, such as high tag vs low tag bits
> vs no tag bits, and --num-tag-bits.
> XXX marks something that is missing, and will need to be added.
> ??? marks a decision that I (zs) think should be reviewed.
> solver variable CL:
>     CL.hlc: high level code
>     CL.llc: low level code          (requires TAR.c)

Strictly speaking, the Erlang backend is neither of those.

> solver variable DL:
>     DL.hld: high level data         (requires CL.hlc)
>     DL.lld: low level data
>     (default is DL.lld)
> solver variable TAR:
>     TAR.c:  C
>     TAR.cs: C#                      (requires CL.hlc)
>     TAR.j:  Java                    (requires CL.hlc)
>     TAR.e:  Erlang                  (requires CL.hlc)
>     (default is TAR.c)
> solver variable NEST:
>     NEST.n: no gcc nested functions
>     NEST.y: gcc nested functions    (requires CL.hlc and TAR.c)
>     (default is NEST.n)
> solver variable REG:
>     REG.n: no gcc global registers
>     REG.y: gcc global registers     (requires CL.llc and TAR.c)
>     (default is REG.y if supported on platform)
> solver variable GOTO:
>     GOTO.n: no gcc nonlocal gotos
>     GOTO.y: gcc nonlocal gotos      (requires CL.llc and TAR.c)
>     (default is GOTO.y if supported on platform)
> solver variable ASM:
>     ASM.n: no gcc asm labels
>     ASM.y: gcc asm labels           (requires CL.llc and GOTO.y)
>     (default is ASM.y if supported on platform)
> solver variable SS:
>     SS.n: no stack segments
>     SS.y: stack segments            (requires CL.llc)
>     (default is SS.y ???)
>     XXX we should make MR_EXTEND_STACKS_WHEN_NEEDED undocumented
> solver variable TSAFE:
>     TSAFE.n: no thread safe
>     TSAFE.y: thread safe
>     (default is TSAFE.n)
> solver variable GC:
>     GC.no:   no gc
>     GC.bdw:  boehm gc
>     GC.bdwd: boehm gc debug
>     GC.nat:  native gc              (requires CL.hlc and TAR.{java,erlang,XXX})

and TAR.cs.

>     GC.acc:  accurate gc            (requires CL.hlc and TAR.c)
>     GC.rafe: Becket history gc      (requires CL.hlc and XXX)

Ralph's GC is TAR.c.

>     (default is GC.bdw)
> solver variable DEEP:
>     DEEP.n: no deep profiling
>     DEEP.y: deep profiling          (requires CL.llc and TIME.n and CALL.n and MEM.n)
>     (default is DEEP.n)
> solver variable TIME:
>     TIME.n: no gprof time profiling
>     TIME.y: gprof time profiling    (requires CL.llc and DEEP.n)
>     (default is TIME.n)
> solver variable CALL:
>     CALL.n: no gprof call profiling
>     CALL.y: gprof call profiling    (requires CL.llc and DEEP.n and XXX)

Requires DEEP.n and TAR.c (not CL.llc).  Call profiling also works with the
high-level C backend.

>     (default is CALL.n)
> solver variable MEM:
>     MEM.n: no gprof memory profiling
>     MEM.y: gprof memory profiling   (requires CL.llc and DEEP.n and XXX)
>     (default is MEM.n)

Ditto regarding the high-level C backend.

> solver variable SCOPE:
>     SCOPE.n: no threadscope profiling
>     SCOPE.c: threadscope profiling  (requires XXX)
>     (default is SCOPE.n)
> solver variable SIZE:
>     SIZE.n: no term size profiling
>     SIZE.c: term size profiling cells (requires CL.llc)
>     SIZE.w: term size profiling words (requires CL.llc)
>     (default is SIZE.n)
> solver variable TR:
>     TR.n: no trailing
>     TR.y: trailing                  (requires MM.n)
>     (default is TR.n)
> solver variable MM:
>     MM.n:  no minimal model tabling
>     MM.c:  stack copy mm, no debug  (requires CL.llc and (GC.bdw or GC.bdwd) and TR.n)
>     MM.cd: stack copy mm, debug     (requires CL.llc and (GC.bdw or GC.bdwd) and TR.n) (not yet documented)
>     MM.o:  own stack mm, no debug   (requires CL.llc and (GC.bdw or GC.bdwd) and TR.n) (not yet documented)
>     MM.od: own stack mm, debug      (requires CL.llc and (GC.bdw or GC.bdwd) and TR.n) (not yet documented)
>     (default is MM.n)
> solver variable SPF:
>     SPF.n: no single prec float
>     SPF.y: single prec float
>     (default is SPF.n)

SPF.y requires TAR.c.

> solver variable DBG:
>     DBG.n:  no debugging
>     DBG.d:  debugging               (requires CL.llc ???)

I'm not sure why you have ??? there.

>     DBG.dd: declarative debugging   (requires CL.llc)
>     DBG.s:  ss debugging            (requires CL.hlc ???)

ssdebug should work with any of the target languages, so the only requirement
is CL.hlc.
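For what it's worth, the table above maps quite directly onto data. As a
purely illustrative sketch (Python, hypothetical encoding, only a small subset
of the constraints), checking a candidate assignment against the "requires"
entries could look like this:

```python
# Each constraint: choosing (var, val) requires every listed
# (var, allowed-values) pair to hold. A small subset of the table above.
REQUIRES = {
    ("CL", "llc"): [("TAR", {"c"})],
    ("DL", "hld"): [("CL", {"hlc"})],
    ("TAR", "j"): [("CL", {"hlc"})],
    ("REG", "y"): [("CL", {"llc"}), ("TAR", {"c"})],
    ("DEEP", "y"): [("CL", {"llc"}), ("TIME", {"n"}),
                    ("CALL", {"n"}), ("MEM", {"n"})],
    ("TR", "y"): [("MM", {"n"})],
}

def violations(assignment):
    """Return (var, val, required_var) triples the assignment violates."""
    bad = []
    for (var, val), reqs in REQUIRES.items():
        if assignment.get(var) != val:
            continue
        for rvar, allowed in reqs:
            if assignment.get(rvar) not in allowed:
                bad.append((var, val, rvar))
    return bad
```

A real solver would of course propagate these constraints and apply defaults
rather than merely check a complete assignment, but the data-driven shape is
the point: adding a solver variable means adding table entries, not sh code.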

