[m-rev.] for review: merge integer token representations in the lexer
zoltan.somogyi at runbox.com
Sat Apr 22 16:56:36 AEST 2017
On Sat, 22 Apr 2017 16:47:25 +1000 (AEST), Julien Fischer <jfischer at opturion.com> wrote:
> >> + ; integer(integer_base, integer, signedness, integer_size)
> > What is your reason for putting the value field *among* the
> > non-value fields? I would have thought that putting the value
> > either first or last would be conceptually cleaner.
> That order matches the order in which those things occur in integer
I don't think that is all that good a reason, but I also don't think
it matters that much.
> >> - ;
> >> - Token = big_integer(LexerBase, Integer),
> >> + Signedness = lexer_signedness_to_term_signedness(LexerSignedness),
> >> + Size = lexer_size_to_term_size(LexerSize),
> > Why is there a need for these type conversions?
> > By that I mean: why does lexer.m have its own copies
> > of these types?
> The existing code already handled the base argument thus; the rationale
> for it doing so was to avoid the lexer module having to import the term
Can't these types be defined in integer.m? Both lexer.m and term.m
import integer.m in their interfaces. And anyone who wants to do
anything with the integer's value field has to import it anyway.
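For concreteness, the shared definitions in integer.m could look something
like the following sketch; the constructor names here are my guesses from
the token representation quoted above, and the actual names in the diff
under review may differ:

```mercury
% Sketch only: possible shared type definitions for integer literals,
% usable by both lexer.m and term.m if placed in integer.m.
:- type integer_base
    --->    base_2
    ;       base_8
    ;       base_10
    ;       base_16.

:- type signedness
    --->    signed
    ;       unsigned.

:- type integer_size
    --->    size_word
    ;       size_8_bit
    ;       size_16_bit
    ;       size_32_bit
    ;       size_64_bit.
```

With a single set of definitions, the lexer_signedness_to_term_signedness
and lexer_size_to_term_size conversions quoted above would become
unnecessary.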
Something I forgot earlier: it would be nice if we could speedtest
versions of the compiler with and without this change, to see what
effect the more complex representation of integers has on compiler performance.
However, the only kind of input on which such a test makes sense
is a .m file that contains a high proportion of integer constants.
None of the Mercury system's own modules qualify. Does anyone
have any such modules?
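Failing that, one could generate a synthetic input. The sketch below is a
small generator for a Mercury module consisting almost entirely of integer
constants, cycling through all four bases so the lexer's integer paths are
exercised; the module name and layout are invented for illustration, not
taken from any existing test.

```python
# Sketch: generate a Mercury module full of integer constants, for
# speed-testing the lexer with and without the merged token change.

def make_benchmark_module(name="int_bench", n=10000):
    """Return the text of a Mercury module containing n integer constants."""
    lines = [
        f":- module {name}.",
        ":- interface.",
        ":- import_module list.",
        ":- func constants = list(int).",
        ":- implementation.",
        "constants = [",
    ]
    body = []
    for i in range(n):
        # Cycle through decimal, hex, octal and binary literals so the
        # lexer sees every base.
        if i % 4 == 0:
            body.append(f"    {i}")
        elif i % 4 == 1:
            body.append(f"    0x{i:x}")
        elif i % 4 == 2:
            body.append(f"    0o{i:o}")
        else:
            body.append(f"    0b{i:b}")
    lines.append(",\n".join(body))
    lines.append("].")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(make_benchmark_module(n=16), end="")
```

The generated file could then be compiled with both compiler versions under
the same conditions and the times compared.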