[m-rev.] for review: merge integer token representations in the lexer
jfischer at opturion.com
Sat Apr 22 17:07:50 AEST 2017
On Sat, 22 Apr 2017, Zoltan Somogyi wrote:
> On Sat, 22 Apr 2017 16:47:25 +1000 (AEST), Julien Fischer <jfischer at opturion.com> wrote:
>>>> - ;
>>>> - Token = big_integer(LexerBase, Integer),
>>>> + Signedness = lexer_signedness_to_term_signedness(LexerSignedness),
>>>> + Size = lexer_size_to_term_size(LexerSize),
>>> Why is there a need for these type conversions?
>>> By that I mean: why does lexer.m have its own copies
>>> of these types?
>> The existing code already handled the base argument thus; the rationale
>> for it doing so was to avoid the lexer module having to import the term
>> module.
> Can't these types be defined in integer.m?
Nothing in the integer module requires them, and they are not something
that users of the integer module outside of the term parser would want.
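For illustration, a minimal sketch of how such a mirror type and its
conversion might look; the function name comes from the diff above, but the
type and constructor names here are assumptions, not the actual patch:

```mercury
% Sketch only: the lexer keeps its own copy of a signedness type so that
% its interface need not import term.m; the parser converts at the boundary.
:- type lexer_signedness
    --->    lexer_signed
    ;       lexer_unsigned.

:- func lexer_signedness_to_term_signedness(lexer_signedness)
    = term.signedness.

lexer_signedness_to_term_signedness(lexer_signed) = signed.
lexer_signedness_to_term_signedness(lexer_unsigned) = unsigned.
```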
> Both lexer.m and term.m import integer.m in their interfaces. And
> anyone who wants to do anything with the integer's value field has
> to import it anyway.
> Something I forgot earlier: it would be nice if we could speedtest
> versions of the compiler with and without this change, to see what
> effect the more complex representation of integers has on compiler
> speed. However, the only kind of input on which such a test makes sense
> is a .m file that contains a high proportion of integer constants.
> None of the Mercury system's own modules qualify. Does anyone
> have any such modules?
I don't have any such modules, but it would be fairly simple to generate
an artificial benchmark that contains lots of integer literals; I will do
this and benchmark the compiler.
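For what it's worth, a generator along these lines would do; this is a
hypothetical sketch (module name, predicate name, and fact count are all
made up), emitting a Mercury module that is mostly integer literals:

```python
def generate_module(name, num_facts):
    # Emit a minimal Mercury module whose body is num_facts facts,
    # each carrying two integer literals (an LCG step keeps the
    # second literal varied so constant folding has work to do).
    lines = [
        f":- module {name}.",
        ":- interface.",
        ":- pred value(int::in, int::out) is semidet.",
        ":- implementation.",
    ]
    for i in range(num_facts):
        lines.append(f"value({i}, {i * 1664525 + 1013904223}).")
    return "\n".join(lines) + "\n"

print(generate_module("int_bench", 100000))
```

Writing the result to int_bench.m and timing the compiler on it, with and
without the patch, should show whether the richer integer token
representation costs anything measurable.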