[m-rev.] for review: speed up lexer.m
Zoltan Somogyi
zs at csse.unimelb.edu.au
Fri May 16 17:01:20 AEST 2008
On 15-May-2008, Peter Wang <novalazy at gmail.com> wrote:
> Something like this: using term_io.read_term to read all the
> compiler/*.m files concatenated together.
>
> Before: 3.17s user 0.07s system 99% cpu 3.260 total
> After: 2.44s user 0.06s system 100% cpu 2.504 total
That is one impressive speedup.
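
For concreteness, a driver along the lines Peter describes might look like
the sketch below. This is a reconstruction, not his actual harness; it
assumes term_io.read_term/3 and the read_term type from the standard
library's term_io module. It reads terms from the current input stream
until eof, so the concatenation can be done with e.g.
cat compiler/*.m | ./count_terms.

    :- module count_terms.
    :- interface.

    :- import_module io.

    :- pred main(io::di, io::uo) is det.

    :- implementation.

    :- import_module int.
    :- import_module list.
    :- import_module string.
    :- import_module term_io.

    main(!IO) :-
        count_terms(0, NumTerms, !IO),
        io.format("%d terms\n", [i(NumTerms)], !IO).

        % Keep reading terms until eof, counting how many we read.
    :- pred count_terms(int::in, int::out, io::di, io::uo) is det.

    count_terms(!NumTerms, !IO) :-
        term_io.read_term(ReadTerm, !IO),
        handle_read_term(ReadTerm, !NumTerms, !IO).

        % The explicit read_term type in this declaration pins down
        % the type of the variables in the terms we read.
    :- pred handle_read_term(term_io.read_term::in, int::in, int::out,
        io::di, io::uo) is det.

    handle_read_term(eof, !NumTerms, !IO).
    handle_read_term(error(Message, LineNum), !NumTerms, !IO) :-
        % Stop at the first syntax error.
        io.format("syntax error at line %d: %s\n",
            [i(LineNum), s(Message)], !IO).
    handle_read_term(term(_VarSet, _Term), !NumTerms, !IO) :-
        !:NumTerms = !.NumTerms + 1,
        count_terms(!NumTerms, !IO).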
I am working on another change that speeds up the same piece of code
by turning all those nested if-then-elses in get_token etc. into switches.
It is not complete yet, but preliminary tests show a speedup on tools/speedtest
from 20.70 seconds to 19.98 seconds, or about 3.5%.
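
To make the transformation concrete, here is a made-up fragment in the
same style (not the actual lexer.m code). In the first version, Char is
compared against each candidate in turn, so the cost is linear in the
number of candidates. In the second, every disjunct of the disjunction
unifies Char with a distinct constant, which lets the compiler recognize
the disjunction as a switch on Char and implement it with a table lookup
instead of a chain of tests.

    :- module switch_demo.
    :- interface.

    :- import_module char.

    :- type token
        --->    open_paren
        ;       close_paren
        ;       star
        ;       other.

    :- pred classify_ite(char::in, token::out) is det.
    :- pred classify_switch(char::in, token::out) is det.

    :- implementation.

        % Nested if-then-elses: one test after another.
    classify_ite(Char, Token) :-
        ( if Char = '(' then
            Token = open_paren
        else if Char = ')' then
            Token = close_paren
        else if Char = '*' then
            Token = star
        else
            Token = other
        ).

        % The same tests as a disjunction whose disjuncts all unify
        % Char with distinct constants: the compiler turns this into
        % a (semidet) switch on Char.
    classify_switch(Char, Token) :-
        ( if
            ( Char = '(', TokenPrime = open_paren
            ; Char = ')', TokenPrime = close_paren
            ; Char = '*', TokenPrime = star
            )
        then
            Token = TokenPrime
        else
            Token = other
        ).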
In 361, I teach that in many compilers, the scanner is the main bottleneck,
since it is the only part of the compiler that has to process every character.
I thought that the scanner shouldn't be a bottleneck for us, since we do
so much more work after parsing than a traditional compiler, but these
results show that the lexer does consume a nontrivial fraction of the
compiler's time.
Zoltan.