[mercury-users] version_array vs. field update performance

Ondrej Bojar bojar at csse.unimelb.edu.au
Wed Feb 7 15:48:40 AEDT 2007


Ralph Becket wrote:
> I've just run some tests on version arrays (see vatest.m attached) and I
> get the following results which scale as I'd expect, rather than as you
> report in your tests:
...
> 
> My guess is that you are not calling varr with a new version array each
> time, but rather are reusing the same one on different iterations.  That
> would explain the quadratic behaviour you observed.

Yes, you are right. The mistake I made was to call version_array.new 
before calling benchmark_det -- and benchmark_det then reruns the 
predicate on that same input argument for every repetition, so each 
iteration reused (and had to roll back) the same version array.
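The difference looks roughly like this (a sketch, not the actual vatest.m; varr's signature and the Size/N arguments are my assumptions):

```mercury
:- import_module benchmarking, version_array.

    % Pitfall: the array is built once, outside benchmark_det, which
    % then reruns varr on that same version for every repetition.
    % Each repetition after the first starts from a stale version and
    % pays to roll the history back -- quadratic behaviour overall.
bench_wrong(N, Size, Time) :-
    VA0 = version_array.new(Size, 0),
    benchmark_det(varr, VA0, _VA, N, Time).

    % Fix: construct a fresh version array inside the benchmarked
    % closure, so each repetition updates a brand-new array.
bench_right(N, Size, Time) :-
    benchmark_det(
        ( pred(S::in, VA::out) is det :-
            varr(version_array.new(S, 0), VA)
        ), Size, _VA, N, Time).
```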

So here are fresh numbers. They document that tuples perform worse than 
version_arrays or arrays, but the worst option is array.slow_set, which 
copies the whole array on each write.
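For reference, one read-modify-write step in each array representation 
looks like this (standard library calls from the array and 
version_array modules; argument order follows the current library, and 
the `+ X` update is just illustrative):

```mercury
:- import_module array, int, version_array.

    % version_array: O(1) update; older versions remain valid but
    % become slower to access as the history grows.
bump_va(I, X, !VA) :-
    !:VA = ( !.VA ^ elem(I) := !.VA ^ elem(I) + X ).

    % array: O(1) destructive update; requires a unique array, which
    % is what the mode system has to be convinced of.
bump_arr(I, X, !A) :-
    array.lookup(!.A, I, Y),
    array.set(I, Y + X, !A).

    % array.slow_set: copies the entire array on every write, i.e.
    % O(n) per update -- hence the worst numbers below.
bump_slow(I, X, !A) :-
    array.lookup(!.A, I, Y),
    array.slow_set(I, Y + X, !A).
```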

Your version_arrays even win a little bit over normal arrays.

N ... number of benchmark_det iterations
RWs ... number of reads and writes to the array (per iteration)

tuple:

N\RWs   10      100     1000    10000   100000
10      0       0       30      40      350
100     0       30      50      370     3720
1000    20      50      340     3610    38180
10000   60      360     3770    37900   379460
100000  400     3610    35670   -       -

version_array:

N\RWs   10      100     1000    10000   100000
10      0       0       10      20      270
100     0       0       20      260     2570
1000    10      20      260     2570    24460
10000   30      290     2580    26130   239530
100000  430     2810    25440   -       -

array:

N\RWs   10      100     1000    10000   100000
10      0       0       0       20      240
100     40      0       20      270     2460
1000    0       20      270     2580    25330
10000   30      270     2570    25540   248570
100000  300     2640    25200   -       -

array.slow_set:

N\RWs   10      100     1000    10000   100000
10      0       40      0       40      460
100     0       0       30      460     4410
1000    0       30      460     4780    45240
10000   60      510     4910    49090   428890
100000  510     4750    46440   -       -

In the meantime, I switched to plain arrays in my main application. Most 
of the time, destructive updates are possible and I do them. Whenever a 
"fork" of a hypothesis is needed, I copy the array. With this change, 
the overall performance is nearly the same: what is saved in destructive 
updates is lost in the copying. In fact, version_arrays are a tiny bit 
faster. Unfortunately, this means my code is already about as optimized 
in this respect as it can get, and what it mostly does is indeed 
(create hypotheses and) set some score components.
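In code, the scheme is simply (a sketch; fork_scores, add_score and the 
int element type are illustrative, not from my actual application):

```mercury
:- import_module array, int.

    % Normal case: update the live hypothesis destructively.
add_score(I, Delta, !Scores) :-
    array.lookup(!.Scores, I, S),
    array.set(I, S + Delta, !Scores).

    % Fork: copy the score array once, so the child hypothesis can
    % keep updating destructively without touching the parent.
fork_scores(Parent, Child) :-
    Child = array.copy(Parent).
```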

Thanks, and thanks again for the mode system in Mercury. I again learned 
a lesson on uniqueness.

Ondrej.

> 
> -- Ralph
> 
--------------------------------------------------------------------------
mercury-users mailing list
Post messages to:       mercury-users at csse.unimelb.edu.au
Administrative Queries: owner-mercury-users at csse.unimelb.edu.au
Subscriptions:          mercury-users-request at csse.unimelb.edu.au
--------------------------------------------------------------------------


