[m-rev.] for review: Grade java testing

Fergus Henderson fjh at cs.mu.OZ.AU
Sat Jan 11 00:59:44 AEDT 2003


On 10-Jan-2003, Peter Ross <pro at missioncriticalit.com> wrote:
> No objections to enabling the tests in that directory.  They were the
> test-suite that I was going to use for testing partial evaluation.

I will go ahead and commit that, with the following additional changes.

Estimated hours taken: 0.5
Branches: main

Clean up tests/dppd so that it uses the new test suite framework properly,
enables all the tests, and passes all of them.

tests/dppd/run.m:
	Modify the test framework so that the default output is
	reproducible: by default, only execute one iteration, and print
	whether or not the query succeeded, rather than running 1000
	iterations and printing out the time taken.  (You can still run
	multiple iterations and have it print out the times, by using the
	`-n <iterations>' option.)

tests/dppd/run.exp:
	New file, contains the expected results of all of the tests.

tests/dppd/Mmakefile:
	Use a hard-coded rule "run.runtest: run.res" rather than
	a pattern rule "%.runtest: %.res", since for some reason
	(perhaps because run.res wasn't mentioned anywhere in the
	Mmakefile?) the pattern rule wasn't being invoked.

Workspace: /home/ceres/fjh/mercury
Index: tests/dppd/Mmakefile
===================================================================
RCS file: /home/mercury1/repository/tests/dppd/Mmakefile,v
retrieving revision 1.4
diff -u -d -r1.4 Mmakefile
--- tests/dppd/Mmakefile	10 Jan 2003 13:33:39 -0000	1.4
+++ tests/dppd/Mmakefile	10 Jan 2003 13:48:41 -0000
@@ -11,7 +11,7 @@
 # can be found by `mmc --make'.
 include Mercury.options
 
-%.runtest: %.res
+run.runtest: run.res
 
 MCFLAGS=#--pd --no-inlining -d 35 -D petdr #-d 99
 #GRADE=asm_fast.gc.prof
Index: tests/dppd/run.exp
===================================================================
RCS file: tests/dppd/run.exp
diff -N tests/dppd/run.exp
--- /dev/null	1 Jan 1970 00:00:00 -0000
+++ tests/dppd/run.exp	10 Jan 2003 13:51:04 -0000
@@ -0,0 +1,28 @@
+Iterations: 1
+
+advisor                            result   1
+applast                            result   0
+contains_kmp                       result   1
+contains_lam                       result   1
+doubleapp                          result   1
+flip                               result   1
+grammar                            result   1
+imperative_solve_power             result   1
+map_reduce                         result   1
+map_rev                            result   1
+match_kmp                          result   0
+match                              result   1
+match_append                       result   1
+maxlength                          result   0
+missionaries                       result   1
+regexp_r1                          result   0
+regexp_r2                          result   0
+regexp_r3                          result   0
+relative                           result   1
+remove                             result   1
+remove2                            result   1
+rotateprune                        result   1
+ssuply                             result   1
+transpose                          result   1
+upto_sum1                          result   1
+upto_sum2                          result   1
Index: tests/dppd/run.m
===================================================================
RCS file: /home/mercury1/repository/tests/dppd/run.m,v
retrieving revision 1.2
diff -u -d -r1.2 run.m
--- tests/dppd/run.m	10 Jan 2003 13:33:41 -0000	1.2
+++ tests/dppd/run.m	10 Jan 2003 13:53:32 -0000
@@ -49,7 +49,7 @@
 	->
 		{ Iterations = Iterations0 }
 	;
-		{ Iterations = 1000 }
+		{ Iterations = 1 }
 	),
 	io__write_string("Iterations: "),
 	io__write_int(Iterations),
@@ -69,6 +69,12 @@
 :- mode run_benchmark(in, in, (pred) is semidet, di, uo) is cc_multi.
 
 run_benchmark(Iterations, Name, Closure) -->
+	% By default, we just run a single iteration and print out
+	% for each test whether the query succeeded or failed;
+	% this is used by the test suite framework.
+	% If the `-n' option is used (see above), we run
+	% multiple iterations, and print out the times for each benchmark.
+	% This can be useful for testing the effect of optimizations.
 	{ CallClosure = 
 		( pred(_Input::in, Output::out) is det :-
 			( call(Closure) ->
@@ -77,11 +83,13 @@
 				Output = 0
 			)
 		) },
-	{ benchmark_det(CallClosure, 0, _, Iterations, Time) },
-	io__write_string(Name),
-	io__write_string(" "),
-	io__write_int(Time),
-	io__nl,
+	{ benchmark_det(CallClosure, 0, Result, Iterations, Time) },
+	( { Iterations > 1 } ->
+		io__format("%-30s     result %3d        time (ms) %8d\n",
+			[s(Name), i(Result), i(Time)])
+	;
+		io__format("%-30s     result %3d\n", [s(Name), i(Result)])
+	),
 	io__flush_output,
 	garbage_collect.
 

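The net effect of the run.m change above can be sketched in Python (an
illustration only: the function name, the closure-based query, and the
timing details are assumptions mirroring the Mercury code, not the
actual test harness):

```python
import time

def run_benchmark(iterations, name, closure):
    # Sketch of the revised run_benchmark logic: run the query closure
    # `iterations` times; result is 1 if the query succeeds, 0 if it fails.
    result = 0
    start = time.monotonic()
    for _ in range(iterations):
        result = 1 if closure() else 0
    elapsed_ms = int((time.monotonic() - start) * 1000)
    if iterations > 1:
        # Timing mode (enabled via `-n <iterations>'): include the time
        # taken, which will vary from run to run.
        return "%-30s     result %3d        time (ms) %8d" % (
            name, result, elapsed_ms)
    else:
        # Default mode: a single iteration and no timing, so the output
        # is reproducible and can be compared against run.exp.
        return "%-30s     result %3d" % (name, result)

print(run_benchmark(1, "doubleapp", lambda: True))
```

In the default single-iteration mode the output depends only on whether
each query succeeds, which is what makes a fixed run.exp file possible.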
-- 
Fergus Henderson <fjh at cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.
--------------------------------------------------------------------------
mercury-reviews mailing list
--------------------------------------------------------------------------


