[m-rev.] for review: improve test framework

Fergus Henderson fjh at cs.mu.OZ.AU
Thu Aug 15 10:31:08 AEST 2002


On 15-Aug-2002, Simon Taylor <stayl at cs.mu.OZ.AU> wrote:
> 
> Improve the test framework to make it easier to find out which tests
> failed and to reduce disk usage (important in debug grades). The
> disadvantage is that the tests can no longer be run in parallel.

Not running in parallel has two disadvantages:

	- It runs slower.

	  This is unfortunate but probably tolerable since
	  we don't have very many multi-CPU machines anyway.

	- It doesn't test parallel mmake.

	  That would be a serious omission in our test coverage, IMHO.

> Allow the tests to be run with `mmc --make' (still some failures).
> 
> tests/Mmake.common:
> tests/*/Mmakefile:
> 	Move common code (such as the code to deal with subdirectories)
> 	to Mmake.common.
> 
> 	Run the tests using `mmake runtests' rather than using slightly
> 	different runtests scripts in each directory.

The standard name for this would be "mmake check".
(This is specified by the GNU coding standards.)

It's OK to have "runtests" as an alternative name if you like,
but we should also support "mmake check".
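For example (just a sketch, assuming `runtests' is kept as the primary
target name), the alias could be as simple as:

	.PHONY: check
	check: runtests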

> Index: tests/Mmake.common
...
> +ifeq ($(RUNTESTS_IN_SUBDIR),true)
> +    STARTUP_LOG=:
> +    REDIRECT_TO_LOG=
> +    PROCESS_LOG=:
> +    CLEANUP_LOG=:
> +    REPORT_SUCCESS=:
> +    REPORT_FAILURE=:
> +else
> +    STARTUP_LOG=rm -f $(LOG_FILE); touch $(LOG_FILE)
> +    REDIRECT_TO_LOG= | tee -a $(LOG_FILE)
> +    PROCESS_LOG=echo ERRORS; \
> +        echo END ERRORS
> +    CLEANUP_LOG=rm -f $(LOG_FILE)
> +    REPORT_SUCCESS=echo ALL TESTS SUCCEEDED; rm -f $(ERROR_FILE)
> +    REPORT_FAILURE= \
> +        awk -f $(TESTS_DIR)/process_log.awk < $(LOG_FILE) > $(ERROR_FILE); \
> +        echo SOME TESTS FAILED: see $(ERROR_FILE)
> +endif

A comment or two there might be nice.
Why are these set to null if RUNTESTS_IN_SUBDIR is true?
What's the PROCESS_LOG command for?

> +# If the tests in the subdirectories fail, still run the local tests,
> +# but return a status of 1.
> +# 
> +runtests:
> +	@$(STARTUP_LOG)
> +	@rm -f subdirs_failed local_failed
> +	+@{ mmake runtests_subdirs || touch subdirs_failed; } $(REDIRECT_TO_LOG)
> +	+@{ mmake runtests_local || touch local_failed; } $(REDIRECT_TO_LOG)
> +	@if [ -f subdirs_failed -o -f local_failed ] ; then \
> +		$(REPORT_FAILURE); \
> +		$(CLEANUP_LOG); \
> +		rm -f subdirs_failed local_failed; \
> +		exit 1; \
> +	else \
> +		$(REPORT_SUCCESS); \
> +		$(CLEANUP_LOG); \
> +		rm -f subdirs_failed local_failed; \
> +	fi

Why is this code serialized by writing it as the body of a single
target, rather than using multiple targets (with appropriate
dependencies between them, of course) so that the different
parts which don't depend on each other can run in parallel?
E.g.

	.PHONY: start
	start:
		@$(STARTUP_LOG)
		@rm -f subdirs_failed local_failed

	.PHONY: do_runtests_local
	do_runtests_local: start
		+@{ mmake runtests_local || touch local_failed; } \
			$(REDIRECT_TO_LOG)

	.PHONY: do_runtests_subdirs
	do_runtests_subdirs: start
		+@{ mmake runtests_subdirs || touch subdirs_failed; } \
			$(REDIRECT_TO_LOG)

	.PHONY: runtests
	runtests: do_runtests_local do_runtests_subdirs
		@if [ -f subdirs_failed -o -f local_failed ] ; then \
			$(REPORT_FAILURE); \
			$(CLEANUP_LOG); \
			rm -f subdirs_failed local_failed; \
			exit 1; \
		else \
			$(REPORT_SUCCESS); \
			$(CLEANUP_LOG); \
			rm -f subdirs_failed local_failed; \
		fi

If the answer is that the log files would get intermingled,
this could be solved by logging the output to two different files
(and perhaps concatenating them at the end).
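For example (just a sketch -- the `local.log' and `subdirs.log' names are
made up, and writing straight to files rather than through tee means you
lose the live output on the terminal):

	.PHONY: do_runtests_local
	do_runtests_local: start
		+@{ mmake runtests_local || touch local_failed; } \
			> local.log 2>&1

	.PHONY: do_runtests_subdirs
	do_runtests_subdirs: start
		+@{ mmake runtests_subdirs || touch subdirs_failed; } \
			> subdirs.log 2>&1

	.PHONY: runtests
	runtests: do_runtests_local do_runtests_subdirs
		@cat local.log subdirs.log >> $(LOG_FILE)
		@rm -f local.log subdirs.log

The existing success/failure check could then go at the end of the
`runtests' recipe, as before.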

> +runtests_subdirs:
> +	succeeded=true; \
> +	for dir in $(SUBDIRS); do \
> +		(cd $$dir && RUNTESTS_IN_SUBDIR=true mmake runtests) || \
> +			succeeded=false; \
> +	done; \
> +	case $$succeeded in false) exit 1 ;; esac
> +

Likewise here.

Why not

	RUNTESTS_IN_SUBDIRS=$(SUBDIRS:%=runtests_in_%)

	runtests_subdirs: $(RUNTESTS_IN_SUBDIRS)

	.PHONY: $(RUNTESTS_IN_SUBDIRS)
	$(RUNTESTS_IN_SUBDIRS): runtests_in_%:
		cd $* && RUNTESTS_IN_SUBDIR=true mmake runtests

?

> +#
> +# Run all tests in the directory one at a time, running
> +# `mmake realclean' for the tests which succeed and
> +# gzipping the executables for those which don't.
> +#
> +runtests_local:
> +	@echo STARTING tests in $(THIS_DIR) in grade $(GRADE)
> +	@echo cleaning up the directory before the tests
> +	+@if ls -lt | head -2 | egrep CLEAN > /dev/null 2>&1; then \
> +		rm -f CLEAN > /dev/null 2>&1; \
> +	else \
> +		rm -f CLEAN > /dev/null 2>&1; \
> +		mmake $(RM_JFACTOR) realclean_local > /dev/null 2>&1; \
> +		rm -f *.d *.dep *.int *.int2 *.int3 > /dev/null 2>&1; \
> +		rm -f *.date *.date3 *.opt *.optdate > /dev/null 2>&1; \
> +		rm -f *.trans_opt *.trans_opt_date > /dev/null 2>&1; \
> +	fi
> +	+@failed_targets= ; \
> +	for test in $(TESTS); do \
> +	    echo RUNNING TEST $(THIS_DIR)/$$test in grade $(GRADE); \
> +	    case $$test in \
> +	        *-nodepend) \
> +			depend_success=true; \
> +			test=`basename $$test -nodepend` ;; \
> +		*) \
> +			if mmake $$test.depend; then \
> +			    depend_success=true; \
> +			else \
> +			    depend_success=false; \
> +	    		fi ;; \
> +	    esac; \
> +	    case $$depend_success in \
> +	        true) \
> +		    if mmake $$test.runtest; then \
> +		        echo FINISHED TEST $(THIS_DIR)/$$test; \
> +			rm -f $$test.out* $$test.*res*; \
> +		        mmake $$test.realclean > /dev/null 2>&1; \
> +		    else \
> +		        echo FAILED TEST $(THIS_DIR)/$$test in grade $(GRADE); \
> +		        failed_targets="$$failed_targets $(THIS_DIR)/$$test"; \
> +		    	if [ -f $$test ]; then \
> +				rm -f $$test.gz; gzip $$test; \
> +			fi; \
> +		    fi ;; \
> +		false) \
> +		    echo FAILED TEST $(THIS_DIR)/$$test; \
> +		    failed_targets="$$failed_targets $(THIS_DIR)/$$test" ;; \
> +	    esac; \
> +	done; \
> +	case "$$failed_targets" in \
> +		"") \
> +			echo cleaning up the directory after the tests; \
> +			mmake $(RM_JFACTOR) realclean_local > /dev/null 2>&1; \
> +			rm core > /dev/null 2>&1; \
> +			touch CLEAN; \
> +			echo \
> +		"FINISHED tests in $(THIS_DIR) in grade $(GRADE)" ;; \
> +		*) \
> +			echo \
> +		"FAILED tests in grade $(GRADE): $$failed_targets"; \
> +			exit 1 ;; \
> +	esac
> +
> +endif	# TESTS != ""

Likewise here.  If the issue is log file intermingling, then the results
of each test `foo' could be written out to `foo.log' and then these log
files can all be concatenated at the end.
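For example (again only a sketch -- the `%.run_logged' target name and the
`.failed' stamp files are made up, and this glosses over the -nodepend,
realclean and gzip handling):

	RUN_TESTS = $(TESTS:%=%.run_logged)

	.PHONY: $(RUN_TESTS)
	$(RUN_TESTS): %.run_logged:
		+@{ mmake $*.depend && mmake $*.runtest; } > $*.log 2>&1 \
			|| touch $*.failed

	runtests_local: $(RUN_TESTS)
		@cat $(TESTS:%=%.log) >> $(LOG_FILE)
		@rm -f $(TESTS:%=%.log)
		@if ls *.failed > /dev/null 2>&1; then exit 1; fi

The list of failed tests could then be reported from the `*.failed' stamp
files before they are removed.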

> +realclean_subdirs:
> +	+succeeded=true; \
> +	for dir in $(SUBDIRS); do \
> +	    (cd $$dir && mmake realclean) || succeeded=false; \
> +	done
> +	case $$succeeded in false) exit 1 ;; esac
> +
> +clean_subdirs:
> +	+succeeded=true; \
> +	for dir in $(SUBDIRS); do \
> +		(cd $$dir && mmake clean) || succeeded=false; \
> +	done; \
> +	case $$succeeded in false) exit 1 ;; esac

Likewise here.
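The same pattern works here too, e.g. (sketch only; the `clean_in_%'
naming is just made up):

	CLEAN_SUBDIRS = $(SUBDIRS:%=clean_in_%)

	clean_subdirs: $(CLEAN_SUBDIRS)

	.PHONY: $(CLEAN_SUBDIRS)
	$(CLEAN_SUBDIRS): clean_in_%:
		cd $* && mmake clean

and similarly for `realclean_subdirs'.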

> Index: tests/recompilation/test_functions

That file wasn't mentioned in the log message.

> Index: tests/warnings/Mmakefile
> -%.res_compile:	%.exp $(cs_subdir)%.c
> +%.res_compile:	%.exp %.c

That change isn't mentioned in the log message.
It also looks a bit suspicious to me -- the $(cs_subdir)
reference is needed for the --use-subdirs case, isn't it?

-- 
Fergus Henderson <fjh at cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.