[m-dev.] Test cases (was: for review: big ints)
trd at stimpy.cs.mu.oz.au
Wed Apr 8 13:38:54 AEST 1998
On 08-Apr-1998, Fergus Henderson <fjh at cs.mu.OZ.AU> wrote:
> On 06-Apr-1998, Tyson Dowd <trd at stimpy.cs.mu.oz.au> wrote:
> > It's sometimes difficult
> > to tell feature tests from regression tests, and lots of the feature
> > tests are together.
> What's the difference between a feature test and a regression test?
> The boundary is not entirely clear. Is the distinguishing factor
> related to history rather than the nature of the test case itself?
> It would be strange if the "correct" location for a test case depended
> on historical factors rather than just being determined by the test
> case itself.
Some regression tests are really just missing "feature tests". Some
regression tests are combinations of code that haven't been tested
together (so never really fit with one "feature" or another -- usually
they get lumped in with the feature that was changed to fix the bug).
But many (most?) regression tests are caused by failure to keep invariants
invariant, which is not so much a feature as "structural integrity".
It's the last category that deserves a category of its own, if only
so future failures don't falsely implicate the wrong part of the
compiler.
> [quoted out of order]
> > And then further subdivisions, including regression tests: e.g.
> > language/errors
> > language/warnings
> > language/general
> > language/regression
> If a bug causes an incorrect warning message,
> where does the regression test go: in `warnings' or in `regression'?
> I think the distinction between feature tests and regression tests is a
> relatively incidental one that we should not use in deciding the
> location of test cases.
Sometimes. For some of them, I'll agree with you. But if a bug in
the maps module means that a table generated in the polymorphism pass
causes a software error in code_gen when compiling some code that uses
typeclasses, where should the regression test go?
(It isn't incredibly important; it would just be a little neater.)
> However, I could be persuaded into making this categorization
> be a secondary one, with the primary classification being
> based on what it is that each test tests.
Yes, at the moment it's the wrong way around. Whether it's worth
changing is debatable.
> > I think the best scheme would be to divide
> > tests into
> > language features (types, modes, plain old code)
> > implementation features (pragmas, non-language compile options)
> > library tests (unit tests, sanity tests, library regression)
> > build environment (mmake, scripts, dependencies, mdemangle)
> > programs (full programs, e.g. samples and extras and others)
> I think this is a fairly good categorization scheme.
> In particular, distinguishing between tests of Mercury language
> features and features specific to the current implementation would
> be very useful to anyone developing a new Mercury implementation.
They can also be useful when adding new backends, grades or runtime
systems. It is much more important that a new backend or grade support
everything in "language" than that it support everything in
"implementation" (or "extensions"). As more people start adding
features, it might be worthwhile if they can test them stage by stage
until they work with everything.
> Currently we divide the tests of language features into those that test
> features which should also work in Prolog, and those that don't.
> This is a potentially useful distinction and so I think we should keep it.
> But is this a distinction on *what* to test, or on *how*?
> I guess in truth it is both. Perhaps considering it as the latter is
> most useful.
> In addition to the categories you mentioned above, we need a place
> to put tests of the debugger and profiler. These could be considered
> part of the build environment, but probably it makes more sense to
> make them new top-level categories.
> This leads to the following set of subdirectories:
> Proposed name Purpose
> ------------- -------
> language Standard Mercury language features
> (types, modes, plain old code)
> extensions Implementation-specific pragmas, etc.
> library Tests of library modules
> (unit tests, sanity tests, library regression)
> implementation Tests of the build environment
> (mmake, scripts, dependencies, mdemangle)
> debugger Tests of the debugger(s)
> programs Full programs, e.g. samples and extras
> and others
> The existing tests would be moved and renamed as follows:
> Current name Proposed location
> ------------ ------------------
> general language/general
> hard_coded language/hard_coded
> valid language/valid
> invalid language/invalid
> warnings language/warnings
> term extensions/term
> misc_tests implementation
> benchmarks programs/benchmarks
> I'm open to suggestions about the names of the subdirectories of `language'.
> Hmm, now that I think about it, `warnings' should not be there, instead
> it should be a subdirectory of `implementation' -- at least unless
> we change the language reference manual to require warnings for certain
> constructs.
Also, it's interesting that most of the library modules are ADTs, but
some of them really fall into "extensions" or "language", e.g. std_util
and mercury_builtin. So the "tests/library" doesn't exactly
correspond with "mercury/library".
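The renaming table quoted above amounts to a handful of directory
moves. A minimal sketch (a plain filesystem `mv', ignoring the question
of preserving CVS history; the stand-in `tests' checkout is created here
just for illustration):

```shell
# Hypothetical sketch of the proposed reorganization, following the
# moves in the quoted table exactly (so `warnings' stays under
# `language' for now).  In the real repository, moves would need to
# preserve CVS history rather than use plain mv.
set -e

# Stand-in for the existing tests/ checkout.
mkdir -p tests/general tests/hard_coded tests/valid tests/invalid \
         tests/warnings tests/term tests/misc_tests tests/benchmarks

# New top-level categories; misc_tests itself becomes `implementation',
# so that directory is not pre-created.
mkdir -p tests_new/language tests_new/extensions tests_new/library \
         tests_new/debugger tests_new/programs

mv tests/general    tests_new/language/general
mv tests/hard_coded tests_new/language/hard_coded
mv tests/valid      tests_new/language/valid
mv tests/invalid    tests_new/language/invalid
mv tests/warnings   tests_new/language/warnings
mv tests/term       tests_new/extensions/term
mv tests/misc_tests tests_new/implementation
mv tests/benchmarks tests_new/programs/benchmarks

ls tests_new
```

which supports the point that the change is mainly a quick directory
move.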
> In your original scheme, you had a distinction between
> "implementation features" and "build environment".
> Above, I changed this to a distinction between
> `extensions' (implementation-specific things that involve
> changes to the source code) and `implementation'
> (implementation-specific things that do not involve changes
> to the source code). I'm not sure which is better.
I happen to like your names better. `Extensions' is a better name in
this context.
Do you think it is worthwhile doing this? Since the way you've
structured it makes it mainly a quick directory move, it would probably
be relatively easy.
Tyson Dowd # So I asked Sarah: what's the serial number on
# your computer? She replied:
trd at cs.mu.oz.au # A-C-2-4-0-V-/-5-0-H-Z