Wednesday, June 30, 2010

GSoC 2010 Weeks 4 and 5: Test Suite Logging

Despite the title, I have spent most of the last two weeks working on library test suite support. However, I will be talking mostly about test suite logging, where the more user-visible developments have taken place.

Controlling Test Suite Output and Logging

My main project for week 4 was to implement command-line options for Cabal controlling console output and log file names. During week 3, I implemented a set of eight output filter options allowing the user to specify separately which test suites' output would be displayed on the console and which would be logged to file. We eventually settled on a simpler scheme: output from test suites is always logged to file, and a single option, --test-filter={summary,failures,all}, controls console output. These choices have the following effects:

summary: indicate whether each test suite was successful or not

failures: display summary information and the output from any unsuccessful test suites

all: display summary information and the output from all test suites

The names of the test suite log files may now be specified by a path template with the option --test-log. Path templates are already in use for build reports, but several variables have been added for test log path templates; a comprehensive listing follows.

the name of the package

the version of the package

the name and version of the package

the compiler which built the package

the operating system the package was built on

the machine architecture the package was built on

the name of the test suite; if this variable is not used, all test suites are logged to the same file

the result of the test suite, one of pass or fail

the output channel being logged, one of stdout or stderr; if this variable is not used, test suite stdout and stderr are logged to the same file
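To make the idea of template expansion concrete, here is a minimal sketch of path template substitution: each $variable is looked up in an environment of (name, value) pairs. The variable names in the demonstration ($pkgid, $test-suite) are examples only, and Cabal's actual PathTemplate machinery is more elaborate than this toy function.

```haskell
-- A toy substitution function for path templates. Cabal's real
-- PathTemplate support is richer; this only sketches the idea.
import Data.List (isPrefixOf, sortBy)
import Data.Ord (comparing)

type PathTemplateEnv = [(String, String)]

substPathTemplate :: PathTemplateEnv -> String -> FilePath
substPathTemplate env = go
  where
    -- Try longer variable names first, so that, e.g., "$pkgid" is
    -- not accidentally captured by "$pkg".
    vars = sortBy (comparing (negate . length . fst)) env
    go [] = []
    go s@('$':rest) =
      case [ (val, drop (1 + length name) s)
           | (name, val) <- vars
           , ('$' : name) `isPrefixOf` s ] of
        ((val, remainder) : _) -> val ++ go remainder
        []                     -> '$' : go rest  -- unknown variable: leave as-is
    go (c : cs) = c : go cs

main :: IO ()
main = putStrLn (substPathTemplate env "$pkgid-$test-suite.log")
  -- prints "foo-1.0-bar.log"
  where env = [("pkgid", "foo-1.0"), ("test-suite", "bar")]
```

With the environment above, the template "$pkgid-$test-suite.log" expands to "foo-1.0-bar.log"; a variable missing from the environment is passed through unchanged.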

Security Improvements

Because of the $result path template variable, test suite output must first be logged to a temporary file until the result is determined. Not thinking much about security, and unfamiliar with openTempFile, I naively (and quite insecurely) reinvented the wheel. After Duncan pointed out my error and the existence of a standard library solution, I fixed the bug.
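The essence of the fix is to let System.IO.openTempFile pick a unique name atomically and then rename the file once the result is known. The sketch below uses an illustrative naming scheme of my own (suite name plus pass/fail), not Cabal's actual one.

```haskell
-- Sketch: log to a securely created temporary file, then rename it
-- once the test suite's result is known.
import System.IO (openTempFile, hPutStr, hClose)
import System.Directory (renameFile)
import System.FilePath ((</>))

logTestOutput :: FilePath  -- ^ directory for log files
              -> String    -- ^ test suite name
              -> String    -- ^ captured output
              -> Bool      -- ^ did the suite pass?
              -> IO FilePath
logTestOutput dir suite output passed = do
  -- openTempFile creates the file with a unique name atomically,
  -- avoiding the race between choosing a name and creating the file.
  (tmpPath, h) <- openTempFile dir (suite ++ ".log")
  hPutStr h output
  hClose h
  -- Only now do we know the $result, so rename to the final name.
  let result    = if passed then "pass" else "fail"
      finalPath = dir </> (suite ++ "." ++ result ++ ".log")
  renameFile tmpPath finalPath
  return finalPath

main :: IO ()
main = do
  path <- logTestOutput "." "demo" "all tests passed\n" True
  putStrLn ("log written to " ++ path)
```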

Detailed Test Suites

In week 5, I submitted to Duncan a draft of support for detailed test suites, where individual test cases are exposed to Cabal. These patches included support for building the library-based detailed test suites as well as for running detailed test suites alongside the other type of test suite. There are two challenging details in this aspect of the project: building and registering the test suite libraries and exposing the individual test case results to the parent process (Cabal, in this case).

Building Library Test Suites

There are a couple of challenges here. The source for a stub executable--to run all the test cases listed in the test library--must be written during the preprocessing stage. The stub executable is relatively simple, because it is nearly the same for every test suite. During the build stage, Cabal must build this stub executable along with the library for the test suite. Because Cabal does not support multiple libraries in the same package, and thus derives the library name from the package name, I chose to construct a fake Library and PackageDescription, named after the test suite, for the library component of the test. The library is registered in the in-place package database before the stub executable, which is named after the test suite with "Stub" appended, is built. Because of these choices, name conflicts between the package and the test suites, and between the test suites and executables, must be avoided.
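To show the shape of the generated code, here is roughly what such a stub might look like. The Result type and the inline tests list are stand-ins of my own for whatever interface the detailed test suites finally expose; a real stub would import the test cases from the test suite's library (the test-module) instead of defining them locally.

```haskell
-- Illustrative sketch of a generated stub executable. In a real
-- stub, 'tests' would be imported from the test-module, e.g.
-- "import Bar (tests)"; it is inlined here for self-containment.
module Main where

data Result = Pass | Fail String
  deriving (Show, Read, Eq)

tests :: [(String, IO Result)]
tests =
  [ ("reverse-involutive", return Pass)
  , ("always-fails",       return (Fail "expected failure"))
  ]

main :: IO ()
main = do
  -- Run every test case and show the results on stdout, where the
  -- calling process (Cabal) can read them back.
  results <- mapM (\(name, act) -> do r <- act; return (name, r)) tests
  print results
```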

Running Library Test Suites

The problem here is deciding how to pass data between Cabal and the stub executable when running the test suite. In particular, the stub executable needs the log file template, the path template environment, and the location of dist; the calling process also needs a detailed list of test results from the stub. In my latest patches, Cabal stores all this information in an intermediate structure and shows it into the standard input of the stub; the stub runs and logs the test cases and shows the list of results on its standard output. Cabal reads this and decides what information to display to the user on the console. There is no support for running only selected test cases from a test suite at this time; this functionality is not a high priority and may be left to third-party test agents.
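This handshake can be sketched using derived Show and Read instances as the wire format. The StubOptions type and its field names below are hypothetical, chosen for illustration; they are not Cabal's actual intermediate structure.

```haskell
-- Sketch of the stdin/stdout handshake between Cabal and the stub,
-- using Show/Read as the wire format. All names here are illustrative.
data StubOptions = StubOptions
  { logTemplate :: String              -- ^ the --test-log path template
  , templateEnv :: [(String, String)]  -- ^ the path template environment
  , distPref    :: FilePath            -- ^ the location of dist
  } deriving (Show, Read, Eq)

-- Cabal side: serialize the options for the stub's standard input.
encodeOptions :: StubOptions -> String
encodeOptions = show

-- Stub side: recover the options from standard input.
decodeOptions :: String -> StubOptions
decodeOptions = read

main :: IO ()
main = do
  let opts = StubOptions "$pkgid-$test-suite.log" [("pkgid", "foo-1.0")] "dist"
  -- The parent would write this line to the stub's stdin; the stub
  -- reads it back with 'decodeOptions'.
  putStrLn (encodeOptions opts)
```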

Next Steps

There are still decisions to be made about the log file format, specifically about how to balance the advantages of human- versus machine-readable logs. The ideal test log format would be readable both by human users and, e.g., Hackage. The current patch set simply dumps the standard output to file as a convenient, if temporary, response to this indecision. Designing a better log format will occupy the rest of week 6. Once the format is settled upon, a test or tests of the test runners will be included in the Cabal test suite.

Tuesday, June 15, 2010

GSoC 2010 Weeks 2 and 3: More Parsing and Improvements to Cabal's Test Suite Runner

My focus this week has been on submitting my executable test suite patches. These patches have just been added to the head repository; although there is ongoing discussion about cosmetic issues in the .cabal file format, executable test suite support is probably approaching its final incarnation.

The most notable of the changes is our conscious decision to use "test suite" instead of "testsuite" everywhere in Cabal, and to emphasize the distinction between individual tests and test suites. As a result, the test stanza has become the test-suite stanza. We have also decided to accept only single versions for the test suite interface type in the .cabal file, instead of version ranges as I previously wrote. As a result, the new test-suite stanza looks like this:

test-suite foo
type: exitcode-stdio-1.0
main-is: main.hs
hs-source-dirs: tests

test-suite bar
type: library-1.0
test-module: Bar
hs-source-dirs: tests

I have also implemented a set of options (--log-{success,failure}-{file,terminal,both,none}) controlling how Cabal logs test suite output. Output logged to file goes in a uniquely named file in the system temporary directory; the other options should be self-explanatory. The exit code is also set depending on the success or failure of the package test suites, making it possible to do things like:

$ cabal configure --enable-tests && cabal build && cabal test --log-success-none --log-failure-file && release-software

in order to have a (nearly) completely automated testing process.

From here, the next step is to create the test interface for the detailed (library) test suite type. As I have written before, the interface must support setting various options for tests from different frameworks, including the seed used to generate random values--e.g., for QuickCheck tests--so that tests are reproducible. Ideally, the interface would also distinguish between tests that must actually be run in IO and otherwise pure tests that use random values, which are deterministic given the seed. This latter distinction isn't strictly necessary (as shown by the lack of similar support in existing test runners), but it would usefully guarantee that such tests can be run in parallel.
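One possible shape for such an interface--purely a sketch of my own, not a design that has been adopted--separates pure tests, which are deterministic functions of their options (including the seed), from impure tests that genuinely need IO:

```haskell
-- Hypothetical sketch: distinguishing pure from impure tests.
-- Pure tests depend only on their options (including the random
-- seed), so they are reproducible and safe to run in parallel;
-- impure tests may perform arbitrary IO.
type Options = [(String, String)]  -- e.g. [("seed", "42")]

data Test
  = PureTest   (Options -> Bool)     -- deterministic given the options
  | ImpureTest (Options -> IO Bool)  -- must actually be run in IO

runTest :: Options -> Test -> IO Bool
runTest opts (PureTest f)   = return (f opts)
runTest opts (ImpureTest f) = f opts

main :: IO ()
main = do
  let pureTest = PureTest (\opts -> lookup "seed" opts == Just "42")
  result <- runTest [("seed", "42")] pureTest
  print result  -- prints True
```

A test runner could then schedule every PureTest in parallel without further analysis, while treating ImpureTest values conservatively.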