Friday 11 August 2017

Improving the Automatic Tests

Oliver and I have been working on improving the test framework used for the interval package. The package shares a large number of tests with other interval packages through an interval test framework that Oliver created. Here is the repository.

Creating the Tests

Previously these tests were kept separate from the rest of the package and were usually run with the help of the Makefile. Now Oliver has moved them into the m-files, so you can run them, together with the other tests for a function, with test @infsup/function in Octave. This makes it much easier to test the functions directly.
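
For example, the shared tests for plus can now be run like any other built-in self tests. A minimal session, assuming the interval package is installed:

pkg load interval
test @infsup/plus   # runs all the BISTs in @infsup/plus.m, shared tests included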

In addition to making the tests easier to use, we also wanted to extend them to test not only scalar evaluation but also vector evaluation. The test data, input and expected output, is stored in a cell array, and for the scalar tests we simply loop over that cell array and run the function on each element. The actual code looks like this (in this case for plus):

%!test
%! # Scalar evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! for testcase = [testcases]'
%!   assert (isequaln (...
%!     plus (testcase.in{1}, testcase.in{2}), ...
%!     testcase.out));
%! endfor

For testing the vector evaluation we simply concatenate the cell array contents into vectors and pass those to the function. Here is what that code looks like:

%!test
%! # Vector evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! assert (isequaln (plus (in1, in2), out));

Lastly, we also wanted to test evaluation of N-dimensional arrays. This is done by concatenating the data into a vector and then reshaping that vector into an N-dimensional array. But what size should we use for the array? Well, we want at least three dimensions, because otherwise we are not really testing N-dimensional arrays. My solution was to completely factor the length of the vector and use the prime factors as the size, testsize = factor (numel (in1)), and if the length has two or fewer prime factors we append a few of the leading elements to the end of the vector until the length has at least three factors. This is the code for that:

%!test
%! # N-dimensional array evaluation
%! testcases = testdata.NoSignal.infsup.add;
%! in1 = vertcat (vertcat (testcases.in){:, 1});
%! in2 = vertcat (vertcat (testcases.in){:, 2});
%! out = vertcat (testcases.out);
%! # Reshape data
%! i = -1;
%! do
%!   i = i + 1;
%!   testsize = factor (numel (in1) + i);
%! until (numel (testsize) > 2)
%! in1 = reshape ([in1; in1(1:i)], testsize);
%! in2 = reshape ([in2; in2(1:i)], testsize);
%! out = reshape ([out; out(1:i)], testsize);
%! assert (isequaln (plus (in1, in2), out));

This works very well, except when the number of test cases is too small. If there are fewer than four tests this will fail: the padding reuses the first i elements, in1(1:i), so i can be at most the number of test cases, and starting from three elements the first length with at least three prime factors is eight, which would require appending five. But there are only a handful of functions with that few tests, so I fixed those independently.
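
As a hypothetical illustration of the failure, using the same variable names as the test code above:

in1 = [1; 2; 3];        # only three test cases
testsize = factor (8)   # 8 = 2*2*2 is the first length with three factors
i = 8 - numel (in1)     # i = 5
# in1(1:i) would index past the end of in1 and raise an error,
# so the padding cannot be done with fewer than four test cases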

Running the Tests

Okay, so we have created a bunch of new tests for the package. Do they actually find any new bugs? Yes!

The function pow.m failed the vector test. The problem? In one place && was used instead of &. For scalar input I believe these behave the same, but they differ for vector input.
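
A minimal sketch of the difference, not the actual code from pow.m:

x = [true false];
y = [true true];
x & y          # elementwise and: ans = 1  0
x(1) && y(1)   # scalar short-circuit and: ans = 1
# x && y would not give an elementwise result: && treats its operands
# as scalars, and newer Octave versions raise an error for non-scalar
# operands of &&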

Both the function nthroot.m and the function pownrev.m failed the vector test: neither allowed vectorization of the integer parameter. For nthroot.m this matches standard Octave, so it should perhaps not be treated as a bug. The function pownrev.m uses nthroot.m internally, so it inherited the same limitation. In this case, however, I would treat it as a bug, because pown.m does allow vectorization of the integer parameter, and if the forward function supports it, the reverse function probably should as well. So I implemented support for vectorizing the integer parameter in both nthroot.m and pownrev.m, and they now pass the test.
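
As a hedged sketch of what the fix enables (the input values are made up, and I am assuming the two-argument form pownrev (C, P)):

x = infsup (2, 3);
pown (x, [1 2 3])               # already accepted a vector of exponents
nthroot (infsup (2, 3), [2 3])  # now n may be a vector as well
pownrev (infsup (4, 9), [2 3])  # and so may the reverse function's parameter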

No problems were found by the N-dimensional tests that the vector tests had not already found. This is a good indication that the support for N-dimensional arrays is at least partly correct. Always good to know!
