Friday, 28 July 2017

A Package for Taylor Arithmetic

In the last blog post I wrote about what was left to do for the implementation of support for N-dimensional arrays in the interval package. There are still some things to do, but I have had, and most likely will have, some time to work on other things. Before the summer I started on a proof of concept implementation of Taylor arithmetic in Octave, and this week I have continued that work. This blog post is about that.

A Short Introduction to Taylor Arithmetic

Taylor arithmetic is a way to calculate with truncated Taylor expansions of functions. The main benefit is that it can be used to calculate derivatives of arbitrary order.

Taylor expansions or Taylor series (I will use the two terms interchangeably) are well known, and from Wikipedia we have: The Taylor series of a real or complex valued function $f(x)$ that is infinitely differentiable at a real or complex number $a$ is the power series
$$
f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f'''(a)}{3!}(x-a)^3 + \cdots
$$
From the definition it is clear that if we happen to know the coefficients of the Taylor series of $f$ at the point $a$ we can also calculate all derivatives of $f$ at that point by simply multiplying each coefficient by the corresponding factorial: the $n$:th derivative at $a$ is $n!$ times the $n$:th coefficient.

The simplest example of Taylor arithmetic is addition of two Taylor series. If $f$ has the Taylor series $\sum_{n=0}^\infty (f)_n (x-a)^n$ and $g$ the Taylor series $\sum_{n=0}^\infty (g)_n (x-a)^n$ then $f + g$ will have the Taylor series
$$
\sum_{n=0}^\infty (f + g)_n (x-a)^n = \sum_{n=0}^\infty ((f)_n + (g)_n)(x-a)^n.
$$
If we instead consider the product, $fg$, we get
$$
\sum_{n=0}^\infty (fg)_n (x-a)^n = \sum_{n=0}^\infty \left(\sum_{i=0}^n (f)_i(g)_{n-i}\right)(x-a)^n.
$$
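
As a side note, here is a minimal sketch of how the product formula translates to code, using plain floating-point coefficient vectors rather than the package's interval Taylor objects (the function name taylor_mul is made up for this illustration):

function h = taylor_mul (f, g)
  # f and g hold the coefficients (f)_0, ..., (f)_N and (g)_0, ..., (g)_N
  n = numel (f);
  h = zeros (size (f));
  for k = 1:n
    # (fg)_{k-1} = sum_{i=0}^{k-1} (f)_i (g)_{k-1-i}, a truncated convolution
    h(k) = sum (f(1:k) .* g(k:-1:1));
  endfor
endfunction

For example taylor_mul ([1 1 0], [1 1 0]) returns [1 2 1], the coefficients of $(1 + x)^2$.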

With a bit of work you can find similar formulas for other standard functions. For example the coefficients, $(e^f)_n$, of the Taylor expansion of $\exp(f)$ are given by $(e^f)_0 = e^{(f)_0}$ and for $n > 0$
$$
(e^f)_n = \frac{1}{n}\sum_{i=0}^{n-1}(n-i)(e^f)_i(f)_{n-i}.
$$
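
The recursion can be written out in the same spirit. Again this is only a sketch with plain floating-point coefficients, where h(k) holds $(e^f)_{k-1}$ because of Octave's 1-based indexing:

function h = taylor_exp (f)
  n = numel (f);
  h = zeros (size (f));
  h(1) = exp (f(1));                # (e^f)_0 = e^{(f)_0}
  for k = 2:n                       # k - 1 plays the role of n in the formula above
    i = 0:(k - 2);
    h(k) = sum ((k - 1 - i) .* h(i + 1) .* f(k - i)) / (k - 1);
  endfor
endfunction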

When doing the computations on a computer we consider truncated Taylor series: we choose an order and keep only the coefficients up to that order. There is also nothing stopping us from using intervals as coefficients, which allows us to get rigorous enclosures of derivatives of functions.

For a more complete introduction to Taylor arithmetic in conjunction with interval arithmetic see [1], which was my first encounter with it. For another implementation in code, take a look at [2].

Current Implementation Status

As mentioned in the last post, my repository can be found here.

When I started to write the package, before the summer, my main goal was to get something working quickly. Thus I implemented the basics needed to do some kind of Taylor arithmetic: a constructor, some helper functions and a few mathematical functions like $\exp$ and $\sin$.

This last week I have focused on implementing the basic utility functions, for example $size$, and on rewriting the constructor. In the process I think I have broken the arithmetic functions; I will fix them later.

You can at least create and display Taylor expansions now. For example, creating a variable $x$ with value 5 and order 3 gives

> x = taylor (infsupdec (5), 3)
x = [5]_com + [1]_com X + [0]_com X^2 + [0]_com X^3

or a matrix with 4 variables of order 2

> X = taylor (infsupdec ([1, 2; 3, 4]), 2)
X = 2×2 Taylor matrix of order 2

ans(:,1) =

   [1]_com + [1]_com X + [0]_com X^2
   [3]_com + [1]_com X + [0]_com X^2

ans(:,2) =

   [2]_com + [1]_com X + [0]_com X^2
   [4]_com + [1]_com X + [0]_com X^2

If you want to create a Taylor expansion with explicitly given coefficients you can do that as well

> f = taylor (infsupdec ([1; -2; 3; -4]))
f = [1]_com + [-2]_com X + [3]_com X^2 + [-4]_com X^3

This would represent a function $f$ with $f(a) = 1$, $f'(a) = -2$, $f''(a) = 3 \cdot 2! = 6$ and $f'''(a) = -4 \cdot 3! = -24$.
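
As a small illustration of that relation between coefficients and derivatives (plain Octave, not part of the package):

coeffs = [1; -2; 3; -4];                 # the coefficients from above
n = (0:3)';                              # coefficient orders
derivatives = factorial (n) .* coeffs    # gives 1, -2, 6, -24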

Creating a Package

My goal is to create a full package for Taylor arithmetic, along with some functions making use of it. The most important step is of course to create a working implementation, but there are other things to consider as well, some of which I have not completely understood yet. Depending on how much time I have next week I will try to read a bit more about them and probably ask some questions on the mailing list. Here are at least some of the things I have been thinking about:

Mercurial vs Git?

I have understood that most of the Octave Forge packages use Mercurial for version control. I was not familiar with Mercurial before, so the natural choice for me was to use Git. Now I feel I could switch to Mercurial if needed, but I would like to understand the potential benefits better; I'm still new to Mercurial so I don't have the full picture. One benefit is of course that it is easier if most packages use the same system, but what else is there?

How much work is it?

If I were to manage a package for Taylor arithmetic, how much work would it be? This summer I have been working full time with Octave, so I have had lots of time, but this will of course not always be the case. I know it takes time if I want to continue to improve the package, but how much, and what kind of, continuous work is there?

What is needed besides the implementation?

From what I have understood there are a couple of things that should be included in a package besides the actual m-files, for example a Makefile for creating the release, an INDEX file and a CITATION file. I should probably also include some kind of documentation, especially since Taylor arithmetic is not that well known. Is there anything else I need to think about?

What is the process to get a package approved?

If I were to apply (whatever that means exactly) for the package to go to Octave Forge, what is the process for that? What is required before it can be approved, and what is required after it is approved?


[1] W. Tucker, Validated Numerics, Princeton University Press, 2011.
[2] F. Blomquist, W. Hofschuster, W. Krämer, Real and Complex Taylor Arithmetic in C-XSC, Preprint 2005/4, Bergische Universität Wuppertal.

Friday, 14 July 2017

Ahead of the Timeline

One of my first posts on this blog was a timeline for my work during the project. Predicting the amount of time something takes is always hard. Often you tend to underestimate the complexity of parts of the work. This time, however, I overestimated the time the work would take.

If my timeline had been correct I would have just started to work on Folding Functions (or reductions as they are often called). Instead I have completed the work on them and also on the plotting functions. In addition I have started to work on the documentation for the package as well as checking everything an extra time.

In this blog post I will go through what I have done this week, what I think is left to do and a little bit about what I might do if I complete the work on N-dimensional arrays in good time.

This Week

The Dot Function

The $dot$-function was the last function in which to implement support for N-dimensional arrays. It is very similar to the $sum$-function, so I already had an idea of how to do it. As with $sum$, I moved most of the handling of the vectorization from the m-files to the oct-file, the main reason being improved performance.

The $dot$-function for intervals is actually a bit different from the standard one. First of all it supports broadcasting of its arguments, which the standard one does not:

> dot ([1, 2, 3; 4, 5, 6], 5)
error: dot: size of X and Y must match
> dot (infsupdec ([1, 2, 3; 4, 5, 6]), 5)
ans = 1x3 interval vector

  [25]_com   [35]_com   [45]_com

It also treats empty arrays a little differently; see bug #51333:

> dot ([], [])
ans = [](1x0)
> dot (infsupdec ([]), [])
ans = [0]_com



Package Documentation

I have done the minimal required changes to the documentation. That is, I moved support for N-dimensional arrays from Limitations to Features and added some simple examples of how to create N-dimensional arrays.

Searching for Misses

During the work I have tried to update the documentation for all functions to account for the support of N-dimensional arrays, and I have also tried to update some of the comments in the code. But as always, especially when working with a lot of files, you miss things, both in the documentation and in old comments.

I did a quick grep for the words "matrix" and "matrices" since they are candidates for being changed to "array". Doing this I found 35 files where I had missed things. It was mainly minor things, comments using the word "matrix" which I have now changed to "array", but also some documentation which I had forgotten to update.

What is Left?

Package Documentation - Examples

As mentioned above I have done the minimal required changes to the documentation. It would be very nice to add some more interesting examples using N-dimensional arrays of intervals in a useful way. Ironically I have not been able to come up with an interesting example myself, but I will continue to think about it. If you have an example that you think would be interesting and want to share, please let me know!

Coding Style

As I mentioned in one of the first blog posts, the coding style of the interval package did not follow the standard for Octave. During my work I have adapted all files I have worked with to the Octave coding standard. Many of the files did not need any changes, so they still use the old style. It would probably be a good idea to update them as well.

Testing - ITF1788

The interval testing framework ITF1788, developed by Oliver, is used to test the correctness of many of the functions in the package. At the moment it tests evaluation of scalars, but in principle it should be no problem to use it for testing vectorization or even broadcasting. Oliver has already started to work on this.

After N-dimensional arrays?

If I continue at this pace I will finish the work on N-dimensional arrays before the project is over. Of course the things that are left might take longer than expected, they usually do, but there is a chance that I will have time left after everything is done. So what should I do then? There are more things that can be done on the interval package, for example adding more examples to the documentation, but I think I would like to start working on a new package for Taylor arithmetic.

Before GSoC I started to implement a proof of concept for Taylor arithmetic in Octave, which can be found here. I would then start to work on implementing a proper version of it, where I would actually make use of N-dimensional interval arrays. If I want to create a package for this I would also need to learn a lot of other things, one of them being how to manage a package on Octave Forge.

At the moment I will try to finish my work on N-dimensional arrays. Then I can discuss it with Oliver and see what he thinks about it.

Friday, 7 July 2017

Set inversion with fsolve

This week my work has mainly been focused on the interval version of fsolve. I was not sure if and how it could make use of N-dimensional arrays, and to find that out I had to understand the function. In the end it turned out that the only generalization that could be done was trivial and required very few changes. However, I did find some other problems with the function that I have been able to fix. Connected to fsolve are the functions ctc_intersect and ctc_union. They also needed only minor changes to allow for N-dimensional input. I will start by giving an introduction to fsolve, ctc_union and ctc_intersect, and then I will mention the changes I have made to them.

Introduction to fsolve

The standard version of fsolve in Octave is used to solve systems of nonlinear equations. That is, given a function $f$ and a starting point $x_0$ it returns a value $x$ such that $f(x)$ is close to zero. The interval version of fsolve does much more than this. It is used to enclose the preimage of a set $Y$ under $f$. Given a domain $X$, a set $Y$ and a function $f$ it returns an enclosure of the set
$$
f^{-1}(Y) = \{x \in X: f(x) \in Y\}.
$$
By letting $Y = \{0\}$ we get similar functionality as the standard fsolve, with the difference that the output is an enclosure of all zeros of the function (compared to one point for which $f$ returns a value close to zero).

Example: The Unit Circle

Consider the function $f(x, y) = \sqrt{x^2 + y^2} - 1$, which is zero exactly on the unit circle. Plugging this into the standard fsolve with $(0.5, 0.5)$ as a starting guess, we get

> x = fsolve (@(x) f(x(1), x(2)), [0.5, 0.5])
x = 0.70711 0.70711

which indeed is close to a zero. But we get no information about other zeros.
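
For completeness, these calls assume that f has been defined somewhere. My guess at a definition (hypot (x, y) computes $\sqrt{x^2 + y^2}$) is

f = @(x, y) hypot (x, y) - 1;   # zero exactly on the unit circle

Exactly how the interval fsolve calls f depends on its vectorization settings, which are discussed later in this post.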

Using the interval version of fsolve with $X = [-3, 3] \times [-3, 3]$ as starting domain we get

> [x paving] = fsolve (f, infsup ([-3, -3], [3, 3]));
> x
x ⊂ 2×1 interval vector

     [-1.002, +1.002]
   [-1.0079, +1.0079]

Plotting the paving we get the picture

which indeed is a good enclosure of the unit circle.
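
For reference, the plot of the paving can be produced with something along these lines, assuming each column of paving is one box and relying on the interval package's plot drawing a box for each pair of intervals:

plot (paving(1, :), paving(2, :))
axis equal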

How it works

In its simplest form fsolve uses a simple bisection scheme to find the enclosure. Using interval methods we can find enclosures of images of sets. Given a set $X_0 \subset X$ there are three different possibilities:
  • $f(X_0) \subset Y$ in which case we add $X_0$ to the paving
  • $f(X_0) \cap Y = \emptyset$ in which case we discard $X_0$
  • Otherwise we bisect $X_0$ and continue on the parts
By setting a tolerance for when to stop bisecting boxes we get the algorithm to terminate in a finite number of steps; a rough sketch of this scheme in code is given below.
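
The sketch is written against the interval package but is not taken from the actual fsolve implementation; the function name sivia_sketch and the width tolerance tol are made up for this illustration, f is assumed to map an interval box to a scalar interval, and the accepted boxes are simply collected in a cell array.

function paving = sivia_sketch (f, X0, Y, tol)
  paving = {};                            # accepted boxes
  queue = {X0};                           # boxes still to be examined
  while (! isempty (queue))
    X = queue{end};
    queue(end) = [];
    fX = f (X);
    if (subset (fX, Y))
      paving{end+1} = X;                  # f(X) lies inside Y: keep the whole box
    elseif (disjoint (fX, Y))
      continue;                           # f(X) misses Y: discard the box
    elseif (max (wid (X)) < tol)
      paving{end+1} = X;                  # undecided but small enough: stop bisecting
    else
      [~, k] = max (wid (X));             # bisect along the widest component
      m = mid (X(k));
      Xlo = X;  Xlo(k) = infsup (inf (X(k)), m);
      Xhi = X;  Xhi(k) = infsup (m, sup (X(k)));
      queue(end+1:end+2) = {Xlo, Xhi};
    endif
  endwhile
endfunction

For the unit circle one could then call, for example, sivia_sketch (@(x) hypot (x(1), x(2)) - 1, infsup ([-3, -3], [3, 3]), infsup (0), 0.1).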

Contractors

Using bisection alone is not always very efficient, especially when the domain has many dimensions. One way to speed up the convergence is with what are called contractors. In short, a contractor is a function that takes the set $X_0$ and returns a set $X_0' \subset X_0$ with the property that $f(X_0 \setminus X_0') \cap Y = \emptyset$. It is a way of making $X_0$ smaller without having to bisect it as many times.

When you construct a contractor you use the reverse operations defined on intervals. I will not go into how this works; if you are interested you can find more information in the package documentation [1] and in these YouTube videos about Set Inversion Via Interval Analysis (SIVIA) [2].

The functions ctc_union and ctc_intersect are used to combine contractors on sets into contractors on unions or intersections of these sets.

Generalization to N-dimensional arrays

How can fsolve be generalized to N-dimensional arrays? The only natural thing to do is to allow the input and output of $f$ to be N-dimensional arrays, and this is no problem to do. While you would mathematically probably say that fsolve is used to do set inversion for functions $f: \mathbb{R}^n \to \mathbb{R}^m$, it can of course also be used for example on functions $f: \mathbb{R}^{n_1}\times \mathbb{R}^{n_2} \to \mathbb{R}^{m_1}\times \mathbb{R}^{m_2}$.

This is however a bit different when using vectorization. When not using vectorization (and not using contractions) fsolve expects the function to take one argument, which is an array with each element corresponding to a variable. If vectorization is used it instead assumes that the function takes one argument for each variable. Every argument is then given as a vector with each element corresponding to one value of the variable for which to compute the function. Here we have no use for N-dimensional arrays.
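
To illustrate the two calling conventions with the unit circle function from before (my own sketch, not code from the package):

f_plain = @(x) hypot (x(1), x(2)) - 1;     # no vectorization: one box per call
f_vec   = @(x1, x2) hypot (x1, x2) - 1;    # vectorization: one vector argument per variable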

Modifications

The only change in functionality that I have made to the functions is to allow for N-dimensional arrays as input and output when vectorization is not used. This required only minor changes, essentially changing expressions like
max (max (wid (interval)))

to
max (wid (interval)(:))

It was also enough to do these changes in ctc_union and ctc_intersect to have them support N-dimensional arrays.

I have made no functional changes for when vectorization is used. I have however made an optimization in the construction of the arguments to the function. The arguments are stored in an array, but before being given to the function they need to be split up into the different variables. This is done by creating a cell array with each element being a vector with the values of one of the variables. Previously the construction of this cell array was very inefficient: it split the interval into its lower and upper parts and then called the constructor to create an interval again. Now it copies the intervals into the cell array without having to call the constructor. This actually seems to have been quite a big improvement: with the old version the example with the unit circle from above took around 0.129 seconds and with the new version it takes about 0.092 seconds. This is of course only one benchmark, but a speed up of about 40% for this test is promising!

Lastly I noticed a problem in the example used in the documentation of the function. The function used is

# Solve x1 ^ 2 + x2 ^ 2 = 1 for -3 ≤ x1, x2 ≤ 3 again,
# but now contractions speed up the algorithm.
function [fval, cx1, cx2] = f (y, x1, x2)
  # Forward evaluation
  x1_sqr = x1 .^ 2;
  x2_sqr = x2 .^ 2;
  fval = hypot (x1, x2);
  # Reverse evaluation and contraction
  y = intersect (y, fval);
  # Contract the squares
  x1_sqr = intersect (x1_sqr, y - x2_sqr);
  x2_sqr = intersect (x2_sqr, y - x1_sqr);
  # Contract the parameters
  cx1 = sqrrev (x1_sqr, x1);
  cx2 = sqrrev (x2_sqr, x2);
endfunction

Do you see the problem? I think it took me more than a day to realize that the problems I was having were not because of a bug in fsolve but because this function computes the wrong thing. The function is supposed to be $f(x_1, x_2) = x_1^2 + x_2^2$, but when calculating the value it calls hypot, which is given by $\operatorname{hypot}(x_1, x_2) = \sqrt{x_1^2 + x_2^2}$. For $f(x_1, x_2) = 1$, which is used in the example, this gives the same result, but otherwise it will of course not work.
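
A possible fix, which is my suggestion rather than anything from the package, is to compute the sum of squares directly in the forward evaluation so that it matches the reverse contraction steps:

  # Forward evaluation
  x1_sqr = x1 .^ 2;
  x2_sqr = x2 .^ 2;
  fval = x1_sqr + x2_sqr;   # instead of hypot (x1, x2)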

[1] https://octave.sourceforge.io/interval/package_doc/Parameter-Estimation.html#Parameter-Estimation
[2] https://www.youtube.com/watch?v=kxYh2cXHdNQ