Some fun with algebraic numbers

I’ve always been fascinated by the way computers can carry out billions of exact computations, without error, in the blink of an eye. Historically there’s been a bit of a divide between people who like using computers to calculate approximate answers to questions (numerical methods) and those who like being able to calculate exact answers and prove things rigorously (symbolic methods). You can see this divide in machine learning as well. The pros and cons are:

  • With numerical methods, you can often come up with really simple and general methods that work for a huge range of problems. For example, finite element PDE solvers can solve basically any classical physics problem without having to solve it analytically.
  • However, a big drawback of numerical methods is that it’s sometimes hard to know how accurate your solution really is. Engineers (who use numerical methods a lot) have a bunch of tricks for getting around this problem, but a lot of the time you need to resort to more involved math to answer the question definitively.
  • Symbolic methods, on the other hand, have the drawback that the complexity of the solutions becomes impractically immense for most real-world problems (though this isn’t always the case).

An example of a simple symbolic vs. numerical representation is rational numbers. We all know that integers can be represented exactly, and that addition, subtraction, and multiplication of integers produce integers. Unfortunately, division isn’t included in this happy family, and division is (as it turns out) a pretty important arithmetic operation! So we can extend the set of integers by also including numbers of the form N/M, where N and M are integers. Now all four elementary operations are covered, and we can do exact arithmetic. It’s worth noting, though, that as we do more and more operations, the rational numbers in our computations tend to acquire very large numerators and denominators, and computation with them becomes less and less efficient. So this is really only practical for short sequences of operations.
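Julia (the language I’ll be using later in this post) actually has exact rational arithmetic built in, so it’s easy to see both the exactness and the ballooning fractions for yourself:

    # Julia's built-in rationals (the // operator) do exact arithmetic:
    x = 1//3 + 1//6
    @assert(x == 1//2)

    # ...but the numerators and denominators tend to blow up as you keep computing.
    # For example, the 50th partial sum of the series for pi^2/6 is already an
    # unwieldy fraction:
    s = sum(1//big(k)^2 for k in 1:50)
    println(s)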

Another important elementary operation, besides +, −, ×, and /, is root-taking: square roots, cube roots, and so on. Unfortunately, the set of rational numbers isn’t closed under these operations. But the set of algebraic numbers is.
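You can see the problem right away in Julia: ask for the square root of an exact rational and you silently get bumped back down to floating point:

    sqrt(2//1)    # 1.4142135623730951, no longer exact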

Algebraic numbers are the numbers that are roots of (non-zero) univariate polynomials with integer coefficients – for example, the roots of 8x^3 − 4x^2 − 4x + 1. Algebraic numbers may in general be real or complex, and they include all integers and all rational numbers (an integer N is the root of x − N, and a rational number N/M is the root of Mx − N). Not all numbers are algebraic – π isn’t, for example.
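As an aside, one of the roots of that particular cubic is cos(π/7), which is easy to sanity-check numerically (only approximately, of course, since this is plain floating point):

    x = cos(π/7)
    println(8x^3 - 4x^2 - 4x + 1)   # ≈ 0, up to rounding error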

As it turns out, the set of algebraic numbers contains a lot of interesting things. For example, it contains all of Euclidean geometry. You know – the part of high school 2D geometry where you construct shapes and prove things about them using just a straight edge (ruler) and compass. It was very important to the ancient Greeks to only use straight edges and compasses for any kind of geometrical construction, whether they were trying to draw a hexagon, or cut a triangle down the middle, or what have you. Well, as it turns out, all lengths, coordinates, etc. in Euclidean geometry are representable exactly as algebraic numbers. Actually, this isn’t even that hard to prove. If you could represent algebraic numbers exactly on a computer, then you could do Euclidean geometry exactly. For example, let’s say you wanted to check whether an object is symmetric around some line or point. You could rotate or flip it, and then compare the coordinates exactly to see if they match.
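To give a taste of what “doing geometry exactly” could look like, here is a toy check written against the AlgebraicNumber type introduced later in this post (treat it as a usage sketch that assumes the constructors and arithmetic shown below): two adjacent vertices of a regular hexagon inscribed in the unit circle are exactly one circumradius apart.

    # Two adjacent vertices of a regular hexagon inscribed in the unit circle:
    # (1, 0) and (1/2, sqrt(3)/2).
    x1, y1 = AlgebraicNumber(1), AlgebraicNumber(0)
    x2, y2 = AlgebraicNumber(1//2), sqrt(AlgebraicNumber(3//4))

    # The squared side length equals the squared circumradius, exactly,
    # not just to 16 decimal digits:
    @assert((x2 - x1)^2 + (y2 - y1)^2 == 1)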

Algebraic numbers contain a lot of other useful things as well, like the roots of unity, which are useful for doing Fourier transforms, which in turn are useful for, well, pretty much everything!

So is it possible to represent algebraic numbers exactly, in the same way we can for integers and rational numbers? And can we do computations on them efficiently (i.e. in polynomial time)? The answer is: yes!

The secret lies in the following fact: every algebraic number has a unique minimal polynomial – a lowest-degree univariate polynomial with integer coefficients that has it as a root. The minimal polynomial is irreducible (it cannot be factored into lower-degree factors that also have integer coefficients), and it is unique up to multiplication by a constant factor. The proof of this isn’t that difficult, but it’s a bit technical, so I’m not going to get into it here. Suffice it to say that we can represent an algebraic number by its minimal polynomial, along with some information specifying exactly which root of that polynomial we mean (in general the minimal polynomial has degree N > 1 and therefore several roots – for example, the minimal polynomial of √2 is x^2 − 2, which also has −√2 as a root). If we want to compare two algebraic numbers for equality, we first compare their minimal polynomials. If they are not the same, the algebraic numbers are not equal. If they are the same, we check whether they both represent the same root.

How do we specify which root of the minimal polynomial we want? There are a few ways. One more advanced way uses Thom’s lemma and represents the root by the signs of the derivatives of the polynomial at that root; that method is less useful for complex algebraic numbers. A simpler method enumerates all the roots of the polynomial numerically (to some specified precision) and then calculates the smallest distance d between any two of them. We then just store our desired root numerically, with an error smaller than d/2, so that it can only be “close” to the one root we mean. Note that this still represents the root exactly, since we have the minimal polynomial as well.
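Here’s a rough sketch of that root-separation idea in plain Julia, using the Polynomials.jl package for the numerical root-finding (this is just an illustration, not the library’s actual code):

    using Polynomials

    coeff = [-2, 0, 1]                  # x^2 - 2, coefficients in ascending order
    rts = roots(Polynomial(coeff))      # numerical roots, here ≈ [-1.41421, 1.41421]

    # smallest distance between two distinct roots
    d = minimum(abs(rts[i] - rts[j]) for i in eachindex(rts), j in eachindex(rts) if i != j)

    # an approximation within d/2 of a root can only be "pointing at" that one root
    apprx = 1.4142135623730951          # the root we mean: +sqrt(2)
    prec  = d/2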

In Julia, I can use the following type that captures all of what we’ve discussed:

struct AlgebraicNumber{T<:Integer,F<:AbstractFloat}
    coeff::Vector{T}    # coefficients of the minimal polynomial
    apprx::Complex{F}   # numerical approximation to the root we mean
    prec::F             # absolute precision of that approximation
end

Here coeff is the array of integer coefficients of the minimal polynomial, while apprx and prec hold a numerical approximation to the root and the absolute precision of that approximation.
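The library overloads == directly, but just to make the earlier equality discussion concrete, here is roughly how the comparison could look. This is a sketch, not the actual implementation; it assumes coeff is stored in a canonical (normalized) form and that prec was chosen well below the root separation:

    # Equal algebraic numbers: same minimal polynomial, and approximations close
    # enough together that they must be approximating the same root.
    function alg_equal(a::AlgebraicNumber, b::AlgebraicNumber)
        a.coeff == b.coeff || return false
        return abs(a.apprx - b.apprx) < a.prec + b.prec
    end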

Ok, we have a way of representing algebraic numbers. What about doing arithmetic on them? Negating an algebraic number is easy – just replace x with –x in the polynomial. Taking the nth root of an algebraic number is also easy – just replace x with x^n in the polynomial. Inversion is also easy – just reverse the list of coefficients!
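These coefficient manipulations are simple enough to sketch directly. The helper names below are mine, not the library’s API; polynomials are stored as coefficient vectors in ascending order of degree:

    # p is stored as its coefficients c, ascending: p(x) = c[1] + c[2]*x + c[3]*x^2 + ...

    # Negation: p(x) -> p(-x), i.e. flip the sign of every odd-degree coefficient.
    neg_poly(c) = [isodd(k - 1) ? -c[k] : c[k] for k in eachindex(c)]

    # n-th root: p(x) -> p(x^n), i.e. space the coefficients out with zeros in between.
    function nthroot_poly(c, n)
        out = zeros(eltype(c), n*(length(c) - 1) + 1)
        out[1:n:end] .= c
        return out
    end

    # Inversion: p(x) -> x^deg(p) * p(1/x), i.e. just reverse the coefficient list.
    inv_poly(c) = reverse(c)

    # Example: x^2 - 2 (i.e. sqrt(2)) stored as [-2, 0, 1]
    nthroot_poly([-2, 0, 1], 2)   # [-2, 0, 0, 0, 1]: x^4 - 2, the polynomial of 2^(1/4)
    inv_poly([-2, 0, 1])          # [1, 0, -2]: 1 - 2x^2, the polynomial of 1/sqrt(2)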

Addition and multiplication of algebraic numbers are trickier. In this case, you have to construct a new polynomial whose roots are the sums (or products) of the roots of the original polynomials. This can be done using resultants, but that winds up being very slow. A more interesting and much faster method is based on power sums of polynomial roots. I won’t get into the details here – the brief explanation is that you can ‘transform’ any polynomial with integer coefficients into a power-sum representation with rational coefficients and back again, and in this new domain addition and multiplication of roots reduce to simple sums and products of coefficients. Alin Bostan has done a lot of work in this area and the implementation I use is based on his papers. It’s also the implementation used in SymPy.
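I won’t reproduce the power-sum machinery here, but it’s easy to illustrate numerically which polynomial we’re after when adding two algebraic numbers. The sketch below uses the Polynomials.jl package and ordinary floating point; the real algorithm constructs the same polynomial exactly:

    using Polynomials

    p = Polynomial([-2, 0, 1])    # x^2 - 2, roots ±sqrt(2)
    q = Polynomial([-3, 0, 1])    # x^2 - 3, roots ±sqrt(3)

    # sqrt(2) + sqrt(3) is a root of the polynomial whose roots are
    # all pairwise sums of the roots of p and q:
    sums = vec([a + b for a in roots(p), b in roots(q)])
    r = fromroots(sums)
    println(round.(coeffs(r)))    # ≈ [1, 0, -10, 0, 1], i.e. x^4 - 10x^2 + 1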

I’ve written down all of these ideas in a Julia library that you can use to experiment with algebraic numbers. You can do fun things like represent sqrt(2) exactly:

    sqrt2 = sqrt(AlgebraicNumber(2))
    @assert(sqrt2^2 == 2)

Or define the golden ratio and prove that it is indeed the golden ratio:

    # Golden ratio
    ϕ = 1//2 + sqrt(AlgebraicNumber(5//4))

    @assert(1 + 1/ϕ == ϕ)

You can also do other stuff, like compute the ‘difficult’ minimal polynomial in this Stack Overflow question, without even breaking a sweat:

    n = sqrt(AlgebraicNumber(9*5))-sqrt(AlgebraicNumber(4*7))+sqrt(AlgebraicNumber(35))
    d = 1-sqrt(AlgebraicNumber(5))+sqrt(AlgebraicNumber(7))
    α=n/d
    @assert(α.coeff == BigInt[3596, 2312, -280, -156, 19])

For the math-inclined, Algorithms in Real Algebraic Geometry by Basu, Pollack, and Roy is a great resource to learn more about algebraic number computation.
