In high school algebra, it is demonstrated that a rational function (a quotient of two polynomial functions) can be expressed as the sum of partial fractions, which are terms of the form
The importance of the partial fraction decomposition lies in the fact that it provides algorithms for various computations with rational functions, including the explicit computation of antiderivatives (discussed later in the blog post), Taylor series expansions and inverse Laplace transforms.
In this blog post, I assume that , i.e. that the numerator has a lower degree than the denominator. Otherwise, one can use Euclidean division and seek partial fractions of , where is the remainder of the division, guaranteed to have a lower degree than .
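The Euclidean division step can be sketched in a few lines of Python. The input x³/(x² − 1) is my own illustrative example, and `poly_divmod` is a hypothetical helper name, not something from the post:

```python
from fractions import Fraction

def poly_divmod(num, den):
    """Euclidean division of polynomials given as coefficient lists
    (lowest degree first). Returns (quotient, remainder) with
    deg(remainder) < deg(den)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    rem = num[:]
    while len(rem) >= len(den) and any(rem):
        shift = len(rem) - len(den)
        factor = rem[-1] / den[-1]   # chosen to cancel the leading term
        quot[shift] = factor
        for i, c in enumerate(den):
            rem[shift + i] -= factor * c
        rem.pop()                    # leading coefficient is now zero
    return quot, rem

# x^3 / (x^2 - 1) = x + x / (x^2 - 1): quotient x, remainder x
q, r = poly_divmod([0, 0, 0, 1], [-1, 0, 1])
```

The remainder `r` is then the numerator of the proper fraction one decomposes further.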
The high school method⌗
Consider the following function:
We clear the fractions, i.e. multiply both sides by the denominator:
Clearly, (found by substituting ). Substituting tells us that . Consequently, . Finally, and . Hence:
Clearly, this method is very tedious and difficult to automate (the algorithm needs the ability to simplify and extensively manipulate rational functions).
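To make the tedium concrete, here is the high-school method mechanised for a small example of my own choosing (the post's example function is not reproduced here); clearing fractions and equating coefficients yields a linear system:

```python
from fractions import Fraction

# Hypothetical example: decompose 1/((x - 1)(x - 2)) = A/(x - 1) + B/(x - 2).
# Clearing fractions gives 1 = A(x - 2) + B(x - 1); equating the coefficient
# of x and the constant term yields the linear system:
#   A + B   = 0   (coefficient of x)
#   -2A - B = 1   (constant term)
a11, a12, b1 = Fraction(1), Fraction(1), Fraction(0)
a21, a22, b2 = Fraction(-2), Fraction(-1), Fraction(1)

det = a11 * a22 - a12 * a21          # Cramer's rule for the 2x2 system
A = (b1 * a22 - a12 * b2) / det
B = (a11 * b2 - b1 * a21) / det
```

Here A = −1 and B = 1, so 1/((x − 1)(x − 2)) = −1/(x − 1) + 1/(x − 2); for larger denominators the system grows with the degree, which is exactly why the method automates poorly.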
The Heaviside method⌗
The Heaviside method, in its base case, can deal only with rational functions whose denominators contain no repeated roots. To illustrate the intuition behind this method, consider the following function:
Oliver Heaviside proposed to find the value of by covering the term and substituting :
Clearly, . To justify this maneuver, multiply both sides by in the original expression:
After a chain of simplifications the problem reduces to the following:
Setting annihilates the first two terms on the right-hand side, and the result follows:
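The cover-up rule for distinct roots can be sketched as follows; the function name and the example 1/((x − 1)(x − 2)) are my own illustrations, under the assumption that the denominator is a product of distinct linear factors:

```python
from fractions import Fraction

def coverup_coeffs(p, roots):
    """Heaviside cover-up for distinct roots: for f(x) = P(x) / prod(x - r_i),
    the coefficient of 1/(x - r_i) is P(r_i) / prod_{j != i} (r_i - r_j).
    `p` is P's coefficient list (lowest degree first)."""
    def eval_poly(p, x):
        acc = Fraction(0)
        for c in reversed(p):
            acc = acc * x + c
        return acc
    coeffs = {}
    for i, r in enumerate(roots):
        denom = Fraction(1)
        for j, s in enumerate(roots):
            if j != i:
                denom *= r - s      # the "covered" denominator at x = r
        coeffs[r] = eval_poly(p, Fraction(r)) / denom
    return coeffs

# Hypothetical example: 1/((x - 1)(x - 2)) = -1/(x - 1) + 1/(x - 2)
c = coverup_coeffs([1], [1, 2])
```

Note that no system of equations is solved: each coefficient is a single evaluation.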
Most textbooks, courses and online resources stop here, but the method can also be extended to rational functions with repeated roots.
The basic idea is factoring out the repeated roots and recursing (with memoisation) as follows:
Before we allow repeated roots, we must assume that the polynomial has no factors that are irreducible over . This is not an unreasonable demand, as the roots of the polynomial can usually be found numerically.
Over the complex field, the rational function is by definition factored as follows:
Let . Then, according to the uniqueness of Laurent series, is the coefficient of the term in the Laurent expansion of about the point , i.e., its residue .
This is given by the formula:
When is a root of multiplicity one, the formula reduces to:
This is consistent with Lagrange interpolation.
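For simple roots, the residue formula P(r)/Q′(r) can be checked directly; the example polynomials below are my own (Q(x) = (x − 1)(x − 2) = x² − 3x + 2, P(x) = 1):

```python
from fractions import Fraction

def poly_eval(p, x):
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def poly_deriv(p):
    # coefficient list lowest-degree first: d/dx sum c_k x^k = sum k c_k x^(k-1)
    return [Fraction(k) * c for k, c in enumerate(p)][1:]

def residue_simple(p, q, r):
    """For a simple root r of Q, the residue of P/Q at r is P(r)/Q'(r)."""
    return poly_eval(p, Fraction(r)) / poly_eval(poly_deriv(q), Fraction(r))

# Q'(x) = 2x - 3, so the residues are 1/Q'(1) = -1 and 1/Q'(2) = 1,
# matching the cover-up values.
r1 = residue_simple([1], [2, -3, 1], 1)
r2 = residue_simple([1], [2, -3, 1], 2)
```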
Improvements to the formula⌗
Unfortunately, Heaviside’s original approach requires successive differentiation. This can be avoided in the following way:
Consider the following derivation for the partial fraction’s last term of a given root (given as the multiplicity of the -th unique root ):
By multiplying both sides of the definition by , we cancel out the factor in and include or its higher power in each partial fraction, except the constant term .
Consider the following function:
Due to the uniqueness of the partial fraction expansion, the highest power of in is , further implying that must be cancelled out when simplifying . Thus, we can reduce the multiplicity of the pole at by one via a single polynomial division. By applying the technique again, we can determine readily:
As such, we consider:
To determine the formula for for ( being the number of unique roots) and , proceed as follows:
We want to find the partial fractions of the following function:
Applying the first step to , and yields:
The value of is calculated by considering the formula for with , , and :
Hence, the partial fraction decomposition follows as:
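The cover-up-and-divide recursion described above can be sketched in Python with exact rational arithmetic. All names and the example 1/((x − 1)²(x − 2)) are my own illustration of the technique, not the post's code:

```python
from fractions import Fraction

def peval(p, x):
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def pmul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            out[i + j] += ca * cb
    return out

def pdiv_root(p, r):
    """Exact division of p by (x - r) via synthetic division."""
    acc = Fraction(0)
    out = []
    for c in reversed(p):
        acc = acc * r + c
        out.append(acc)
    assert out[-1] == 0, "p is not divisible by (x - r)"
    return out[:-1][::-1]

def decompose(p, factors):
    """Partial fractions of P(x) / prod (x - r)^m without differentiation:
    cover up to get the highest-order coefficient at r, subtract that term,
    and cancel one factor of (x - r) to lower the multiplicity.
    `factors` maps root -> multiplicity; returns {(root, power): coefficient}."""
    p = [Fraction(c) for c in p]
    factors = {Fraction(r): m for r, m in factors.items()}
    terms = {}
    while factors:
        r, m = next(iter(factors.items()))
        rest = [Fraction(1)]                 # Q with (x - r)^m covered up
        for s, k in factors.items():
            if s != r:
                for _ in range(k):
                    rest = pmul(rest, [-s, Fraction(1)])
        c = peval(p, r) / peval(rest, r)     # cover-up at x = r
        terms[(r, m)] = c
        # subtract c/(x - r)^m, then cancel one factor of (x - r)
        p = pdiv_root([a - c * b for a, b in
                       zip(p + [Fraction(0)] * (len(rest) - len(p)),
                           rest + [Fraction(0)] * (len(p) - len(rest)))], r)
        if m == 1:
            del factors[r]
        else:
            factors[r] = m - 1
    return terms

# Hypothetical example:
# 1/((x - 1)^2 (x - 2)) = -1/(x - 1)^2 - 1/(x - 1) + 1/(x - 2)
t = decompose([1], {1: 2, 2: 1})
```

Each step costs one evaluation and one synthetic division, so no derivatives are ever taken, which is the whole point of the improvement.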
Application to antiderivatives⌗
Partial fraction decomposition, especially when performed using this simple algorithm, can be used to compute antiderivatives and infinite sums of rational functions.
Consider this motivating example:
It is trivially computed using elementary integration techniques, starting with a rule to break the expression up into parts:
Then, each term of the same form is integrated as follows:
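The per-term rule (c·ln|x − r| for first powers, −c/((k − 1)(x − r)^(k−1)) otherwise) can be sketched and sanity-checked numerically. The example function 1/((x − 1)²(x − 2)), whose decomposition is −1/(x − 1)² − 1/(x − 1) + 1/(x − 2), and the helper names are my own illustrations:

```python
import math

def antiderivative(terms):
    """Antiderivative of a sum of c/(x - r)^k terms:
    c*ln|x - r| for k = 1, and -c/((k - 1)(x - r)^(k - 1)) for k > 1.
    `terms` maps (root, power) -> coefficient."""
    def F(x):
        total = 0.0
        for (r, k), c in terms.items():
            if k == 1:
                total += c * math.log(abs(x - r))
            else:
                total -= c / ((k - 1) * (x - r) ** (k - 1))
        return total
    return F

terms = {(1, 2): -1, (1, 1): -1, (2, 1): 1}
F = antiderivative(terms)

# Sanity check: numerically differentiate F and compare against f.
f = lambda x: 1 / ((x - 1) ** 2 * (x - 2))
x, h = 5.0, 1e-6
approx = (F(x + h) - F(x - h)) / (2 * h)
```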
An alternative way to approach problems like these is the Ostrogradsky–Hermite method. Consider the following integral:
We begin by finding as follows:
Then, it becomes apparent that
We find as follows:
and are currently unknown, but they will have degrees one less than those of and , respectively. Thus:
The merit of this method is obvious: the integral of a rational function with roots of multiplicity greater than one was reduced to the integral of a rational function with only simple roots, akin to the earlier extension of the Heaviside method. The coefficients must now be equated to find the values from to , a tedious but straightforward task left to the determined reader.
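In general, Hermite reduction (a common formulation of the Ostrogradsky–Hermite idea, stated here from standard references rather than taken from the worked example above) splits the integral as:

```latex
\int \frac{P(x)}{Q(x)}\,dx
  = \frac{P_1(x)}{Q_1(x)} + \int \frac{P_2(x)}{Q_2(x)}\,dx,
\qquad Q_1 = \gcd\!\left(Q, Q'\right), \quad Q_2 = \frac{Q}{Q_1},
```

where P1 and P2 have degrees lower than Q1 and Q2, respectively; their unknown coefficients are found by differentiating both sides and equating coefficients. Since Q2 collects each root exactly once, the remaining integrand has only simple poles.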
Application to infinite sums⌗
The following identity involving the polygamma function can be used to compute infinite sums of rational functions:
The second term is a partial fraction decomposition of the rational function, which can be computed using the methods described earlier. The notation denotes the polygamma function of the -th order, defined as the -th derivative of the digamma function, itself the logarithmic derivative of . While in this scenario and are almost taken for granted, the following formula can be used to compute the polygamma function even when :
If and , then:
If is outside of the domain, this is easily fixed using the digamma reflection formula:
Meanwhile, the same defect in the general formula is fixed using the polygamma reflection formula:
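As a sanity check, the digamma part can be sketched with one standard evaluation strategy (the recurrence ψ(x) = ψ(x + 1) − 1/x followed by the asymptotic series); this is a rough sketch of my own, not necessarily the formula the post has in mind, and not library-grade:

```python
import math

def digamma(x):
    """Rough sketch for x > 0: shift x upward with psi(x) = psi(x + 1) - 1/x,
    then apply the asymptotic series
    psi(x) ~ ln x - 1/(2x) - 1/(12x^2) + 1/(120x^4) - 1/(252x^6)."""
    assert x > 0
    result = 0.0
    while x < 10:
        result -= 1 / x
        x += 1
    t = 1 / (x * x)
    return (result + math.log(x) - 1 / (2 * x)
            - t * (1 / 12 - t * (1 / 120 - t / 252)))

# Partial fractions give 1/((n + 1)(n + 2)) = 1/(n + 1) - 1/(n + 2),
# so the sum over n >= 0 telescopes to psi(2) - psi(1) = 1.
s = digamma(2.0) - digamma(1.0)
```

This also recovers ψ(1) = −γ ≈ −0.5772, matching the infinite-sum identity above on a case where the answer is known in closed form.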
The Heaviside method is a simple and elegant way to compute partial fraction decompositions of rational functions. Researching this topic has been a very interesting experience, and I hope that this blog post will be useful to someone who is also dealing with real and complex analysis in computer algebra.