Power series are natural generalizations of Taylor polynomials. We know that Taylor polynomials give (if we are lucky) approximations of functions that improve as the polynomials get longer, so one may expect that "infinite polynomials" give precisely the functions from which they arise. Before we get to this relationship, we will explore power series as such; that is, we will try to make some sense of "infinite polynomials", which have interesting and useful properties.

Definition.

By a power series with center a we mean any series of the form

∑ a_{k}(x − a)^{k},

where a_{k} are real numbers.

A very important and useful concept is that of complex power series, which is beyond the scope of Math Tutor; note, however, that most conclusions of this section are also valid in the complex case. Below we will in fact sometimes refer to complex series, since some notions are clearer in higher-dimensional settings.

Note that an ordinary polynomial can also be considered a power series. For
instance, the polynomial 1 − 2*x* can be written as
∑ *a*_{k}*x*^{k} with *a*_{0} = 1, *a*_{1} = −2, and *a*_{k} = 0 for *k* > 1.
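As a small illustration (the helper name is ours, purely for this sketch), a polynomial is just a power series whose coefficients are eventually zero, so evaluating it by the power-series formula gives the usual polynomial value:

```python
# A polynomial viewed as a power series: almost all coefficients are zero.
# Here 1 - 2x corresponds to a_0 = 1, a_1 = -2, a_k = 0 for k > 1.

def eval_power_series(coeffs, x, center=0.0):
    """Evaluate sum of a_k * (x - center)^k for a finite coefficient list."""
    return sum(a * (x - center) ** k for k, a in enumerate(coeffs))

coeffs = [1, -2]                      # a_0 = 1, a_1 = -2; all further a_k are 0
print(eval_power_series(coeffs, 3.0)) # 1 - 2*3 = -5
```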

Back to power series. The basic notion - the region of convergence - behaves very nicely for power series. While for a general series of functions the region of convergence may be weird, here we know for sure that this set is the nicest possible - an interval (a disc in the complex plane, a ball in general). We start with the following observation.

Fact.

Consider a power series with center a. If there exists x_{0} such that the given series converges at x_{0}, then this series converges absolutely at all x satisfying

|x − a| < |x_{0} − a|.

Here is exactly a case where considering complex numbers (or points
in the plane) may help in understanding what is happening. If we have
convergence at some point *x*_{0}, then we get absolute
convergence on the interior of the disc with center *a* on whose
circumference *x*_{0} lies.

To put it another way, the series necessarily converges on the neighborhood
*U*_{r}(*a*) of the center with radius *r* = |*x*_{0} − *a*|.

Note that every power series converges at its center *a*. The above Fact
shows that if we can move further from *a* (away from the center) with
convergence, then we also automatically extend absolute convergence. There
are three possible situations. One is that there is no convergence apart
from the center *a*. This can indeed happen. Another situation is that
we can "extend" convergence as far as we want from *a*, that is, we get
convergence everywhere, consequently also absolute convergence everywhere.
The third situation is that we "extend" convergence as far as possible from
*a*, but we cannot go arbitrarily far, there is some limit beyond which
we cannot go further. Then we obtain a neighborhood (a disc in the complex
or planar case, a symmetric interval about *a* in the real case) such
that the series converges absolutely inside and diverges outside (if we had
but one convergent point outside, then by the above Fact we could enlarge this
neighborhood of convergence accordingly).

Thus we have the following statement.

Theorem.

Consider a power series with center a. There exists a number R ≥ 0 (including the case R = ∞) such that the given series converges absolutely at all x satisfying |x − a| < R and diverges at all x satisfying |x − a| > R.

This number can be found as

R = 1/(lim sup |a_{k}|^{1/k})   or, when the limit exists,   R = lim |a_{k}/a_{k+1}|.

These formulas also include the cases R = 0 (the series converges only at its center *a*) and R = ∞ (the series converges everywhere).
Note that the inequality
|*x* − *a*| < *R* describes the neighborhood *U*_{R}(*a*), that is, the open interval (*a* − *R*, *a* + *R*). The theorem says nothing about convergence at the endpoints *a* ± *R*.
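A rough numerical sketch of the root formula R = 1/lim sup |a_{k}|^{1/k}: for coefficient sequences where the limit exists, a single term with large k already approximates R well. The function name is ours; logs are used to avoid overflow for fast-growing coefficients.

```python
import math

# Root-test estimate of the radius of convergence:
#   R = 1 / limsup |a_k|^(1/k)  ~  exp(-log|a_k| / k)  for large k.
# We pass log|a_k| instead of a_k itself so that huge coefficients don't overflow.

def estimate_radius(log_abs_a, k=5000):
    return math.exp(-log_abs_a(k) / k)

print(estimate_radius(lambda k: -math.log(k)))     # a_k = 1/k,  true R = 1
print(estimate_radius(lambda k: k * math.log(2)))  # a_k = 2^k,  true R = 1/2
```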

Definition.

Consider a power series with center a. The number R from the above theorem is called the radius of convergence of this series.

The theorem (including the formulas) was obtained in the following way. We
tested the given power series for absolute convergence using suitable tests
and it turned out that convergence depends on the distance
|*x* − *a*| from the center. Applying the Root test one gets the first formula
for *R*; applying the Ratio test one gets the second formula for *R*.
When investigating a particular series, most people do not apply the above
formulas directly; instead they repeat the process of testing convergence,
since it may give better insight into the series and it is not substantially
longer. Recall that when investigating convergence we do not really need to
know where the indexing starts.

**Example:** Investigate convergence of the series ∑_{k=1}^{∞} (−1)^{k}(*x* − 1)^{k}/*k*.

We will use the
Ratio test
to investigate absolute convergence of this series.

Note: We will use *A*_{k} to denote the terms of the series
in the Ratio test, since in the context of power series the traditional
*a*_{k} is used just for coefficients, not for whole terms
of the series.
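The limit computation itself did not survive in this copy; for a series of the form ∑_{k≥1} (−1)^{k}(x − 1)^{k}/k (the form consistent with the endpoint behavior found below), the Ratio test computation would run as follows:

```latex
\lambda \;=\; \lim_{k\to\infty}\left|\frac{A_{k+1}}{A_k}\right|
 \;=\; \lim_{k\to\infty}\left|\frac{(-1)^{k+1}(x-1)^{k+1}/(k+1)}{(-1)^{k}(x-1)^{k}/k}\right|
 \;=\; |x-1|\cdot\lim_{k\to\infty}\frac{k}{k+1}
 \;=\; |x-1|.
```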

(Note that in the last step we took the limit with respect to *k*;
*x* is a fixed parameter here.)
The test now shows that the investigated series converges absolutely if
|*x* − 1| < 1 and diverges if |*x* − 1| > 1. Thus *R* = 1 and the series converges absolutely on
*U*_{1}(1) = (0, 2).

What can we say about convergence at the endpoints? This has to be
investigated individually, note that the Ratio test above gives lambda equal
to 1 when *x* is from the border of the region of convergence, so it
does not help.

*x* = 0: here the series turns into ∑ 1/*k*.

This is the harmonic series, which is known to diverge.

*x* = 2: here the series turns into ∑ (−1)^{k}/*k*.

This is the alternating version of the harmonic series and by the Alternating series test it converges; see Convergence of general series in Theory - Testing convergence.

Conclusion: This series converges on the region of convergence (0, 2].

**Remark:** How do we know that we do not have absolute convergence at 2?
Note that if we take the series we obtained with
*x* = 2 and pass to absolute values, we get exactly the divergent series that we obtained with *x* = 0. This is no coincidence. In general, at the endpoints *a* ± *R* we have

∑ |*a*_{k}(*a* ± *R* − *a*)^{k}| = ∑ |*a*_{k}|⋅|±*R*|^{k} = ∑ |*a*_{k}|⋅*R*^{k}.

This means that either we have absolute convergence at both endpoints (and then we also have convergence there), or we do not have absolute convergence at either of them. From a practical point of view: unless we have convergence at both endpoints, we never include endpoints in the region of absolute convergence. We will return to this below.
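A quick numerical illustration of the endpoint behavior above (the partial-sum helpers are ours): the harmonic partial sums drift upward without a limit, while the alternating harmonic partial sums settle near ln 2.

```python
import math

# Endpoint behavior seen numerically: partial sums of the harmonic series
# keep growing (like ln n), while the alternating harmonic series converges to ln 2.

def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

def alt_harmonic(n):
    return sum((-1.0) ** (k + 1) / k for k in range(1, n + 1))

print(harmonic(1000), harmonic(10_000))   # keeps growing, no finite limit
print(alt_harmonic(10_000), math.log(2))  # partial sum is already close to ln 2
```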

**Example:** Investigate convergence of the series

We will use the Ratio test to investigate absolute convergence of this series.

The test says that the investigated series converges absolutely if
lambda is less than 1, but we see that this happens always, regardless of
*x*. Thus we have
*R* = ∞ and the series converges (absolutely) for all real numbers *x*.
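For comparison, here is a classic series with infinite radius of convergence, ∑ x^{k}/k! (shown only as an illustration; it need not be the series of this example). Its partial sums converge for every fixed x:

```python
import math

# A classic R = infinity example: sum of x^k / k!, which converges for every x.
# We build each term from the previous one to avoid computing large powers/factorials.

def exp_series(x, n=100):
    total, term = 0.0, 1.0      # term starts as x^0 / 0! = 1
    for k in range(n):
        total += term
        term *= x / (k + 1)     # next term: x^(k+1) / (k+1)!
    return total

print(exp_series(5.0), math.exp(5.0))      # the two values agree
print(exp_series(-10.0), math.exp(-10.0))
```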

**Example:** Investigate convergence of the series

We will use the Root test to investigate absolute convergence of this series.

The test says that the investigated series converges absolutely if
rho is less than 1, but we see that this happens only if
*x* + 3 = 0, that is, only at *x* = −3. Thus *R* = 0.

**Remark:** The given series does not satisfy the definition of power
series, but it can be rewritten into one.

Now we see that its center is indeed *a* = −3.

We see that the three cases discussed above
(*R* positive and finite, *R* = 0, *R* = ∞) can all really happen; in each case the series converges at its center *a*. We will now look at some properties of power series and
their consequences for functions that they define. But before we get to it
we will add one more statement concerning the region of convergence.

Recall the Fact above that essentially says that if we have convergence at some point, then we gain convergence everywhere inside the disc up to that point (see again the picture above for the complex case; it is very instructive). Using this we concluded that the region of convergence is shaped as a disc. Assume now that we have a series with a positive (in particular finite) radius of convergence. Then the Fact (or the Theorem on the radius of convergence) does not allow us to say anything about convergence on the circumference of the disc of convergence. And with good reason: typically we have convergence at some places of the circumference and divergence at others; sometimes it is just divergence everywhere or convergence everywhere. We will now show that the Fact can be strengthened if the convergence at that special point is "better".

Fact.

Consider a power series with center a. If there exists x_{0} such that the given series converges absolutely at x_{0}, then this series converges absolutely at all x satisfying

|x − a| ≤ |x_{0} − a|.

In particular, if we actually have absolute convergence at some point on the circumference of the disc of convergence, then we have (absolute) convergence on the whole circumference.

Algebraic operations with series are defined in the natural way, using the definition for series of numbers, but due to their form we can only join series with a common center. An important fact is that adding/multiplying convergent power series does not spoil convergence (for the Cauchy product it follows from absolute convergence which we do have automatically for power series). Moreover, the sums of such series are what they should be.

Theorem.

Consider power series ∑ a_{k}(x − a)^{k} and ∑ b_{k}(x − a)^{k} with radii of convergence R_{f} and R_{g}, respectively. Then the following are true.

(i) The sum of these series converges with radius of convergence R ≥ min(R_{f}, R_{g}) and ∑ (a_{k} + b_{k})(x − a)^{k} = ∑ a_{k}(x − a)^{k} + ∑ b_{k}(x − a)^{k}.

(ii) For a real number c, the c-multiple of a series converges with the same radius of convergence and ∑ (c⋅a_{k})(x − a)^{k} = c⋅∑ a_{k}(x − a)^{k}.

(iii) The Cauchy product of these series converges with radius of convergence R ≥ min(R_{f}, R_{g}) and equals ∑ c_{k}(x − a)^{k}, where c_{k} = a_{0}b_{k} + a_{1}b_{k−1} + ... + a_{k}b_{0}.

We note that if in part (i) the radii
*R*_{f} and *R*_{g} are distinct,
then necessarily *R* is equal to their minimum; see
this problem in Solved Problems
- Series of functions. The formula for the Cauchy product is entirely
natural; we just write the product out the long way and all becomes clear.
For simplicity we use the center 0.

[*a*_{0} + *a*_{1}*x* + *a*_{2}*x*^{2} + ...]
⋅
[*b*_{0} + *b*_{1}*x* + *b*_{2}*x*^{2} + ...]
=
(*a*_{0}*b*_{0}) + (*a*_{0}*b*_{1} + *a*_{1}*b*_{0})*x* + (*a*_{0}*b*_{2} + *a*_{1}*b*_{1} + *a*_{2}*b*_{0})*x*^{2} + ...
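The formula above can be sketched in code (the helper name is ours): multiplying truncated series amounts to convolving their coefficient lists, exactly as with polynomials.

```python
# Cauchy product of power series = convolution of their coefficient sequences:
#   c_k = a_0*b_k + a_1*b_(k-1) + ... + a_k*b_0

def cauchy_product(a, b):
    """Convolve two finite coefficient lists."""
    c = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + x) * (1 + 2x + x^2) = 1 + 3x + 3x^2 + x^3
print(cauchy_product([1, 1], [1, 2, 1]))   # [1.0, 3.0, 3.0, 1.0]
```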

This theorem has an interesting interpretation. Fix some center *a*,
consider the set *S* of all sequences
{*a*_{k}} whose associated power series with center *a* have positive radius of convergence, and
the set *F* of functions that can be expressed as sums of such series.
The first two statements in the above theorem show that both sets are linear
spaces and that operations in one space correspond to natural operations in
the other space. Formally, if we define the transformation
*T*({*a*_{k}}) = ∑ *a*_{k}(*x* − *a*)^{k},
then this transformation is linear. In other words, instead of functions
(which are sometimes rather complicated structures to work with) we can work
with sequences that represent them, and the correspondence is the best possible:
it also respects the usual linear operations. Thus from the algebraic point
of view these spaces are the same. (Here we actually also need to know that
this transformation is one-to-one, which follows from the uniqueness statement below.)

How does the third statement fit in? If a function *f* is coded by the
sequence {*a*_{k}} and a function *g*
is coded by the sequence {*b*_{k}}, then the product *f*⋅*g* is coded by the sequence {*c*_{k}} that is the convolution of {*a*_{k}} and {*b*_{k}}. If we wanted the correspondence between *S* and *F*
to extend to multiplication, we would say that the set *F* with the usual
multiplication of functions corresponds to the set *S* equipped with
convolution. This suggests that there are situations when convolution is a
natural replacement for multiplication. Recall that we noted before that
this operation satisfies the usual laws that we are glad to have for
multiplication.

The above theorem can be symbolically expressed as follows: If we introduce
the correspondences
*f* ∼ {*a*_{k}} and *g* ∼ {*b*_{k}}, then

*c**f* ∼ *c*{*a*_{k}},

*f* + *g* ∼ {*a*_{k}} + {*b*_{k}},

*f*⋅*g* ∼ {*a*_{k}}*{*b*_{k}}.

One can also make formulas for some other things that we do with functions, for instance substitution (division is actually quite tricky, by the way), but that is more related to expansion of functions that is covered in the next section.

An important factor when working with series of functions is the question of uniform convergence. Here the situation is pretty much the best possible. We just need to avoid the border and we are fine.

Theorem.

Consider a power series with center a and radius of convergence R > 0. Then for every positive r < R, this series is uniformly convergent on [a − r, a + r].

More generally, such a power series converges uniformly on any closed
interval that is a subset of
(*a* − *R*, *a* + *R*).

As usual, if we use the language of neighborhoods: "A power series converges uniformly on any closed neighborhood that is included in the region of convergence," we get a statement that is true also in other settings (complex series etc.).

This theorem is even better than it seems at first sight. Note that we
get uniform convergence as close to the border as we want. If we take any
*x*_{0} from
*U*_{R}(*a*), then, since this neighborhood is open, *x*_{0} lies inside it with some room to spare. Thus
we can choose some *r* which is smaller than *R*, but still large
enough that
*U*_{r}(*a*) contains *x*_{0}, and we get uniform convergence on a
neighborhood of *x*_{0}.

In short, we get uniform convergence around all points inside the region of convergence, so we can use all the nice properties that we had for uniform convergence (see the Series of functions) everywhere in the region of convergence. Thus we get the following extremely useful theorem.

Theorem.

Consider a power series ∑ a_{k}(x − a)^{k} with center a and radius of convergence R > 0. Let f be the function that it defines on its region of convergence.

(i) The function f is continuous on U_{R}(a) = (a − R, a + R).

(ii) The function f is differentiable on U_{R}(a) = (a − R, a + R); on this set f′(x) = ∑_{k≥1} k⋅a_{k}(x − a)^{k−1}, and the convergence of this series is uniform on

[a − r, a + r] for any positive r < R.

(iii) The function f is integrable on U_{R}(a) = (a − R, a + R); on this set it has an antiderivative F(x) = ∑ a_{k}(x − a)^{k+1}/(k + 1), and the convergence of this series is uniform on

[a − r, a + r] for any positive r < R.

One can also say it like this: given a power series, we can differentiate it term by term and integrate it term by term; the radius of convergence stays the same, convergence stays nice, and the results are what they should be. When we differentiate the series term by term, we get the series of the derivative of the function that the original series defines (an analogous statement is true for integration). The fact that power series can be easily and reliably differentiated and integrated is one of the major reasons why they are so useful and popular.
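For finite truncations (center 0), term-by-term differentiation and integration are simple coefficient manipulations; a minimal sketch with our own helper names:

```python
# Term-by-term calculus on (truncated) power series with center 0:
#   d/dx  sum a_k x^k  =  sum k*a_k x^(k-1)
#   antiderivative     =  sum a_k x^(k+1) / (k+1)   (constant of integration 0)

def differentiate(coeffs):
    return [k * a for k, a in enumerate(coeffs)][1:]   # drop the zero k = 0 term

def integrate(coeffs):
    return [0.0] + [a / (k + 1) for k, a in enumerate(coeffs)]

# Truncated geometric series 1 + x + x^2 + x^3:
print(differentiate([1, 1, 1, 1]))   # coefficients of 1 + 2x + 3x^2
print(integrate([1, 1, 1, 1]))       # coefficients of x + x^2/2 + x^3/3 + x^4/4
```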

Note two things. First, in the expression on the right in (ii) we
actually dropped the first term and started indexing at *k* = 1. This is not necessary, since for
*k* = 0 the term *k*⋅*a*_{k}(*x* − *a*)^{k−1} is zero anyway.

Second, the series that we get after differentiating and integrating are not in their proper forms; for that we would have to shift the indexing.

Again, depending on situation, sometimes this is done (for instance if you want to present the result in a nice polished up way), but in practice people often do not bother.

**Remark:** The above theorem allows us to differentiate and integrate
power series only inside their regions of convergence - and with good
reason. If a power series also converges at some border point of its region
of convergence, then this convergence need not survive differentiation and
integration. For an example consider the
first example above; the series
there was found to be convergent on the interval (0, 2].

It is easy to check that the series obtained from it by term-by-term differentiation does not converge at 2.

By iterating the above theorem we find that power series yield functions that have derivatives of all orders.

Corollary.

Consider a power series ∑ a_{k}(x − a)^{k} with center a and radius of convergence R > 0. Let f be the function that it defines on its region of convergence. Then for every natural number n, the function f has the n-th derivative on U_{R}(a); on this set

f^{(n)}(x) = ∑_{k≥n} k(k − 1)⋯(k − n + 1)⋅a_{k}(x − a)^{k−n},

and the convergence of this series is uniform on U_{r}(a) for any positive r < R.

Now imagine that you have two series with center *a* that converge and
whose sums are equal on some neighborhood of *a*. We write this equality
of series and then differentiate *n* times on each side; we get on the
left and on the right expressions as in the Corollary. When we
substitute *x* = *a*, all terms with *k* > *n* vanish and we are left with
*n*!⋅*a*_{n} on the left and *n*!⋅*b*_{n} on the right, hence *a*_{n} = *b*_{n}. This proves the following statement.

Corollary(uniqueness).

Consider two power series with center a, a series ∑ a_{k}(x − a)^{k} and a series ∑ b_{k}(x − a)^{k}, such that they both converge on some U_{r}(a) for a positive radius r.

If the sums of the two series are equal on this neighborhood, then the series are necessarily the same, that is, a_{k} = b_{k} for all k.

This in particular means that a power series is entirely determined by its values
on some neighborhood of *a*, however small. Once we know its values
on some tiny neighborhood of *a*, the values at all other
points of convergence are determined, even if the series converges
everywhere.

In particular, when we have a polynomial, then it is not possible to write it in another way, not even as a truly infinite power series (with the same center, of course; if we choose a different center, we get a different expression for the same function).

**A note on endpoints:**
We remarked that convergence of a series at endpoints need not survive
differentiation,
so no general statement concerning endpoints was possible in the second and
the third part in the above theorem. However, the first statement (the one
about continuity) is not the best possible. It can be improved, which is
something that we do not get automatically from uniform convergence, but
we have a special theorem about it.

Theorem(Abel's convergence theorem).

Consider a power series with center a and radius of convergence R > 0. Let f be the function that this series defines on its region of convergence.

• If the series converges at a + R, then f is continuous there from the left and the series converges uniformly also on the interval [a, a + R].

• If the series converges at a − R, then f is continuous there from the right and the series converges uniformly also on the interval [a − R, a].

This can be summed up like this.

Assume that a power series with center a and radius of convergence R > 0 converges on an interval I (so the interval I surely contains (a − R, a + R), but it may also include one or both of its endpoints). Then the resulting function f is continuous on I and the convergence of this series is uniform on any closed interval J that is a subset of I.

In particular, if *I* is closed, that is, if the given power series
actually converges on
[*a* − *R*, *a* + *R*], then the convergence is uniform on the whole region of convergence.
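Abel's theorem can be checked numerically for the series ∑_{k≥1} (−1)^{k+1} x^{k}/k (center 0, R = 1, sum ln(1 + x)), which converges at the endpoint x = 1; a fixed partial sum approximates the sum uniformly well on all of [0, 1]. The helper name is ours.

```python
import math

# Numerical check of Abel's theorem for sum_{k>=1} (-1)^(k+1) x^k / k = ln(1 + x):
# the series converges at x = 1 (to ln 2), so convergence is uniform on [0, 1]
# and the sum is continuous from the left at the endpoint.

def partial_sum(x, n):
    return sum((-1.0) ** (k + 1) * x ** k / k for k in range(1, n + 1))

n = 2000
worst = max(abs(partial_sum(j / 100, n) - math.log(1 + j / 100)) for j in range(101))
print(worst)                             # small uniformly over [0, 1]
print(partial_sum(1.0, n), math.log(2))  # endpoint value approaches ln 2
```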

Taylor series, expanding functions

Back to Theory - Series of functions