We start with a definition of a sequence of functions and dedicate most of this section to investigating convergence, above all to the problem of preservation of properties. Eventually we will have no choice but to introduce uniform convergence. At the end we briefly look at monotonicity.

Definition.

By a **sequence of functions** we mean any countable ordered set

{f_{k}}_{k ≥ n_{0}} = {f_{n_{0}}, f_{n_{0}+1}, f_{n_{0}+2}, ...},

where f_{k} are functions and the starting index n_{0} is some integer.

Depending on what kind of functions we consider, we get different theories. Here we will only consider real functions, that is, functions defined on subsets of real numbers and with real values. Another popular option is complex functions and in fact most results from here can be transferred with minor (and usually obvious) modifications to the complex case, but that is beyond the scope of Math Tutor.

Our main interest now is developing the notion of convergence for sequences
of functions, in which case we do not really care about the beginning of the
given sequence. Then - as usual - we will simplify our life by simply writing
*f*_{k}}

If we want to study sequences of functions, a good start is to imagine their
graphs. We have one picture with coordinate axes and we draw infinitely many
graphs into this picture. If any reasonable work is to be done, we need to
have at least a little place on the real line where all of these functions
actually exist. That is, the whole work in this and subsequent sections is
based on the usually unspoken assumption that the domains of all
*f*_{k} have non-empty intersection. We then work on
this intersection and essentially disregard anything that happens outside
this common set. So the right idea at the start is to imagine a nice set on
a real line (perhaps an interval) and infinitely many functions (graphs) on
it.

Before we get to the main definition, we will try to figure out what
actually makes sense. The most natural approach to sequences of any kind is
to fall back on what we know - sequences of (real) numbers. Given a sequence
of functions {*f*_{k}}, we pick a number *x* from the intersection of their
domains and substitute it into all of these functions. Every
*f*_{k}(*x*) is then a real number, so {*f*_{k}(*x*)} is a sequence of real
numbers and it may happen to converge to some limit *A* - a real number.

When we try this for all numbers *x* from the intersection of domains, we
see that this intersection splits into two parts. For some *x* the
resulting real sequence does not converge. For some it does, these form the
*region of convergence* for the given sequence of functions. For each
*x* from this region we get the appropriate limit *A* of
{*f*_{k}(*x*)}, which we will write as *A*_{x} as the value of the limit
obviously depends on the choice of *x*. What is the situation now? We
were given a reasonable sequence of functions (whose domains have non-empty
intersection) and we identified a region of convergence. It may be empty,
but if it is not, then we have a certain subset of real numbers and for
every *x* from this set we have a value assigned, namely the limit
*A*_{x}. In other words, we have a new function, call
it *f*. If we want to know how much *f*(*x*) is, we take that *x* and find
the limit of {*f*_{k}(*x*)}. This function *f* is naturally called the limit
of the sequence {*f*_{k}}.

Definition.

Consider a sequence of functions {f_{k}}.

We define its **region of convergence** as the set of all x for which all f_{k} are defined and the sequence {f_{k}(x)} converges.

On this region of convergence we define the function f called the **limit** of {f_{k}} by the formula f(x) = lim_{k→∞}( f_{k}(x) ).

Since the main focus now is functions, not numbers (that is, we prefer to
talk on the level of functions as abstract objects), we would like a notation
that would not refer to points. The fact that a function *f* is the
limit of a sequence of functions {*f*_{k}} is then written as
*f* = lim( *f*_{k} ).

However, we know that when dealing with functions it is crucial to know
where they live and where things work, which we do not see from the "lim"
notation. There is no widely accepted notation for the region of
convergence, in particular because we sometimes choose to work on smaller
sets anyway. Therefore we introduce the following more general convention.
If *M* is any subset of the region of convergence (for instance the
region itself), then we say that the sequence
{*f*_{k}} **converges to**
*f* **on** *M*. Often we prefer to refer to functions
themselves rather than to the sequence they form, so people
also say that functions *f*_{k} **converge to**
*f* **on** *M*. Both statements can be written as

*f*_{k}→*f* on *M*.

We will shortly see that there are other ways to look at convergence of
functions. It is generally accepted that when we talk about convergence
without further specification, we mean the one we just defined, in
particular because it is the basic one, other kinds of convergence usually
work by adding some conditions to this notion that we already have. However,
sometimes we want to emphasize that we indeed mean this convergence, that we
indeed look at what is happening at individual points. Then we would say
that *f*_{k} **converge pointwise** to *f* on
*M*.

**Remark:**
When dealing with sequences of real numbers, we were also interested in a
particular case of divergence, when the limit was infinite. Here this is of
no interest, since the right kind of object to obtain as a limit of a
sequence of functions is a function again, and we cannot assign the value
infinity to a function.

**Example:**
Consider the sequence given by
*f*_{k}(*x*) = *x*^{k} for *k* = 1,2,3,...

All these functions are defined on the whole real line, so the set of real
numbers is our starting position for the investigation of the sequence of
functions {*x*^{k}}. What happens when we fix some *x* and pass to the
resulting sequence of numbers {*x*^{k}}? For *x* = 2 we get the divergent
sequence {2^{k}}, for *x* = −1/2 we get the convergent sequence
{(−1/2)^{k}}. In general we always get a geometric sequence whose quotient
is the *x* that we fix. Therefore we know that if |*x*| < 1, then
{*x*^{k}} converges to 0; if *x* = 1, then {*x*^{k}} = {1^{k}} = {1}
converges to 1; and all other *x* give
divergent geometric sequences. Thus we have the following conclusion:

The region of convergence of the given sequence is the interval (−1,1].

If we call the resulting function *f*, then we can also write that
*x*^{k} → *f* on (−1,1]. Note that *f*(*x*) = 0 for |*x*| < 1, while
*f*(1) = 1. The picture shows one of the functions from this sequence
(*x*^{10} to be precise).

We see that the graphs of *x*^{k} on (−1,1) get pressed down toward the
*x*-axis, while at the point 1 they all keep the value 1, which is where the
limit function gets its jump.
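The three cases can be probed numerically. The following sketch (plain Python, sample points of our own choosing) illustrates, but of course does not prove, the behavior described above:

```python
# Numerical sketch of the pointwise behavior of f_k(x) = x**k.

def f(k, x):
    return x ** k

# Inside (-1, 1) the values shrink toward 0:
assert abs(f(100, 0.5)) < 1e-9
assert abs(f(100, -0.5)) < 1e-9

# At x = 1 the sequence is constantly 1:
assert f(100, 1.0) == 1.0

# Outside: at x = 2 the terms blow up, at x = -1 they oscillate:
assert f(100, 2.0) > 1e20
assert f(99, -1.0) == -1.0 and f(100, -1.0) == 1.0
```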

**Example:**
Consider the sequence given by
*f*_{k}(*x*) = arctan(*k*⋅*x*) for *k* = 1,2,3,...

All these functions are defined on the whole real line, so the set of real
numbers is our starting position for the investigation of the sequence of
functions {arctan(*kx*)}. If we fix some positive *x*, then the sequence
{*kx*} goes to infinity as *k* goes to infinity, and arctangent of such
arguments tends to π/2, so {arctan(*kx*)} converges to π/2. If *x* is
negative, then {*kx*} goes to negative infinity and {arctan(*kx*)} converges
to −π/2. Finally, if *x* = 0, then {arctan(*kx*)} = {0} converges to 0. Thus
the region of convergence is the whole real line and the limit *f* is equal
to π/2 for positive *x*, to −π/2 for negative *x*, and to 0 at the origin.

Again, a picture shows what is happening.

With growing *k* the curves (graphs) approach more and more the constant
functions π/2 and −π/2, and the limit function *f* has a jump at the origin.
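A quick numerical check of the three cases (a sketch at sample points of our choosing):

```python
import math

# Pointwise limit of f_k(x) = arctan(k*x) at sample points:
# positive x -> pi/2, negative x -> -pi/2, x = 0 -> 0.
def f(k, x):
    return math.atan(k * x)

assert abs(f(10**6, 0.01) - math.pi / 2) < 1e-3
assert abs(f(10**6, -0.01) + math.pi / 2) < 1e-3
assert f(10**6, 0.0) == 0.0
```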

Since the notion of convergence for functions is derived from convergence of real numbers, many nice properties are preserved. First of all, convergence of functions behaves well with respect to the usual algebraic operations.

Theorem.

Assume that a sequence of functions {f_{k}} converges to a function f on a set M and that a sequence of functions {g_{k}} converges to g on the same set M. Then the following is true:

(i) For any real number a, the sequence {a⋅f_{k}} converges to a⋅f on M.

(ii) The sequence {f_{k} + g_{k}} converges to f + g on M.

(iii) The sequence {f_{k} − g_{k}} converges to f − g on M.

(iv) The sequence {f_{k}⋅g_{k}} converges to f⋅g on M.

(v) The sequence {f_{k}/g_{k}} converges to f/g on the set of all x from M for which all g_{k}(x) are not 0.

(vi) The sequence {f_{k}^{g_{k}}} converges to f^{g} on the set of all x from M for which all f_{k}(x) are positive.

In short, operations work whenever and wherever the outcome makes sense (for the last condition recall how we treat general powers). Note that the first two statements together combine into the statement that the notion of limit satisfies linearity.
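For a concrete (if modest) illustration of (ii) and (iv), here is a sketch with two sample sequences of our own choosing, f_k(x) = x + 1/k → x and g_k(x) = x² − 1/k → x²:

```python
# Sample sequences (our own choice, purely for illustration):
# f_k(x) = x + 1/k converges to x, g_k(x) = x**2 - 1/k converges to x**2.
def f(k, x):
    return x + 1.0 / k

def g(k, x):
    return x * x - 1.0 / k

x, k = 1.5, 10**7
# (ii): for large k, f_k + g_k is close to the sum of the limits, x + x**2:
assert abs((f(k, x) + g(k, x)) - (x + x * x)) < 1e-6
# (iv): f_k * g_k is close to the product of the limits, x**3:
assert abs((f(k, x) * g(k, x)) - x ** 3) < 1e-6
```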

Composition is tricky. Limit works well with composition of functions only
under some special conditions, see for instance the corresponding theorem in
the section
Basic properties
in Sequences - Theory - Limit. Therefore we will not offer any general
statement that would say something along the lines that
*f*_{k}(*g*_{k}) converges to *f*(*g*). Instead, we offer the following two
partial results.

Theorem.

(i) Assume that a sequence of functions {f_{k}} converges to a function f on a set M and that a function g maps N into M. Then the sequence {f_{k}(g)} converges to f(g) on N.

(ii) Assume that a sequence of functions {g_{k}} converges to a function g on a set N, a function f is continuous on M and all g_{k} map N into M. Then the sequence {f(g_{k})} converges to f(g) on N.

Some properties of functions are preserved by convergence.

Theorem.

Assume that a sequence of functions {f_{k}} converges to a function f on a set M.

(i) If all f_{k} are odd, then also f is odd.

(ii) If all f_{k} are even, then also f is even.

(iii) If all f_{k} are T-periodic, then also f is T-periodic.

(iv) If all f_{k} are non-decreasing functions, then also f is a non-decreasing function.

(v) If all f_{k} are non-increasing functions, then also f is a non-increasing function.

(vi) If all f_{k} are constant functions, then also f is a constant function.

The last statement invites a clarifying remark. We talk of a *sequence
of constants* there. Every function as individual is a constant, but each
may be different. On the other hand, a *constant sequence* of functions
is a sequence where all functions are the same, but not necessarily constant
functions. For instance, if *f*_{k}(*x*) = *x*^{2} for all *k*, then we
obtain a constant sequence, and it obviously converges
to *x*^{2}. In general a constant sequence converges
to that constant element - just like in the case of real sequences.

We observed (see Functions - Theory - Limit and comparison) that passing to a limit may change inequality into equality, but never into an opposite inequality. This explains why in (iv) and (v) above monotonicity survives, but strict monotonicity does not, see below.

We saw some properties that are preserved, but unfortunately the really
interesting properties cannot be relied on to survive convergence. In
particular:

(1) even if all *f*_{k} are 1-1, *f* need
not be 1-1;

(2) even if all *f*_{k} are increasing,
*f* need not be increasing;

(3) even if all *f*_{k} are decreasing,
*f* need not be decreasing;

(4) even if all *f*_{k} are bounded,
*f* need not be bounded;

(5) even if all *f*_{k} are continuous,
*f* need not be continuous;

(6) even if all *f*_{k} are differentiable,
*f* need not have derivative;

(7) even if all *f*_{k} are integrable,
*f* need not be integrable.

Indeed, all those arctangents in the last example are increasing (hence also
1-1), continuous and differentiable, but their limit *f* is none of these;
it even fails on the integral side, since *f* does not have an
antiderivative around the origin due to that jump.

To see that boundedness need not survive, consider this example: If we
define *f*_{k}(*x*) = min(*e*^{x}, *k*), then each of these functions looks
like the exponential whose graph is cut off at the height *k* and replaced
by a constant there. Thus they are all bounded, but this sequence obviously
converges to *e*^{x} on the real line and this is not
bounded.
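This example is easy to probe numerically (a sketch):

```python
import math

# f_k(x) = min(e**x, k): each function is bounded above by k,
# but the pointwise limit e**x is unbounded.
def f(k, x):
    return min(math.exp(x), k)

# each f_k is bounded (by k) no matter how large x gets:
assert f(5, 100.0) == 5
# for a fixed x, f_k(x) equals e**x once k exceeds e**x:
assert f(10**9, 10.0) == math.exp(10.0)
# and e**x itself eventually exceeds any candidate bound:
assert math.exp(50.0) > 10**20
```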

Is there any way to salvage this unpleasant situation? There are no reasonable conditions that would make (1)--(3) work. The other four can be fixed if we demand that the convergence be "better" (see the next part below). Actually, one can also fix (4) by requiring that the given sequence of functions is "uniformly bounded" and (5) by requiring that the given functions are "equicontinuous", but this is advanced stuff and we will not explore it here (it is also used much less than the "better convergence" that we do show here).

Before we get to this new convergence, we will look closer at the last three
problems (5)--(7). We will show that they in fact represent the idea of
"switching operations". With continuity it goes like this. The most
practical way of determining continuity is via limit (see
Continuity in
Functions - Theory - Real functions). Consider a sequence
{*f*_{k}} that converges to a function *f*. Let *a* be some point in the
interior of the region of convergence. The function *f* is continuous there
exactly if its value at *a* is the same as its limit at *a*. We will now
look at this condition closer; in the last step we use the assumption that
the *f*_{k} are continuous:

lim_{x→a}( *f*(*x*) ) = lim_{x→a}( lim_{k→∞}( *f*_{k}(*x*) ) ),  while  *f*(*a*) = lim_{k→∞}( *f*_{k}(*a*) ) = lim_{k→∞}( lim_{x→a}( *f*_{k}(*x*) ) ).

Given a sequence of functions {*f*_{k}}, the expression *f*_{k}(*x*)
features two variables, the index *k* and the variable
*x*. In a perfect world, when we decide to apply limit to both these
quantities, the order would not matter. However, we see that
interchangeability of limits is equivalent to preservation of continuity and
that we know is not true. Indeed, we can use the above example with
arctangent to nicely illustrate that the order in which we apply the limit
does matter.
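In a numerical sketch with f_k(x) = arctan(kx), the two orders of taking limits indeed land at different values near the origin:

```python
import math

def f(k, x):
    return math.atan(k * x)

# Order 1: first let x -> 0+ (the inner limit is arctan(0) = 0 for each k),
# then let k -> infinity; the result is 0.
order1 = f(10**6, 0.0)

# Order 2: first let k -> infinity (the inner limit is pi/2 for each x > 0),
# then let x -> 0+; the result is pi/2. We sample a large k and a small
# x > 0 with k*x still large:
order2 = f(10**8, 1e-3)

assert order1 == 0.0
assert abs(order2 - math.pi / 2) < 1e-4
```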

This problem appears in many situations when we can apply limit to more objects, for instance with functions of more variables, so the problem of interchangeability of limits is quite important. It is definitely worth asking under which conditions we can do it.

The derivative problem, as stated in the above list, is not exactly of this
kind, but in fact we usually want more than is written there. If we have
differentiable functions *f*_{k} that converge to some
*f* on a set *M*, we would like to know that *f* is
differentiable and its derivative *f* ′ can be obtained by taking
the limit of derivatives *f*_{k}′. That is, we want to
have a choice whether we first take the limit and then differentiate or the
other way around.

This interchangeability of limit and differentiation is quite a problem as
well. The above example with arctangent shows that we can lose
differentiability entirely, but it can also happen that *f* does have a
derivative, but we cannot reach it using *f*_{k},
see for instance
this problem in Solved Problems -
Series of functions.
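A standard illustration (not the one from the text above, but well known) is f_k(x) = sin(kx)/k: it converges uniformly to f = 0, so f′ = 0 exists, yet the derivatives f_k′(x) = cos(kx) do not converge to it:

```python
import math

# f_k(x) = sin(k*x)/k converges uniformly to 0, since |f_k| <= 1/k,
# but the derivatives f_k'(x) = cos(k*x) keep oscillating.
def f(k, x):
    return math.sin(k * x) / k

def df(k, x):
    return math.cos(k * x)

# uniform smallness of f_k at sample points:
assert all(abs(f(10**6, x)) <= 1e-6 for x in (0.0, 0.3, 2.0, -5.0))

# derivatives at x = pi alternate near 1 and -1, so they do not converge:
assert df(2, math.pi) > 0.99
assert df(3, math.pi) < -0.99
```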

This brings us to the integral problem. There it is even more complicated.
If integrable functions *f*_{k} converge to *f* on
some *M*, then we cannot hope that antiderivatives
*F*_{k} would go to *F* for the simple and
fundamental reason that every function has infinitely many antiderivatives;
we may choose one for each function, but then most likely they will not form
a convergent sequence as the shifts will play hell with convergence.

There are two reasonable ways out. One is to use definite integral, so one
can for instance require that on any segment between *a* and *b*
that lies entirely in *M*, the integrals of *f*_{k} over [*a*,*b*] converge
to the integral of *f* over [*a*,*b*].

Thus we again in fact want to be able to change the order of two operations,
limit and (definite) integration. The other option is to use the definite
integral from some fixed *a* to a variable *x*, thus obtaining one
specific choice of antiderivative, which makes good sense if the region of
convergence is an interval.

Also here it may happen that *f* is integrable, but this integral
has nothing to do with integrals of *f*_{k}. Consider
the following functions.

Note that as we go through this sequence, the triangles keep sliding toward 0.
Thus if we fix some positive *x*, sooner or later those hills pass it
and *f*_{k}(*x*) = 0 from then on; hence the *f*_{k} converge to 0 for
*x* > 0. At *x* = 0 the sequence {*f*_{k}(*x*)} is constantly 0, so the
sequence converges to the function *f* = 0. However, on an interval
containing all the triangles the integral of
*f* is 0, while all *f*_{k} have integral 1 there.
Thus integral of *f* cannot be obtained as a limit of integrals of
*f*_{k}.
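The text does not pin down a formula for the triangles, so here is one common choice (an assumption on our part): a triangular bump of height k on the base [0, 2/k]. Each bump then has integral 1 while the pointwise limit is the zero function:

```python
# Hypothetical concrete "sliding triangles": height k on the base [0, 2/k],
# so the area under each f_k is (1/2) * (2/k) * k = 1.
def f(k, x):
    if 0.0 <= x <= 1.0 / k:
        return k * k * x
    if 1.0 / k < x <= 2.0 / k:
        return k * k * (2.0 / k - x)
    return 0.0

def integral(k, a=0.0, b=1.0, n=100000):
    # crude midpoint rule, good enough for an illustration
    h = (b - a) / n
    return sum(f(k, a + (i + 0.5) * h) for i in range(n)) * h

# every bump has integral (about) 1 over [0, 1]:
assert abs(integral(5) - 1.0) < 1e-3
# but each fixed x > 0 is eventually passed by the bumps:
assert f(1000, 0.5) == 0.0
```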

We have shown that the last three problems (5)--(7) are related to the problem of interchanging the order of limits, differentiation and integration. Now we will get some positive results about it.

Before we show the definition of this better convergence, we will show why
it is the right one. When we
lost continuity and more in the first two examples, the real cause was in
different speeds of convergence. For instance, we know that for
|*x*| < 1 the sequence *x*^{k} goes to 0, but for different *x* it goes
there with different speeds; the closer *x* is to 1 (or −1), the longer
it takes for *x*^{k} to get to 0. After all, the
pictures above show it quite clearly. The same thing is true for the second
example, the closer to zero we look, the longer those arctangents take to get
to their limit.

There is another way to express this. One possible interpretation of limit
is through approximation. If numbers *a*_{k} converge to
*A*, then for every tolerance
*ε* there is some
*a*_{K} that approximates *A* up to that
*ε*. Does it work for
functions? The first two examples show that convergence as we defined it
does not work like this. For instance, the functions
*x*^{k} converge to the function 0 on
*ε* = 1/2,*K* such that
*x*^{K} would be 0 up to

The basic idea of uniform convergence is to fix the above flaw. It allows
the limit function *f* to be approximated arbitrarily well by a certain
*f*_{k}, or equivalently, it forces
*f*_{k} to converge to *f* everywhere at the same
speed. The idea is simple: Instead of playing the limit game separately at
every point, it is played simultaneously on the whole set of convergence.

Definition.

Consider a sequence of functions {f_{k}}. Let M be a set on which all f_{k} are defined.

We say that the sequence {f_{k}} **converges to** f **uniformly on** M if and only if for every ε > 0 there is an integer N such that for every k ≥ N and for every x from M:

| f_{k}(x) − f(x)| < ε.

We denote it

f_{k} ⇉ f on M.

We start with a simple observation:

Fact.

Consider a sequence of functions {f_{k}} and a function f.

If f_{k} ⇉ f on M, then f_{k} → f on M.

This shows that uniform convergence is indeed stronger than pointwise
convergence and we will show below that it is strong enough to do what we
want from it. It also shows that attempting to establish uniform convergence
makes sense only on subsets of the region of convergence of
{*f*_{k}}, and it tells us where to get the candidate function *f*:
this Fact shows that we simply take the *f* from pointwise convergence,
which is something that we can usually handle well. To avoid the unpleasant
epsilon-delta game we then turn to another observation.

Fact.

Consider a sequence of functions {f_{k}} and a function f. Let M be some subset of the region of convergence of {f_{k}}.

Then f_{k} ⇉ f on M if and only if the numbers sup_{M}| f_{k} − f | converge to 0 as k goes to infinity.

This is in fact nothing deep, just a slightly different way of writing the
definition; the supremum tells us how good a particular approximation of
*f* by some *f*_{k} is globally on the set
*M*, and we want this approximation to get as good as we need. Since
the supremum can be often evaluated (or at least estimated), this is very
practical. We get *f* from pointwise convergence and finding supremum
of a function is also a standard problem.

**Example:**
Consider the sequence given by *f*_{k}(*x*) = *x*^{k}. We know that it
converges pointwise to *f* = 0 on *M* = (−1,1). For every *k* we obtain

sup_{M}| *f*_{k}(*x*) − *f*(*x*)|
=
sup{|*x*^{k}|; −1 < *x* < 1}
= 1.

These suprema do not converge to 0, so the convergence is not uniform on
(−1,1). We see that quality of approximations (or speed of convergence) gets
really bad near 1 and −1. So let us pick some 0 < *a* < 1 and pass to the
smaller set *M* = [−*a*,*a*]. For every *x*
from this set we have |*x*^{k}| ≤ *a*^{k}, so

sup_{M}| *f*_{k}(*x*) − *f*(*x*)|
= sup{|*x*^{k}|; −*a* ≤ *x* ≤ *a*}
= *a*^{k}→0.

In the last step (when evaluating the limit) we used the fact that
|*a*| < 1. Thus the sequence *x*^{k} converges uniformly to 0 on all
intervals [−*a*,*a*] with 0 < *a* < 1. So when we step back a bit from the
troublesome endpoints (although *a* can be arbitrarily close to 1), the
situation improves
dramatically. This is in fact rather typical, a reasonable series has
troubles with convergence at the endpoints of its region of convergence
(after all, convergence "ends" there, and it is hardly natural that it would
stop abruptly, in a typical case it is getting steadily worse until it fails
completely). When we cut off these endpoints, convergence becomes uniform.
This behavior can be observed in many solved problems on this topic, see
Solved Problems - Series of
functions. Some sequences are even better, they converge uniformly
everywhere, again see Solved Problems.
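The supremum criterion from the Fact above can also be estimated numerically; the following sketch samples |x^k| on a grid:

```python
# Estimate sup over M of |f_k - f| for f_k(x) = x**k, f = 0, on M = [-a, a]
# by sampling a grid; by the computation above the sup equals a**k.
def sup_error(k, a, n=1000):
    xs = [-a + 2 * a * i / n for i in range(n + 1)]
    return max(abs(x ** k) for x in xs)

a = 0.9
sups = [sup_error(k, a) for k in (1, 10, 50)]
# the sups shrink toward 0 (uniform convergence on [-0.9, 0.9]):
assert sups[0] > sups[1] > sups[2]
assert abs(sups[2] - a ** 50) < 1e-12

# but with a close to 1 the sup stays large for a long time,
# reflecting the failure of uniform convergence on (-1, 1):
assert sup_error(50, 0.999) > 0.9
```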

Similarly we easily show that in the example with sliding triangles above
the convergence is not uniform on [0,∞), but it is uniform on every interval
[*a*,∞) with *a* > 0, since eventually all the triangles slide completely to
the left of such an *a*.

Uniform convergence is markedly better than pointwise convergence. For instance, recall that we had troubles with composition. With uniform convergence we do not have to worry.

Theorem.

Assume that a sequence of functions {f_{k}} converges uniformly to some continuous function f on a set M and that a sequence of functions {g_{k}} converges to a function g on a set N. Assume also that all g_{k} map N into M. Then the sequence {f_{k}(g_{k})} converges to f(g) on N.

Now we look at the properties discussed above.

Theorem.

Consider a sequence of functions {f_{k}} that converges to a function f.

(i) If all f_{k} are continuous on a set M and {f_{k}} converges uniformly to f on M, then f is also continuous on M.

(ii) If all f_{k} are continuous on a set M and {f_{k}} converges uniformly to f on M, then for every interval [a,b] that is a subset of M one has

∫_{a}^{b} f_{k}(x) dx → ∫_{a}^{b} f(x) dx.

(iii) If all f_{k} are continuously differentiable on a set M and the sequence of derivatives {f_{k}′} converges uniformly to some function g on M, then f is differentiable on M and f ′ = g.

Moreover, {f_{k}} actually converges to f uniformly on M.

The fact that (iii) is so complicated shows that derivatives can be quite
tricky, even uniform convergence of *f*_{k} is not
enough to get something reasonable and one has to ask things about
derivatives. Then it is just an application of a certain special version of
(ii). Since this version is of some independent interest, we state it here.

Proposition.

Consider a sequence of functions {f_{k}} that converges uniformly to a function f on some interval M. Assume that all f_{k} are continuous on M. Fix some a from M and for all x from M define

F_{k}(x) = ∫_{a}^{x} f_{k}(t) dt,  F(x) = ∫_{a}^{x} f(t) dt.

Then F_{k} converge uniformly to F on M.

We know (see e.g. the Fundamental theorem of Calculus in Integrals - Theory -
Introduction) that those *F*_{k} are antiderivatives of
*f*_{k} and *F* is an antiderivative of *f*.
Thus we see that uniform convergence of continuous (therefore integrable)
functions guarantees convergence of their integrals and we had to take
special antiderivatives to avoid trouble with arbitrary constants, see
discussion above.

We also have a special local version of (i) that is sometimes useful.

Proposition.

Consider a sequence of functions {f_{k}} defined on some neighborhood of a point a. Assume that for every k, lim_{x→a}( f_{k}(x)) = A_{k} for some real number A_{k}.

If f_{k} converge uniformly to some function f on some neighborhood of a, then {A_{k}} is a convergent sequence and lim_{x→a}( f(x)) = lim( A_{k}).

In short both (i) and this Proposition say that uniform convergence allows for changing the order of limits as we discussed above. Similarly, (ii) and (iii) state that under appropriate assumptions (uniform convergence at the right place) we can change the order of limit and integration, or limit and derivative.

Given a sequence of functions {*f*_{k}}, we can also ask about its
monotonicity, which is again defined through points.

Definition.

Consider a sequence of functions {f_{k}}, let M be some set on which all f_{k} are defined.

(1) We say that this sequence is **increasing on** M if for every x from M the sequence {f_{k}(x)} is increasing.

(2) We say that this sequence is **non-decreasing on** M if for every x from M the sequence {f_{k}(x)} is non-decreasing.

(3) We say that this sequence is **decreasing on** M if for every x from M the sequence {f_{k}(x)} is decreasing.

(4) We say that this sequence is **non-increasing on** M if for every x from M the sequence {f_{k}(x)} is non-increasing.

We say that this sequence is **monotone on** M if it satisfies one of the above properties.

We say that this sequence is **strictly monotone on** M if it is increasing on M or decreasing on M.

In fact, here it is not necessary to do it pointwise, we can make do just
with functions as objects, since in Functions - Theory - Real function -
Operations with
functions we defined comparison (inequality) for functions on sets. Thus
the sequence {*f*_{k}} is increasing on *M* if
*f*_{k+1} > *f*_{k} on *M* for every *k*, and similarly we can define the
other properties. It is actually the same, those inequalities were defined
via points anyway, but in this way we hide it.

As an example we take the sequence in our first example (those powers). It
is decreasing on (0,1): for every *x* from this interval we have
*x*^{k+1} < *x*^{k}.

Uniform convergence is studied preferably on closed sets, because many things work better there (note that we also preferred statements with closed intervals above). In particular, on a closed interval one has the following interesting statement.

Theorem (Dini's theorem).

Consider a sequence of continuous functions {f_{k}} that converges to a continuous function f on a closed interval [a,b] for some real numbers a < b.

If the sequence {f_{k}} is monotone, then the convergence is uniform.
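As a numerical illustration of Dini's theorem (with a sample sequence of our own choosing), f_k(x) = (1 + x/k)^k increases at every point of [0,1] toward the continuous limit e^x, and the worst-case errors over the interval indeed shrink:

```python
import math

# f_k(x) = (1 + x/k)**k is non-decreasing in k for x >= 0 and converges
# to the continuous function e**x on [0, 1]; Dini's theorem then promises
# that the convergence is uniform, i.e. the sup errors go to 0.
def f(k, x):
    return (1.0 + x / k) ** k

def sup_error(k, n=1000):
    return max(abs(math.exp(i / n) - f(k, i / n)) for i in range(n + 1))

# monotone in k at a sample point:
assert f(1, 0.7) < f(2, 0.7) < f(10, 0.7)

# worst-case error over [0, 1] decreases toward 0:
errs = [sup_error(k) for k in (1, 10, 100)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 0.02
```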