# Basic properties

We start by looking at a few basic properties of limits. Then we look at theorems about operations, which lead directly to the limit algebra, our main tool for evaluating limits. Then we investigate some interplay between convergence, monotonicity, and boundedness; we also look at limits and subsequences. At the end we briefly introduce the notion of a Cauchy sequence.

## A few simple statements

The following statements should be clear if you understand what limit means.

Fact.
A sequence {an} converges to L if and only if {an − L} converges to 0.

Fact.
If a sequence {an} goes to L, then {|an|} goes to |L|.

Fact.
A sequence {an} goes to 0 if and only if {|an|} goes to 0.

Fact.
Assume that the sequences {an} and {bn} converge. The limit of {an} is equal to the limit of {bn} if and only if {an − bn} goes to 0.

Note that the last statement is no longer true if we drop the assumption about convergence.

Fact.
If a sequence {an} has a non-zero limit, then there exist a natural number N and a constant m > 0 such that |an| > m for all n > N.
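These facts are easy to check numerically. Here is a small sketch; the sequence an = 2 + (−1)^n/n and the bound m = 1 are our own illustrative choices, not taken from the text:

```python
# a_n = 2 + (-1)^n / n converges to L = 2, a non-zero limit.
def a(n):
    return 2 + (-1) ** n / n

# {a_n - L} goes to 0, and {|a_n|} goes to |L| = 2.
print(abs(a(10**6) - 2))   # tiny
print(abs(a(10**6)))       # close to 2

# The last fact: the terms eventually stay above some m > 0 in absolute
# value; here m = 1 works for all n >= 2.
print(all(abs(a(n)) > 1 for n in range(2, 10_000)))
```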

## Limits and Operations

When evaluating a limit, the starting point is elementary limits that we are supposed to remember; these are the basic building blocks. More complicated expressions are created by combining such elementary expressions, so we also need to know how to put them together in a limit.

Theorem (limits and algebraic operations).
Assume that the sequence {an} has limit A and the sequence {bn} has limit B. Then the following is true:
(i) For any real number c, the sequence {can} has limit cA if it makes sense.
(ii) The sequence {an + bn} has limit A + B if it makes sense.
(iii) The sequence {an − bn} has limit A − B if it makes sense.
(iv) The sequence {an⋅bn} has limit A⋅B if it makes sense.
(v) The sequence {an/bn} has limit A/B if it makes sense.
(vi) The sequence {an^bn} has limit A^B if it makes sense.

Now what is it about making sense? If A and B are real numbers, that is, if the two sequences are convergent, then the operations (i) through (iv) always make sense. However, the ratio A/B only makes sense if B is not zero. And this is exactly what is meant: we can apply this theorem if what we get at the end makes sense. What we really have here is an extension of the usual algebra. Before, when we wrote "3 + 2 = 5", we meant that three apples added to two apples give five apples. But now it can also mean that "a sequence converging to 3 when added to another sequence that happens to converge to 2 will yield a sequence that converges to 5". This is the "limit algebra", and unlike the usual algebra, this one also features infinities.

We could present a theorem now with many statements, but it is much easier to start from the other end. Note that in the theorem above we did not assume that A, B are finite, and some operations can be defined also for cases when they feature infinity. If we use these operations in the above theorem and deem that they "make sense", then all the results we obtain in this way are correct. What operations can we introduce?

For instance, what do we get if we add or multiply two immensely huge numbers? Another immensely huge number. We just argued that ∞ + ∞ = ∞ and ∞⋅∞ = ∞.
One can also ask what happens when infinities are mixed up with ordinary numbers. For instance, when we subtract 13 from a really huge number, we are still left with a huge number (a millionaire who loses a dollar is still basically a millionaire). Indeed, for any real number L we have ∞ − L = ∞.

Now, what do we get if we subtract an immensely huge number from another immensely huge number? Well, that depends. They may be equal and we get zero. Or one might be larger, and then the outcome depends on which one and by how much. This shows that the difference ∞ − ∞ can have many possible outcomes, depending on the situation. Indeed, this operation is not defined, since it cannot be done reasonably. Such expressions are called "indeterminate".
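To see this indeterminacy concretely, here is a small numerical sketch; the particular sequences n + 7, n², and n are our own illustrative choices. Each pair consists of two terms going to infinity, yet the differences behave completely differently:

```python
# Three differences of the type "infinity minus infinity" with three
# different outcomes, evaluated at a large index n.
n = 10 ** 6
print((n + 7) - n)   # sequences (n+7) and n: the difference is always 7
print(n * n - n)     # sequences n^2 and n: the difference grows without bound
print(n - n)         # sequences n and n: the difference is always 0
```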

For a complete list of all operations, along with notes and more details, click here. In particular, you will find there an important remark concerning some limitations of this algebra. In other words, definitely check it out. For a brief list of the limit algebra, click here.

Note that "making sense" for working with limits is somewhat different from making sense for numbers. The reason is that now the numbers A, B do not represent real numbers, that is, fixed quantities, but outcomes of limits; in other words, they represent processes, "almost numbers". This has the effect that some operations, although they can be performed with real numbers, do not work with limits. The best example is the power 0^0. We know that as a number it makes sense, it gives 1. However, if these zeros represent limits of sequences, then we are in the situation of looking for a limit of the general power an^bn. When n goes to infinity, both an and bn are close to 0, but a small number raised to a small number need not be close to 1; it might be very small or very huge, depending on which "almost zero" is closer to zero. Thus the outcome of the limit (that is, of the expression 0^0 in the limit algebra) depends on how fast an and bn go to 0; the limit might not even exist. In the limit algebra, 0^0 is an indeterminate expression.
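A quick numerical sketch of this; the sequences 1/n and 1/ln(n) are our own illustrative choices. Both the base and the exponent tend to 0, yet the two powers have different limits:

```python
import math

n = 10 ** 6
# (1/n)^(1/n) = e^(-ln(n)/n) -> e^0 = 1
print((1 / n) ** (1 / n))
# (1/n)^(1/ln(n)) = e^(-ln(n)/ln(n)) = 1/e for every n
print((1 / n) ** (1 / math.log(n)))
```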

Indeterminate expressions are as important as those of the limit algebra. When calculating a limit, one should know what works and also what does not. For a complete list of indeterminate expressions, along with notes and more details, click here. For a brief list, click here.

Finally, here you will find some remarks about operations with sequences, some of which do not have a limit. This is of less interest and we include it only for the sake of completeness, or to satisfy the more curious reader.

In practice we thus follow a very simple rule.

If we want to find the limit of a sequence given by some expression, we "substitute" infinity into it, and if the operations involved make sense, then the outcome is the correct answer to our limit.

However, note that this is quite informal and some profs are allergic to it. Therefore it is safer to do all "infinity calculations" on the side. In our calculations we put them, along with other remarks, between big double angled braces ⟪ and ⟫ to indicate that they are not parts of the "official" solution. Here is a very simple example, written in a long way with all the steps; usually you would do it faster.

We can rarely get an answer that easily, in particular because often the "little pieces" are a bit more complicated and we do not know their limits right away, we need to work them out before we can try to put them together. Then the above theorem as stated is a bit awkward; it is more convenient to express it in this form:

Important note: each of these equalities is true only if the expression on the right makes sense. They are therefore "conditional": until we know that the final answer of our calculation makes sense, the equalities in it need not be true. To put it another way, it is no good splitting an expression into parts if what we get at the end makes no sense. For example, the constant sequence 1 has limit 1 at infinity. However, if we write it as 1 = (1 + n) − n and calculate the limit of each term separately, we get something that does not make sense: ∞ − ∞.

Now we will show a simple example:

Example: Find the limit of the sequence whose terms are an = (13 − 1/√n + (−1)^n/n) / (n + 5⋅2^(−n)).

We see that n is always inside some simple term whose limit at infinity we already know (see elementary limits). Namely, constants 13 and 5 converge to themselves. Further:
(i) We know that 1 divided by the root of n, which is in fact 1/n^(1/2), tends to zero (n with a positive power is in the denominator). Another way to see this: the root of n goes to infinity and 1/∞ = 0.
(ii) By the Squeeze theorem (see Limit and comparison in Theory - Limits) we know that (−1)n/n→0.
(iii) We know that n→∞.
(iv) In order to see the second term in the denominator, we first rewrite it to get a positive power: 2^(−n) = 1/2^n. Now 2^n is a geometric sequence whose base 2 is greater than one, so it goes to infinity. Consequently, 1/2^n goes to 1/∞ = 0.
It is also possible to write 2^(−n) = 1/2^n = (1/2)^n. Now it is a geometric sequence whose base 1/2 is (in absolute value) less than 1, so it converges to 0.

We now use the above theorem to put these basic facts together and find the limit of the given sequence. By the theorem, the numerator converges to 13 − 0 + 0 = 13, while the denominator converges to ∞ + 5⋅0 = ∞. Finally, the whole sequence must converge to 13/∞ = 0.
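As a numerical sanity check of this computation (assuming the sequence an = (13 − 1/√n + (−1)^n/n)/(n + 5⋅2^(−n)) worked out above), the terms should indeed shrink toward 13/∞ = 0:

```python
# Terms of the sequence from the example; they should tend to 0.
def a(n):
    return (13 - 1 / n ** 0.5 + (-1) ** n / n) / (n + 5 * 2.0 ** (-n))

for n in (10, 10**3, 10**6):
    print(n, a(n))   # the values shrink toward 0
```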

We will now show how to write this procedure using the limit notation. Here we will write all the steps to show how we decompose the given expression step by step, usually one would write it much shorter.

This solution was correct, but rather long. Applying the limit algebra and doing some unofficial calculations on the side (between double angled braces) we can do it much faster:

And that is my favourite way of handling this problem - correct and short.

For some tips and insight into applying this theorem, see Methods Survey - Limit.

We still have not covered one important operation, that of composition.

Theorem.
Let {an} be a sequence with limit A, and assume that an ≠ A for all n. Let f be a function which has limit B as x → A. Then the sequence {f(an)} has limit B.

Here A and/or B may be also infinite, as long as the statements involved make sense. The most typical case is when A is a number and f is continuous at A, which for all practical purposes means that f is given by some formula which does not mind having A substituted into it. In this case, the theorem can be expressed like this:

Application is easy. If we are looking for the limit of a sequence which has the form of "some expression inside a nice function", then we can ignore the function, find the limit of the expression inside, and then put this limit into the function. A simple example is here.
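For instance (an illustrative choice of ours, not the linked example): the inner expression (n + 1)/n tends to 1, and the square root is continuous at 1, so √((n + 1)/n) → √1 = 1.

```python
import math

# The inner limit is 1; substituting it into the continuous function sqrt
# gives the answer sqrt(1) = 1.
def a(n):
    return math.sqrt((n + 1) / n)

print(a(10**6))   # very close to 1
```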

Note that this in some sense fits with the above example about using limit algebra. There we for a moment ignored operations and just focused on simple terms, basic building blocks from which the given expression is built. We realized that we knew what their limits are, then we put them together to get the final answer. This theorem tells us that we can also ignore functions at first, evaluate simple terms, then not just compose the partial answers together using limit algebra, but we can also substitute partial answers into functions and if it makes sense, we get a correct answer. If you look for instance at this example, it should be clearer.

Thus the practical rule - substitute and see - applies also to expressions with composition, it is a general rule for limits that we use as our first approach. Of course, many, perhaps most limits cannot be solved in this way. Then we have to use tricks that change the given sequence into another one that can be solved the basic way - by putting together simple results using the limit algebra.

One more very important rule: Unless you know what you are doing, always finish all parts. In particular, if you split a limit of a product into a product of smaller limits and one of them comes up as zero, you cannot stop calculations and claim that the whole thing is zero. Granted, zero times a number is again zero, but that only works in the usual algebra. In the limit algebra we can also have "zero times infinity", which is an indeterminate product that can be anything. As an example we try a different decomposition of 1:

lim(1) = lim((1/n)⋅n) = lim(1/n)⋅lim(n) = 0⋅∞.

Obviously it would be a mistake to stop once we saw that the first limit was zero, but after completing the other part we see the indeterminate product and know that it was not a good idea to split the original limit into two. For more details, see this note.
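A companion numerical sketch (the particular products are our own illustrative choices): in each product one factor goes to 0 and the other to infinity, with three different outcomes.

```python
# Three products of the type "zero times infinity", evaluated at a large n.
n = 10 ** 6
print((1 / n) * n)        # coming out as 1
print((1 / n) * n ** 2)   # coming out as infinity (the product equals n)
print((1 / n ** 2) * n)   # coming out as 0 (the product equals 1/n)
```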

## Limit and boundedness, monotonicity

We saw that a convergent sequence can approach its limit in strange ways, so definitely convergent sequences cannot be expected to be monotone in general. Boundedness is not so hopeless:

Theorem.
Every convergent sequence is bounded.

Can we conversely get some information about convergence from the two basic properties? The contrapositive of the above statement says that an unbounded sequence must be divergent; that is one piece of information. Can we get something positive, too? No. The example of the alternating sequence shows that a bounded sequence need not be convergent; it need not even have a limit at all (not even an improper one). However, if we are willing to lose some terms of the given sequence, we do get something out of boundedness (cf. Bolzano-Weierstrass theorem in Functions - Theory - Real numbers - Topological notions):

Theorem (Bolzano-Weierstrass theorem).
Every bounded sequence has a convergent subsequence.

Another useful property is monotonicity. If you try to imagine all kinds of increasing sequences, you should start having the (correct) feeling that such sequences either grow towards some upper bound which is then their limit and they converge, or they grow above all possible bounds and they therefore tend to infinity; in any case, they have a limit. Indeed, this is true, and moreover boundedness provides a nice way to avoid that infinity.

Theorem.
Every monotone sequence has a limit.
Every bounded monotone sequence is convergent.

In more detail, every non-decreasing sequence (in particular every increasing sequence) either converges or goes to infinity, and every non-increasing sequence (in particular every decreasing sequence) either converges or goes to minus infinity.
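A classic illustration (our own choice of example): the recursively given sequence a1 = √2, a(n+1) = √(2 + an) is increasing and bounded above by 2, so by the theorem it converges; its limit L must satisfy L = √(2 + L), that is, L = 2.

```python
import math

# Iterate a_{n+1} = sqrt(2 + a_n) starting from a_1 = sqrt(2).
a = math.sqrt(2)
for _ in range(50):
    a = math.sqrt(2 + a)

print(a)   # essentially 2, the limit guaranteed by the theorem
```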

## Limit and subsequences

Theorem.
If a sequence is convergent, then all its subsequences are also convergent and converge to the limit of the original sequence.

This is not exactly useful when investigating convergence, but the following, weaker statement in a sort of opposite direction is often useful.

Fact.
If a given sequence has two subsequences that converge to different limits, then the given sequence diverges.

For instance, from the alternating sequence {1, −1, 1, −1, 1, −1, 1,...} we can take all odd terms to form the subsequence {1, 1, 1, 1, 1,...} converging to 1, and we can take all even terms to form the subsequence {−1, −1, −1, −1, −1,...} converging to −1. By the above fact, the alternating sequence diverges.
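This argument is easy to replay numerically (a small sketch of ours):

```python
# First 20 terms of the alternating sequence 1, -1, 1, -1, ...
seq = [(-1) ** (n + 1) for n in range(1, 21)]

odd_subseq = seq[0::2]    # terms with odd index: all equal to 1
even_subseq = seq[1::2]   # terms with even index: all equal to -1
print(odd_subseq)
print(even_subseq)
```

Since the two subsequences converge to the different limits 1 and −1, the original sequence diverges.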

## Cauchy sequences

One useful observation one can make about a sequence is that as it goes along, it changes less and less. To define this formally we again use the idea of a game. Somebody gives us a tolerance and we want to be able to throw away some beginning of the given sequence so that its remaining terms never jump by more than this tolerance.

Definition.
Consider a sequence {an}. We say that this is a Cauchy sequence, or that this sequence is Cauchy, if for every ε > 0 there is some natural number N so that for all m,n ≥ N we have |an − am| < ε.
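The defining game can be sketched numerically. For instance, for an = 1/n and a given tolerance ε, the cutoff N = ⌈2/ε⌉ works, since |1/n − 1/m| ≤ 1/N ≤ ε/2 for all n, m ≥ N (the sequence 1/n and this particular cutoff are our own illustrative choices):

```python
import math

def a(n):
    return 1 / n

eps = 0.01
N = math.ceil(2 / eps)
# Sample a stretch of the tail and measure the largest jump within it.
tail = [a(n) for n in range(N, N + 500)]
spread = max(tail) - min(tail)   # worst |a_n - a_m| over this sample
print(spread < eps)              # the tail stays within the tolerance
```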

If a sequence converges, then it settles down to some value and does not change much. This seems clear and in fact it is simple to prove. Less easy to prove (for instance using the Bolzano-Weierstrass theorem above) is the fact that if a sequence settles down, then it should converge (which again sounds like common sense). Thus we get the following theorem.

Theorem.
A sequence of real numbers is convergent if and only if it is Cauchy.

Note that this theorem only gives convergence, not the actual value of the limit, suggesting that this result is more theoretical than practical. This is true; still, in theory it is indispensable and extremely useful in many situations.

We mentioned above that one implication is less trivial. While convergent sequences are Cauchy even for sequences of elements from fairly general spaces, the fact that Cauchy sequences converge is not automatic. For instance, when we work in the world of rational numbers, then this is no longer true (see completeness in Extra - Sets and mappings - Important sets of numbers). The statement that Cauchy sequences of real numbers converge is sometimes called the Bolzano-Cauchy theorem.