Intuitive evaluation

Here we will talk about how to guess the limit of a sequence. This can be applied to many sequences and uses several ingredients. First of all, one has to remember the elementary limits and the limit algebra. The second important ingredient is an understanding of the interplay between powers and exponentials (and roots). We will start with a little introduction.

In many sequences one finds sums of powers and similar expressions. For instance, consider the sequence {n^3 + 5n^(1/2) − n^(−2)}. The first two powers go to infinity (positive exponent), the third power is properly written as 1/n^2, which converges to 0. Using the limit algebra we easily find the limit of the whole sequence: ∞ + 5⋅∞ − 0 = ∞.

Now here's a problem: What is the limit of {n^3 − 5n^2}? We know that ∞ − ∞ is an indeterminate expression. There are procedures for finding the outcome, but the appropriate methods sometimes lead to complicated calculations. In this section we will learn how to guess the correct answer and how to prove it easily (sort of).

By the way, when we add and subtract some objects, and even multiply them by real numbers as we did in the first example, we call the result a "linear combination". So the expression in the first example was a "linear combination of powers". We will use this terminology throughout this section. First we will identify what kind of problems we will address.

Polynomials and other sums

Note that negative powers do not cause trouble in expressions like the one above; we saw that n^(−2) tends to zero and therefore does not cause trouble in the limit algebra when added to something else. Therefore we will only consider positive exponents in this section. If the exponents also happen to be all integers, we are actually talking about polynomials here. But we will work in more generality and also allow positive exponents that are not integers (for instance the exponent 1/2, that is, the square root).

We will also allow for "exponentials", powers of the form e^n, 2^n etc. Again, since we know that q^n → 0 if |q| < 1 and 0 is not a problem in sums, we will only consider exponentials q^n with |q| > 1.

But that is not all. While "linear combinations" of powers and exponentials as above (powers/exponentials multiplied by numbers and then added/subtracted) appear quite often, we will include still more expressions: logarithms, factorials and even powers of the form n^n. Now we are ready to state the problem we are trying to address here:

Question:
We have an expression of the form αA(n) + βB(n) +..., where the functions A(n), B(n),... can be powers n^a with a > 0, exponentials q^n with |q| > 1, powers of logarithms [ln(n)]^a with a > 0, factorials like n! and general powers like n^n. What happens with this expression if we let n go to infinity?

Note that all the expressions listed above tend to infinity as n→∞. When they are combined in some linear combination, limit algebra gives a result right away only if all these infinities are added; if some are subtracted, we typically get an indeterminate expression involving infinities. What then? We will soon see that some infinities are "bigger", or more important, than others.

Solution to the problem:
If we let n→∞, then an expression as above will behave exactly like its dominant term. To prove it mathematically, we factor the dominant term out of the expression and then apply the limit. The expression that remains after factoring should tend to a real number.
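
For instance, for the problem posed at the beginning we can factor out n^3 (it is the dominant term here, as explained below): n^3 − 5n^2 = n^3⋅(1 − 5/n). Since 1 − 5/n → 1 and n^3 → ∞, the limit is ∞⋅1 = ∞.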

By a "dominant term" we mean the term that, for large n, prevails over all other terms in the given linear combination, and so the other terms can be ignored, they do not interfere with the tendency that the dominant power has around infinity. There is a hierarchy among all the types listed above (powers, exponentials, logarithms,...). When we say that "term A prevails over term B for large values of n", we really mean that when investigating the limit of the expression αA + βB for n tending to infinity, we can disregard the term B and just worry about A. For more insight into this comparison of infinities, click here.

Example:
We will shortly see that n^2 prevails over ln(n). Now imagine that we need to find the limit of the sequence {23ln(n) − n^2}. When n grows really large, the second term prevails and we can ignore the first one. Thus the sequence will behave (when n grows to infinity) like the sequence {−n^2}, which we know tends to negative infinity. Consequently, the given sequence also goes to negative infinity.

Of course, this result is just guessing. We wrote above (in Solution) that mathematically we can do this by factoring out the dominant term. We try it:

Now the first term converges to infinity. What about the part in parentheses? The ratio is of the type infinity over infinity, which is handled using l'Hospital's rule. By a remarkable coincidence we investigated exactly this problem in the previous section L'Hospital's rule, so we know that this ratio converges to zero. Thus we can finish the problem:
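
lim(23ln(n) − n^2) = lim n^2⋅(23ln(n)/n^2 − 1) = ∞⋅(0 − 1) = ∞⋅(−1) = −∞.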

Note that the fraction that appeared after factoring out the dominant power indeed tended to a real number, exactly as stated above.

There is a way to write properly not just the correct mathematical calculation, but also our intuitive reasoning. When we write A(n) ∼ B(n), we mean that the expressions A and B behave the same when n→∞; for most practical purposes, the expressions A(n) and B(n) are the same when n is really really large. We also say that A and B are "of the same order". So the above intuitive reasoning can be written as follows:

23ln(n) − n^2 ∼ −n^2 → −∞.

Note that this handy way of writing is not universally accepted. Also, it does not constitute a proper solution - after all, we were just guessing there. Any answer that we get in this way should be confirmed by a correct mathematical calculation, for instance the factoring out procedure that we have shown above.

When we look at powers in a linear combination of powers and other similar terms, we actually do two different things. First, we look just at the powers, exponentials etc., that is, we ignore the multiplicative constants before them. In this way we determine the type of an expression. For instance, the expression "13⋅[ln(n)]^3" is of the type [ln(n)]^3, and the example above is of the type n^2, because the type of an expression is given by the type of its dominant term. The type tells us how the given expression approximately behaves when n is really large.

This type is used when roughly comparing behaviour of different expressions, finding out which can be ignored etc. Then when we really start guessing the limit at infinity, we do have to include the multiplicative constants in our reasoning. Thus we would say that 23ln(n) − n^2 is of the type n^2 when n gets large, but then we would have to say that it behaves like −n^2 when we try to guess the limit; that is, we do have to carry the constants when using the ∼ procedure. We will return to this topic later.

As you can see, the intuitive procedure can be simple, writing it properly mathematically (by factoring) may be a bit longer but should never be tricky, as long as we correctly identify and factor out the dominant term. Which brings us to the main part of this section:

Scale of powers

Here is the list of the terms mentioned above from the most dominant to the least. That is, every listed expression prevails over all expressions listed later:

(1)   the power n^n,
(2)   the factorial n!,
(3)   the exponential q^n for |q| > 1,
(4)   the power n^a for a > 0,
(5)   the logarithm [ln(n)]^a for a > 0.

For practical use people often prefer a more colloquial way of remembering this hierarchy, using phrases like "powers beat logarithms" and "exponentials beat powers" etc.
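
As a quick numerical illustration of this hierarchy (not a proof, just a sanity check), one can compare representatives of all five categories for a moderately large n, for instance in Python:

    import math

    n = 30
    print(n**n)                 # n^n: about 2.1e44
    print(math.factorial(n))    # n!: about 2.7e32
    print(2**n)                 # q^n with q = 2: 1073741824
    print(n**3)                 # n^a with a = 3: 27000
    print(math.log(n)**3)       # [ln(n)]^a with a = 3: about 39.4

Keep in mind that the hierarchy asserts itself only eventually; for instance, for q close to 1 the exponential q^n overtakes a given power n^a only for rather large n.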

When investigating a linear combination of such terms, we always first find the dominant category. However, there may be several terms from this dominant category, so we also need to know mutual dominance within each category. There the rule is simple: in categories (3), (4) and (5) there is always a parameter (a or q), and the term with the larger parameter is the dominant one.
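
For instance, n^(7/2) prevails over 5n^3 (since 7/2 > 3), so 5n^3 − n^(7/2) ∼ −n^(7/2) → −∞.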

Note that if q < −1, then {q^n} does not have a limit. Thus, if such a term prevails in a linear combination, then this combination does not have a limit. When comparing exponentials between themselves, the absolute value |q| determines dominance.

In fact, the question of dominance is one kind of an answer to the question "which infinity is larger". Since for q < −1 we do not have q^n → ∞, we will now only consider q > 1. In the following picture (not quite to scale) we will try to express symbolically the relationship between different kinds of expressions, obtaining the scale of powers:
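
In symbols, with a, b > 0 and q > 1: [ln(n)]^a ≪ n^b ≪ q^n ≪ n! ≪ n^n, where A ≪ B means that A is eventually negligible compared with B.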

Example:
What is the limit of {13⋅2^n + n^2 − 5⋅3^n − [ln(n)]^(1/2)}?

Solution: There are three categories there: exponentials, powers and logarithms. Exponentials are the highest on the list, so they will supply the dominant term. There are two candidates, 2^n and 3^n. Since 3 > 2, the latter exponential is the dominant term, that is, the given expression is of the type (or order) 3^n. Thus we can ignore all the other terms and guess that

13⋅2^n + n^2 − 5⋅3^n − [ln(n)]^(1/2) ∼ −5⋅3^n → −5⋅∞ = −∞.

How do we confirm this result mathematically? By factoring the dominant term out.
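
Factoring out 3^n we can write

13⋅2^n + n^2 − 5⋅3^n − [ln(n)]^(1/2) = 3^n⋅(13⋅(2/3)^n + n^2/3^n − 5 − [ln(n)]^(1/2)/3^n).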

The three fractions in the parentheses are best handled separately. The first one is just a geometric sequence, the other two will lead to l'Hospital's rule after passing to investigation of functions:
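
(2/3)^n → 0, since it is a geometric sequence with |q| < 1; n^2/3^n → 0 and [ln(n)]^(1/2)/3^n → 0, since the corresponding ratios of functions x^2/3^x and [ln(x)]^(1/2)/3^x tend to 0 at infinity by l'Hospital's rule.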

Finally we get
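
lim(13⋅2^n + n^2 − 5⋅3^n − [ln(n)]^(1/2)) = lim 3^n⋅(13⋅(2/3)^n + n^2/3^n − 5 − [ln(n)]^(1/2)/3^n) = ∞⋅(13⋅0 + 0 − 5 − 0) = ∞⋅(−5) = −∞.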

Note that sometimes we can apply this reasoning even to terms that do not look exactly like those above, but can be changed into them by algebra. The two most typical examples: (2n)^3 = 2^3⋅n^3 = 8⋅n^3 and 3^(2n+1) = (3^2)^n⋅3^1 = 3⋅9^n.

Roots

The intuitive procedure also works if there are some roots mixed into the expression; that is, if some parts of the expression are enclosed in roots. We then use the following procedure. First we handle each root individually. For every root, we find the dominant term of the expression inside this root; this determines the type of the expression under the root. When we apply the root to this expression, we obtain the type of the root as a whole. We can confirm this by factoring it out; the resulting root then should have a proper limit at infinity.

After handling the roots (if any), we put all the types together (those of terms that were by themselves, and those of the roots) and determine the dominant term of the whole expression. This procedure may be repeated several times, in case there is a root under which there is an expression with another root, and so on. Finally, having determined the dominant term of the whole given expression, we can handle it as above; that is, factor it out of the given expression and check the limit.

Note: The factoring part is usually easier when one mirrors the guessing part; that is, first factor dominant terms out of the roots and then work your way outward.

Example: Find (if it exists)
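
lim(√(9n^6 + n^4) − 2n^3)   as n→∞.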

First we check on the root. There are just powers under it, so they are in the same category and the one with the higher exponent wins. In our case, the expression under the root is of the type n^6. When the root is applied, we find that the root itself is of the type n^(6/2) = n^3, just like the second part of the given expression. Thus we get two terms of the same type, two dominant terms, and therefore neither can be ignored. We try to do the guessing now, first ignoring the term under the root that we know can be ignored.
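
√(9n^6 + n^4) − 2n^3 ∼ √(9n^6) − 2n^3 = 3n^3 − 2n^3 = n^3 → ∞.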

We obtained our guess (on the way we saw that the root by itself behaves like 3n^3 for large n) and now we prove it. We start by pulling out the dominant factor from the root, as recommended.
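
√(9n^6 + n^4) − 2n^3 = √(n^6⋅(9 + 1/n^2)) − 2n^3 = n^3⋅√(9 + 1/n^2) − 2n^3 = n^3⋅(√(9 + 1/n^2) − 2) → ∞⋅(3 − 2) = ∞.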

In fact, the guessing part above might have been wrong and we would not have known it, because we have not yet covered an important topic. We got lucky there, but now it is time for a warning.

Warning: What happens if we get several dominant terms? If they are added, we can add them safely. If they are subtracted (that is, if limit algebra would lead to the indeterminate form ∞ − ∞), then we have to be very careful. If applying the usual algebra to dominant terms in the guessing phase would preserve this dominant term, we can do it. If algebra would make this term disappear, we cannot do it!

For example, if we had 4n^6 instead of 9n^6 in this last example, the root would behave like 2n^3, which would cancel with the other term. Then the step 2n^3 − 2n^3 = 0 cannot be done.

Why is it so? Because when we guess, each term actually represents not just itself; there may be other terms of lower importance hidden in it (like the "+n^4" part in the above example). When we subtract dominant terms and something is left, then the parts that we ignored before can still be ignored (the dominant term that overshadowed them is still there) and the guessing calculation is correct. However, if algebra caused the dominant term to disappear, then one of the terms we previously ignored would suddenly get promoted to dominance, and this new dominant term would then determine the outcome of the limit! This limit is most likely not equal to the zero that we would get by cancelling the dominant terms. In such cases we have to use a more precise, more careful method of evaluation, a method that does not ignore terms that might be temporarily unimportant.

Compare the following two examples; they might look silly, but they illustrate the point well. In each of them we do the guessing first (even if it might be wrong) and then do the proper calculation. In the first example, the guessing is correct; note that in the proper calculation, after putting the terms together, we can still ignore the "+n" part, so it really plays no role in the final outcome. In the second calculation this n gets promoted to dominance.
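
For instance, we might compare the sequences {(2n^2 + n) − n^2} and {(n^2 + n) − n^2}. In the first one the guess (2n^2 + n) − n^2 ∼ 2n^2 − n^2 = n^2 → ∞ is correct; properly, (2n^2 + n) − n^2 = n^2 + n → ∞, and the "+n" can still be ignored. In the second one the guess (n^2 + n) − n^2 ∼ n^2 − n^2 = 0 would be wrong; properly, (n^2 + n) − n^2 = n → ∞.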

Rule for dominant terms:
If we are evaluating a limit by guessing and there are more dominant terms in an expression, then we can put them together by algebra only if they do not cancel as a result.

If they do cancel, we have to give up on intuitive calculations and try some other method. However, even then this guessing part helps as a preparation, since it is often useful to know the types of terms in a sequence. There is an example in Solved Problems - Limits.

Ratios

We have now reached the most general type that can be handled using the intuitive calculations: a fraction whose numerator and denominator are of the type we described above. How do we handle such fractions? First we investigate the numerator and the denominator separately: we determine the dominant term of each of them and factor it out. Then we have one dominant term in the numerator and one in the denominator, so we cancel them if we can and finally find the limit of the resulting ratio. The scale of powers can again help here. If the term in the numerator prevails, we get infinity as the limit. If the term in the denominator prevails, we get zero as the limit.

This is quite natural. We usually get an infinity over infinity situation, and prevailing means that one infinity is larger than the other infinity, so it wins. For instance, when the infinity in the denominator prevails over the infinity in the numerator, it means that the denominator is eventually much much bigger than the numerator and the resulting ratio is thus very tiny, which suggests that it goes to zero.

We mentioned that sometimes the dominant terms of the denominator and numerator can be cancelled when they are factored from the ratio; we then obtain the type (or order) of the fraction as a whole.
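
For instance, we would guess (4n^3 − 5n^(1/2))/(2n^3 + ln(n)) ∼ (4n^3)/(2n^3) = 2, and indeed this ratio converges to 2.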

Sometimes when facing a ratio, people prefer cancelling to factoring out and comparing. It works - but only sometimes. We offer a thorough discussion of how to handle ratios of polynomials in this note.

Example:
Find (if it exists)

First we will guess the answer intuitively, then do it by proper calculations. We should start by handling the roots.

In the cubic root, the dominant term is the cube, so we can ignore the other terms for large values of n. In the square root in the denominator, the exponential prevails over the square and so we can ignore the power.

Now we know the types of the roots, so we can compare them with the remaining terms and find the dominants, separately for the numerator and denominator. Then we use the hierarchy of powers to guess the outcome:

What was the reasoning? In the numerator, powers beat logarithms, and the highest power is the square. By the way, this shows why it is important to always handle roots first. At first glance one would guess that n^3 is the dominant term in the numerator, but after we analyzed the root, we saw that it actually behaves just like n.

In the denominator, the dominant term was the exponential 2^n, so we ignored the rest. Finally, since exponentials beat powers, we concluded that the ratio converges to zero.

Now we are supposed to confirm our guess by calculations - namely factoring. Although an experienced student would do it on a few lines, we prefer to show more detail and also add a remark concerning the two roots; therefore we offer the calculations here.


The intuitive reasoning can be applied to even more complicated expressions. A fraction as above can be put under a root and then added to some powers, exponentials etc. It would be difficult to express precisely where we can apply this kind of reasoning, but less precisely it goes like this: The basic building block is a linear combination of powers, exponentials, powers of logarithms, factorials, general powers (their multiples added and/or subtracted). This basic combination can be put under some root, thus creating a new building block that can be a part of another linear combination. These linear combinations can be put together using ratios, and all these procedures can be repeated in any order.

Warning: Ignoring of parts can be done only in basic linear combinations, that is, in sums of powers, exponentials, powers of logarithms, factorials and general powers. Other expressions (like roots, ratios) can be ignored only when they get replaced by their dominant terms and thus become eligible parts of linear combinations. It is not permissible to do ignoring in expressions which are not parts of roots and/or ratios, but are mixed up with other functions. For instance, we can replace n − ln(n) with just n if it is under a root or a part of a fraction, but we cannot do it if it is an argument of, say, an exponential. For instance, it is easy to show that 2^(n − ln(n)) ∼ 2^n is actually wrong, and without further investigation we do not know whether ln(n − ln(n)) ∼ ln(n) or arctan(n − ln(n)) ∼ arctan(n).
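
(Indeed, the ratio 2^(n − ln(n))/2^n = 2^(−ln(n)) = n^(−ln(2)) tends to 0, so these two expressions are definitely not of the same order.)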

For many examples we refer to Solved Problems - Limits.

Products

Although it does not happen often, sometimes expressions of the above type are multiplied together. In such a case one applies the intuitive procedure to each factor of the product, finding the type of each. The type of the whole product is then the product of the individual types. However, usually one does not get a type of the form we studied above (powers, exponentials etc.), but rather a product of such types. Such expressions are not listed in our scale of powers, therefore it is rare to get a ready answer this way. However, often one can use experience with types to find out something useful anyway.
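
For instance, (n^2 − 13n)⋅(3^n + n^4) is of the type n^2⋅3^n, which is a product of types and as such does not appear in our scale of powers.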

Example:

Note that we found out that the numerator is of the type n⋅2^n and the denominator is of the type n^2⋅n!; fortunately, we could cancel the two types smartly and still got the answer using the scale of powers. However, now we will make a small change and it will not work any more:

Now we are comparing (after cancelling) two types, n⋅2^n in the numerator and n! in the denominator. Since the former is not a part of the usual scale of powers, we do not know the outcome when the two types get divided. We know that n! beats 2^n, but obviously n⋅2^n goes to infinity much faster than just 2^n; could it be that it goes so much faster that it even gets ahead of n!, that is, is it possible that n⋅2^n beats n! ?

The answer is: no. In fact, n⋅2^n is a new type that beats 2^n and is beaten by n!, so it fits exactly between these two. The sequence above converges to zero; the proof would be a modification of the proof that factorials beat exponentials.
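
For instance, one quick way to see this: n⋅2^n/n! = 2⋅2^(n−1)/(n−1)!, and since factorials beat exponentials, this ratio tends to zero.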


Stolz-Cesaro theorem
Back to Theory - Limits