For many students, the biggest problem with calculus is not the material but motivation. Why should we learn all these things, they ask when they start going through it; what use is it? What is really sad is that many students are still asking these questions after they complete their course.
In most courses there are parts called "Applications" (we have been guilty of it here as well), but students do not see them as such. And indeed, applications of integrals are usually limited to volumes of solids of revolution and similar things that you do not do every day, and as for derivatives, even I cannot recall the last time I had to find the square of the largest area that can be inscribed in an apple.
Well, there are problems of this sort here in Math Tutor as well, and although I made an effort to also include problems that look at least remotely real (related rates, the section Applications in Series - Theory - Series of functions, etc.), I admit that students are entirely justified in not being happy about the explanations they hear.
Another question they often ask is this: Why do mathematicians have to make everything so darn complicated? Who on Earth is supposed to make sense of the seemingly never-ending chain of Definitions and Theorems (to say nothing of proofs)? We will now try to give some reasons why mathematics is done and why it is done in this way.
Although a student of calculus may find it hard to believe, mathematics of this sort is indispensable once you start studying the world around us seriously, in particular when you start wondering whether the world is completely random or whether there are some rules that it follows. Essentially all important notions that are covered in a standard calculus course were in fact not dreamed up by mathematicians but by other scientists (notably those we now call physicists), simply because they needed them. Mathematics is, above all, a language; a language that we use to describe processes happening around us. The real mastery of mathematics is not solving derivatives and integrals, but being able to describe our thoughts and observations using mathematics and then use it to gain some insight into the thing described.
For instance, we may notice that when we drop a stone, then the longer the stone falls, the larger the distance it covers. This is a good beginning (and we do not need any math for it), but this investigation does not get far unless we ask how the distance depends on time. Is it a linear relationship, or something else entirely? To answer such a question we need a precise language.
The main advantage of mathematics as language is its precision. If some process can be described mathematically, then (and only then?) we can discover and also confirm patterns that this process exhibits. Mathematics is also a set of tools that allow us to derive new information from descriptions that we obtain. And again, since mathematics is precise by its nature, the answers that we get in this way are reliable.
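Returning to the falling stone: under the textbook model s(t) = (1/2) g t^2 (a standard physics assumption, not something derived in this text), the distances covered in successive equal time intervals grow like the odd numbers 1 : 3 : 5 : 7, and spotting that pattern in measured data is one way the question about the form of the dependence could be settled. A minimal Python sketch:

```python
# Hypothetical "measurement" of a falling stone, assuming the
# standard quadratic model s(t) = (1/2) * g * t^2.
g = 9.81  # m/s^2, standard gravity

def distance(t):
    """Distance fallen after t seconds under the quadratic model."""
    return 0.5 * g * t * t

# Distance covered during each of the first four one-second intervals:
intervals = [distance(t + 1) - distance(t) for t in range(4)]
# Quadratic growth shows up as the ratios 1 : 3 : 5 : 7.
ratios = [d / intervals[0] for d in intervals]
print(ratios)
```

If the measured ratios came out as, say, 1 : 2 : 4 : 8 instead, we would suspect a different law entirely; this back-and-forth between data and formula is exactly the "language" use of mathematics described above.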
Accordingly, those branches of science that can use mathematics well also make highly reliable predictions. When a physicist claims that in a given situation this or that happens, then our experience shows that it does happen. (Of course, there are also areas in physics where theories are new and untested, where the knowledge is still in the development stage - then one can expect a surprise or two. But that is fine; those surprises help physics advance.) For instance, the mechanics of motion in a gravitational field is well understood, and when a satellite is launched at a certain speed and direction, we are able to predict its future path with amazing precision - and those predictions work (unless the satellite suffers some technical mishap).
We also understand stresses and forces and what they do to materials; mathematics provides answers that allow us to build tall buildings and daring bridges that do not collapse (well, almost never). There are many such examples where mathematics is crucial, and you may ask why you did not hear about it in class. Well, for one, class time is so limited and stretched thin that a typical lecturer is glad to cover the basic material, with no time for extras. Secondly, most of those applications need a rather good background in various areas (differential equations, physics, ...) to do them justice, which is exactly what (typical) freshmen do not have. Enough excuses, let's move on.
This last example about engineering touched upon an important topic. When the structure we are describing is too complicated, then the mathematical description (a model of this structure, usually a set of equations that restrict the behavior of important parameters) is also complicated, and we may not have tools to actually "solve" such a complicated model. After all, we cannot find exact solutions even for some seemingly simple models.
Sometimes the situation is so complicated that we cannot even describe it properly using mathematics. This explains why we cannot directly design, say, a "perfect airplane" with the least air resistance and best performance. There are so many things happening when an airplane moves through air that physicists are unable to describe it entirely. Reasonably faithful models are available, but their equations are so complicated that mathematics is unable to solve them (at least for now, and there is no breakthrough in sight). With computers and approximate methods we can get good hints, but in the end we still take the designed shape into a wind tunnel and ask nature what it thinks about it, because there is always a chance that it has some surprise in store for us. Cars, trains, racing bikes and other similar devices also go to the wind tunnel. Similarly, we can only get approximate answers about the strength of a structure on the one hand and the forces acting on it in different situations on the other. Thus when an airplane is designed, one airframe is built just to be stressed and broken in order to prove that it can stand up to what it should.
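The "approximate methods" mentioned above can be sketched in a few lines. Even the simple pendulum, governed by theta'' = -(g/L) sin(theta), has no elementary closed-form solution, yet a computer can march it forward in tiny time steps. The scheme below is semi-implicit Euler; the pendulum length, step size and starting angle are illustrative assumptions, not values from the text:

```python
import math

# Semi-implicit Euler steps for the pendulum theta'' = -(g/L) sin(theta).
g, L = 9.81, 1.0         # gravity (m/s^2) and pendulum length (m), illustrative
dt = 0.001               # time step (s)
theta, omega = 0.5, 0.0  # released at 0.5 rad, at rest

trajectory = []
for _ in range(10000):  # simulate 10 seconds
    omega += -(g / L) * math.sin(theta) * dt  # update velocity first...
    theta += omega * dt                       # ...then position
    trajectory.append(theta)

# The approximate solution oscillates; the swing never exceeds the
# starting amplitude by more than a small numerical error.
print(max(abs(t) for t in trajectory))
```

An answer like this is only as good as the model and the step size, which is precisely why the wind tunnel (and the stressed-to-destruction airframe) still get the last word.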
A sidenote that you can skip: Around the end of the 20th century there were two "invisible" airplanes in service, the F-117 (the ugly one with flat faces and sharp corners) and the B-2 (the smooth bomber with graceful lines). It all started with work by a Russian mathematician who investigated a model of radar-wave reflection and found out that the strength of this reflection is influenced more by the edges present on an object than by its size. Americans decided to apply this to a real airplane. The requirement that an airplane flies reasonably well and is small from the point of view of radar is a very tall order to fill, definitely not something that can be solved precisely, and thus the solution was sought by cranking approximate methods through computers big time. When the F-117 was being designed, there were no computers good enough to handle this; only after they simplified the problem by assuming that the airplane consists mostly of flat faces were they able to get some results, and the plane was born. By the time they got around to designing the B-2, computers had improved so much that they were able to handle the fiendish calculations needed to work with curved surfaces. Perhaps it's just me, but I find it amazing that the obvious visual difference between these two airplanes is caused by several years of progress in computers, and that the two factors behind this whole idea are a discovery by a mathematician and the fact that we are unable to solve the corresponding model precisely due to its complicated nature.
Similarly, we cannot reliably predict the weather or the stock market, because these two systems are too complicated to be adequately described mathematically, to say nothing of solving the resulting models.
However, the complexity of a model is not the only problem. Sometimes we cannot use mathematics at all, or almost at all, since in order to describe something mathematically, we must be able to measure it. Many things are not like this; typical cases would be psychology or philosophy. The only mathematics that works in psychology is statistics, and thus the answers are also statistical - useful when working with big groups, but of limited reliability when it comes to individuals. You probably noticed (or will notice, now that your attention has been turned to it) that the overall reliability of a prediction is usually directly related to how much mathematics it uses.
Most mathematics that is used in this way depends on the twin notions of differentiation and integration, in particular in the form of differential equations. To see why, we will now offer two examples. The first one is quite simple; we believe that a high school graduate (even a good student in his/her third year) should be able to follow it. In this example we show how the important notions of calculus appear naturally when we try to make some sense of a rather simple situation. The second example is less trivial and assumes knowledge of material from a typical first-year calculus course. It shows how some ideas covered here in Math Tutor help in studying heat and lead to an important differential equation.
From what we wrote above one thing is clear: The answers that mathematics gives must be 100 percent reliable. When we solve a quadratic equation using the popular rhyme "Be squared mind as four eyes see...", we want to be totally sure that the number(s) that come out of this formula really solve that equation (unless we made a mistake, but that's another story). It took some effort (and a couple of "revolutions in mathematics" about a hundred years ago) to achieve this, but since then the consensus is that we do have this reliability. (Note that this refers only to mathematics as such. Mathematics provides answers within the model in which it is used, but if this model does not describe precisely the real-life situation that we are trying to study, then also the answers - while mathematically correct - may not be true in real life. This distinction is lost on many people, and as we observed above, faithfulness of a mathematical model can be a real problem in some areas.)
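As a small, concrete instance of this reliability (the coefficients below are arbitrary illustrative values, not taken from the text), one can feed the output of the quadratic formula straight back into the equation and watch the residue vanish:

```python
import math

# Solve 2x^2 - 3x - 5 = 0 with the quadratic formula, then verify.
a, b, c = 2.0, -3.0, -5.0

disc = b * b - 4 * a * c  # discriminant; positive here, so two real roots
roots = [(-b + math.sqrt(disc)) / (2 * a),
         (-b - math.sqrt(disc)) / (2 * a)]

# Substituting each root back should give (numerically) zero.
residues = [a * x * x + b * x + c for x in roots]
print(roots, residues)
```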
When you think about it, you soon realize that an absolute necessity for achieving such reliability is that the meaning of the notions that mathematics uses must be absolutely clear. For comparison, if you take five philosophers at random, put them in a room and ask them what free will is, there is a good chance that a fight will soon break out, and very little chance that they will all agree. It is then no surprise that philosophers also cannot agree on their conclusions. Similarly, when you ask whether a particular country is democratic or not, you often get different answers depending on whom you ask. This is something that could not be allowed in mathematics.
This is the reason why we have definitions. You could say that a definition is a test that allows one to decide whether a given object has a given property. For example, let f be a function defined on the real line.
Definition: We say that f is odd exactly if it satisfies
f(−x) = −f(x) for all real numbers x.
This is a typical definition; most of them have the form "We say that an object f satisfies property P (or: we call it P) if it satisfies a certain condition C." Note that we used the word "if", but it is actually wrong. What we really mean is "if and only if"; a definition is always exclusive. We call an object P exactly when it satisfies the condition C, and in no other case.
Back to our definition of "odd". When all mathematicians adopt such a definition, then they also necessarily agree on which functions are odd and which are not. It is very simple: when you give them a concrete function and ask whether it is odd, they all test whether the above property holds. Since algebra is universal, they have to get the same answers. This is probably the main difference between mathematics and that philosophy example. Even if all philosophers agreed on some definition of free will, it would have to be expressed in words whose meaning is naturally not unique; therefore in concrete examples they would still disagree on whether a certain process exhibits it or not. Thus the real problem is not the unwillingness of philosophers to agree, but the ambiguity of the language used. Again we see that in mathematics we need a special language that does not allow such things.
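In this spirit, the definition really can be run as a test. The sketch below checks the condition f(−x) = −f(x) numerically at a handful of sample points; the function name, tolerance and sample points are ad hoc choices, and passing the check is merely evidence, since the definition demands the identity for all x:

```python
# A numerical (not rigorous!) version of the "oddness test".
def looks_odd(f, samples=(0.0, 0.5, 1.0, 2.0, 3.7)):
    """Check f(-x) = -f(x) at a few sample points only."""
    return all(abs(f(-x) + f(x)) < 1e-12 for x in samples)

print(looks_odd(lambda x: x**3))      # x^3 passes the test
print(looks_odd(lambda x: x**2 + 1))  # x^2 + 1 fails already at x = 1
```

Every mathematician who runs this same test on the same function must get the same answer, which is exactly the point of the paragraph above.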
A definition also has another use. If we somehow establish that a certain object f does have the property P, then everybody can use that fact when they need it; that is, they can rely on the condition C holding for f.
Thus we actually use a definition in two different ways. If we have a new object and we want to show that it does have the desired property, then we have to somehow prove that this condition C works for this object. On the other hand, once somebody proves it, then we can use this condition freely whenever we work with that object.
Where do definitions come from? In a sense they are arbitrary. Imagine that the definition of "odd" had not yet been made and I decided to do this:
Definition: We say that f is odd if it satisfies
f(0) = 13.
What would happen? The first thing to emphasise is that there is nothing wrong with this definition. It is a perfectly normal definition. On the other hand, although it is my definition, I have to admit that the property in it is pretty much useless. Consequently, nobody would use it (except me, if I really decided to push it) and as soon as I am dead, this definition disappears from mathematics. What we are trying to say is that the name is not really important; the important thing is the idea, and the idea usually comes from actual use. When people worked with functions, they noticed that some of them have this interesting behavior: if you put a minus into them, it is the same as if you put it in front of them. More importantly, this strange behavior turned out to be quite useful, and people used it here and there. Sooner or later you get bored of writing "since this function exhibits that strange behavior, I can do this and that with it," and you simply give that behavior a name. Instead of "odd" we could have called such a function "pink" and everything would work just the same. Most definitions came about this way; they are shortcuts for saying "I can do this particular thing with this object."
Sometimes the first idea of what a definition should look like is not the best one. It has happened many times that people made a definition of a property P, but after they learned more, they suddenly realized that in most cases they actually say "this object has property P plus a little bit." So they got together and decided to amend the definition so that the definition of P also included that bit. It also happens that one notion is defined independently by different people. This is not exactly rare, but since these definitions come from necessity, the independently created definitions usually agree. However, sometimes they do not say the same thing, and if both versions are useful and both of them attract a following, then we may have the unpleasant situation where one notion means two things. For instance, we defined convex and concave functions here; this definition is used throughout (most of) Europe and Asia, but in South America they use the names exactly the opposite way!
The moral of this story is that when you talk mathematics to people, it is always good to check that you are all using the same definitions. In 99.99 percent of cases you are, but when it seems that there is some misunderstanding brewing, then it is time to sit up and start asking good questions.
By the way, this relates to the other part of definitions. We talked about where the ideas come from, but what about names? Why is an odd function called odd and not stewplinous? Most names come by association; for instance, those odd functions are a clear allusion to the powers x^k for odd k. Here practical use does not help, and usually the name that somebody introduced first survives. Here independent creation of notions becomes more problematic: people pull names out of thin air, and we easily get problems like that confusion with increasing, non-decreasing, strictly increasing etc. functions.
But the main thing in mathematics is theorems. A typical theorem says: If I know a certain thing about an object, then automatically another thing must also be true. If you are dead, you do not do homework. If, in a multi-story house, you break down all the walls on the ground floor, then the house falls down. If you sit down on an anthill inhabited by the fire-ant variety, then you will soon wish you hadn't. Actually, it might be fun to try to find situations where these three statements might not be true, but for a mathematical statement any possibility of not working must be totally excluded (at least within the world of logic and arithmetic). If the assumptions are satisfied, the conclusion must follow, every single time.
Of course, "real" mathematical theorems are somewhat more complicated than this, and it is naturally also less trivial to see that they are indeed correct. However, since we want mathematics to be totally reliable, we need to establish the truthfulness of every statement that is made. This brings us to proofs. A proof is essentially an argument that shows (in a typical case) that you can get from the assumption of the theorem to its conclusion using a logical chain of statements; two neighboring statements in this chain must be either so close that there is a simple (and therefore easy to check) logical or algebraic step from one to another, or there must be an already proved theorem that creates a bridge from one statement to another.
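To see such a chain in miniature, here is a tiny theorem built on the earlier definition of an odd function, written out (in LaTeX) so that every step is either a simple algebraic move or an appeal to the definition:

```latex
% Claim: if f is odd, then f(0) = 0.
\begin{align*}
  f(0) &= f(-0) && \text{since } 0 = -0, \\
       &= -f(0) && \text{by the definition of odd, applied at } x = 0.
\end{align*}
% Adding f(0) to both sides gives 2 f(0) = 0, and dividing
% by 2 yields f(0) = 0, which proves the claim.
```

Each link in this chain is small enough that anyone can check it, and that checkability is exactly what a proof is for.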
A (theoretical) mathematician's job typically revolves around discovering new statements and then looking for ways to prove them. If a proof is found, the result is published and other mathematicians scrutinize it. If they are satisfied that there is no gap in the argument, no room for doubt, then it is accepted and the statement becomes a theorem. Once in a while a mistake is discovered. Since mathematics is not a religion, mathematicians are usually quick to admit the problem and either retract the alleged "theorem" or provide a real proof. Theorems that have passed this test are collected in books and form theories; these are reliable (with very few rare exceptions - not counting misprints, of course; typos have been with us since the dawn of time and it seems that they are here to stay for a while despite all the technological progress).
And this brings us to the last topic. If we want to be able to check that proofs are correct, then we have to be careful about the language that is used in those proofs (recall the philosophers). This essentially forces mathematicians to adopt a certain formalism, something that makes arguments (and mathematics in general) harder for humans to understand, but with the advantage that it does not allow for two or more possible meanings (unlike human languages). In other words, here come the epsilon and delta. Frankly, we are not exactly happy about them either. Mathematicians even tried to do without them, but unfortunately they kept getting answers that turned out to be wrong, which is something that just won't do. Reluctantly they introduced that boring and unpleasant formalism, and mathematics became the scourge of students. (Sidenote: Relatively recently a way was found to actually do calculus correctly and without epsilons. The problem is that you would first have to spend a semester learning about infinitesimals, which you would probably enjoy about as much as those epsilons, and only then start with limits, derivatives etc.) At the beginning of this section we made a statement about what "mastery of mathematics" means. Here is another: "Knowing mathematics" in particular means being able to freely translate those formal mathematical sentences into "humanese" and back. If you can do it, you discover that behind those weird theorems there are often interesting ideas. One of the main motivations for creating Math Tutor was actually the desire to help students see those ideas behind mathematics. We hope that we have succeeded at least a little bit.