Mappings, cardinality

Here we introduce mappings, look at their properties, and introduce operations on them. At the end of this section we look at comparing sizes of sets.

Definition.
Consider sets A and B. By a transformation or a mapping from A to B we mean any subset T of the Cartesian product A×B that satisfies the following condition:

For every a from A there is a unique b from B such that (a,b)∈T.

We write T : A → B.

Example: Consider sets A = {a,b,c} and B = {1,2}. Then for instance

T = {(a,1),(b,1),(c,2)}

is a mapping from A to B.

We view such a mapping as a black box that sends elements from one set to another set, for instance the element a is sent to 1. The condition from the definition essentially means that one element from A cannot be sent to two (or more) different places in B. On the other hand, there is no problem in sending two elements to the same place, as you can see in the example. Note that it is not necessary to use up all elements from the target set B in the mapping.
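For readers who like to experiment, a finite mapping can be sketched in Python by modeling it as a dictionary (the dict model and the code below are just our illustration, not part of the text): every key has exactly one value, which is precisely the uniqueness condition from the definition.

```python
# Model the example mapping T = {(a,1),(b,1),(c,2)} as a dict:
# each key (element of A) has exactly one value, which is exactly
# the uniqueness condition from the definition.
A = {"a", "b", "c"}
B = {1, 2}
T = {"a": 1, "b": 1, "c": 2}

# Sanity checks: every element of A has an image, and all images lie in B.
assert set(T.keys()) == A
assert set(T.values()) <= B

print(T["a"])  # the image of a under T: prints 1
```

Note that two keys sharing the same value ("b" and "a" both map to 1) is perfectly fine, mirroring the remark above.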

This idea of sending is also reflected in a special notation we use for mappings. Instead of saying that, say, (a,1) is in T, we would write T (a) = 1. We say that 1 is the image of a under T or that 1 is the value of T at a. Sometimes we also use the notation

T : a ↦ T(a).

The starting set A is called the domain of T, denoted D(T ). Alternative notations: DT or dom(T ). The target set B is, as such, less important than the domain, in particular because it may contain extra points not involved in the given mapping. Of interest is the range of T, which is the set of all elements of B that are images of some elements of A under T. We denote it R(T ), alternatives are RT or ran(T ). In our example above, D(T ) = A and R(T ) = B, the latter does not happen often.

Once we can send elements, we can also send sets. If D is a subset of A, then the image of D under T is the set

T [D] = {T(d); d ∈ D}.

For instance, R(T ) = T [A].
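In the dict model of finite mappings (our illustrative convention), the image of a set is a one-line set comprehension; the function name `image` is ours.

```python
T = {"a": 1, "b": 1, "c": 2}  # the example mapping from A = {a,b,c} to B = {1,2}

def image(T, D):
    """The image T[D] = {T(d); d in D} of a subset D of the domain."""
    return {T[d] for d in D}

print(image(T, {"a", "b"}))        # {1}: both a and b are sent to 1
print(image(T, set(T)) == {1, 2})  # True: T[A] is the range R(T)
```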

Remark. Some authors also use the name function as an equivalent for a mapping. However, other authors reserve this name only for mappings that work with numbers, whereas the names mapping and transformation suggest a more general framework. This is the approach we adopt here, but since it is not a universally accepted convention, we will not make a big deal out of it.

There are two popular ways to draw mappings. One is the graph of T, when we somehow mark which elements of the Cartesian product A×B are involved in T. For instance, for the above example we could draw this:

In practice we usually do not bother drawing the unused elements of the product and use this picture instead.

Drawing such a graph is probably the most popular visualization, but sometimes a different kind of a picture is more telling and therefore more useful.

For ordered sets (especially sets of numbers) we often prefer this picture.

Before we move on we introduce one mapping that is always defined when we have a set.

Definition.
Consider a set A. We define the identity mapping on A, denoted IdA, as a mapping from A to A defined by IdA(a) = a for every a from A.

Properties of mappings

There are two important properties of mappings.

Definition.
Consider a mapping T from A to B.
We say that T is one-to-one (1-1), or injective, or that it is an injection, if for every two non-equal elements a1, a2 from A also their images T(a1), T(a2) are not equal.

We say that T is onto, or surjective, or that it is a surjection, if R(T ) = B. We then say that T is a mapping "from A onto B".

The condition for injectivity says that two distinct elements cannot be sent to the same place. We have an advantage here in that we do not pretend to be formally perfect, so we can say it this way. However, if we wanted to write it precisely and correctly using logical language, the definition would be more complicated than the following alternative: a mapping is 1-1 if, whenever two elements are sent to the same place, they must have been the same element in the first place.

T(a1) = T(a2) ⇒ a1 = a2.

To a mathematician this statement is way neater, so it is used very often. It is also more convenient for practical work, since working with the relation "not equal" can get tricky, while we know how to deal with equalities.
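Both properties are easy to test mechanically for finite mappings. In the following Python sketch (dicts as finite mappings, a convention of ours), injectivity is exactly "no value repeats" and surjectivity is exactly R(T ) = B.

```python
def is_injective(T):
    """1-1: distinct elements have distinct images, i.e. no value repeats."""
    values = list(T.values())
    return len(values) == len(set(values))

def is_surjective(T, B):
    """Onto: the range R(T) is the whole target set B."""
    return set(T.values()) == B

T = {"a": 1, "b": 1, "c": 2}
print(is_injective(T))           # False: a and b share the image 1
print(is_surjective(T, {1, 2}))  # True: both 1 and 2 are images
```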

Examples: In the following picture, the two examples of mappings in the first row are 1-1, while the three examples in the next two rows are not.

Now concerning surjectivity, the two examples of mappings in the first row are onto, while the three examples in the next two rows are not.

Note that sometimes these properties are determined by the numbers of elements in the two sets. In the picture about injectivity, note that in the very last example we have no chance of creating a 1-1 mapping from A to B, simply because there are not enough elements in the target set. Similarly, in the picture about surjectivity, there is no way we can make a mapping onto B in the last example, since we cannot reach four elements with just three arrows. In general, if A and B are finite sets and the size of A is (strictly) greater than the size of B, then we cannot make a 1-1 mapping from A to B; on the other hand, there is no chance of surjectivity if the size of A is (strictly) smaller than that of B. We can also pass to contrapositives. If there exists a 1-1 mapping from A to B, then B must have at least as many elements as A. If there exists a mapping from A onto B, then A must have at least as many elements as B.

The above two properties combine into a very useful notion.

Definition.
Consider a mapping T from A to B.
We say that T is bijective, or that it is a bijection, if it is one-to-one and onto.

Among all the above examples there is just one bijection, namely the upper left example in the last and second-to-last pictures. Bijections are very useful, since they allow us to compare sets. For instance, from the above remark concerning sizes we would guess that two finite sets cannot be connected by a bijection unless they have the same number of elements; conversely, if there is some bijection, then the two sets must have the same number of elements. We look closer at this in the last part of this section.

Intuitively, if we have two sets and a bijection between them, then we have a pairing of elements of the two sets, and it seems that the direction of the arrows is no longer important. This is true, we will look at this in the next part.

Operations with mappings

We start with something very simple; it does not really count as an operation, but it is something we can do to mappings.

Definition.
Consider a mapping T from A to B. Let D be a subset of A. We define the restriction of T to D as the mapping S from D to B defined by S(d) = T(d) for all d from D.

Intuitively, we simply pretend that T does not exist outside D and we have the restriction. We denote it T |D.
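In the dict model of finite mappings (our convention), restricting to a subset D just keeps the pairs whose first entry lies in D.

```python
T = {"a": 1, "b": 1, "c": 2}
D = {"a", "c"}

# The restriction T|D: the same values, but only on elements of D.
T_restricted = {d: T[d] for d in D}
print(T_restricted == {"a": 1, "c": 2})  # True
```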

Since the sets we work with here are general, there are no operations (like addition etc.) available. Thus the only thing we can try in general is to look at composition of mappings.

Definition.
Let T be a mapping from A to B and let S be a mapping from B to C. By the composition of these two mappings we mean the mapping S ○ T from A to C defined by

S ○ T : a ↦ S(T(a)) for a from A.

That is, we first find the image of a under T and then we substitute this image into S. The picture shows that it makes sense, since the images under T are exactly in the space where S works; precisely, the range of T is a subset of the domain of S.

For instance, (S ○ T )(a) = S(T(a)) = ½. In effect, we "skip the middleman", composition means that every element from A gets sent twice in a row and then we consider it just one sending, thus obtaining a new mapping.

Note that the notation for composition is read "right to left", that is, when substituting, we first use the function on the right and then the one on the left. This may be unusual at first and beginners occasionally mix it up, but it makes sense, since the order is the same as the alternative notation S(T ). Note also that given the sets and mappings as above, we cannot compose in the other order. The composition T ○ S = T(S ) cannot be done: If we substitute an element from B into S, we get an element from C, but T does not act on C.
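The "skip the middleman" idea can also be sketched in Python (dicts modeling finite mappings, our convention); note how the right-to-left order S(T(a)) appears directly in the code.

```python
T = {"a": 1, "b": 1, "c": 2}  # T : A -> B
S = {1: "x", 2: "y"}          # S : B -> C

# The composition S o T : A -> C sends a to S(T(a)):
# first apply T, then substitute the result into S.
S_after_T = {a: S[T[a]] for a in T}
print(S_after_T)  # {'a': 'x', 'b': 'x', 'c': 'y'}
```

Trying the other order, `{b: T[S[b]] for b in S}`, fails with a KeyError: images of S live in C, where T is not defined, matching the remark above.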

Note also that in such a composition the second mapping S only matters at places where one can get to it using T; in other words, values of S at elements of B that are not parts of R(T ) are irrelevant for the composed mapping. Using this observation, the following statements that put together properties and composition come easily:

Fact.
Let T be a mapping from A to B and let S be a mapping from B to C.
If both T and S are 1-1, then S ○ T is also 1-1.
If T is not 1-1, then S ○ T is also not 1-1.

If both T and S are onto, then S ○ T is also onto.
If S is not onto, then S ○ T is also not onto.

If both T and S are bijections, then S ○ T is also a bijection.

It might be interesting to think of some implications that suggest themselves but are not listed above, obviously because they are not true. For instance, the following picture shows that it can happen that S is not 1-1, but the composition is 1-1.

On the other hand, in the previous picture S was not injective, and neither was the composition. This shows that if injectivity fails for S, then the injectivity of the composition S ○ T depends on the way in which injectivity fails for S. A similar observation works for surjectivity. Try to think of an example where T is not onto but the composition is; or an example where neither T nor S is bijective, but S ○ T is. By the way, from the above statements it also follows that if we compose two mappings and one of them is a bijection, then the properties of the other one (injectivity, surjectivity) are preserved.

Once we have composition, we can pass to another important definition.

Definition.
Let T be a mapping from A to B. We say that a mapping S from B to A is the inverse mapping to T, denoted T −1, if

S(T(a)) = a for every a from A   and
T(S(b)) = b for every b from B.

If such an inverse mapping exists, then we say that T is invertible.

Intuitively, these two mappings cancel each other's effect. If T sends a someplace, then S sends it back where it came from, and vice versa.

The symmetry in the above picture means that being inverse is in a sense a symmetric notion, that is, also T is an inverse to T −1, we can write it as follows: (T −1)−1 = T. If an inverse exists, then it is unique, so we can talk about the inverse mapping.
Note that D(T −1) = R(T ) and R(T −1) = D(T ).

Above we had lots of pictures with examples of mappings that were or were not 1-1/onto. If you think about them for a while, you should start having a feeling that it is not possible to "undo" the action of a mapping if it is not one-to-one or onto. In the first case two arrows go to the same place, so obviously it is not possible to create an arrow in the "opposite direction" for the inverse mapping, and the first equality in the definition fails. If T is not onto, then there is an element in B that is not the target of any arrow, again making it impossible to draw an "opposite" arrow; this time the second condition in the definition fails.
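A short Python sketch (dicts as finite mappings, our convention) shows both sides: inverting a bijection just reverses each pair, while reversing a mapping that is not 1-1 necessarily loses information.

```python
T = {"a": 1, "b": 2, "c": 3}  # a bijection from {a,b,c} onto {1,2,3}

# The inverse of a bijection reverses each pair (a, b) into (b, a).
T_inv = {b: a for a, b in T.items()}

# Both defining conditions hold:
assert all(T_inv[T[a]] == a for a in T)      # S(T(a)) = a for all a
assert all(T[T_inv[b]] == b for b in T_inv)  # T(S(b)) = b for all b

# A mapping that is not 1-1 cannot be undone: reversing its pairs
# overwrites one of the colliding arrows.
U = {"a": 1, "b": 1}
U_rev = {b: a for a, b in U.items()}
print(len(U_rev))  # 1, so U_rev has "forgotten" one of a, b
```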

Theorem.
Let T be a mapping from A to B. It is invertible if and only if it is a bijection.

Remark: If a mapping is a bijection, then it is invertible, by the above observation also its inverse is invertible and thus the inverse mapping is a bijection.

It is interesting to put the notion of inverse together with the composition. The condition in the definition of the inverse function can be rewritten as follows:

T −1 ○ T = IdA     and     T ○ T −1 = IdB.

This suggests an interesting structure for the operation composition. If you are curious, look at this note.

Note that if a mapping from A to B is not onto, then there are some unused elements in B. In many situations we can disregard them, thus we get the following statement.

Theorem.
Let T be a mapping from A to B. If we consider it as a mapping from A to R(T ), then it is bijective (and therefore invertible) if and only if it is 1-1. Then it also has an inverse T −1 from R(T ) to A.

Comparing sizes of sets

If two sets are finite, we can compare their sizes by simply counting numbers of elements in each of them. As we saw above, we can also determine equality of their sizes by asking the following question: Does there exist a bijection from one set to another? This other way is used as a definition, since it works not only for finite sets, but also for infinite sets. Then we do not talk about sizes, but about cardinalities.

Definition.
Consider two sets A and B. We say that they have the same cardinality, denoted |A| = |B|, if there exists a bijection from A onto B.

We say that the cardinality of A is less than or equal to the cardinality of B, denoted |A| ≤ |B|, if there exists an injective mapping from A to B.

Equivalently, |A| ≤ |B| if there exists a mapping from B onto A. This comparison of sizes behaves very reasonably; for instance, every set A satisfies |A| ≤ |A| and |A| = |A|, and this relation is easily shown to be transitive (see Ordering in the next section). One can also show that any two sets can be compared, that is, for any two sets A, B one has |A| ≤ |B| or |B| ≤ |A|.

This feeling that things work "normally" is further strengthened by a statement called the Cantor-Bernstein theorem: If |A| ≤ |B| and |B| ≤ |A|, then necessarily |A| = |B|. It sounds obvious, but since comparing sizes of sets involves existence of mappings, it is actually rather hard to show that things work the way we would like them to. We can also define sharp comparison, |A| < |B| if |A| ≤ |B|, but |A| = |B| is not true. Also this relation works as one would expect.

We say that a set A is finite if either it is empty (then it has cardinality 0) or there exists a positive integer n such that A has the same cardinality as the set {1,2,...,n}, in which case we say that A has cardinality n. We denote it |A| = n or also #A = n. Otherwise we say that the set is infinite; sometimes people write |A| = ∞.

As we observed above, for finite sets the notion of cardinality agrees with the intuitive notion of size. It gets more interesting when sets are infinite, since it turns out that there are many "sizes" of infinity. One of the most important "sizes" is the cardinality of the set of all natural numbers (see the next section).

Definition.
Consider a set A. We say that this set is countable if it has the same cardinality as the set ℕ of all natural numbers.

Countable sets are those sets whose elements can be "indexed" or "enumerated". Indeed, consider a set A that is countable. By definition, there must be some bijection T from A onto the set ℕ. Now we can order elements of A according to the number they are assigned via T. First we take the element a such that T(a) = 1, we can call it a1, then we take the element a such that T(a) = 2, we can call it a2, and so on, and since T assigns a number (an "index") to every element of A, we eventually get the whole set A in this way.

This property of being "indexable" is very useful, so countable sets are very welcome; in particular, such a set can be exhausted by induction. By the way, countability is the smallest size of infinite sets. What sets are countable? It turns out that quite a lot of them are, since the "countable" size of infinity can accommodate quite a lot. We start with a simple observation that adding one element to a countable set does not change its size. To prove this, it is enough to show that the set of natural numbers has the same cardinality as the set of natural numbers enriched by 0. The bijection we use is T(n) = n − 1; convince yourself that it is 1-1 and onto. A picture shows intuitively that elements of the two sets can be paired (the arrows correspond to this T ) and so they are of the same "size".
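The bijection T(n) = n − 1 between ℕ and ℕ ∪ {0} can be spot-checked in Python on an initial segment (a finite sanity check only, of course, not a proof):

```python
def T(n):
    """The bijection from N = {1,2,3,...} onto {0,1,2,...}."""
    return n - 1

# On the segment {1,...,100}: distinct inputs give distinct outputs,
# and together the outputs cover {0,...,99} exactly.
images = {T(n) for n in range(1, 101)}
print(len(images) == 100)         # True: no collisions, so 1-1 here
print(images == set(range(100)))  # True: every element is reached
```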

We see that adding one element to an infinite set does not change its size. We can do even better. If we enlarge a countable set by another set of the same size, it will still be countable. One way to see this is to prove that

integers form a countable set,

since we know that the integers are formed by taking the natural numbers and adding the natural numbers with minus signs (cf. the next section). Intuitively we would say that there must be twice as many integers as there are naturals, but infinities are tricky.

First we need to make infinitely many holes in natural numbers to make room for negative integers. Consider the mapping T(n) = 2n from the set ℕ to ℕ.

This actually shows that the set of natural numbers has the same size as the set of even natural numbers, but for us the important thing is the holes on the right. There are obviously infinitely many of them, so they offer places where we can send the negative numbers.

This particular T is defined as follows: T(n) = 2n for positive integers and T(n) = 1 − 2n for zero and negative integers. It is easy to check that this is a bijection from integers onto natural numbers. We said that countability is equivalent to being able to index the given set. We can argue the countability of integers also this way: We will come up with a way to enumerate all integers, for instance like this:
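This bijection from the integers onto the naturals is also easy to spot-check on a finite symmetric range (again a finite sanity check, not a proof):

```python
def T(n):
    """T(n) = 2n for positive n, T(n) = 1 - 2n for n <= 0."""
    return 2 * n if n > 0 else 1 - 2 * n

# Positives land on the even naturals 2,...,100; zero and the negatives
# land on the odd naturals 1,...,101; together they tile {1,...,101}.
images = {T(n) for n in range(-50, 51)}
print(images == set(range(1, 102)))  # True
```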

We index by keeping going left and right, obviously we will eventually index all integers in this way. By the way, this indexing exactly corresponds to the bijection above. We introduce this alternative reasoning here, because we will use it in proving the most outrageous claim so far: If we add infinitely many copies of natural numbers to this set, we again get a countable set. In other words, the Cartesian product of natural numbers with natural numbers has the same size as the original set. We will prove it by coming up with a way to enumerate all pairs. We will indicate the first few indexed pairs and then show the general pattern.
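The diagonal enumeration of pairs can be written out explicitly; one common pattern (our sketch) walks the diagonals a + b = 2, a + b = 3, ... in turn:

```python
def enumerate_pairs(count):
    """List the first `count` pairs of naturals, diagonal by diagonal:
    all pairs with a + b = 2, then a + b = 3, and so on."""
    pairs = []
    s = 2  # the smallest possible sum a + b when a, b >= 1
    while len(pairs) < count:
        for a in range(1, s):
            pairs.append((a, s - a))
        s += 1
    return pairs[:count]

# Every pair (a, b) sits on the finite diagonal a + b, so each one
# eventually receives a finite index.
print(enumerate_pairs(6))  # [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2), (3, 1)]
```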

It seems clear that if we follow the pattern, we eventually index all the pairs. We will now look at this result from two different points of view. First, if we think of countable infinity as an abstract element, then the first fact concerning countability shows that actually ∞ + 1 = ∞. By induction we see that adding a natural number to infinity does not change it. The second fact (integers are countable) says, from this point of view, that ∞ + ∞ = ∞, but also that 2 ⋅ ∞ = ∞. By induction we can prove that in fact n ⋅ ∞ = ∞ for any natural number n. The last (so far) fact (on Cartesian product) says that also ∞ ⋅ ∞ = ∞, that is, ∞^2 = ∞. And again, by induction ∞^n = ∞ for any natural number n. However, that's where things stop, since ∞^∞ is much bigger than the original infinity; in fact 2^∞ is already bigger. We will not go into this any deeper; instead we revisit the last fact.

We claim that

rational numbers form a countable set.

First, since rational numbers include natural numbers, it is clear that they are at least countable. Thus the more difficult part is in proving that they are also at most countable. Recall that we proved that the set of all pairs (a,b) of natural numbers is countable. Since we know that doubling does not change size of infinite sets, it follows that also the set

M = {(a,b); a an integer, b a natural number}

is countable. The mapping S defined by S(a,b) = a/b is a mapping from M onto rationals, which means that the size of rational numbers is at most the same as the size of M, that is, rationals are at most countable. The proof is finished.
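The mapping S(a,b) = a/b from the proof can be poked at with Python's Fraction type: it is onto the rationals but very far from 1-1, which is exactly why it only gives "at most countable".

```python
from fractions import Fraction

# Many pairs in M are sent to the same rational, so S is not 1-1...
print(Fraction(1, 2) == Fraction(2, 4))  # True: (1,2) and (2,4) collide

# ...but S is onto: any rational q equals S(numerator, denominator)
# for its lowest-terms pair, so no rational is missed.
q = Fraction(-3, 7)
print(Fraction(q.numerator, q.denominator) == q)  # True
```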

Thus we see that all sets that we obtained by trying to fix some algebraic problems (cf. the next section) are countable. However, this is not true for real numbers.

The real numbers do not form a countable set.

The classical proof of this fact shows that no matter how we try to enumerate the real numbers, there will always be some left over; it is surprisingly simple, but still too long for this section, so you have to take our word for it. Since joining two countable sets yields a countable set and the rationals are countable, it follows that there are uncountably many irrational numbers. In other words, if we take a line in the plane, mark some point as the origin and then mark all points whose distance from the origin is a rational number, there will be substantially more holes than marked points on this line.

Cardinality of sets has been studied thoroughly for quite some time; there are sets that are even bigger than the set of real numbers, and even bigger, and so on (there is an infinite chain of infinite sizes). One seemingly simple question attracted a lot of attention: Is the cardinality of the real numbers the next one after countability, or is there a set whose cardinality is strictly larger than countability but strictly smaller than the cardinality of the real numbers? The claim that there is no intermediate size is called the Continuum Hypothesis, and it turned out to have a surprising fate: it was eventually shown that the standard axioms of set theory can neither prove nor disprove it, so in this sense the question cannot be decided within the usual framework.