# 5.4. Convergent Sequences

We start this new section with a closer look at our introductory sequence $\left({a}_{n}\right)=\left(\frac{2n+1}{n+1}\right)$. Let us recall the table of values

| $n$ | 1 | 2 | 3 | 4 | 5 | $\dots$ |
| --- | --- | --- | --- | --- | --- | --- |
| ${a}_{n}$ | $\frac{3}{2}$ | $\frac{5}{3}$ | $\frac{7}{4}$ | $\frac{9}{5}$ | $\frac{11}{6}$ | $\dots$ |

with a detailed eye for the sequence members. First we note that each numerator is nearly twice its denominator, so all sequence members are less than 2 in value. Twice the denominator, however, always exceeds the numerator by exactly 1, so the members' distance from the number 2 is steadily decreasing.

We see this clearly when we write the members in decimal representation ($1.5$, $1.\overline{6}$, $1.75$, $1.8$, $1.8\overline{3}$, ...) and visualize them on the real line.

The tendency toward 2 also becomes obvious when calculating the distance - i.e. the difference in absolute value - between the sequence members and 2. Take e.g. the 1000th member:

$|{a}_{1000}-2|=|\frac{2001}{1001}-2|=\frac{1}{1001}\text{.}$

Thus the 1000th sequence member and all that follow (note: the sequence is increasing) are closer to 2 than 0.001.
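This calculation can be replayed for any index. The following Python sketch (our own illustration, not part of the text) evaluates the distance $|{a}_{n}-2|$ exactly, using rational arithmetic:

```python
from fractions import Fraction

def a(n):
    """n-th member of the sequence a_n = (2n + 1)/(n + 1), as an exact fraction."""
    return Fraction(2 * n + 1, n + 1)

# The distance to 2 shrinks steadily: 1/2, 1/11, 1/101, 1/1001, ...
for n in (1, 10, 100, 1000):
    print(n, abs(a(n) - 2))
```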

However, to ensure that this sequence approaches the number 2 it isn't sufficient to pick some sequence members at random and find their distance to 2 to be small. The task has to be reversed instead: Are we able to prove that for each arbitrary distance $\epsilon >0$ nearly all sequence members are closer to 2 than $\epsilon$?

We will show this for our example sequence. But to this end we first have to give a precise meaning to the phrase 'a sequence $\left({a}_{n}\right)$ approaches a number g'.

The following definition is the basis of all calculus.

Definition:  Let $\left({a}_{n}\right)$ be any sequence and $g\in ℝ$.

We call  $\left({a}_{n}\right)$  convergent to g (in symbols: ${a}_{n}\to g$) if for each real $\epsilon >0$ there is a natural number ${n}_{0}\in {ℕ}^{\ast }$ such that

 $|{a}_{n}-g|<\epsilon$ for all $n\ge {n}_{0}$. [5.4.1]

If $\left({a}_{n}\right)$ is convergent to g, we call g a limit of $\left({a}_{n}\right)$. In this case the sequence $\left({a}_{n}\right)$ is convergent. Sequences with no limit are said to be divergent.

Consider:

• Occasionally there is an advantage in rephrasing the definition of convergence. Obviously the distance of ${a}_{n}$ to g is less than $\epsilon$ exactly if ${a}_{n}$ lies less than $\epsilon$ units away from g, regardless of whether it is to the right or to the left. That means ${a}_{n}$ has to be in an open interval centered at g with radius $\epsilon$:

$|{a}_{n}-g|<\epsilon ⇔{a}_{n}\in \left]g-\epsilon ,g+\epsilon \right[$

In this context we call these special intervals *$\epsilon$-neighbourhoods* of g and restate the convergence condition as follows:

 ${a}_{n}\to g⇔$ For every $\epsilon >0$ there is an ${n}_{0}\in {ℕ}^{\ast }$, such that ${a}_{n}\in \left]g-\epsilon ,g+\epsilon \right[$ for all $n\ge {n}_{0}$. [5.4.2]

Now we can prove exactly that our initial sequence converges to 2:

 Example:   $\frac{2n+1}{n+1}\to 2$

Proof:  Let $\epsilon >0$ be arbitrary. We need to find an ${n}_{0}\in {ℕ}^{\ast }$ such that all sequence members from ${n}_{0}$ onwards are closer to 2 than $\epsilon$. Let us first see which members fulfill this demand:

$\begin{array}{ll}|{a}_{n}-2|<\epsilon \hfill & ⇔|\frac{2n+1}{n+1}-2|<\epsilon \hfill \\ \hfill & ⇔|\frac{2n+1-2n-2}{n+1}|<\epsilon \hfill \\ \hfill & ⇔|\frac{-1}{n+1}|<\epsilon \hfill \\ \hfill & ⇔\frac{1}{n+1}<\epsilon \hfill \\ \hfill & ⇔\frac{1}{\epsilon }<n+1\hfill \\ \hfill & ⇔\frac{1}{\epsilon }-1<n\hfill \end{array}$

So we have: The distance of a sequence member to 2 is less than $\epsilon$ as soon as its index is more than $\frac{1}{\epsilon }-1$. This information enables us to choose an appropriate ${n}_{0}$: As $\left(n\right)$ is unbounded (cf. [5.3.21]), we can take an ${n}_{0}\in {ℕ}^{\ast }$ such that ${n}_{0}>\frac{1}{\epsilon }-1$. For all $n\ge {n}_{0}$ this implies $n>\frac{1}{\epsilon }-1$, so these n satisfy $|\frac{2n+1}{n+1}-2|=|\frac{-1}{n+1}|=\frac{1}{n+1}<\epsilon$.
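The proof's recipe can be checked numerically. In the Python sketch below (our own illustration; the helper name `n0_for` is ours) we take the convenient admissible choice ${n}_{0}=⌈1/\epsilon ⌉$, which certainly exceeds $\frac{1}{\epsilon }-1$:

```python
import math

def a(n):
    return (2 * n + 1) / (n + 1)

def n0_for(eps):
    # ceil(1/eps) >= 1/eps > 1/eps - 1, so this n0 is always admissible
    return math.ceil(1 / eps)

for eps in (0.1, 0.01, 0.001):
    n0 = n0_for(eps)
    # every member from n0 onwards is closer to 2 than eps
    assert all(abs(a(n) - 2) < eps for n in range(n0, n0 + 500))
```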

Consider:

• When proving convergence we often face the request 'Now take an ${n}_{0}>\dots$'. But as $ℕ$ is unbounded such a choice is always possible! For convenience we will tacitly assume this argument in future proofs.

Although we now definitely know that $\frac{2n+1}{n+1}\to 2$, we may still ask whether $\left(\frac{2n+1}{n+1}\right)$ has further limits. The original definition gives no information on the uniqueness of the limit, but it is easy to see that no sequence admits two limits:

Proposition:

 Every sequence $\left({a}_{n}\right)$ has at most one limit. [5.4.3]

Proof:  Assume ${g}_{1}$ and ${g}_{2}$ are two different limits of $\left({a}_{n}\right)$. Then their distance

$r≔|{g}_{1}-{g}_{2}|$

is positive. Centering an open interval of radius $\frac{r}{2}$ at ${g}_{1}$ and at ${g}_{2}$ we get two $\epsilon$-neighbourhoods with no common elements. (+)

But as ${a}_{n}\to {g}_{1}$ we get from [5.4.2] a natural number ${n}_{1}\in {ℕ}^{\ast }$ such that all sequence members ${a}_{n}$ with an index $n\ge {n}_{1}$ belong to the first neighbourhood. Similarly we find an appropriate index ${n}_{2}\in {ℕ}^{\ast }$ from which onwards all members lie in the second neighbourhood.

Now set ${n}^{\ast }≔\mathrm{max}\left\{{n}_{1},{n}_{2}\right\}$. The sequence member ${a}_{{n}^{\ast }}$ thus belongs to both neighbourhoods, in contradiction to (+).

So we know that convergent sequences have exactly one limit, and we may introduce a symbol that depends only on the data of $\left({a}_{n}\right)$. If ${a}_{n}\to g$ we say that g is *the* limit of $\left({a}_{n}\right)$ and denote this by:

$lim{a}_{n}=g$,  or more explicitly:  $\underset{n\to \infty }{\mathrm{lim}}{a}_{n}=g$.

The convergence  $\frac{2n+1}{n+1}\to 2$  e.g. may now be noted like this:

$lim\frac{2n+1}{n+1}=2$ .

The question of convergence always comes paired with the problem of divergence. From our alternative definition [5.4.2] we get a quite handy criterion for divergence.

Proposition:

 ${a}_{n}\to g⇔$ each $\epsilon$-neighbourhood $\left]g-\epsilon ,g+\epsilon \right[$ of g misses at most finitely many sequence members. [5.4.4]
 ${a}_{n}\not\to g⇔$ there is an $\epsilon$-neighbourhood $\left]g-\epsilon ,g+\epsilon \right[$ of g missing infinitely many sequence members. [5.4.5]

Proof:
 1. ► "$⇒$":  If $\left]g-\epsilon ,g+\epsilon \right[$ is an arbitrary $\epsilon$-neighbourhood of g, [5.4.2] provides an ${n}_{0}\in {ℕ}^{\ast }$ such that ${a}_{n}\in \left]g-\epsilon ,g+\epsilon \right[$ for all $n\ge {n}_{0}$. But that means that at most the sequence members ${a}_{1},\dots ,{a}_{{n}_{0}-1}$ - and these are only finitely many - are missing in $\left]g-\epsilon ,g+\epsilon \right[$.

"$⇐$":  If $\epsilon >0$, the neighbourhood $\left]g-\epsilon ,g+\epsilon \right[$ misses at most finitely many members, say ${a}_{{n}_{1}},\dots ,{a}_{{n}_{k}}$. If we set ${n}_{0}≔\mathrm{max}\left\{{n}_{1},\dots ,{n}_{k}\right\}+1$, all sequence members ${a}_{n}$ with $n\ge {n}_{0}$ belong to $\left]g-\epsilon ,g+\epsilon \right[$. According to [5.4.2] this proves the convergence ${a}_{n}\to g$.

2. ► This is just the negation of 1.

The following examples will play an important role in our further studies. The third one, by the way, uses our new criterion for 'non-convergent'.

Example:

 $\frac{1}{n}\to 0$ [5.4.6]

For a given $\epsilon >0$ we take an ${n}_{0}>\frac{1}{\epsilon }$. Then we have for all $n\ge {n}_{0}$: $n>\frac{1}{\epsilon }$ and thus $|\frac{1}{n}-0|=\frac{1}{n}<\epsilon$.

 $c\to c$  for each constant sequence $\left(c\right)$ [5.4.7]

If $\epsilon >0$, all sequence members with $n\ge {n}_{0}≔1$ satisfy $|c-c|=0<\epsilon$.

 $\left(n\right)$ is divergent [5.4.8]

We have to show: No real number $g\in ℝ$ is a limit of $\left(n\right)$. But for each g the set $\left\{n\in {ℕ}^{\ast }|n\ge g+1\right\}$ is infinite, i.e. the special $\epsilon$-neighbourhood $\left]g-1,g+1\right[$ misses infinitely many sequence members. With [5.4.5] we conclude: $n\not\to g$.
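Both arguments translate directly into a numerical check. The Python sketch below (our own illustration, with a deliberately generous ${n}_{0}$) mirrors [5.4.6] and the escape argument of [5.4.8]:

```python
import math

# [5.4.6]: given eps, any n0 > 1/eps works for 1/n -> 0.
for eps in (0.5, 0.05, 0.005):
    n0 = math.ceil(1 / eps) + 1          # certainly larger than 1/eps
    assert all(abs(1 / n - 0) < eps for n in range(n0, n0 + 500))

# [5.4.8]: for any candidate limit g, the neighbourhood ]g-1, g+1[ is left
# for good -- among the first 100 members of (n), only finitely many stay inside.
g = 17.3
inside = [n for n in range(1, 101) if g - 1 < n < g + 1]
assert inside == [17, 18]                # the remaining 98 members lie outside
```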

Even without our proof the divergence of $\left(n\right)$ is quite obvious: the natural numbers 'break away to the right'; they cannot eventually concentrate around any fixed point. There are many divergent sequences with similar behaviour. The next example however shows a divergent sequence that is bounded. In that case divergence is due to the fact that the sequence can't decide where to go.

Example:

 $\left({\left(-1\right)}^{n}\right)$ is divergent [5.4.9]

Proof:  Again we use our criterion [5.4.5].

The special $\epsilon$-neighbourhood $\left]1-1,1+1\right[=\left]0,2\right[$ of 1 misses infinitely many sequence members, namely all with an odd index. This already shows that 1 is no limit of $\left({\left(-1\right)}^{n}\right)$.

Now, if g is an arbitrary real number different from 1, we have $r≔|g-1|>0$. As r is the distance from g to 1, the neighbourhood $\left]g-r,g+r\right[$ misses the number 1 and therefore all of the infinitely many sequence members with an even index. So g is no limit of $\left({\left(-1\right)}^{n}\right)$ either.

Having no limit at all, this sequence is divergent.
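The two-step argument - every candidate g has a neighbourhood missing infinitely many members - can be mirrored numerically. In this Python sketch the helper `members_outside` (a name of our own) counts the escaping indices among the first 1000:

```python
def members_outside(g, eps, up_to=1000):
    """Indices n <= up_to whose member (-1)**n lies outside ]g-eps, g+eps[."""
    return [n for n in range(1, up_to + 1) if abs((-1) ** n - g) >= eps]

# g = 1, radius 1: the neighbourhood ]0, 2[ misses all 500 odd-indexed members.
assert len(members_outside(1, 1)) == 500

# any g != 1, radius r = |g - 1|: all 500 even-indexed members escape as well.
for g in (-1.0, 0.5, 3.2):
    r = abs(g - 1)
    assert len(members_outside(g, r)) >= 500
```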

The convergence of a sequence describes the behaviour of the sequence members 'in the long run', so the 'first' members are of no significance in this respect. More precisely we have the following:

Proposition:  Let $\left({a}_{n}\right)$ be an arbitrary sequence. For each  $k\in ℕ$  the following equivalence holds:

 ${a}_{n}\to g⇔{a}_{n+k}\to g$ [5.4.10]

Proof:

"$⇒$":  Let $\epsilon >0$ be given. As ${a}_{n}\to g$ we get an ${n}_{0}\in {ℕ}^{\ast }$ such that $|{a}_{n}-g|<\epsilon$ for all $n\ge {n}_{0}$. From $n+k\ge n\ge {n}_{0}$ we get for these n a fortiori: $|{a}_{n+k}-g|<\epsilon$. This proves ${a}_{n+k}\to g$.

"$⇐$":  Let's start again with an $\epsilon >0$. Now we have ${a}_{n+k}\to g$, so we get an ${n}_{0}\in {ℕ}^{\ast }$ such that $|{a}_{n+k}-g|<\epsilon$ for all $n\ge {n}_{0}$. As all $n\ge {n}^{\ast }≔{n}_{0}+k$ satisfy $n-k\ge {n}_{0}$, we have for these n: $|{a}_{n}-g|=|{a}_{\left(n-k\right)+k}-g|<\epsilon$. This establishes the convergence ${a}_{n}\to g$.
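The index shift in [5.4.10] is easy to watch numerically. The Python sketch below (our own illustration) compares $\left(\frac{1}{n}\right)$ with its shifted companion $\left(\frac{1}{n+k}\right)$; an ${n}_{0}$ that works for the original sequence also works for the shifted one, whose members are even closer to the limit 0:

```python
import math

k = 5   # an arbitrary fixed shift

for eps in (0.1, 0.01):
    n0 = math.ceil(1 / eps) + 1   # an n0 that works for the original sequence 1/n
    assert all(abs(1 / n) < eps for n in range(n0, n0 + 200))
    # the same n0 works for the shifted sequence 1/(n + k) as well
    assert all(abs(1 / (n + k)) < eps for n in range(n0, n0 + 200))
```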

From our examples we see that the basic task in proving a certain convergence is to solve the inequality $|{a}_{n}-g|<\epsilon$ for n. With more complicated sequences this quickly leads to intractable problems. It is helpful to note that it isn't necessary to solve the inequality equivalently; it suffices instead to squeeze the distance $|{a}_{n}-g|$ below $\epsilon$ by estimation. We illustrate this in our final example.

The next section will provide more comfortable utilities that often will replace efforts like the following.

 Example:   $\frac{6{n}^{2}+2n-1}{3{n}^{2}-n}\to 2$

Proof:  For an arbitrary $\epsilon >0$ we take an ${n}_{0}>\frac{2}{\epsilon }$. Then we get for all $n\ge {n}_{0}$:

$\begin{array}{ll}|\frac{6{n}^{2}+2n-1}{3{n}^{2}-n}-2|\hfill & =|\frac{6{n}^{2}+2n-1-6{n}^{2}+2n}{3{n}^{2}-n}|\hfill \\ \hfill & =\frac{4n-1}{3{n}^{2}-n}\hfill \\ \hfill & <\frac{4n}{3{n}^{2}-n}=\frac{4}{3n-1}\hfill \\ \hfill & \le \frac{4}{2n}=\frac{2}{n}<\epsilon \text{,}\hfill \end{array}$

where we used $3n-1\ge 2n$ for all $n\ge 1$ in the next-to-last step and $n\ge {n}_{0}>\frac{2}{\epsilon }$ in the last one.
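Once more a quick numerical confirmation (our own Python sketch) of the bound used in the proof, taking ${n}_{0}$ just above $\frac{2}{\epsilon }$:

```python
import math

def a(n):
    return (6 * n**2 + 2 * n - 1) / (3 * n**2 - n)

for eps in (0.5, 0.05, 0.005):
    n0 = math.ceil(2 / eps) + 1     # certainly larger than 2/eps
    # from n0 onwards the distance to 2 stays below eps, as the estimate promises
    assert all(abs(a(n) - 2) < eps for n in range(n0, n0 + 500))
```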
