5.5. Properties of Convergent Sequences

The first question to study is whether convergence has any implications for monotonicity and boundedness. The results, however, are meagre.

As we find monotone and non-monotone sequences among the convergent sequences as well as among the non-convergent ones, there is no connection between monotonicity and convergence.

The situation improves slightly with boundedness: though there are bounded divergent sequences, take e.g.  $\left({\left(-1\right)}^{n}\right)$, we can prove that convergence always implies boundedness.

Proposition:

 Each convergent sequence is bounded. [5.5.1]

Proof:  If ${a}_{n}\to g$, then according to [5.4.2] we find, especially for $\epsilon =1$, a number  ${n}_{0}\in {ℕ}^{\ast }$, such that

$|{a}_{n}-g|<1$, i.e. $g-1<{a}_{n}<g+1$,  for all $n\ge {n}_{0}$.

From that we get for all  $n\in {ℕ}^{\ast }$:

$\mathrm{min}\left\{g-1,{a}_{1},\dots ,{a}_{{n}_{0}-1}\right\}\le {a}_{n}\le \mathrm{max}\left\{g+1,{a}_{1},\dots ,{a}_{{n}_{0}-1}\right\}$.
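The construction in this proof can be traced numerically. The following sketch (assuming the sample sequence ${a}_{n}=1+\frac{{\left(-1\right)}^{n}}{n}$ with limit $g=1$; the variable names are ours) finds an ${n}_{0}$ for $\epsilon =1$ and checks the resulting min/max bounds:

```python
# Sketch of the proof of [5.5.1] for the assumed example
# a_n = 1 + (-1)^n / n, which converges to g = 1.
g = 1.0
a = [1 + (-1) ** n / n for n in range(1, 101)]  # a_1, ..., a_100

# Following the proof, pick epsilon = 1 and find an n0 with
# |a_n - g| < 1 for all n >= n0.
n0 = next(n for n in range(1, 101) if all(abs(x - g) < 1 for x in a[n - 1:]))

# The finitely many members a_1, ..., a_{n0-1} together with
# g - 1 and g + 1 bound the whole sequence.
lower = min([g - 1] + a[:n0 - 1])
upper = max([g + 1] + a[:n0 - 1])
assert all(lower <= x <= upper for x in a)
```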

One conclusion is a new criterion for divergence:

Unbounded sequences are always divergent.

Thus the divergence of e.g. $\left(n\right)$ is also due to its unboundedness.

There are certain connections between the position of the sequence members and the position of the limit. More precisely we have:

Proposition:  Let ${a}_{n}\to g$. Then we have for any  $c\in ℝ$:

 1.  $g>c⇒{a}_{n}>c$   for all n as of an appropriate ${n}_{0}$, and likewise  $g<c⇒{a}_{n}<c$   for all n as of an appropriate ${n}_{0}$. [5.5.2]
 2.  ${a}_{n}\ge c$  for all n  $⇒g\ge c$, and likewise  ${a}_{n}\le c$  for all n  $⇒g\le c$. [5.5.3]

Proof:  We only show the first variant in each case.

1. ► Let $g>c$, i.e. $r≔g-c>0$. As ${a}_{n}\to g$, the special $\epsilon$-neighbourhood  $\right]g-r,g+r\left[$  contains all sequence members from a certain ${n}_{0}$ onwards. In particular this means ${a}_{n}>g-r=c$ for all $n\ge {n}_{0}$.

2. ► Suppose $g<c$. According to 1. we may conclude ${a}_{n}<c$ for nearly all n. But this contradicts the premise ${a}_{n}\ge c$ for all n.

Consider:

• 1. is often used, especially in the case $c=0$, in the following way:
 $g\ne c⇒{a}_{n}\ne c$   for all n as of an appropriate ${n}_{0}$ [5.5.4]

• The example $0<\frac{1}{n}\to 0$ shows that 2. cannot be tightened by replacing e.g. $\ge$ by $>$.

• As  ${a}_{n}\to g⇔{a}_{n+k}\to g$,  we may, however, weaken the premise in 2. to 'for nearly all n'.

• Obviously both statements in 2. can be combined to:
 $c\le {a}_{n}\le d$  for all n  $⇒c\le g\le d$ [5.5.5]
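The mechanism behind 1. can be traced numerically. A minimal sketch, assuming the sample sequence ${a}_{n}=2+\frac{{\left(-1\right)}^{n}}{n}$ with $g=2$ and $c=1.5$ (both our choices):

```python
# Sketch of statement 1. ([5.5.2]) for the assumed example
# a_n = 2 + (-1)^n / n with g = 2 and c = 1.5.
g, c = 2.0, 1.5
r = g - c  # the proof's choice r = g - c > 0
a = [2 + (-1) ** n / n for n in range(1, 101)]

# All members from some n0 onwards lie in ]g - r, g + r[ ...
n0 = next(n for n in range(1, 101) if all(abs(x - g) < r for x in a[n - 1:]))
# ... and hence exceed c = g - r.
assert all(x > c for x in a[n0 - 1:])
```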

When testing sequences for convergence to zero we may confine ourselves to non-negative sequences, which is often technically convenient.

Proposition:  For any sequence $\left({a}_{n}\right)$ we have:

 ${a}_{n}\to 0⇔|{a}_{n}|\to 0$ [5.5.6]

Proof:  From the equality $|{a}_{n}-0|=|{a}_{n}|=||{a}_{n}|-0|$ we see that for each $\epsilon >0$ the following equivalence holds:

$|{a}_{n}-0|<\epsilon ⇔||{a}_{n}|-0|<\epsilon$.

And from that our assertion is immediate.

Let's take $\left(\frac{{\left(-1\right)}^{n}}{n}\right)$ as an example. From $|\frac{{\left(-1\right)}^{n}}{n}|=\frac{1}{n}\to 0$  we get:

$\frac{{\left(-1\right)}^{n}}{n}\to 0$
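This equivalence is easy to probe numerically; a sketch for the same example (the tolerance $\epsilon ={10}^{-3}$ is our choice):

```python
# Sketch of [5.5.6] for a_n = (-1)^n / n: for any tolerance eps,
# |a_n - 0| < eps and ||a_n| - 0| < eps hold for exactly the same n.
eps = 1e-3
a = [(-1) ** n / n for n in range(1, 5001)]
assert all((abs(x) < eps) == (abs(abs(x)) < eps) for x in a)

# The first index from which all members stay below eps:
n0 = next(n for n in range(1, 5001) if all(abs(x) < eps for x in a[n - 1:]))
```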

Approaching the limit of a convergent sequence has implications for the sequence members among themselves: their distances from each other also become arbitrarily small. The following property of a convergent sequence is called the Cauchy-Condition.

Proposition:  If ${a}_{n}\to g$ there is an  ${n}_{0}\in {ℕ}^{\ast }$ for each $\epsilon >0$ such that

 $|{a}_{n}-{a}_{m}|<\epsilon$   for all $n,m\ge {n}_{0}$ [5.5.7]

Proof:  At first we get an  ${n}_{0}\in {ℕ}^{\ast }$ such that

$|{a}_{n}-g|<\frac{\epsilon }{2}$  for all $n\ge {n}_{0}$.
Thus, using the triangle inequality, the following estimate holds for all $n,m\ge {n}_{0}\text{:}$

$|{a}_{n}-{a}_{m}|=|{a}_{n}-g+g-{a}_{m}|\le |{a}_{n}-g|+|{a}_{m}-g|<\frac{\epsilon }{2}+\frac{\epsilon }{2}=\epsilon$

It is interesting to ask whether the Cauchy-Condition alone implies convergence. In general the answer is 'no': there is a divergent sequence in $ℚ$ satisfying the Cauchy-Condition! In $ℝ$, however, the situation is special, and indeed real Cauchy-Sequences are always convergent, as we will prove in chapter 8.
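The rational counterexample can be sketched concretely. Assuming we take the decimal truncations of $\sqrt{2}$ (our choice of example), we get a sequence in $ℚ$ that satisfies the Cauchy-Condition but whose limit $\sqrt{2}$ is irrational:

```python
import math
from fractions import Fraction

# The decimal truncations of sqrt(2): 1, 1.4, 1.41, 1.414, ...
# Each member is rational, kept exact via Fraction.
a = [Fraction(int(math.sqrt(2) * 10 ** n), 10 ** n) for n in range(10)]

# Cauchy-type estimate: members with indices >= m differ by
# less than 10^(-m), so the Cauchy-Condition is satisfied.
assert all(abs(a[n] - a[m]) < Fraction(1, 10 ** min(n, m))
           for n in range(10) for m in range(10) if n != m)

# Yet no member squares to 2 - the limit sqrt(2) lies outside Q.
assert all(x * x != 2 for x in a)
```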

Finally we note the nesting theorem, a criterion that often copes successfully with 'difficult' sequences.

Proposition (nesting theorem):  Let $\left({a}_{n}\right)$, $\left({b}_{n}\right)$ and $\left({c}_{n}\right)$ be three sequences such that

${a}_{n}\le {b}_{n}\le {c}_{n}$  for all n.

If $\left({a}_{n}\right)$ and $\left({c}_{n}\right)$ are convergent to the same limit g, i.e. ${a}_{n}\to g$ and ${c}_{n}\to g$, then $\left({b}_{n}\right)$ also converges to g:

 ${b}_{n}\to g$ [5.5.8]

Proof:  We use the criterion [5.4.2]. Let $\epsilon >0$ be arbitrary. As ${a}_{n}\to g$, we find an  ${n}_{1}\in {ℕ}^{\ast }$ such that

$g-\epsilon <{a}_{n}$  for all $n\ge {n}_{1}$.
Similarly we get an  ${n}_{2}\in {ℕ}^{\ast }$ with

${c}_{n}<g+\epsilon$  for all $n\ge {n}_{2}$.
Thus for all $n\ge {n}_{0}≔\mathrm{max}\left\{{n}_{1},{n}_{2}\right\}$ the following estimates hold:

$g-\epsilon <{a}_{n}\le {b}_{n}\le {c}_{n}<g+\epsilon$,

and thus

$g-\epsilon <{b}_{n}<g+\epsilon$, i.e. $|{b}_{n}-g|<\epsilon$.

The latter estimate however proves the convergence ${b}_{n}\to g$.

Consider:

• 'As usual' the premise may be weakened to:  ${a}_{n}\le {b}_{n}\le {c}_{n}$  for nearly all n.

Example:

 $\frac{\mathrm{sin}n}{n}\to 0$ [5.5.9]

Proof:  All values of the sine lie between −1 and 1, so the following estimate holds for all n:

$-\frac{1}{n}\le \frac{\mathrm{sin}n}{n}\le \frac{1}{n}$

As both $-\frac{1}{n}\to 0$ and $\frac{1}{n}\to 0$, the nesting theorem yields our result.
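A numeric check of this nesting (the index ranges are our choice):

```python
import math

# Numeric check of the nesting -1/n <= sin(n)/n <= 1/n,
# whose outer bounds both tend to 0.
assert all(-1 / n <= math.sin(n) / n <= 1 / n for n in range(1, 10001))

# The squeezed sequence is already tiny for large n:
assert abs(math.sin(10000) / 10000) <= 1 / 10000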

This example is easily generalized. We only needed the boundedness of $\left(\mathrm{sin}n\right)$ and the fact that $\left(\frac{1}{n}\right)$ is a zero sequence.

Proposition:  If $\left({a}_{n}\right)$ is bounded and $\left({b}_{n}\right)$ convergent to zero, we have:

 ${a}_{n}\cdot {b}_{n}\to 0$ [5.5.10]

Proof:  Due to [5.5.6] we only need to show: $|{a}_{n}\cdot {b}_{n}|\to 0$.  As $\left({a}_{n}\right)$ is bounded we get an $s\in {ℝ}^{>0}$ such that $|{a}_{n}|\le s$ for all n. So we have:

$0\le |{a}_{n}\cdot {b}_{n}|=|{a}_{n}|\cdot |{b}_{n}|\le s\cdot |{b}_{n}|$

With [5.5.6] we see that both $\left({b}_{n}\right)$ and $\left(|{b}_{n}|\right)$ are zero sequences, as is $\left(s\cdot |{b}_{n}|\right)$ according to [5.6.5]. Furthermore we obviously have $0\to 0$, so that the assertion now follows from the nesting theorem.
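A numeric sketch of [5.5.10], assuming the example ${a}_{n}=\mathrm{cos}n$ (bounded by $s=1$) and ${b}_{n}=\frac{1}{n}$ (both our choices):

```python
import math

# a_n = cos(n) is bounded by s = 1, b_n = 1/n is a zero sequence;
# the products a_n * b_n are squeezed between 0 and s * |b_n|.
s = 1.0
products = [math.cos(n) / n for n in range(1, 10001)]
assert all(abs(p) <= s / n for n, p in zip(range(1, 10001), products))
assert abs(products[-1]) < 1e-3  # already close to 0
```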