# 7.5. The Basic Representation Theorem

Testing a function for differentiability usually means looking for a continuous extension of its difference quotient function, a sometimes cumbersome procedure that nevertheless works for many functions. There are, however, serious difficulties with functions like sin, cos or exp, i.e. with functions too important to be ignored.

Fortunately we will succeed in these and similar cases using an equivalent criterion, the basic representation theorem, to be introduced in this chapter. This theorem will also significantly advance our subject.

Proposition:  Let $a\in A$ be an accumulation point of $A\subset ℝ$. For any function  $f:A\to ℝ$ we have:

 f  is differentiable at a $⇔$ there is a function  $r:A\to ℝ$, continuous at a, such that  $f=f\left(a\right)+\left(X-a\right)r$.
[7.5.1]

If  f is differentiable at a we have in addition:  r is uniquely determined and  $r\left(a\right)={f\phantom{\rule{0.05em}{0ex}}}^{\prime }\left(a\right)$.

Proof:  Firstly we note for any function  $r:A\to ℝ$ and any $x\ne a$ the equivalence:

$\begin{array}{ll}\hfill & f\left(x\right)=f\left(a\right)+\left(x-a\right)r\left(x\right)\hfill \\ ⇔\text{ }\hfill & r\left(x\right)=\frac{f\left(x\right)-f\left(a\right)}{x-a}\hfill \\ ⇔\text{ }\hfill & r\left(x\right)={m}_{a}\left(x\right)\hfill \end{array}$ [0]

"$⇒$":  Now let  f be differentiable at a which means that the difference quotient function ${m}_{\phantom{\rule{0.0em}{0ex}}a}$ is continuously extendable at a, let's say by  $r:A\to ℝ$. As r is continuous at a it remains to show that  f is representable as asserted. Both functions coincide in their domain and in their range, so we only need to prove that  $f\left(x\right)=f\left(a\right)+\left(x-a\right)r\left(x\right)$ for all $x\in A$.

• If $x=a$  this is obvious:  $f\left(a\right)=f\left(a\right)+\left(a-a\right)r\left(a\right)$ .

• If $x\ne a$  we have  $r\left(x\right)={m}_{\phantom{\rule{0.0em}{0ex}}a}\left(x\right)$, which is actually the desired result due to [0].

Furthermore, in this case we already know that  $r\left(a\right)={f\phantom{\rule{0.05em}{0ex}}}^{\prime }\left(a\right)$.

"$⇐$":  If  $f=f\left(a\right)+\left(X-a\right)r$ with a function r continuous at a the equivalence [0] guarantees that r is a continuous extension of ${m}_{\phantom{\rule{0.0em}{0ex}}a}$. In other words: ${m}_{\phantom{\rule{0.0em}{0ex}}a}$ is continuously extandable at a which proves  f to be differentiable at a.

Now, if $r,s:A\to ℝ$ are two functions representing  f we see from [0] that:

• $r\left(x\right)={m}_{\phantom{\rule{0.0em}{0ex}}a}\left(x\right)=s\left(x\right)$, and in particular $r\left(x\right)=s\left(x\right)$, for all $x\ne a$

and with f being differentiable at a we also have

• $r\left(a\right)={f\phantom{\rule{0.05em}{0ex}}}^{\prime }\left(a\right)=s\left(a\right)$

so that finally the identity $r=s$ holds.

Consider:

• The proof of [7.5.1] shows that the function r needed when representing  $f=f\left(a\right)+\left(X-a\right)r$ is just the continuous extension of the difference quotient function. If this extension is available the representation is easily written down. As an example we consider the square and the reciprocal function:

$\begin{array}{ll}\hfill {X}^{2}& ={a}^{2}+\left(X-a\right)\left(X+a\right)\hfill \\ \hfill \frac{1}{X}& =\frac{1}{a}+\left(X-a\right)\left(-\frac{1}{aX}\right)\hfill \end{array}$
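Both representations can be checked numerically, for instance in Python. The following sketch uses the arbitrary choices $a=2$ and a few sample points; the function names are ours, not the text's:

```python
# Numerical check of the two representations above (illustrative sketch).
# For f = X^2 the extension is r(x) = x + a; for f = 1/X it is r(x) = -1/(a*x).
a = 2.0

def r_square(x):
    # continuous extension of the difference quotient of X^2 at a
    return x + a

def r_recip(x):
    # continuous extension of the difference quotient of 1/X at a
    return -1.0 / (a * x)

for x in [1.5, 2.0, 2.7]:
    assert abs(x**2 - (a**2 + (x - a) * r_square(x))) < 1e-12
    assert abs(1/x - (1/a + (x - a) * r_recip(x))) < 1e-12

# r(a) equals the derivative at a:
assert abs(r_square(a) - 2*a) < 1e-12         # (X^2)'(a) = 2a
assert abs(r_recip(a) - (-1/a**2)) < 1e-12    # (1/X)'(a) = -1/a^2
```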

• Considering that  $r\left(a\right)={f\phantom{\rule{0.05em}{0ex}}}^{\prime }\left(a\right)$ the representation theorem reveals a surprising similarity between the function  f and its tangent function ${t}_{\phantom{\rule{0.0em}{0ex}}a}$ :

$\begin{array}{ll}\hfill f& =f\left(a\right)+\left(X-a\right)r\hfill \\ \hfill {t}_{\phantom{\rule{0.0em}{0ex}}a}& =f\left(a\right)+\left(X-a\right)r\left(a\right)\hfill \end{array}$

If we replace the actual values $f\left(x\right)$ by the values of the tangent function, which may be easier to compute, we can calculate the error:

$|f\left(x\right)-{t}_{\phantom{\rule{0.0em}{0ex}}a}\left(x\right)|=|x-a|\cdot |r\left(x\right)-r\left(a\right)|$

For continuity reasons this error will decrease as x approaches a. The linear function ${t}_{\phantom{\rule{0.0em}{0ex}}a}$ is called an approximation for  f at a, and  f itself is said to be linearly approximable at a.
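This shrinking error can be watched numerically. The following sketch uses the arbitrary choices $f = X^2$ and $a = 1$, for which $r(x) = x + a$:

```python
# Error of the tangent approximation for f = X^2 at a = 1 (arbitrary choice):
# t_a(x) = f(a) + (x - a) * f'(a) with f'(a) = 2a, and the error
# |f(x) - t_a(x)| = |x - a| * |r(x) - r(a)| shrinks like (x - a)^2.
a = 1.0
f = lambda x: x**2
t_a = lambda x: f(a) + (x - a) * 2 * a     # tangent function at a

prev_err = float("inf")
for h in [0.1, 0.01, 0.001]:
    x = a + h
    err = abs(f(x) - t_a(x))
    # the identity |f(x) - t_a(x)| = |x - a| * |r(x) - r(a)| with r(x) = x + a
    assert abs(err - abs(x - a) * abs((x + a) - 2 * a)) < 1e-12
    assert err < prev_err                  # error decreases as x approaches a
    prev_err = err
```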

• This approximation concept is extendable in the following sense:
A function  $f:A\to ℝ$ is called approximable of order k at an accumulation point $a\in A$ if there is a function $r:A\to ℝ$ continuous at a such that

$f=f\left(a\right)+{\left(X-a\right)}^{k}r$

We won't dwell on this higher approximation concept.

Continuity and differentiability are two basic concepts for functions. The representation theorem reveals how they are related.

Proposition:  Let $a\in A$ be an accumulation point of $A\subset ℝ$.

 If  $f:A\to ℝ$ is differentiable at a then  f is continuous at a. [7.5.2]

The converse does not hold.

Proof:  If  f  is differentiable at a we may take the representation

$f=f\left(a\right)+\left(X-a\right)r$

according to [7.5.1]. r is continuous at a, as are the constant function  $f\left(a\right)$ and the linear function $X-a$. Continuity of  f at a is thus guaranteed by the continuity theorems [6.3.1,3].

The absolute value function $|X|$ is continuous, but not differentiable at 0, which proves the converse to be false.

The absolute value function is continuous everywhere and fails to be differentiable at only one point, which is of course sufficient for a counterexample to [7.5.2]. It is quite a challenge to find continuous functions that are nowhere differentiable. The first to construct such a function was Karl Weierstrass in 1872.
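The failure of $|X|$ at 0 can be made concrete: its difference quotient function at 0 is $m_0(x) = |x|/x$, which jumps between $-1$ and $+1$ and so admits no continuous extension. A minimal Python check:

```python
# The difference quotient of |X| at 0 is m_0(x) = (|x| - |0|) / (x - 0) = |x|/x.
# It equals -1 for every x < 0 and +1 for every x > 0, so it has no
# continuous extension at 0, i.e. |X| is not differentiable there.
def m0(x):
    return abs(x) / x

assert all(m0(-10.0**-k) == -1.0 for k in range(1, 8))
assert all(m0(10.0**-k) == 1.0 for k in range(1, 8))
```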

The continuity of differentiable functions just established is not the only property resulting from the representation theorem [7.5.1]. There are many more, and as another example we will show that the inverse of an injective function  f is differentiable as well if  f is regular at a, i.e. if  ${f}^{\prime }\left(a\right)\ne 0$.

Proposition:  Let the injective function  $f:A\to ℝ$ be differentiable at a. If  ${f}^{\prime }\left(a\right)\ne 0$ the inverse function  ${f}^{-1}:f\left(A\right)\to ℝ$ has the following properties:

 ${f}^{-1}$ is continuous at  $f\left(a\right)$. [7.5.3]

 ${f}^{-1}$ is differentiable at  $f\left(a\right)$ with $\left({f}^{-1}{\right)}^{\prime }\left(f\left(a\right)\right)=\frac{1}{{f}^{\prime }\left(a\right)}$. [7.5.4]

Proof:  We take the representation  $f=f\left(a\right)+\left(\mathrm{X}-a\right)r$  where r is continuous at a and, due to $r\left(a\right)={f}^{\prime }\left(a\right)$, satisfies $r\left(a\right)\ne 0$.

1.  For any sequence $\left(f\left({a}_{n}\right)\right)$ in  $f\left(A\right)$ converging to  $f\left(a\right)$ we see that

$\left({a}_{n}-a\right)r\left({a}_{n}\right)=f\left({a}_{n}\right)-f\left(a\right)\to 0$

As $\mathrm{lim}r\left({a}_{n}\right)=r\left(a\right)\ne 0$ we conclude that ${a}_{n}-a\to 0$ and thus we have

${f}^{-1}\left(f\left({a}_{n}\right)\right)={a}_{n}\to a={f}^{-1}\left(f\left(a\right)\right)$

2.  We first need to know that  $f\left(a\right)$ is an accumulation point of  $f\left(A\right)$: As a is an accumulation point of A (otherwise  f would not be differentiable at a) we find a sequence $\left({a}_{n}\right)$ in A with ${a}_{n}\ne a$ such that ${a}_{n}\to a$. With  f being injective and continuous at a we may thus conclude:  $f\left(A\right)\ni f\left({a}_{n}\right)\ne f\left(a\right)$ and $f\left({a}_{n}\right)\to f\left(a\right)$.

Further, as $r\circ {f}^{-1}$ is continuous at  $f\left(a\right)$, we get a relative ε-neighbourhood  $f{\left(A\right)}_{f\left(a\right),\epsilon }$ such that $r\left({f}^{-1}\left(x\right)\right)\ne 0$ for all $x\in f{\left(A\right)}_{f\left(a\right),\epsilon }$ . Using the identity

$x=f\circ {f}^{-1}\left(x\right)=\left(f\left(a\right)+\left(\mathrm{X}-a\right)r\right)\circ {f}^{-1}\left(x\right)=f\left(a\right)+\left({f}^{-1}\left(x\right)-a\right)r\left({f}^{-1}\left(x\right)\right)$

we thus find the following representation of  ${f}^{-1}$ on  $f{\left(A\right)}_{f\left(a\right),\epsilon }$ :

${f}^{-1}\left(x\right)=a+\frac{x-f\left(a\right)}{r\left({f}^{-1}\left(x\right)\right)}={f}^{-1}\left(f\left(a\right)\right)+\left(x-f\left(a\right)\right)\frac{1}{r\left({f}^{-1}\left(x\right)\right)}$

As $\frac{1}{r\circ {f}^{-1}}$ is continuous at  $f\left(a\right)$ we find due to [7.5.1] that  ${f}^{-1}$ is differentiable at  $f\left(a\right)$ with

$\left({f}^{-1}{\right)}^{\prime }\left(f\left(a\right)\right)=\frac{1}{r\circ {f}^{-1}\left(f\left(a\right)\right)}=\frac{1}{r\left(a\right)}=\frac{1}{{f}^{\prime }\left(a\right)}$

as its derivative.
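For a numerical illustration of [7.5.4] take, say, the injective function $f = X^3 + X$ with $f'(x) = 3x^2 + 1 > 0$ and $a = 1$; the function, the point and the bisection inverse below are all our own choices for this sketch:

```python
# Check (f^-1)'(f(a)) = 1 / f'(a) for f = X^3 + X at a = 1:
# f'(a) = 3 + 1 = 4, so the expected value is 1/4.
f = lambda x: x**3 + x
a = 1.0

def f_inv(y, lo=-10.0, hi=10.0):
    # bisection inverse: f is strictly increasing on [lo, hi],
    # so the preimage of y is well defined
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

h = 1e-6
# central difference quotient of f^-1 at f(a)
approx = (f_inv(f(a) + h) - f_inv(f(a) - h)) / (2.0 * h)
assert abs(approx - 1.0 / 4.0) < 1e-4
```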

Later on we will see that functions regular on an entire interval are automatically injective. This is no longer true, not even locally, for functions that are only pointwise regular.

We will employ the representation theorem again to obtain more results on differentiable functions in the next chapters. Now we use the theorem to settle differentiability for an important class of functions.

Proposition:  Each limit function  $f=\sum _{i=0}^{\infty }{a}_{i}{\left(\mathrm{X}-a\right)}^{i}$ of a convergent power series is differentiable at the expansion point a with

 ${f}^{\prime }\left(a\right)={a}_{1}$ [7.5.5]

Proof:  According to [5.11.9] the power series $\left(\sum _{i=0}^{n}{a}_{i}{\left(\mathrm{X}-a\right)}^{i}\right)$ converges for every x in its domain of convergence. For those x, $x\ne a$, the series

$\left(\sum _{i=1}^{n}{a}_{i}{\left(x-a\right)}^{i-1}\right)=\left(\frac{1}{x-a}\sum _{i=1}^{n}{a}_{i}{\left(x-a\right)}^{i}\right)$

converges as well with the limit function  $r≔\sum _{i=1}^{\infty }{a}_{i}{\left(\mathrm{X}-a\right)}^{i-1}$  being continuous due to [6.2.18]. Representing  f as

$f={a}_{0}+\sum _{i=1}^{\infty }{a}_{i}{\left(\mathrm{X}-a\right)}^{i}=f\left(a\right)+\left(\mathrm{X}-a\right)\sum _{i=1}^{\infty }{a}_{i}{\left(\mathrm{X}-a\right)}^{i-1}=f\left(a\right)+\left(\mathrm{X}-a\right)r$

and using [7.5.1] we thus see that  f is differentiable at a with  ${f}^{\prime }\left(a\right)=r\left(a\right)={a}_{1}$.

[7.5.5] guarantees the differentiability for a wide class of functions: An analytical function  $f:A\to ℝ$ coincides at each $a\in A$ with the limit function of a convergent power series expanded at a. Thus we have:

 Every analytical function  $f:A\to ℝ$ is differentiable at each accumulation point $a\in A$ of its domain.
[7.5.6]

Using the rearrangement theorem for convergent power series yields a considerable extension of [7.5.5]: limit functions of convergent power series are differentiable on the whole of their domain of convergence, and the derivative is calculated by an easy scheme.

Proposition:   The limit function  $f=\sum _{i=0}^{\infty }{a}_{i}{\left(\mathrm{X}-a\right)}^{i}$ of a convergent power series is differentiable at each point b of its domain of convergence and

 ${f}^{\prime }\left(b\right)=\sum _{i=1}^{\infty }{a}_{i}\phantom{\rule{0.1em}{0ex}}i\phantom{\rule{0.1em}{0ex}}{\left(b-a\right)}^{i-1}$ [7.5.7]

Proof:  According to the rearrangement theorem [5.11.20] there is a convergent power series $\left(\sum _{j=0}^{n}{b}_{j}{\left(\mathrm{X}-b\right)}^{j}\right)$ such that the limit function $g=\sum _{j=0}^{\infty }{b}_{j}{\left(\mathrm{X}-b\right)}^{j}$ coincides with  f on a neighbourhood of  b. g, and thus  f as well, is differentiable at b due to [7.5.5] with

${f}^{\prime }\left(b\right)={g}^{\prime }\left(b\right)={b}_{1}$

Taking the coefficients ${b}_{j}=\sum _{i=j}^{\infty }{a}_{i}\left(\phantom{T}\begin{array}{c}i\\ j\end{array}\right)\phantom{T}{\left(b-a\right)}^{i-j}$ from the proof of [5.11.20] we thus get:

${f}^{\prime }\left(b\right)={b}_{1}=\sum _{i=1}^{\infty }{a}_{i}\left(\phantom{T}\begin{array}{c}i\\ 1\end{array}\right)\phantom{T}{\left(b-a\right)}^{i-1}=\sum _{i=1}^{\infty }{a}_{i}\phantom{\rule{0.1em}{0ex}}i\phantom{\rule{0.1em}{0ex}}{\left(b-a\right)}^{i-1}$
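As a sketch of [7.5.7], consider the geometric series $\sum {X}^{i}$ (our choice; expansion point $a=0$, all coefficients ${a}_{i}=1$), whose limit function on $\left(-1,1\right)$ is $\frac{1}{1-X}$ with derivative $\frac{1}{{\left(1-b\right)}^{2}}$ at b:

```python
# Term-by-term differentiation of the geometric series at b = 0.5:
# sum_{i>=1} i * b^(i-1) should equal 1/(1-b)^2 = 4.
b = 0.5
N = 200    # truncation index; the tail is negligible for |b| = 0.5

series_derivative = sum(i * b**(i - 1) for i in range(1, N))
assert abs(series_derivative - 1.0 / (1.0 - b)**2) < 1e-12
```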

exp, sin and cos are analytical functions (cf. [5.12.4]) and thus differentiable at each $b\in ℝ$. Recalling that

$\begin{array}{l}\mathrm{exp}=\sum _{i=0}^{\infty }\frac{1}{i!}{\mathrm{X}}^{i}\hfill \\ \mathrm{sin}=\sum _{i=0}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i+1\right)!}{\mathrm{X}}^{2i+1}\hfill \\ \mathrm{cos}=\sum _{i=0}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i\right)!}{\mathrm{X}}^{2i}\hfill \end{array}$

(see [5.11.12] for details) we may calculate their derivatives using [7.5.7]. Note that the value of the first summand in [7.5.7] equals ${a}_{1}{\left(b-a\right)}^{0}$. To account for this with the sine we set the initial value $i=0$ when proving 2. For the cosine we have ${a}_{1}=0$, so that the derivative in 3. correctly starts with ${a}_{2}{\left(b-a\right)}^{1}$.

Proposition:

 ${\mathrm{exp}}^{\prime }\left(b\right)=\mathrm{exp}b$ [7.5.8]

 ${\mathrm{sin}}^{\prime }\left(b\right)=\mathrm{cos}b$ [7.5.9]

 ${\mathrm{cos}}^{\prime }\left(b\right)=-\mathrm{sin}b$ [7.5.10]

Proof:

1.  ${\mathrm{exp}}^{\prime }\left(b\right)=\sum _{i=1}^{\infty }\frac{1}{i!}\phantom{\rule{0.1em}{0ex}}i\phantom{\rule{0.1em}{0ex}}{b}^{i-1}=\sum _{i=1}^{\infty }\frac{1}{\left(i-1\right)!}\phantom{\rule{0.1em}{0ex}}{b}^{i-1}=\sum _{i=0}^{\infty }\frac{1}{i!}\phantom{\rule{0.1em}{0ex}}{b}^{i}=\mathrm{exp}b$

2.  ${\mathrm{sin}}^{\prime }\left(b\right)=\sum _{i=0}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i+1\right)!}\left(2i+1\right)\phantom{\rule{0.1em}{0ex}}{b}^{2i}=\sum _{i=0}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i\right)!}\phantom{\rule{0.1em}{0ex}}{b}^{2i}=\mathrm{cos}b$

3.  ${\mathrm{cos}}^{\prime }\left(b\right)=\sum _{i=1}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i\right)!}2i\phantom{\rule{0.1em}{0ex}}{b}^{2i-1}=\sum _{i=1}^{\infty }\frac{{\left(-1\right)}^{i}}{\left(2i-1\right)!}\phantom{\rule{0.1em}{0ex}}{b}^{2i-1}=\sum _{i=0}^{\infty }\frac{{\left(-1\right)}^{i+1}}{\left(2i+1\right)!}\phantom{\rule{0.1em}{0ex}}{b}^{2i+1}=-\mathrm{sin}b$
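The three calculations can be replayed numerically by differentiating the truncated series term by term; the evaluation point $b = 0.7$ and the truncation index below are arbitrary choices for this sketch:

```python
import math

# Term-by-term derivatives of the truncated power series of exp, sin and cos,
# compared against [7.5.8]-[7.5.10] at b = 0.7.
b, N = 0.7, 30

exp_deriv = sum(i * b**(i - 1) / math.factorial(i) for i in range(1, N))
sin_deriv = sum((-1)**i * (2*i + 1) * b**(2*i) / math.factorial(2*i + 1)
                for i in range(N))
cos_deriv = sum((-1)**i * 2*i * b**(2*i - 1) / math.factorial(2*i)
                for i in range(1, N))

assert abs(exp_deriv - math.exp(b)) < 1e-12   # exp' = exp
assert abs(sin_deriv - math.cos(b)) < 1e-12   # sin' = cos
assert abs(cos_deriv + math.sin(b)) < 1e-12   # cos' = -sin
```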
