1. Limits

Definition: limit of a sequence

    \[\left(\lim _{n \rightarrow \infty} x_{n}=A\right):=\forall \varepsilon>0\quad \exists N \in \mathbb{N}\quad \forall n>N\quad \left(\left|x_{n}-A\right|<\varepsilon\right)\]

We say that the sequence {x_n} converges to A or tends to A and write x_n \to A as n \to \infty.

Definition: fundamental or Cauchy sequence

A sequence {x_n} is called a fundamental or Cauchy sequence if for any \varepsilon > 0 there exists an index N \in \mathbb N such that |x_m - x_n| < \varepsilon whenever n > N and m > N.

Theorem: Weierstrass

In order for a nondecreasing sequence to have a limit, it is necessary and sufficient that it be bounded above.

Two important limits

    \[\text { e }:=\lim _{n \rightarrow \infty}\left(1+\frac{1}{n}\right)^{n}\]

    \[\lim_{x\to 0}\frac{\sin x}{x}=1\]
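A quick numerical sketch of both limits (finite n and x, so the values only approximate the limits):

```python
import math

n = 10**6
e_approx = (1 + 1/n)**n          # -> e as n -> infinity
x = 1e-6
sinc = math.sin(x) / x           # -> 1 as x -> 0

print(e_approx, sinc)
```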

Definition 3. inferior limit and superior limit

    \[\varliminf_{k \rightarrow \infty} x_{k}:=\lim _{n \rightarrow \infty} \inf _{k \geq n} x_{k}\]

    \[\varlimsup_{k \rightarrow \infty} x_{k}:=\lim _{n \rightarrow \infty} \sup _{k \geq n} x_{k}\]
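For example, x_k = (-1)^k (1 + 1/k) has \varliminf x_k = -1 and \varlimsup x_k = 1. A finite-sample sketch, approximating \inf_{k \geq n} and \sup_{k \geq n} over a late tail:

```python
# x_k = (-1)^k * (1 + 1/k): limsup = 1, liminf = -1.
xs = [(-1)**k * (1 + 1/k) for k in range(1, 2001)]

def tail_inf(seq, n):   # inf over the finite tail seq[n:]
    return min(seq[n:])

def tail_sup(seq, n):   # sup over the finite tail seq[n:]
    return max(seq[n:])

liminf = tail_inf(xs, 1900)   # index 1900 corresponds to k = 1901
limsup = tail_sup(xs, 1900)
```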

Theorem 2. Stolz

Let {\displaystyle (a{n}){n\geq 1}} and {\displaystyle (b{n}){n\geq 1}} be two sequences of real numbers. Assume that {\displaystyle (b{n}){n\geq 1}} is a strictly monotone and divergent sequence (i.e. strictly increasing and approaching{\displaystyle +\infty } , or strictly decreasing and approaching -\infty) and the following limit exists:

    \[\lim_ {n\to \infty }{\frac {a_{n+1}-a_{n}}{b_{n+1}-b_{n}}}=l\]

Then also

    \[\lim _{n\to \infty }{\frac {a_{n}}{b_{n}}}=l\]
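A sketch of Stolz in action on the illustrative choice a_n = 1^2 + \cdots + n^2, b_n = n^3: the difference quotient (n+1)^2/(3n^2+3n+1) tends to 1/3, so the theorem predicts a_n/b_n \to 1/3.

```python
# a_n = 1^2 + ... + n^2, b_n = n^3 (strictly increasing, divergent).
# (a_{n+1}-a_n)/(b_{n+1}-b_n) = (n+1)^2 / (3n^2+3n+1) -> 1/3,
# so by Stolz a_n / b_n -> 1/3 as well.
n = 10**5
a_n = sum(k * k for k in range(1, n + 1))
b_n = n**3
ratio = a_n / b_n
```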

Theorem 3. Toeplitz limit theorem

Suppose that for n, k \in \mathbb N^{+} we have t_{nk} \geq 0 and

    \[\sum_{k=1}^{n}{t_{nk}} = 1,\quad \lim_{n \rightarrow \infty}{t_{nk}} = 0\]

and

    \[\lim_{n \rightarrow \infty}{a_{n}} = a\]

Let

    \[x_{n} = \sum_{k=1}^{n}{t_{nk}a_{k}}\]

Then

    \[\lim_{n \rightarrow \infty}{x_{n}} = a\]

By taking t_{nk}=\frac{1}{n}, we immediately recover the Cauchy proposition (convergence of arithmetic means).

By taking t_{n k}=\frac{b_{k+1}-b_{k}}{b_{n+1}-b_{1}}, we immediately recover the Stolz theorem.
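A numerical sketch of the first specialization (Cesàro means), using the illustrative sequence a_k = 1 + 1/k \to 1:

```python
# Toeplitz weights t_nk = 1/n: if a_k -> a, then the arithmetic means
# x_n = (a_1 + ... + a_n)/n also tend to a.
n = 10**5
a = [1 + 1/k for k in range(1, n + 1)]   # a_k -> 1
x_n = sum(a) / n                          # Cesàro mean, -> 1
```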

Stirling’s formula

Specifying the constant in the \mathcal O(\ln n) error term of \ln n! = n\ln n - n + \mathcal O(\ln n) gives \frac12 \ln(2\pi n), yielding the more precise formula:

    \[n!\sim {\sqrt {2\pi n}}\left({\frac {n}{e}}\right)^{n}\]
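A quick check of the asymptotic: the ratio n!/\left(\sqrt{2\pi n}\,(n/e)^n\right) tends to 1 (in fact it is roughly 1 + 1/(12n)):

```python
import math

# Ratio of n! to Stirling's approximation; should be slightly above 1.
n = 50
ratio = math.factorial(n) / (math.sqrt(2 * math.pi * n) * (n / math.e)**n)
```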

2. Continuity

Definition 0

A function f is continuous at the point a if for any neighbourhood V(f(a)) of its value f(a) at a there is a neighbourhood U(a) of a whose image under the mapping f is contained in V(f(a)).

3. Differential calculus

Definition 0

The number

    \[f^{\prime}(a)=\lim _{E \ni x \rightarrow a} \frac{f(x)-f(a)}{x-a}\]

is called the derivative of the function f at a.

Definition 1

A function f : E \to \mathbb R defined on a set E \subset \mathbb R is differentiable at a point x \in E that is a limit point of E if f(x + h) - f(x) = A(x)h + \alpha(x;h), where h \mapsto A(x)h is a linear function in h and \alpha(x;h) = o(h) as h \to 0, x + h \in E.

Definition 2

The function h \mapsto A(x)h of Definition 1, which is linear in h, is called the differential of the function f : E \to \mathbb R at the point x \in E and is denoted \mathrm d f(x) or \mathrm D f(x). Thus, \mathrm d f(x)(h) = A(x)h.

We obtain

    \[\frac{\mathrm{d} f(x)(h)}{\mathrm{d} x(h)}=f^{\prime}(x)\]

We denote by T\mathbb R(x_0) or T\mathbb R_{x_0} the set of all displacement vectors from the point x_0 along the x-axis. Similarly, we denote by T\mathbb R(y_0) or T\mathbb R_{y_0} the set of all displacement vectors from the point y_0 along the y-axis. It can then be seen from the definition of the differential that the mapping

    \[\mathrm{d} f\left(x_{0}\right): T \mathbb{R}\left(x_{0}\right) \rightarrow T \mathbb{R}\left(f\left(x_{0}\right)\right)\]

given by h \mapsto f^{\prime}(x_0)h is linear.

The derivative of an inverse function

If a function f is differentiable at a point x_0 and its differential \mathrm{d} f\left(x_{0}\right): T \mathbb{R}\left(x_{0}\right) \rightarrow T \mathbb{R}\left(y_{0}\right) is invertible at that point, then the differential of the function f^{-1} inverse to f exists at the point y_0 = f(x_0) and is the mapping

    \[\mathrm{d} f^{-1}\left(y_{0}\right)=\left[\mathrm{d} f\left(x_{0}\right)\right]^{-1}: T \mathbb{R}\left(y_{0}\right) \rightarrow T \mathbb{R}\left(x_{0}\right)\]

inverse to \mathrm{d} f\left(x_{0}\right): T \mathbb{R}\left(x_{0}\right) \rightarrow T \mathbb{R}\left(y_{0}\right).

Derivatives of some common functions

  1.     \[(C)^{\prime}=0\]

  2.     \[\left(x^{\mu}\right)^{\prime}=\mu x^{\mu-1}\]

  3.     \[(\sin x)^{\prime}=\cos x\]

  4.     \[(\cos x)^{\prime}=-\sin x\]

  5.     \[(\tan x)^{\prime}=\sec ^{2} x\]

  6.     \[(\cot x)^{\prime}=-\csc ^{2} x\]

  7.     \[(\sec x)^{\prime}=\sec x \tan x\]

  8.     \[(\csc x)^{\prime}=-\csc x \cot x\]

  9.     \[\left(a^{x}\right)^{\prime}=a^{x} \ln a \quad(a>0, a \neq 1)\]

  10.     \[\left(\mathrm{e}^{x}\right)^{\prime}=\mathrm{e}^{x}\]

  11.     \[\left(\log _{a} x\right)^{\prime}=\frac{1}{x \ln a}(a>0, a \neq 1)\]

  12.     \[(\ln x)^{\prime}=\frac{1}{x}\]

  13.     \[(\arcsin x)^{\prime}=\frac{1}{\sqrt{1-x^{2}}}\]

  14.     \[(\arccos x)^{\prime}=-\frac{1}{\sqrt{1-x^{2}}}\]

  15.     \[(\arctan x)^{\prime}=\frac{1}{1+x^{2}}\]

  16.     \[(\operatorname{arccot} x)^{\prime}=-\frac{1}{1+x^{2}}\]

  17.     \[(\sinh x)'= \cosh x\]

  18.     \[(\cosh x)'=\sinh x\]

  19.     \[(\tanh x)' =\frac{1}{\cosh ^{2} x}\]

  20.     \[(\operatorname{coth} x)' =-\frac{1}{\sinh ^{2} x}\]

  21.     \[(\operatorname{arsinh} x)'=\left(\ln \left(x+\sqrt{1+x^{2}}\right)\right)' = \frac{1}{\sqrt{1+x^{2}}}\]

  22.     \[(\operatorname{arcosh} x)'=(\ln \left(x \pm \sqrt{x^{2}-1}\right))'= \pm \frac{1}{\sqrt{x^{2}-1}}\]

  23.     \[(\operatorname{artanh} x)'=(\frac{1}{2} \ln \frac{1+x}{1-x})' = \frac{1}{1-x^{2}}\]

  24.     \[(\operatorname{arcoth} x)'=(\frac{1}{2} \ln \frac{x+1}{x-1})' = \frac{1}{x^{2}-1}\]
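A finite-difference spot check of a few table entries, using a symmetric difference quotient (an illustrative numeric sketch, with O(h^2) truncation error):

```python
import math

# Symmetric difference quotient approximating f'(x).
def d(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
checks = [
    (d(math.sin, x),  math.cos(x)),        # entry 3
    (d(math.exp, x),  math.exp(x)),        # entry 10
    (d(math.atan, x), 1 / (1 + x * x)),    # entry 15
    (d(math.sinh, x), math.cosh(x)),       # entry 17
]
```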

L’Hôpital’s rule

The theorem states: for functions f and g which are differentiable on an open interval I except possibly at a point c contained in I, if \lim_{x\to c} f(x) = \lim_{x\to c} g(x) = 0 or \pm\infty, g'(x) \neq 0 on I \setminus \{c\}, and \lim_{x\to c} \frac{f'(x)}{g'(x)} exists, then

    \[\lim_{x\to c}{\frac {f(x)}{g(x)}}=\lim_{x\to c}{\frac {f'(x)}{g'(x)}}\]
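A numerical sketch on the 0/0 form (1-\cos x)/x^2: differentiating numerator and denominator gives \sin x/(2x), whose limit is 1/2 by the standard limit above.

```python
import math

x = 1e-4
original = (1 - math.cos(x)) / x**2   # -> 1/2 as x -> 0
after_rule = math.sin(x) / (2 * x)    # -> 1/2 as x -> 0
```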

Taylor’s theorem

Let k \geq 1 be an integer and let the function f :\mathbb R\to\mathbb R be k times differentiable at the point a \in\mathbb R. Then there exists a function R_k : \mathbb R \to\mathbb R such that

    \[f(a+x)=f(a)+f'(a)x+{\frac {f''(a)}{2!}}x^{2}+\cdots +{\frac {f^{(k)}(a)}{k!}}x^{k}+R_k(x;a)\]

and, when f^{(k+1)} exists and is integrable,

    \[R_{k}(x;a)=\int_{a}^{a+x} \frac{f^{(k+1)}(t)}{k !}(a+x-t)^{k} \mathrm d t\]



remainder term

Using little-o notation, R_{k}(x;a)=o\left(|x|^{k}\right) as x \rightarrow 0 (the Peano remainder term).

The Lagrange form of the remainder term (mean-value form):

    \[R_{k}(x;a)=\frac{f^{(k+1)}(\theta)}{(k+1) !}x^{k+1}\quad (\theta\in(a,a+x))\]
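A sketch with f = \exp at a = 0: the Lagrange form gives the explicit bound |R_k(x)| \leq e^{|x|}\,|x|^{k+1}/(k+1)!, which the actual error respects.

```python
import math

x, k = 0.5, 6
# Taylor polynomial of e^x at a = 0 of degree k.
poly = sum(x**i / math.factorial(i) for i in range(k + 1))
err = abs(math.exp(x) - poly)
# Lagrange bound: e^theta <= e^{|x|} for theta in (0, x).
bound = math.exp(abs(x)) * abs(x)**(k + 1) / math.factorial(k + 1)
```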

4. Integral



In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to the original function f.

Given F'(x)=f(x), the notation is

    \[\int f(x)\mathrm dx=F(x)\]

So all the antiderivatives of f form the family \{F(x)+C \mid C\in\mathbb R\}.

The two identities below follow immediately:

    \[\mathrm d \int f(x) \mathrm{d} x=f(x) \mathrm{d} x, \quad \int F^{\prime}(x) \mathrm{d} x=F(x)+C\]

Theorem: Integration by parts

    \[\int u(x)v'(x)\,\mathrm dx=u(x)v(x)-\int u'(x)v(x)\,\mathrm dx\]
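A sketch on \int_0^1 x e^x\,\mathrm dx with u = x, v' = e^x: the formula gives [x e^x]_0^1 - \int_0^1 e^x\,\mathrm dx = e - (e - 1) = 1, compared here against a midpoint-rule approximation of the original integral.

```python
import math

# Integration by parts: int_0^1 x e^x dx = [x e^x]_0^1 - int_0^1 e^x dx.
by_parts = 1 * math.e - (math.e - 1)   # = 1

# Midpoint-rule approximation of the left-hand side.
n = 10000
numeric = sum((k + 0.5) / n * math.exp((k + 0.5) / n) for k in range(n)) / n
```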

Example: Wallis product

The Wallis product for \pi, published in 1656 by John Wallis, states that

    \[\frac{\pi}{2}=\prod _{n=1}^{\infty }\frac{2n}{2n-1}\cdot \frac{2n}{2n+1}=\frac{2}{1}\cdot \frac{2}{3}\cdot \frac{4}{3}\cdot \frac{4}{5}\cdot \frac{6}{5}\cdot \frac{6}{7}\cdots\]
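The partial Wallis products converge (slowly) to \pi/2; a numeric sketch:

```python
import math

# Partial products of prod_{n>=1} (2n/(2n-1)) * (2n/(2n+1)) -> pi/2.
N = 100000
prod = 1.0
for n in range(1, N + 1):
    prod *= (2 * n / (2 * n - 1)) * (2 * n / (2 * n + 1))
```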

Simplifying Rational Fractions (Partial Fraction Decomposition)


If

    \[Q(z)=\left(z-z_{1}\right)^{k_{1}} \cdots\left(z-z_{p}\right)^{k_{p}}\]

and \frac{P(z)}{Q(z)} is a proper fraction, there exists a unique representation of the fraction in the form
    \[\frac{P(z)}{Q(z)}=\sum_{j=1}^{p}\left(\sum_{k=1}^{k_{j}} \frac{a_{j k}}{\left(z-z_{j}\right)^{k}}\right)\]

And if P(x) and Q(x) are polynomials with real coefficients and

    \[Q(x)=\left(x-x_{1}\right)^{k_{1}} \cdots\left(x-x_{l}\right)^{k_{l}}\left(x^{2}+p_{1} x+q_{1}\right)^{m_{1}} \cdots\left(x^{2}+p_{n} x+q_{n}\right)^{m_{n}}\]

there exists a unique representation of the proper fraction \frac{P(x)}{Q(x)} in the form

    \[\frac{P(x)}{Q(x)}=\sum_{j=1}^{l}\left(\sum_{k=1}^{k_{j}} \frac{a_{j k}}{\left(x-x_{j}\right)^{k}}\right)+\sum_{j=1}^{n}\left(\sum_{k=1}^{m_{j}} \frac{b_{j k} x+c_{j k}}{\left(x^{2}+p_{j} x+q_{j}\right)^{k}}\right)\]

where a_{jk}, b_{jk}, and c_{j k} are real numbers.

The summands with linear denominators integrate to logarithms and negative powers; for the quadratic denominators we get the recursion:

    \[\int \frac{\mathrm{d} x}{\left(x^{2}+a^{2}\right)^{m+1}}=\frac{1}{2 m a^{2}} \frac{x}{\left(x^{2}+a^{2}\right)^{m}}+\frac{2 m-1}{2 m a^{2}} \int \frac{\mathrm{d} x}{\left(x^{2}+a^{2}\right)^{m}}\]
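A sketch of the recursion with a = 1, starting from \int \frac{\mathrm dx}{x^2+a^2} = \frac1a\arctan\frac xa, evaluated on [0,1], where the exact value of \int_0^1 \frac{\mathrm dx}{(x^2+1)^2} is \frac14 + \frac\pi8:

```python
import math

# F(m, x) is an antiderivative of 1/(x^2+a^2)^m built by the recursion.
def F(m, x, a=1.0):
    if m == 1:
        return math.atan(x / a) / a
    mm = m - 1  # apply the recursion with exponent mm + 1 = m
    return (x / (2 * mm * a**2 * (x * x + a * a)**mm)
            + (2 * mm - 1) / (2 * mm * a**2) * F(mm, x, a))

value = F(2, 1.0) - F(2, 0.0)   # int_0^1 dx/(x^2+1)^2 = 1/4 + pi/8
```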

Primitives of the Form

    \[\int R(\cos x, \sin x)\mathrm dx\]

We make the change of variable t = \tan \frac{x} {2} . Since:

    \[\cos x=\frac{1-\tan ^{2} \frac{x}{2}}{1+\tan ^{2} \frac{x}{2}}, \qquad \sin x=\frac{2 \tan \frac{x}{2}}{1+\tan ^{2} \frac{x}{2}}\]

so that

    \[\mathrm{d} t=\frac{\mathrm{d} x}{2 \cos ^{2} \frac{x}{2}} \quad \Rightarrow\quad \mathrm{d} x=\frac{2 \mathrm{d} t}{1+\tan ^{2} \frac{x}{2}}=\frac{2\,\mathrm{d} t}{1+t^{2}}\]

It follows that

    \[\int R(\cos x, \sin x) \mathrm{d} x=\int R\left(\frac{1-t^{2}}{1+t^{2}}, \frac{2 t}{1+t^{2}}\right) \frac{2}{1+t^{2}} \mathrm{d} t\]
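A numeric sketch of the substitution on the illustrative integrand R(\cos x, \sin x)=\frac{1}{2+\cos x} over [0,\pi/2] (so t ranges over [0, \tan\frac\pi4] = [0,1]), comparing both sides with a midpoint rule:

```python
import math

def midpoint(f, a, b, n=20000):
    # Midpoint-rule approximation of int_a^b f.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

lhs = midpoint(lambda x: 1 / (2 + math.cos(x)), 0, math.pi / 2)
# After t = tan(x/2): cos x = (1-t^2)/(1+t^2), dx = 2 dt/(1+t^2).
rhs = midpoint(lambda t: (1 / (2 + (1 - t*t) / (1 + t*t))) * 2 / (1 + t*t),
               0, 1)
```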

Not only \sin and \cos can be expressed rationally in t=\tan\frac{\alpha}{2}; here are more such formulas:

    \[\tan \alpha=\frac{2 \tan \frac{\alpha}{2}}{1-\tan ^{2} \frac{\alpha}{2}}\]


    \[\cot \alpha=\frac{1-\tan ^{2} \frac{\alpha}{2}}{2 \tan \frac{\alpha}{2}}\]


    \[\sec \alpha=\frac{1+\tan ^{2} \frac{\alpha}{2}}{1-\tan ^{2} \frac{\alpha}{2}}\]


    \[\csc \alpha=\frac{1+\tan ^{2} \frac{\alpha}{2}}{2 \tan \frac{\alpha}{2}}\]


Riemann Sums


A partition P of a closed interval [a,b], a < b, is a finite system of points x_0,\cdots,x_n of the interval such that a = x_0 < x_1 <\cdots < x_n = b.

If a function f is defined on the closed interval [a, b] and (P, \xi) is a partition with distinguished points on this closed interval, the sum

    \[\sigma(f ; P, \xi):=\sum_{i=1}^{n} f\left(\xi_{i}\right) \Delta x_{i}\]

where \Delta x_i = x_i − x_{i−1}, is the Riemann sum of the function f corresponding to the partition (P, \xi) with distinguished points on [a,b].

The largest of the lengths of the intervals of the partition P , denoted \lambda(P), is called the mesh of the partition.

We define:

    \[\int_{a}^{b} f(x) \mathrm{d} x:=\lim _{\lambda(P) \rightarrow 0} \sum_{i=1}^{n} f\left(\xi_{i}\right) \Delta x_{i}\]
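A sketch with f(x)=x^2 on [0,1], a uniform partition and left-endpoint tags \xi_i = x_{i-1}: as the mesh 1/n \to 0 the Riemann sums tend to \int_0^1 x^2\,\mathrm dx = 1/3.

```python
# Riemann sum for f(x) = x^2 on [0, 1], uniform partition, left tags.
n = 100000
dx = 1.0 / n
sigma = sum((i * dx)**2 * dx for i in range(n))   # -> 1/3 as n -> infinity
```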

Integral mean value theorem

If f is a continuous function on the closed, bounded interval [a,b], then there is at least one number \xi in (a , b ) for which

    \[\int_{a}^{b} f(x) \mathrm{d} x=f(\xi)(b-a)\]

The second Integral mean value theorem

If f, g are continuous functions on the closed, bounded interval [a,b] and g is monotonic on [a,b], then there is at least one number \xi in (a, b) for which

    \[\int_{a}^{b}(f \cdot g)(x) \mathrm{d} x=g(a) \int_{a}^{\xi} f(x) \mathrm{d} x+g(b) \int_{\xi}^{b} f(x) \mathrm{d} x\]

Newton-Leibniz formula

Let f be a continuous real-valued function defined on a closed interval [a, b], and let F(x)=\int_{a}^{x} f(t)\,\mathrm{d} t. Then

    \[\frac{\mathrm{d}}{\mathrm{d} x} \int_{a}^{x} f(t) \mathrm{d} t=f(x), \quad \forall x \in[a, b]\]

and consequently \int_{a}^{b} f(x)\,\mathrm{d} x=\Phi(b)-\Phi(a) for any antiderivative \Phi of f.

Substitution Rule For Definite Integrals

Suppose f \in C[a, b], \varphi:[\alpha, \beta] \rightarrow[a, b] with \varphi^{\prime} \in \mathcal{R}[\alpha, \beta], \varphi(\alpha)=a, and \varphi(\beta)=b. Then

    \[\int_{a}^{b} f(x) \mathrm{d} x=\int_{\alpha}^{\beta} f(\varphi(t)) \varphi^{\prime}(t) \mathrm{d} t\]
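A sketch with the illustrative choice \varphi(t)=t^2 on [\alpha,\beta]=[0,1] (so \varphi(0)=0, \varphi(1)=1) and f=\exp, comparing both sides with a midpoint rule:

```python
import math

def midpoint(f, a, b, n=20000):
    # Midpoint-rule approximation of int_a^b f.
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# int_0^1 e^x dx  vs  int_0^1 e^{t^2} * 2t dt  (phi(t) = t^2).
lhs = midpoint(math.exp, 0, 1)
rhs = midpoint(lambda t: math.exp(t * t) * 2 * t, 0, 1)
```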