
7. Integral calculus
====================

7.1 Definition and examples
---------------------------

Definition: A function f: [a, b] -> CC is called piecewise continuous if there is a decomposition of the interval [a, b] into finitely many subintervals such that f is continuous on each subinterval. Formally, there must be points a0, ..., an with a = a0 < a1 < ... < an = b such that f is continuous on each open subinterval (a_(k-1), a_k) and has one-sided limits at the points a0, ..., an.

Definition: Let f: [a, b] -> W <= CC be a function that is piecewise continuous on the interval [a, b]. The Riemann integral is then defined as the following limit:

  int(f(x) dx, x = a..b) := lim_(N->oo) sum(f(a + k*(b-a)/N) * (b-a)/N, k = 0..N-1)

One can show that this limit always exists. We also write int(f(x) dx, a..b).

Examples:

  int(x dx, 0..b) = lim_(N->oo) sum(k*b/N * b/N, k = 0..N-1)
                  = lim_(N->oo) b^2/N^2 * sum(k, k = 0..N-1)
                  = lim_(N->oo) b^2/N^2 * N*(N-1)/2 = b^2/2.

  int(x^2 dx, 0..b) = lim_(N->oo) sum((k*b/N)^2 * b/N, k = 0..N-1)
                    = lim_(N->oo) b^3/N^3 * sum(k^2, k = 0..N-1) = 1/3 b^3.

The integral is interpreted geometrically as the area under the graph of f over the interval a..b. (Sketch: the area of the "curvilinear triangle" below the normal parabola between 0 and 1 is int(x^2 dx, x = 0..1) = 1/3.) The curvilinear area is approximated by the areas of increasingly narrow rectangles of height f(a + k*(b-a)/N) and width (b-a)/N.

Comment: Usually the Riemann integral is first defined for arbitrary functions as a limit of sums as above, although the sample points need not be chosen equidistantly. If the limit exists regardless of the choice of sample points, provided that their maximal distance (mesh size) tends to zero, the function is called "Riemann integrable". One can then show that a function is Riemann integrable if and only if it is bounded on the interval [a, b] and its set of discontinuities has measure zero (Lebesgue criterion); in particular, at most countably many discontinuities are always allowed. For a Riemann-integrable function the equidistant subdivisions are of course sufficient.
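The equidistant Riemann sums above can be checked numerically. The following sketch (the helper `riemann_sum` is my own name, not from the text) evaluates sum(f(a + k*(b-a)/N) * (b-a)/N, k = 0..N-1) and compares it with the closed forms b^2/2 and b^3/3:

```python
def riemann_sum(f, a, b, N):
    """Equidistant left Riemann sum with N rectangles of width (b-a)/N."""
    h = (b - a) / N
    return sum(f(a + k * h) * h for k in range(N))

b = 2.0
approx_x  = riemann_sum(lambda x: x,     0.0, b, 100_000)
approx_x2 = riemann_sum(lambda x: x * x, 0.0, b, 100_000)
print(approx_x,  b**2 / 2)   # both close to 2.0
print(approx_x2, b**3 / 3)   # both close to 8/3
```

For the monomials the error visibly shrinks like 1/N, matching the exact computation via sum(k) = N(N-1)/2.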
There are also more general notions of the integral (today's standard is the Lebesgue integral), which assign an integral to further functions, in particular to functions defined on an open or unbounded interval such as [0, oo[, or to nowhere continuous functions such as the Dirichlet function

  d(x) = if x rational then 1 else 0

For our purposes (and the vast majority of cases that occur in practice) the definition of the integral for piecewise continuous functions is sufficient.

Another example: int(exp(x) dx, x = 0..b). First we determine

  sum(exp(k*b/N), k = 0..N-1) = sum(exp(b/N)^k, k = 0..N-1)
    = (exp(N*b/N) - 1) / (exp(b/N) - 1)    /* geometric series */
    = (exp(b) - 1) / (exp(b/N) - 1)

So sum(exp(k*b/N) * b/N, k = 0..N-1) = b/N * (exp(b) - 1) / (exp(b/N) - 1). The limit of this for N->oo is exp(b) - 1 (since (exp(b/N) - 1)/(b/N) -> 1), so int(exp(x) dx, x = 0..b) = exp(b) - 1.

Theorem: For a <= b <= c we have (provided the occurring expressions are defined at all)

  int(f(x) dx, x = a..b) + int(f(x) dx, x = b..c) = int(f(x) dx, x = a..c)

Reason: Intuitively this is clear; for a rigorous proof one has to be a little careful, because combining two equidistant subdivisions does not in general yield an equidistant subdivision.

Theorem (mean value theorem of integral calculus): If f: [a, b] -> RR is continuous (not merely piecewise!), then there exists x0: [a, b] such that

  int(f(x) dx, x = a..b) = f(x0) * (b-a)

Illustration: f(x0) * (b-a) is the area of a rectangle of width b-a and height f(x0). First one chooses the height so that this area equals the integral; then the intermediate value theorem yields a suitable x-value.

The following more general version of the mean value theorem also holds:

Theorem: Let f and g be continuous on [a, b] and g(x) >= 0. Then there is x0: [a, b] with

  int(f(x)*g(x) dx, x = a..b) = f(x0) * int(g(x) dx, x = a..b).
Proof: If int(g(x) dx, x = a..b) = 0, then g(x) = 0 on all of [a, b] (exercise) and x0 can be chosen arbitrarily. So let int(g(x) dx, x = a..b) > 0, let m be the absolute minimum of f on [a, b] and M the maximum. Set

  mu := int(f(x)*g(x) dx, x = a..b) / int(g(x) dx, x = a..b).

Because of

  m * int(g(x) dx, x = a..b) <= int(f(x)*g(x) dx, x = a..b) <= M * int(g(x) dx, x = a..b)

(here one uses g(x) >= 0), mu lies between m and M and is attained by f according to the intermediate value theorem. That yields the desired x0. QED

The following example shows that the assumption int(g(x) dx, x = a..b) != 0 instead of g(x) >= 0 is not sufficient. Set f(x) = x, g(x) = x^3, a = -1, b = 1.1:

  int(x * x^3 dx, x = -1..1.1) = [1/5 x^5]_(-1)^(1.1) = 0.52..
  int(x^3 dx, x = -1..1.1)     = [1/4 x^4]_(-1)^(1.1) = 0.12..

So f(x0) would have to be about 0.52/0.12 = 4.5, but f(x) <= 1.1 on the whole interval.

7.2 Integration and differentiation
-----------------------------------

Proposition: Let I be an interval (possibly open, possibly with endpoints -oo, oo) and f: I -> W continuous. For every a: I the function F: I -> W defined by F(x) = int(f(t) dt, t = a..x) is continuously differentiable on all of I, and F'(x) = f(x).

Proof sketch: F(x+h) - F(x) = int(f(t) dt, t = x..x+h) = h * f(xi) for some xi: [x..x+h] /* mean value theorem of integral calculus */. It follows that F'(x) = lim_(h->0) (F(x+h) - F(x))/h = f(x).

Definition: Let f be continuous. A function F with F' = f is called an *antiderivative* of f. The concept of an antiderivative can be extended to piecewise continuous functions f. In that case the antiderivative must be differentiable everywhere except at the jump points of f, and its derivative must agree with f there. Example: In this sense the absolute value function is an antiderivative of the signum function.
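The proposition F'(x) = f(x) can be illustrated numerically: build F by Riemann sums (the helper `F` below is my own construction for this sketch) and compare a difference quotient of F with f at a sample point.

```python
import math

def F(f, a, x, N=20_000):
    """Approximate F(x) = int(f(t) dt, t = a..x) by an equidistant Riemann sum."""
    h = (x - a) / N
    return sum(f(a + k * h) * h for k in range(N))

f = math.cos
x, h = 1.0, 1e-4
deriv = (F(f, 0.0, x + h) - F(f, 0.0, x)) / h  # difference quotient of F
print(deriv, math.cos(1.0))                    # both about 0.5403
```

Both the Riemann sums and the difference quotient introduce small errors, but the agreement with f(x) = cos(1) is clearly visible.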
From the above proposition the fundamental theorem of differential and integral calculus follows immediately: If F is an antiderivative of f, then

  int(f(x) dx, x = a..b) = F(b) - F(a)

Reason: Two antiderivatives can only differ by a constant, which cancels in the difference, and the function int(f(t) dt, t = a..x) itself is also an antiderivative. For the difference F(b) - F(a) one introduces the notation [F(x)]_a^b.

This fundamental theorem allows a very comfortable evaluation of integrals.

Examples: From d/dx x^s = s * x^(s-1) it follows that 1/(s+1) * x^(s+1) is an antiderivative of x^s (for s != -1). Hence

  int(x^s dx, x = a..b) = [1/(s+1) x^(s+1)]_a^b

For an antiderivative of f one writes int(f(x) dx) and calls this an "indefinite integral", in contrast to the previously introduced "definite integrals" with explicit integration limits. For example: int(x dx) = 1/2 * x^2. The notation should be used with some caution, because 1/2 * x^2 + 1 is also an antiderivative. One therefore often sees the notation int(f(x) dx) = F(x) + C, where C stands for an arbitrary constant.

The derivatives found so far yield the following further antiderivatives:

  int(sin(x) dx) = -cos(x)
  int(cos(x) dx) = sin(x)
  int(exp(x) dx) = exp(x)
  int(1/x dx) = ln(x), if x > 0
  int(1/x dx) = ln(-x), if x < 0; altogether: int(1/x dx) = ln(|x|)
  int(1/(1+x^2) dx) = arctan(x)
  int(1/sqrt(1-x^2) dx) = arcsin(x)

We also note that integration is a linear operation:

  int(f(x) + g(x) dx) = int(f(x) dx) + int(g(x) dx)
  int(lambda * f(x) dx) = lambda * int(f(x) dx)

7.3 Substitution rule
---------------------

The chain rule reads: If f(x) = h(g(x)), then f'(x) = h'(g(x)) * g'(x). Correspondingly, h(g(x)) is an antiderivative of h'(g(x)) * g'(x):

  int(h'(g(x)) * g'(x) dx) = h(g(x))

This is the *substitution rule*.
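Some entries of the antiderivative table can be checked against the fundamental theorem: a Riemann sum of f over [a, b] should match F(b) - F(a) for the listed antiderivative F (the helper `riemann` is my own name for this sketch).

```python
import math

def riemann(f, a, b, N=200_000):
    """Equidistant left Riemann sum approximating int(f(x) dx, x = a..b)."""
    h = (b - a) / N
    return sum(f(a + k * h) * h for k in range(N))

cases = [
    (math.sin, lambda x: -math.cos(x), 0.0, math.pi),  # int sin = -cos
    (math.exp, math.exp,               0.0, 1.0),      # int exp = exp
    (lambda x: 1 / (1 + x * x), math.atan, 0.0, 1.0),  # int 1/(1+x^2) = arctan
]
for f, F, a, b in cases:
    print(riemann(f, a, b), F(b) - F(a))  # each pair agrees closely
```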
The difficulty lies in the fact that the *integrand* must have exactly this format h'(g(x)) * g'(x) for suitable functions g, h in order for the rule to be applicable.

Examples:

  int(exp(lambda*x) dx) = 1/lambda int(exp(lambda*x) * lambda dx) = 1/lambda exp(lambda*x)
  int(exp(x^2) * x dx) = 1/2 * int(exp(x^2) * 2x dx) = 1/2 exp(x^2)

Sometimes the following symbolic calculation is useful as a mnemonic: From g'(x) = dg(x)/dx one obtains *formally* also g'(x) dx = dg(x), i.e.

  int(h(g(x)) * dg(x), x = a..b) = int(h(u) du, u = g(a)..g(b))

and likewise int(h(g(x)) * dg(x)) = int(h(u) du). With this one can calculate as follows:

  int(exp(x^2) * x dx) = 1/2 * int(exp(u) du) = 1/2 exp(u) = 1/2 exp(x^2)

Side calculation: Set u := x^2, so du/dx = 2x, i.e. du = 2x dx, i.e. x dx = 1/2 du.

Further examples:

  int(tan(x) dx) = int(sin(x)/cos(x) dx) = -int(1/u du) = -ln(|u|) = -ln(|cos(x)|).

Side calculation: u = cos(x), so du = -sin(x) dx.

Sometimes one has to apply the substitution rule in the opposite direction: If an antiderivative of f is sought, g has an inverse function (possibly after restriction to a suitable interval), and h(y) = int f(g(y)) * g'(y) dy, then for the antiderivative F(x) = int f(x) dx one has F(g(y)) = h(y), i.e. F(x) = h(g^(-1)(x)).

Example: We seek an antiderivative of f(x) = sqrt(1-x^2) (defined on [-1,1]). We choose g(y) = sin(y) on [-pi/2, pi/2] with inverse function arcsin: [-1,1] -> [-pi/2, pi/2]. We get

  int f(g(y)) g'(y) dy = int sqrt(1-sin(y)^2) * cos(y) dy = int |cos(y)| * cos(y) dy
    = int cos(y)^2 dy = 1/2 * (y + sin(y)*cos(y))    /* by guessing */

So int sqrt(1-x^2) dx = 1/2 * (arcsin(x) + x * sqrt(1-x^2)). In particular

  int(sqrt(1-x^2) dx, -1..1) = arcsin(1) = pi/2    (area of half the unit disk).
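The antiderivative found by reverse substitution can be sanity-checked by differentiating it numerically: a central difference quotient of F(x) = 1/2*(arcsin(x) + x*sqrt(1-x^2)) should reproduce f(x) = sqrt(1-x^2).

```python
import math

F = lambda x: 0.5 * (math.asin(x) + x * math.sqrt(1 - x * x))
f = lambda x: math.sqrt(1 - x * x)

h = 1e-6
for x in (-0.5, 0.0, 0.3, 0.8):
    central = (F(x + h) - F(x - h)) / (2 * h)  # numerical derivative of F
    print(x, central, f(x))                    # central agrees with f(x)
```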
This version of the substitution rule can also be noted conveniently via the formal calculation with dx, dy:

  int sqrt(1-x^2) dx = ...
    /* substitution x = sin(y), dx/dy = cos(y), i.e. dx = cos(y) dy */
    ... = int sqrt(1-sin(y)^2) cos(y) dy = int cos(y)^2 dy = ...

One can also use the substitution rule for integrals with integration limits ("definite integrals") if the limits are substituted accordingly:

  int(f'(g(x)) * g'(x) dx, x = a..b) = [f(g(x))]_(x=a)^b = [f(u)]_(u=g(a))^(g(b))

Or in the Leibniz calculus:

  int(f'(g(x)) * g'(x) dx, x = a..b) = int(f'(u) du, u = g(a)..g(b)) = [f(u)]_(u=g(a))^(g(b))

Example:

  int(exp(x^2) * x dx, x = 0..2) = 1/2 * int(exp(u) du, u = 0..4) = [1/2 exp(u)]_0^4.
  int(exp(x^2) * x dx, x = -1..1) = 1/2 * int(exp(u) du, u = 1..1) = 0.

When the substitution rule is applied in reverse, the inverse function is again required and, if necessary, the integral must be split up.

  int(sqrt(1-x^2) dx, x = -1..1)
    = /* x = sin(phi), phi = arcsin(x), dx = cos(phi) dphi */
    int(cos(phi)^2 dphi, phi = -pi/2..pi/2)
    = [1/2 * (phi + sin(phi)*cos(phi))]_(-pi/2)^(pi/2) = pi/2

7.4 Partial fraction decomposition
----------------------------------

The following example may explain the method of partial fraction decomposition: We seek an antiderivative of f(x) = 1/(x^2-x-2). First one factors the denominator into linear factors: x^2-x-2 = (x+1)(x-2). Then one makes the ansatz

  1/((x+1)(x-2)) = A/(x+1) + B/(x-2)

Multiplying by the common denominator leads to A(x-2) + B(x+1) = 1, i.e. (coefficient comparison!): A+B = 0, -2A+B = 1, hence A = -1/3, B = 1/3. So

  int(1/(x^2-x-2) dx) = 1/3 * int(-1/(x+1) + 1/(x-2) dx) = 1/3 * (ln(|x-2|) - ln(|x+1|)).

The two fractions -1/(x+1) and 1/(x-2) are called *partial fractions*. This approach always works whenever the denominator of degree d also has d distinct real zeros.
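A partial fraction decomposition is easy to verify by evaluating both sides at a few sample points. For the ansatz 1/((x+1)(x-2)) = A/(x+1) + B/(x-2), the coefficient comparison gives A = -1/3, B = 1/3, and a quick point check confirms it:

```python
# Check 1/((x+1)(x-2)) == (-1/3)/(x+1) + (1/3)/(x-2) at points away from the poles.
for x in (0.0, 1.0, 3.0, -2.5):
    lhs = 1.0 / ((x + 1.0) * (x - 2.0))
    rhs = (-1.0 / 3.0) / (x + 1.0) + (1.0 / 3.0) / (x - 2.0)
    print(x, lhs, rhs)  # lhs and rhs agree at every sample point
```

Since both sides are rational functions of degree < 2 in the numerator over the same denominator, agreement at two points already forces equality; sampling four points is just a robust spot check.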
If there are complex zeros, one must use partial fractions that have the corresponding quadratic factors as denominators (and linear terms Cx + D in the numerators). Alternatively, one can calculate formally with complex numbers. If a zero a has higher multiplicity, partial fractions with denominators (x-a)^i for i from 1 up to the multiplicity of the zero must be used.

Example: We seek an antiderivative of 1/((x-1)^2 * (x^2+2x+3)) (zeros of the denominator: 1 (double), -1 + i*sqrt(2), -1 - i*sqrt(2)). Ansatz:

  1/((x-1)^2 * (x^2+2x+3)) = A/(x-1) + B/(x-1)^2 + (Cx+D)/(x^2+2x+3)
  1 = A(x-1)(x^2+2x+3) + B(x^2+2x+3) + (Cx+D)(x-1)^2
  1 = A(x^3+x^2+x-3) + B(x^2+2x+3) + C(x^3-2x^2+x) + D(x^2-2x+1)

Coefficient comparison:

  A + C = 0              /* x^3 */
  A + B - 2C + D = 0     /* x^2 */
  A + 2B + C - 2D = 0    /* x  */
  -3A + 3B + D = 1       /* 1  */

Substitution and elimination give: A = -1/9, B = 1/6, C = 1/9, D = 1/6. The partial fraction decomposition:

  1/((x-1)^2 * (x^2+2x+3)) = -1/(9(x-1)) + 1/(6(x-1)^2) + (2x+3)/(18(x^2+2x+3))

So:

  int(1/((x-1)^2 * (x^2+2x+3)) dx)
    = -1/9 * ln|x-1| - 1/(6(x-1)) + 1/18 * ln|x^2+2x+3| + sqrt(2)/36 * arctan((x+1)/sqrt(2))

Side calculation:

  int(1/(x^2+2x+3) dx) = int(1/((x+1)^2+2) dx)       /* completing the square */
    = int(1/(y^2+2) dy)                              /* y = x+1 */
    = int((1/2)/((y/sqrt(2))^2+1) dy)
    = sqrt(2)/2 int(1/(z^2+1) dz)                    /* z = y/sqrt(2), dy = sqrt(2) dz */
    = 1/sqrt(2) arctan(z) = 1/sqrt(2) arctan((x+1)/sqrt(2))

  int(x/(x^2+2x+3) dx) = int(1/2 * (2x+2)/(x^2+2x+3) - 1/(x^2+2x+3) dx)
    = 1/2 * int(1/u du) - 1/sqrt(2) arctan((x+1)/sqrt(2))    /* u = x^2+2x+3 */
    = 1/2 * ln|x^2+2x+3| - 1/sqrt(2) arctan((x+1)/sqrt(2))

Further examples:

  int 1/(1-x^4) dx. Zeros of the denominator: 1, -1, i, -i, hence 1-x^4 = (1-x)(1+x)(1+x^2).

  int 1/(1+x^4) dx. Zeros of the denominator: a+ai, a-ai, -a+ai, -a-ai with a = 1/2 * sqrt(2). So 1+x^4 = (x^2 + sqrt(2) x + 1)(x^2 - sqrt(2) x + 1).

The latter example is astonishingly laborious and is said to have caused difficulties for Leibniz.

7.5 Partial integration
-----------------------

From the product rule of differentiation one obtains the following rule for integration, the so-called *partial integration* (integration by parts):

  u(x) * v(x) = int(u'(x) * v(x) dx) + int(u(x) * v'(x) dx)

or in a more useful form:

  int(u'(x) * v(x) dx) = u(x) * v(x) - int(u(x) * v'(x) dx)

One can use this rule if the integrand is a product such that an antiderivative of one of the factors is known. Unfortunately, partial integration does not solve the integral completely, but only reduces it to another integral, which can be easier, but also harder.
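Before working through the examples, the rule itself can be checked numerically in its definite form int(u'v, a..b) = u(b)v(b) - u(a)v(a) - int(uv', a..b). This sketch uses u(x) = sin(x) (so u' = cos) and v(x) = x; the helper `riemann` is my own name.

```python
import math

def riemann(g, a, b, N=200_000):
    """Equidistant left Riemann sum approximating int(g(x) dx, x = a..b)."""
    h = (b - a) / N
    return sum(g(a + k * h) * h for k in range(N))

# u(x) = sin(x) (so u' = cos), v(x) = x (so v' = 1), on [0, 2]
a, b = 0.0, 2.0
lhs = riemann(lambda x: math.cos(x) * x, a, b)                      # int u'v
rhs = math.sin(b) * b - math.sin(a) * a - riemann(math.sin, a, b)   # uv|_a^b - int uv'
print(lhs, rhs)  # both about 0.40
```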
Examples:

*) int(cos(x) * x dx)    /* u' = cos(x), u = sin(x), v = x, v' = 1 */
   = sin(x)*x - int(sin(x) * 1 dx) = sin(x)*x + cos(x) + C

*) int(ln(x) dx) = int(1 * ln(x) dx)    /* u' = 1, u = x, v = ln(x), v' = 1/x */
   = x*ln(x) - int(x/x dx) = x*ln(x) - x + C

*) int(arctan(x) dx) = x*arctan(x) - int(x/(1+x^2) dx) = x*arctan(x) - 1/2 ln(1+x^2) + C

*) int(cos(y)^2 dy) = sin(y)*cos(y) + int sin(y)^2 dy

   Now one should not integrate by parts again in the same way, otherwise one goes around in a circle. Instead:

   int(cos(y)^2 dy) = sin(y)cos(y) + int 1 - cos(y)^2 dy = sin(y)cos(y) + y - int cos(y)^2 dy

   hence int(cos(y)^2 dy) = 1/2 (y + sin(y)cos(y)) + C

*) Find I_m(x) = int 1/(1+x^2)^m dx.

   I_m(x) = int 1/(x^2+1)^m dx    /* u' = 1, v = 1/(x^2+1)^m */
     = x/(x^2+1)^m + 2m * int x^2/(1+x^2)^(m+1) dx    /* rewriting + cancelling */
     = x/(x^2+1)^m + 2m * int 1/(1+x^2)^m - 1/(1+x^2)^(m+1) dx
     = x/(x^2+1)^m + 2m * (I_m(x) - I_(m+1)(x))

   Side calculation: x^2/(1+x^2)^(m+1) = (1+x^2)/(1+x^2)^(m+1) - 1/(1+x^2)^(m+1) = 1/(1+x^2)^m - 1/(1+x^2)^(m+1).

   So: I_(m+1)(x) = 1/(2m) * ((2m-1) I_m(x) + x/(1+x^2)^m)

   With I_1(x) = arctan(x) the I_m can be computed successively in this way.

*) Determine J_m(x) = int sin(x)^m dx.

   J_0(x) = x
   J_1(x) = -cos(x)
   J_m(x) = -sin(x)^(m-1) * cos(x) + (m-1) * int sin(x)^(m-2) * cos(x)^2 dx
          = -sin(x)^(m-1) * cos(x) + (m-1) * int sin(x)^(m-2) * (1-sin(x)^2) dx
          = -sin(x)^(m-1) * cos(x) + (m-1) * (J_(m-2)(x) - J_m(x))

   So J_m(x) = -1/m cos(x) sin^(m-1)(x) + (m-1)/m J_(m-2)(x)

7.6 Improper integrals
----------------------

As is well known, the expression int(f(x) dx, x = a..b) is only defined if f is defined on [a, b].
So if a or b = +-oo, or if f is not defined at a or b, then the integral does not exist "in the proper sense". In some cases, however, the expression can be assigned a meaningful value by taking a limit, which is then referred to as an "improper integral".

Examples:

  int(1/x^2 dx, x = 1..oo) = lim_(z->oo) int(1/x^2, x = 1..z) = lim_(z->oo) [-1/x]_1^z = lim_(z->oo) (1 - 1/z) = 1

  int(1/sqrt(x) dx, x = 0..1) = lim_(z->0) int(1/sqrt(x), x = z..1) = lim_(z->0) [2*sqrt(x)]_z^1 = lim_(z->0) (2 - 2*sqrt(z)) = 2

  int(1/sqrt(1-x^2) dx, x = -1..1) = lim_(eps->0) (arcsin(1-eps) - arcsin(-1+eps)) = pi/2 - (-pi/2) = pi

Similarly: int(1/(1+x^2) dx, x = -oo..oo) = pi.

It is not always possible to evaluate definite (especially improper) integrals using antiderivatives.

Example: Determine int(sin(x)/x dx, x = 0..oo). Note that sin(x)/x can be continued continuously at x = 0. The antiderivative F(x) of f(x) = sin(x)/x with F(0) = 0 is called the "sinus integralis" and is denoted by Si(x). There is no closed formula for Si(x). So one is interested in lim_(x->oo) Si(x). We only prove that this limit exists.

  For x in [n*pi, (n+1)*pi], n even, sin(x)/x is positive;
  for x in [n*pi, (n+1)*pi], n odd, sin(x)/x is negative.

So lim_(x->oo) Si(x) = sum((-1)^n * int(|sin(x)/x| dx, x = n*pi..(n+1)*pi), n = 0..oo), and this series converges by the Leibniz criterion for alternating series. One can show (see Forster) that the limit is exactly pi/2.

An interesting application of improper integrals is the following convergence criterion for series (integral criterion).

Theorem: If f: [1, oo) -> RR^+ is a monotonically decreasing function and a_n = f(n), then sum(a_n, n = 1..oo) converges if and only if the improper integral int(f(x) dx, x = 1..oo) exists.

The proof results directly from the estimate a_n >= int(f(x), x = n..n+1) >= a_(n+1):

  int(f(x), x = 1..oo) <= a_1 + a_2 + a_3 + ...
  <= a_1 + int(f(x), x = 1..oo)

Example: The series sum(1/n^s, n = 1..oo) converges for s > 1, because

  int(1/x^s, x = 1..oo) = [x^(1-s)/(1-s)]_1^oo = 1/(s-1).

The estimate used in the proof can sometimes also be used directly: From int(1/x dx) = ln(x) one obtains the integral estimate

  sum(1/n, n = 2..N) <= ln(N) <= sum(1/n, n = 1..N-1)    (sketch!)

So 0 <= sum(1/n, n = 1..N) - ln(N) <= sum(1/n, n = 1..N) - sum(1/n, n = 2..N) = 1. If one sets gamma_N := sum(1/n, n = 1..N) - ln(N), then

  gamma_(N-1) - gamma_N = [ln(x)]_(x=N-1)^N - 1/N = int(1/x - 1/N dx, x = N-1..N) > 0

i.e. the sequence (gamma_N)_N is positive, bounded and monotonically decreasing and must therefore converge to some number. This number is called the Euler-Mascheroni constant and is denoted by gamma. We have gamma = 0.57721... It is not known whether gamma is rational, irrational or transcendental, nor whether it can be expressed in any way in terms of the "other numbers" such as pi, e, etc.

Application: Let

  S_N = sum(1/n, n = 1..N)
  A_N = sum((-1)^(n+1) * 1/n, n = 1..N)    (alternating harmonic series).

Then S_N = ln(N) + gamma + o(1) and

  A_(2N) = S_(2N) - S_N = ln(2N) + gamma - (ln(N) + gamma) + o(1) = ln(2) + o(1)

Example (N = 3):

  1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6
    = 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 - 2/2 - 2/4 - 2/6
    = 1 + 1/2 + 1/3 + 1/4 + 1/5 + 1/6 - 1 - 1/2 - 1/3
    = S_6 - S_3

So lim_(N->oo) A_N = ln(2).
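Both limits above can be illustrated numerically: gamma_N = H_N - ln(N) approaches the Euler-Mascheroni constant, and the alternating partial sums A_N approach ln(2).

```python
import math

N = 100_000
H = 0.0  # harmonic partial sum S_N
A = 0.0  # alternating harmonic partial sum A_N
for n in range(1, N + 1):
    H += 1.0 / n
    A += (-1.0) ** (n + 1) / n

print(H - math.log(N))  # gamma_N, about 0.57721 (off by roughly 1/(2N))
print(A)                # about ln(2) = 0.6931...
```

The slow 1/N-type convergence is clearly visible if one varies N, matching the o(1) terms in the derivation.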