MTH3007b Lecture 1
Me, in the lecture
zzzzz…
Various techniques can be used to find numerical solutions to analytical problems. However, there will always be some inaccuracy, so let's first establish how to measure it…
Approximations and Errors
There are three main ways to measure error:
- Order of magnitude: a function f(h) is of order g(h) as h → 0 if there exists a constant C > 0 such that |f(h)| ≤ C|g(h)| for sufficiently small h; also written in "big O" notation as f(h) = O(g(h)).
- Absolute error: defined as E_abs = |x̂ − x| for some approximation x̂ of a quantity x.
- Relative error: defined for nonzero values of x as E_rel = |x̂ − x| / |x|.
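As a quick sketch of these definitions in code (using 22/7 as an approximation of π is my own illustrative choice, not from the lecture):

```python
import math

x = math.pi        # the true quantity
x_hat = 22 / 7     # an approximation of it

abs_error = abs(x_hat - x)           # absolute error |x_hat - x|
rel_error = abs(x_hat - x) / abs(x)  # relative error, defined only for x != 0

print(abs_error)  # roughly 1.26e-3
print(rel_error)  # roughly 4.02e-4
```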
Order of Magnitude (Taylor series of cos(x) around x = 0)
We define the Taylor series of cosine as cos(x) = Σ_{n=0}^{∞} (−1)^n x^(2n) / (2n)! = 1 − x²/2! + x⁴/4! − x⁶/6! + ….
If we choose to use only the first two terms of the expansion, such that cos(x) ≈ 1 − x²/2, then we can determine that the inaccuracy is exactly the remaining terms: x⁴/4! − x⁶/6! + …. In the limit x → 0, we can write this inaccuracy as O(x⁴), and hence determine the constant (in this case by using the formula definition), or simply write…
cos(x) = 1 − x²/2 + O(x⁴), or even by determining that the error also satisfies the bound |error| ≤ C·x⁴, with a constant of C = 1/24 (the coefficient of the first neglected term) in this case.
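We can check the O(x⁴) claim numerically; a small sketch (the particular sample points are arbitrary):

```python
import math

def truncation_error(x):
    """Error of the two-term Taylor approximation cos(x) ≈ 1 - x**2/2."""
    return abs(math.cos(x) - (1 - x**2 / 2))

# If the error behaves like C * x**4 as x -> 0, the ratio error / x**4
# should settle at C = 1/24 ≈ 0.0416667, the first neglected coefficient.
for x in (0.1, 0.05, 0.025):
    print(x, truncation_error(x) / x**4)
```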
Differential Equations (RECAP)
Now that we can measure the accuracy of our numerical solutions, we might want to find some actual solutions. First, to recall some basic information on differential equations:
- An Ordinary Differential Equation (ODE) is an equation involving the derivative(s) of an unknown function with respect to one independent variable.
- A Partial Differential Equation (PDE) is an equation involving the derivative(s) of an unknown function with respect to multiple independent variables.
- The Order of a Differential Equation is the order of the highest derivative occurring in the equation; normally first or second order.
There are various real-world examples of each of these, such as the Navier-Stokes equation, Euler-Lagrange equations, chemistry rate equations, biology predator-prey equations, the Solow-Swan ODE in economics, or the Black-Scholes PDE in finance.
Often these equations will have families of solutions with parameter(s), not just a unique solution. To find a single solution, we impose side conditions: an initial condition, such as the value of the function at its starting point, or boundary conditions, such as the values of the function at two arbitrary, distinct points.
Finite Difference Method
We can then estimate ordinary derivatives by taking their analytical definition and approximating it with a finite difference:
y′(x) = lim_{h→0} (y(x + h) − y(x)) / h ≈ (y(x + h) − y(x)) / h, for a small, finite h > 0.
Essentially, this is the difference between a tangent (the exact analytical slope, touching the function at only one point) and a secant (the approximate numerical slope, crossing the function at two points, preferably very close together for accuracy).
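A short sketch of this secant (forward difference) approximation in code, using sin(x) as a test function (my own choice, since its exact derivative cos(x) is known):

```python
import math

def forward_difference(f, x, h):
    """Secant slope (f(x+h) - f(x)) / h, approximating the tangent slope f'(x)."""
    return (f(x + h) - f(x)) / h

# Compare against the exact derivative of sin, which is cos.
x = 1.0
for h in (0.1, 0.01, 0.001):
    error = abs(forward_difference(math.sin, x, h) - math.cos(x))
    print(h, error)  # the error shrinks roughly in proportion to h
```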
Explicit Euler Method
Using this Finite Difference Method, we can solve Ordinary Differential Equations (ODEs); for example, with the Euler method, we first replace the derivatives with finite differences.
For an initial value problem, the ODE can be written as y′ = f(x, y) with y(x₀) = y₀; replacing the derivative with a forward difference of step size h gives the update rule:
y_{n+1} = y_n + h·f(x_n, y_n), where x_n = x₀ + nh and y_n ≈ y(x_n).
This is called the explicit Euler method, or the forward Euler method, and we can calculate the total number of integration steps as N = (x_end − x₀)/h, and hence h = (x_end − x₀)/N. A smaller h hence naturally improves the accuracy, but at a computational cost.
Explore the rest of the notes separately from the lecture notes, or hope he recaps - the lecture notes aren't very well-written from this point.
However, this also introduces new types of error that we can quantify. For instance, the local truncation error: the error after one integration step due to truncating a function, for instance a Taylor series. Similarly, the global truncation error is the error due to integrating over the whole interval.
Both of these errors can be calculated directly using the "Big O" notation from before, and can then give us the order of a method: how the global truncation error varies with the integration step. For instance, the Euler method is a first-order algorithm: its local truncation error is O(h²) and its global truncation error is O(h).
If we wanted to program this, then we could use the following Python code:
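The listing itself did not survive into these notes; a minimal sketch, assuming the standard test problem dy/dt = −λy with y(0) = 1 (the ODE and parameter values are my own choices):

```python
import math

def explicit_euler(f, y0, t0, t_end, h):
    """Explicit (forward) Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    n_steps = int(round((t_end - t0) / h))
    t, y = t0, y0
    ys = [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return ys

# Test problem dy/dt = -lam * y, with exact solution y(t) = exp(-lam * t).
lam = 2.0
ys = explicit_euler(lambda t, y: -lam * y, y0=1.0, t0=0.0, t_end=1.0, h=0.01)
print(ys[-1], math.exp(-lam))  # numerical vs exact value at t = 1
```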
After using this algorithm however, you may observe that (for the test equation dy/dt = −λy) it becomes unstable for h > 2/λ and shows oscillatory behaviour for h > 1/λ; it isn't great for large timesteps!
Implicit Euler Method
Aside from this explicit method, we may also have an implicit relation where a dependent variable is not isolated in the equation; sometimes we can convert between the two, but this is not always possible.
In these cases, we can write the definition of a derivative slightly differently, replacing h with −h. This creates a completely different expression, but one that is evaluated identically under the limit:
y′(x) = lim_{h→0} (y(x) − y(x − h)) / h ≈ (y(x) − y(x − h)) / h.
This is again a finite difference, but a backward difference approximation (BDA) instead of a forward difference approximation (FDA).
For implicit relations, this can give rise to the implicit Euler method, or aptly named backward Euler method, similar to before (just shifting time forwards slightly to neaten the formula)…
y_{n+1} = y_n + h·f(x_{n+1}, y_{n+1}), which must now be solved for y_{n+1} at each step.
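For the linear test equation dy/dt = −λy, the implicit update can be rearranged by hand, so a sketch needs no root-finder (the test problem and parameter values are again my own choices):

```python
def implicit_euler_linear(lam, y0, n_steps, h):
    """Implicit (backward) Euler for dy/dt = -lam * y.
    The update y_{n+1} = y_n + h * (-lam * y_{n+1}) rearranges to
    y_{n+1} = y_n / (1 + h * lam), so each step is a single division."""
    y = y0
    for _ in range(n_steps):
        y = y / (1 + h * lam)
    return y

# Even with a large timestep (h = 0.5, lam = 10, so h*lam = 5), the solution
# decays monotonically towards zero, with no instability or oscillation:
print(implicit_euler_linear(lam=10.0, y0=1.0, n_steps=4, h=0.5))
```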
Pre-Lecture Notes from University Notes
- Missing due to how late the notes were released - no rough notes taken during class either, due to incredibly slow pace.