Finite difference

A numerical approximation to the derivative of a function

Why

  • Not always possible to analytically differentiate a function.
  • Sometimes the "function" is a complicated procedure.
  • Useful in optimization, simulations, and data analysis.
  • Enables real-time computation where analytical methods are cumbersome.

Method

We use Taylor expansion to approximate the function and then solve for the derivative:

For a forward difference, the Taylor expansion is:

$$f(x_0+\Delta x) \approx f(x_0) + f'(x_0)\Delta x$$

Solving for the derivative gives:

$$f'(x_0) \approx \frac{f(x_0+\Delta x) - f(x_0)}{\Delta x}$$

For a backward difference, the Taylor expansion is:

$$f(x_0-\Delta x) \approx f(x_0) - f'(x_0)\Delta x$$

Solving for the derivative gives:

$$f'(x_0) \approx \frac{f(x_0) - f(x_0-\Delta x)}{\Delta x}$$

For a central difference, we subtract the backward Taylor expansion from the forward one, so the \(f(x_0)\) terms cancel:

$$f(x_0+\Delta x) - f(x_0-\Delta x) \approx 2 f'(x_0) \Delta x$$

Solving for the derivative gives:

$$f'(x_0) \approx \frac{f(x_0+\Delta x) - f(x_0-\Delta x)}{2\Delta x}$$
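The three formulas above translate directly into code. Here is a minimal Python sketch (the function and variable names are our own, not prescribed by the course):

```python
import math

def forward_diff(f, x0, dx):
    """Forward difference: (f(x0+dx) - f(x0)) / dx."""
    return (f(x0 + dx) - f(x0)) / dx

def backward_diff(f, x0, dx):
    """Backward difference: (f(x0) - f(x0-dx)) / dx."""
    return (f(x0) - f(x0 - dx)) / dx

def central_diff(f, x0, dx):
    """Central difference: (f(x0+dx) - f(x0-dx)) / (2 dx)."""
    return (f(x0 + dx) - f(x0 - dx)) / (2 * dx)

# Example: derivative of sin at x0 = 1; the exact value is cos(1) ≈ 0.5403.
x0, dx = 1.0, 1e-5
print(forward_diff(math.sin, x0, dx))   # close to cos(1)
print(central_diff(math.sin, x0, dx))   # closer still
```

Each function takes `f` as a plain Python callable, so the same code works whether `f` is a formula or a complicated procedure.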

All three formulas use 2 evaluations of \(f\), but for the forward and backward differences one of them is \(f(x_0)\) itself: if \(f(x_0)\) is already known, they need only 1 additional evaluation, whereas the central difference needs 2. Which method is best depends on the situation:

  • Interval where \(f\) is defined (evaluating \(f(x_0+\Delta x)\) or \(f(x_0-\Delta x)\) might not always be possible).
    • Example: at the endpoints of a time-series data set, use a forward difference at the first point and a backward difference at the last point.
  • If the evaluation of \(f\) is computationally expensive and you need \(f(x_0)\) anyway, the forward or backward difference reuses it and might be more appropriate.
    • Example: in a simulation where each evaluation of \(f\) requires solving a computationally expensive set of equations, forward or backward differences minimize the number of new evaluations needed.
  • The central difference usually provides a more accurate approximation, especially for small \(\Delta x\), at the cost of evaluating \(f\) on both sides of \(x_0\).
    • Example: in computational fluid dynamics, where small errors can propagate and amplify, central differences give more accurate results in the interior of the domain.
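The endpoint situation from the first bullet comes up whenever we differentiate sampled data. A minimal sketch (our own helper, assuming uniformly spaced samples): central differences in the interior, a forward difference at the first point, and a backward difference at the last, since no data exists outside the interval.

```python
import math

def diff_samples(y, dx):
    """Differentiate a uniformly sampled signal y with spacing dx."""
    n = len(y)
    d = [0.0] * n
    d[0] = (y[1] - y[0]) / dx                    # forward at the left endpoint
    d[-1] = (y[-1] - y[-2]) / dx                 # backward at the right endpoint
    for i in range(1, n - 1):
        d[i] = (y[i + 1] - y[i - 1]) / (2 * dx)  # central in the interior
    return d

dx = 0.01
y = [math.sin(i * dx) for i in range(100)]
dy = diff_samples(y, dx)  # approximates cos(i * dx) at each sample
```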

Error estimate for forward difference

Use Taylor expansion again: $$\frac{f(x_0+\Delta x) - f(x_0)}{\Delta x} = \frac{f(x_0)+\Delta x f'(x_0)+\frac12\Delta x^2 f''(x_0)+\mathcal{O}(\Delta x^3) - f(x_0)}{\Delta x}$$

$$= f'(x_0)+\frac12 f''(x_0)\Delta x +\mathcal{O}(\Delta x^2)$$ The error is (to leading order) proportional to $\Delta x$.

Error estimate for backward difference

Similarly for the backward difference: $$\frac{f(x_0) - f(x_0-\Delta x)}{\Delta x} = \frac{f(x_0)- \left( f(x_0) -\Delta x f'(x_0)+\frac12\Delta x^2 f''(x_0)+\mathcal{O}(\Delta x^3) \right)}{\Delta x}$$ $$= f'(x_0)-\frac12 f''(x_0)\Delta x +\mathcal{O}(\Delta x^2)$$

The error is again (to leading order) proportional to $\Delta x$.

Error estimate for central difference

The central difference is better: $$\frac{f(x_0+\Delta x) - f(x_0-\Delta x)}{2\Delta x} = \frac{\left(f(x_0) +\Delta x f'(x_0)+\frac12\Delta x^2 f''(x_0)+\mathcal{O}(\Delta x^3)\right) - \left( f(x_0) -\Delta x f'(x_0)+\frac12\Delta x^2 f''(x_0)+\mathcal{O}(\Delta x^3) \right)}{2\Delta x}$$ $$ = f'(x_0)+\mathcal{O}(\Delta x^2)$$ The $f(x_0)$ and $f''(x_0)$ terms cancel, so the leading error term is proportional to $\Delta x^2$: halving $\Delta x$ reduces the error by a factor of four, not two.
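These error orders are easy to verify numerically. A small sketch (our own test function, $f(x)=e^x$ at $x_0=1$, where the exact derivative is $e$): shrinking $\Delta x$ by a factor of 10 should shrink the forward-difference error by about 10x, but the central-difference error by about 100x.

```python
import math

def forward_err(dx):
    """Absolute error of the forward difference for exp at x0 = 1."""
    return abs((math.exp(1 + dx) - math.exp(1)) / dx - math.exp(1))

def central_err(dx):
    """Absolute error of the central difference for exp at x0 = 1."""
    return abs((math.exp(1 + dx) - math.exp(1 - dx)) / (2 * dx) - math.exp(1))

for dx in (1e-1, 1e-2, 1e-3):
    print(dx, forward_err(dx), central_err(dx))
# Each 10x reduction in dx cuts the forward error ~10x (O(dx))
# and the central error ~100x (O(dx^2)).
```

Note that $\Delta x$ cannot be made arbitrarily small in floating-point arithmetic: below some threshold, rounding error in the subtraction dominates the truncation error.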

Assignment 1

We have now covered the material needed for the coding in the first assignment.

It’s worth having a go at it now to help you understand the material in the lectures with a real example.