# Finite difference approximations

At the heart of scientific computing is finding approximate solutions to problems arising in engineering and science. Such problems are almost always modeled by ordinary or partial differential equations. An example of an equation used to model mechanical vibrations causing sound is the wave equation:

\[
\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right).
\]

Here the solution \(u = u(x,y,z,t)\) can represent the displacement in a solid, the acoustic pressure or particle velocity, or, under certain conditions, the electric or magnetic field. The quantity \(c\) is the “speed of sound”.

The above equation is posed in (3+1) dimensions. In a single space dimension it takes the form

\[
\begin{aligned}
&\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}, && X_L \le x \le X_R, \; t \ge 0,\\
&u(x,0) = f(x), \quad \frac{\partial u}{\partial t}(x,0) = g(x), && X_L \le x \le X_R,\\
&u(X_L,t) = 0, \quad u(X_R,t) = 0, && t \ge 0.
\end{aligned}
\]

Here we have included initial conditions (at time \(t = 0\)) prescribing the initial displacement \(f(x)\) and velocity \(g(x)\), and boundary conditions at the left and right boundaries.

As a computer typically has a **finite amount of memory** we will have to restrict ourselves to finding approximate solutions to the equation at a **finite** number of \(x\) and \(t\) values. Eventually we will learn how to evolve the solution in time, but for now let us restrict ourselves to approximating the spatial derivative on the right-hand side.

First we have to choose where (“for what values of \(x\)”) we would like to know the approximate value of the derivative. A natural choice is the **equidistant grid** consisting of the \(n+1\) **grid points**

\[
x_i = X_L + i h, \qquad i = 0, 1, \ldots, n, \qquad h = \frac{X_R - X_L}{n}.
\]

Here \(h\) is the grid spacing, which has been chosen such that \(x_0 = X_L\) and \(x_n = X_R\).
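As a concrete illustration, such a grid can be built in a few lines. The sketch below is in Python; the interval \([0, 1]\) and the value \(n = 10\) are arbitrary illustrative choices:

```python
import numpy as np

# Equidistant grid on [X_L, X_R] with n + 1 grid points (illustrative values).
X_L, X_R = 0.0, 1.0
n = 10
h = (X_R - X_L) / n            # grid spacing
x = X_L + h * np.arange(n + 1)  # x_0 = X_L, ..., x_n = X_R
```

Note that \(n\) counts the number of grid *intervals*, so the grid has \(n + 1\) points including both end-points.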

Now, in order to find an approximation to the second derivative we expand \(u(x+h)\) and \(u(x-h)\) in Taylor series around \(x\). For notational convenience we suppress the dependence on time and get

\[
\begin{aligned}
u(x+h) &= u(x) + h \frac{\partial u(x)}{\partial x} + \frac{h^2}{2} \frac{\partial^2 u(x)}{\partial x^2} + \frac{h^3}{6} \frac{\partial^3 u(x)}{\partial x^3} + \frac{h^4}{24} \frac{\partial^4 u(x)}{\partial x^4} + \ldots,\\
u(x-h) &= u(x) - h \frac{\partial u(x)}{\partial x} + \frac{h^2}{2} \frac{\partial^2 u(x)}{\partial x^2} - \frac{h^3}{6} \frac{\partial^3 u(x)}{\partial x^3} + \frac{h^4}{24} \frac{\partial^4 u(x)}{\partial x^4} + \ldots
\end{aligned}
\]
Adding these equations together, the odd-order terms cancel and we find:

\[
u(x+h) + u(x-h) = 2 u(x) + h^2 \frac{\partial^2 u(x)}{\partial x^2} + \frac{h^4}{12} \frac{\partial^4 u(x)}{\partial x^4} + \ldots,
\]
which can be rearranged to

\[
\frac{u(x+h) - 2 u(x) + u(x-h)}{h^2} = \frac{\partial^2 u(x)}{\partial x^2} + \frac{h^2}{12} \frac{\partial^4 u(x)}{\partial x^4} + \ldots
\]
The finite difference approximation on the left-hand side is thus a second-order accurate approximation to \(\frac{\partial^2 u(x)}{\partial x^2}\). In particular, if we choose \(x = x_i\), use the fact that \(x_i \pm h = x_{i \pm 1}\), and introduce the notation \(u_i = u(x_i)\), we may write:

\[
\frac{u_{i+1} - 2 u_i + u_{i-1}}{h^2} = \frac{\partial^2 u(x_i)}{\partial x^2} + \mathcal{O}(h^2).
\]
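The claimed second-order accuracy is easy to verify numerically. The sketch below (in Python; the test function \(u(x) = \sin x\) and the point \(x = 1\) are arbitrary choices) applies the centered stencil and checks that the error drops by a factor of about four each time \(h\) is halved:

```python
import numpy as np

def d2_centered(u, x, h):
    """Centered approximation of u''(x): (u(x+h) - 2 u(x) + u(x-h)) / h^2."""
    return (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2

# u(x) = sin(x) has u''(x) = -sin(x); measure the error at x = 1.
errors = [abs(d2_centered(np.sin, 1.0, h) + np.sin(1.0))
          for h in (0.1, 0.05, 0.025)]

# Second-order accuracy: halving h shrinks the error by roughly 4.
ratios = [errors[k] / errors[k + 1] for k in range(2)]
```

The observed ratios match the \(\frac{h^2}{12}\frac{\partial^4 u}{\partial x^4}\) leading error term derived above.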
## Biased stencils

Assuming that we know \(u_i, \, i = 0,\ldots,n\), we see that it is not possible to use the above **centered** finite difference stencil at \(x_0\) and \(x_n\), as that would require knowing \(u_{-1}\) and \(u_{n+1}\). Instead we can derive **biased** stencils at the end-points of the grid. The idea is the same as above, expand in a Taylor series, but now using only points inside the grid. Precisely, at the left end-point we expand

\[
\begin{aligned}
u_1 &= u_0 + h \frac{\partial u(x_0)}{\partial x} + \frac{h^2}{2} \frac{\partial^2 u(x_0)}{\partial x^2} + \frac{h^3}{6} \frac{\partial^3 u(x_0)}{\partial x^3} + \ldots,\\
u_2 &= u_0 + 2 h \frac{\partial u(x_0)}{\partial x} + 2 h^2 \frac{\partial^2 u(x_0)}{\partial x^2} + \frac{4 h^3}{3} \frac{\partial^3 u(x_0)}{\partial x^3} + \ldots
\end{aligned}
\]
Now we can subtract two times the first equation from the second equation, yielding the first-order accurate approximation

\[
\frac{u_0 - 2 u_1 + u_2}{h^2} = \frac{\partial^2 u(x_0)}{\partial x^2} + h \frac{\partial^3 u(x_0)}{\partial x^3} + \ldots = \frac{\partial^2 u(x_0)}{\partial x^2} + \mathcal{O}(h).
\]
If we also include the expansion

\[
u_3 = u_0 + 3 h \frac{\partial u(x_0)}{\partial x} + \frac{9 h^2}{2} \frac{\partial^2 u(x_0)}{\partial x^2} + \frac{9 h^3}{2} \frac{\partial^3 u(x_0)}{\partial x^3} + \ldots,
\]

we can eliminate the third-derivative term as well and get the second-order accurate approximation

\[
\frac{2 u_0 - 5 u_1 + 4 u_2 - u_3}{h^2} = \frac{\partial^2 u(x_0)}{\partial x^2} + \mathcal{O}(h^2).
\]
To check the coefficients of the stencils we can consider the special cases of a constant and a linear function, which should be differentiated exactly by a first- and second-order accurate method, respectively. Obviously both stencils are exact for a constant (the coefficients add up to zero), and additionally the second approximation is exact for the linear function \(x\) on the grid \(x_0 = 0, x_1 = 1, x_2 = 2, x_3 = 3\). The second derivative of a linear function is zero, which is exactly what comes out, i.e. \(2 \times 0 - 5 \times 1 + 4 \times 2 - 1 \times 3 = 0\).
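The accuracy of the two biased stencils can also be checked numerically. The sketch below (in Python; the test function \(u(x) = e^x\) and the point \(x_0 = 0\) are arbitrary choices with a known second derivative) confirms that halving \(h\) cuts the error by about 2 for the first-order stencil and by about 4 for the second-order one:

```python
import numpy as np

def d2_biased_1st(u, x0, h):
    """First-order biased stencil: (u_0 - 2 u_1 + u_2) / h^2."""
    return (u(x0) - 2.0 * u(x0 + h) + u(x0 + 2 * h)) / h**2

def d2_biased_2nd(u, x0, h):
    """Second-order biased stencil: (2 u_0 - 5 u_1 + 4 u_2 - u_3) / h^2."""
    return (2.0 * u(x0) - 5.0 * u(x0 + h)
            + 4.0 * u(x0 + 2 * h) - u(x0 + 3 * h)) / h**2

# u(x) = exp(x) has u''(0) = 1.
e1 = [abs(d2_biased_1st(np.exp, 0.0, h) - 1.0) for h in (0.1, 0.05)]
e2 = [abs(d2_biased_2nd(np.exp, 0.0, h) - 1.0) for h in (0.1, 0.05)]

ratio_1st = e1[0] / e1[1]   # about 2: the error behaves like O(h)
ratio_2nd = e2[0] / e2[1]   # about 4: the error behaves like O(h^2)
```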

## Higher order approximations

The above procedure can, in principle, be extended to use all the points on the grid, yielding a very high-order accurate approximation. A general algorithm for finding finite difference stencils of all orders has been found by Bengt Fornberg and is described in his excellent review paper.
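Fornberg's recursive algorithm is the numerically robust way to compute such weights. As a simpler illustration of the same idea (not Fornberg's algorithm), the sketch below finds the weights for an arbitrary set of grid offsets by matching Taylor coefficients, which amounts to solving a small Vandermonde system; this works well for a handful of points but becomes ill-conditioned for large stencils:

```python
import math
import numpy as np

def fd_weights(offsets, m):
    """Weights w_j such that sum_j w_j * u(x + s_j * h) / h**m
    approximates the m-th derivative of u at x.

    Solves the moment conditions sum_j w_j * s_j**k = k! * delta(k, m)
    for k = 0, ..., len(offsets) - 1.
    """
    s = np.asarray(offsets, dtype=float)
    n = len(s)
    V = np.vander(s, n, increasing=True).T   # V[k, j] = s_j**k
    b = np.zeros(n)
    b[m] = math.factorial(m)
    return np.linalg.solve(V, b)

# Reproduces the stencils derived above:
centered = fd_weights([-1, 0, 1], 2)    # -> [1, -2, 1]
biased   = fd_weights([0, 1, 2, 3], 2)  # -> [2, -5, 4, -1]
```

Dividing the weights by \(h^m\) gives the actual stencil coefficients on a grid with spacing \(h\).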

In the section Finite difference examples in Fortran we discuss how the above approximations can be implemented in Fortran.