Finite-Difference Equations
One of the most common tasks in scientific computing is finding solutions to differential equations, because most physical theories are formulated using differential equations. In classical mechanics, for example, a mechanical system is described by a second-order differential equation in time (Newton's second law); and in classical electromagnetism, the electromagnetic fields are described by first-order partial differential equations in space and time (Maxwell's equations).
In order to describe continuous functions (and the differential equations that act on them), computational schemes usually adopt the strategy of discretization. Consider a general mathematical function of one real variable, $f(x)$, where the domain of the input is $-\infty < x < \infty$, or some finite interval. In principle, in order to fully specify the function, we have to enumerate its values for all possible inputs $x$; but since $x$ can vary continuously, the set of inputs is uncountably infinite, so such an enumeration is impossible on a digital computer with finite discrete memory. What we can do, instead, is to enumerate the function's values at a finite and discrete set of points,

$$\{x_0, x_1, x_2, \dots, x_{N-1}\}.$$
We define the values at these points as

$$f_n \equiv f(x_n), \quad n = 0, 1, 2, \dots, N-1.$$
If $N$ is appropriately chosen, the set of values $\{f_0, f_1, \dots, f_{N-1}\}$ ought to describe $f(x)$ quite accurately. One reason for this is that physical theories typically involve differential equations of low order (e.g., first, second, or third order, rather than, say, order 1,000,000). Hence, if the discretization points are sufficiently finely-spaced, the value of the function, and all its relevant higher-order derivatives, will vary only slightly between discretization points.
As we shall see, discretization converts differential equations into discrete systems of equations, called finite-difference equations. These can then be solved using the standard methods of numerical linear algebra.
Derivatives
Suppose we have discretized a function of one variable, obtaining a set of values $\{f_0, f_1, \dots, f_{N-1}\}$ as described above. For simplicity, we assume that the discretization points are evenly-spaced and arranged in increasing order (this is the simplest and most common discretization scheme). The spacing between adjacent points is defined as

$$h \equiv x_{n+1} - x_n.$$
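As a concrete illustration, such a discretization might be set up in Python as follows; the function, interval, and number of points here are arbitrary choices for this sketch:

```python
import numpy as np

# Discretize f(x) = sin(x) on [0, 2*pi] using N evenly-spaced points.
# The function and interval are arbitrary choices for illustration.
N = 100
x = np.linspace(0.0, 2.0 * np.pi, N)   # discretization points x_0 ... x_{N-1}
h = x[1] - x[0]                        # spacing between adjacent points
f = np.sin(x)                          # f_n = f(x_n)
```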
Let us discuss how the first and higher-order derivatives of $f(x)$ can be represented under discretization.
First derivative
The most straightforward representation of the first derivative is the forward-difference formula:

$$f'(x_n) \approx \frac{f_{n+1} - f_n}{h}.$$
This is inspired by the usual definition of the derivative of a function, and approaches the true derivative as $h \rightarrow 0$. However, it is not a very good approximation. To see why, let's analyze the error in the formula, which is defined as the absolute value of the difference between the formula and the exact value of the derivative:

$$\mathcal{E} \equiv \left| \frac{f_{n+1} - f_n}{h} - f'(x_n) \right|.$$
We can expand $f_{n+1}$ in a Taylor series around $x_n$:

$$f_{n+1} = f_n + h f'(x_n) + \frac{h^2}{2} f''(x_n) + O(h^3).$$
Plugging this into the error formula, we find that the error decreases linearly with the spacing:

$$\mathcal{E} = \frac{h}{2} \left| f''(x_n) \right| + O(h^2) \sim O(h).$$
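The linear error scaling is easy to observe numerically. The following sketch uses $f(x) = \sin x$ at $x = 1$ (arbitrary choices for illustration):

```python
import numpy as np

x = 1.0
exact = np.cos(x)   # exact derivative of sin(x)
for h in [1e-1, 1e-2, 1e-3]:
    forward = (np.sin(x + h) - np.sin(x)) / h
    print(h, abs(forward - exact))
```

Each tenfold decrease in $h$ reduces the error by roughly a factor of ten, consistent with $O(h)$ scaling.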
There is a better alternative, called the mid-point formula. This approximates the first derivative by sampling the points to the left and right of the desired position:

$$f'(x_n) \approx \frac{f_{n+1} - f_{n-1}}{2h}.$$
To see why this is better, let us write down the Taylor series for $f_{n \pm 1}$:

$$f_{n+1} = f_n + h f'(x_n) + \frac{h^2}{2} f''(x_n) + \frac{h^3}{6} f'''(x_n) + O(h^4)$$
$$f_{n-1} = f_n - h f'(x_n) + \frac{h^2}{2} f''(x_n) - \frac{h^3}{6} f'''(x_n) + O(h^4).$$
Note that the two series have the same terms involving even powers of $h$, whereas the terms involving odd powers of $h$ have opposite signs. Hence, if we subtract the second series from the first, the result is

$$f_{n+1} - f_{n-1} = 2h f'(x_n) + \frac{h^3}{3} f'''(x_n) + O(h^5).$$
Because the $O(h^2)$ terms are equal in the two series, they cancel out under subtraction, and only the $O(h^3)$ and higher terms survive. After re-arranging the above equation, we get

$$f'(x_n) = \frac{f_{n+1} - f_{n-1}}{2h} + O(h^2).$$
Hence, the error of the mid-point formula scales as $O(h^2)$, which is a good improvement over the $O(h)$ error of the forward-difference formula. What's especially nice is that the mid-point formula requires the same number of arithmetic operations to calculate as the forward-difference formula, so this is a free lunch!
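A quick numerical comparison of the two formulas, again using $f(x) = \sin x$ at $x = 1$ as an arbitrary test case:

```python
import numpy as np

x, h = 1.0, 1e-3
exact = np.cos(x)
forward  = (np.sin(x + h) - np.sin(x)) / h           # O(h) error
midpoint = (np.sin(x + h) - np.sin(x - h)) / (2*h)   # O(h^2) error
print(abs(forward - exact))    # roughly 4e-4
print(abs(midpoint - exact))   # roughly 1e-7
```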
It is possible to come up with better approximation formulas for the first derivative by including terms involving $f_{n \pm 2}$ etc., with the goal of canceling the $O(h^3)$ or higher terms in the Taylor series. For most practical purposes, however, the mid-point rule is sufficient.
Second derivative
The discretization of the second derivative is easy to figure out too. We again write down the Taylor series for $f_{n \pm 1}$:

$$f_{n+1} = f_n + h f'(x_n) + \frac{h^2}{2} f''(x_n) + \frac{h^3}{6} f'''(x_n) + O(h^4)$$
$$f_{n-1} = f_n - h f'(x_n) + \frac{h^2}{2} f''(x_n) - \frac{h^3}{6} f'''(x_n) + O(h^4).$$
When we add the two series together, the terms involving odd powers of $h$ cancel, and the result is

$$f_{n+1} + f_{n-1} = 2 f_n + h^2 f''(x_n) + O(h^4).$$
A minor rearrangement of the equation then gives

$$f''(x_n) = \frac{f_{n+1} - 2 f_n + f_{n-1}}{h^2} + O(h^2).$$
This is called the three-point rule for the second derivative, because it involves the value of the function at the three points $x_{n-1}$, $x_n$, and $x_{n+1}$.
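The three-point rule can be checked the same way as the first-derivative formulas; for $f(x) = \sin x$, the exact second derivative is $-\sin x$ (an arbitrary test case for illustration):

```python
import numpy as np

x, h = 1.0, 1e-3
exact = -np.sin(x)
three_point = (np.sin(x + h) - 2*np.sin(x) + np.sin(x - h)) / h**2
print(abs(three_point - exact))   # small, consistent with O(h^2)
```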
Discretizing partial differential equations
With discretized derivatives, differential equations can be formulated as discrete systems of equations. We will discuss this using a specific example: the discretization of the time-independent Schrödinger wave equation in 1D.
Deriving a finite-difference equation
The 1D time-independent Schrödinger wave equation is the second-order ordinary differential equation

$$-\frac{\hbar^2}{2m} \frac{d^2\psi}{dx^2} + V(x)\, \psi(x) = E\, \psi(x),$$
where $\hbar$ is Planck's constant divided by $2\pi$, $m$ is the mass of the particle, $V(x)$ is the potential, $\psi(x)$ is the quantum wavefunction of an energy eigenstate of the particle, and $E$ is the corresponding energy. The differential equation is usually treated as an eigenproblem, in the sense that we are given $V(x)$ and seek to find the possible eigenfunctions $\psi(x)$ and energy eigenvalues $E$. For convenience, we will adopt units where $\hbar = m = 1$:

$$-\frac{1}{2} \frac{d^2\psi}{dx^2} + V(x)\, \psi(x) = E\, \psi(x).$$
To discretize this differential equation, we simply evaluate it at $x = x_n$:

$$-\frac{1}{2} \left.\frac{d^2\psi}{dx^2}\right|_{x = x_n} + V_n\, \psi_n = E\, \psi_n,$$
where, for conciseness, we denote

$$\psi_n \equiv \psi(x_n), \quad V_n \equiv V(x_n).$$
We then replace the second derivative with a discrete approximation, specifically the three-point rule:

$$-\frac{1}{2h^2} \left( \psi_{n+1} - 2\psi_n + \psi_{n-1} \right) + V_n\, \psi_n = E\, \psi_n.$$
This result is called a finite-difference equation, and it would be valid for all $n$ if the number of discretization points were infinite. However, if there is a finite number of discretization points, $\{x_0, \dots, x_{N-1}\}$, then the finite-difference formula fails at the boundary points, $n = 0$ and $n = N-1$, where it involves the value of the function at the "non-existent" points $x_{-1}$ and $x_N$. We'll see how to handle this problem in the next section.
Boundaries aside, the finite-difference equation describes a matrix equation:

$$\left( -\frac{1}{2h^2} \begin{bmatrix} \ddots & \ddots & & \\ \ddots & -2 & 1 & \\ & 1 & -2 & \ddots \\ & & \ddots & \ddots \end{bmatrix} + \begin{bmatrix} \ddots & & & \\ & V_n & & \\ & & V_{n+1} & \\ & & & \ddots \end{bmatrix} \right) \begin{bmatrix} \vdots \\ \psi_n \\ \psi_{n+1} \\ \vdots \end{bmatrix} = E \begin{bmatrix} \vdots \\ \psi_n \\ \psi_{n+1} \\ \vdots \end{bmatrix}.$$
The second-derivative operator is represented by a tridiagonal matrix with $-2$ in each diagonal element, and $1$ in the elements directly above and below the diagonal. The potential operator is represented by a diagonal matrix, where the elements along the diagonal are the values of the potential at each discretization point. In this way, the Schrödinger wave equation is reduced to a discrete eigenvalue problem.
Boundary conditions
We now have to figure out how to handle the boundaries. Let us suppose $\psi(x)$ is defined over a finite interval, $a \le x \le b$. As we recall from the theory of differential equations, the solution to a differential equation is not wholly determined by the differential equation itself, but also by the boundary conditions that are imposed. Thus, we have to specify how $\psi(x)$ behaves at the end-points of the interval. We will show how this is done for a couple of the most common boundary conditions; other choices of boundary conditions can be handled using the same kind of reasoning.
Dirichlet boundary conditions
Under Dirichlet boundary conditions, the wavefunction vanishes at the boundaries:

$$\psi(a) = \psi(b) = 0.$$
Physically, these boundary conditions apply if we let the potential blow up in the external regions, $x < a$ and $x > b$, thus forcing the wavefunction to be strictly confined to the interval $a \le x \le b$.
We have not yet stated how the discretization points are distributed within the interval; we will make this decision in tandem with the implementation of the boundary conditions. Consider the first discretization point, $x_0$, wherever it is. The finite-difference equation at this point is

$$-\frac{1}{2h^2} \left[ \psi(x_0 - h) - 2\psi_0 + \psi_1 \right] + V_0\, \psi_0 = E\, \psi_0.$$
This involves the wavefunction at $x_0 - h$, which lies just outside our set of discretization points. But if we choose the discretization points so that $x_0 - h = a$, then $\psi(x_0 - h) = \psi(a) = 0$ under Dirichlet boundary conditions, so the above finite-difference formula reduces to

$$-\frac{1}{2h^2} \left( -2\psi_0 + \psi_1 \right) + V_0\, \psi_0 = E\, \psi_0.$$
As for the other boundary, the finite-difference equation at $n = N-1$ involves $\psi(x_{N-1} + h)$. If we choose the discretization points so that $x_{N-1} + h = b$, then $\psi(x_{N-1} + h) = \psi(b) = 0$, and the finite-difference formula becomes

$$-\frac{1}{2h^2} \left( \psi_{N-2} - 2\psi_{N-1} \right) + V_{N-1}\, \psi_{N-1} = E\, \psi_{N-1}.$$
From this, we conclude that the discretization points ought to be equally spaced, with $x_0$ at a distance $h$ to the right of the left boundary $a$, and $x_{N-1}$ at a distance $h$ to the left of the right boundary $b$. This is shown in the following figure:
Since there are $N$ discretization points, the interval $a \le x \le b$ should contain $N+1$ multiples of $h$. Hence,

$$h = \frac{b - a}{N + 1}, \quad x_n = a + (n+1)\, h.$$
Having made the above choices, the matrix equation becomes

$$\left( -\frac{1}{2h^2} \begin{bmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{bmatrix} + \begin{bmatrix} V_0 & & & \\ & V_1 & & \\ & & \ddots & \\ & & & V_{N-1} \end{bmatrix} \right) \begin{bmatrix} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{N-1} \end{bmatrix} = E \begin{bmatrix} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{N-1} \end{bmatrix}.$$
You can check for yourself that the first and last rows of this equation are the correct finite-difference equations at the boundary points, corresponding to Dirichlet boundary conditions.
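As a sketch of how this matrix equation can be solved in practice, the following snippet sets up the Dirichlet problem for a particle in a box ($V = 0$ on $0 \le x \le 1$), whose exact energies in our units are $E_j = (j\pi)^2/2$ for $j = 1, 2, \dots$; the grid size $N$ is an arbitrary choice:

```python
import numpy as np

N = 500
a, b = 0.0, 1.0
h = (b - a) / (N + 1)
x = a + h * np.arange(1, N + 1)   # x_n = a + (n+1) h

# Hamiltonian: -1/(2h^2) times tridiagonal(1, -2, 1); here V = 0
H = (-0.5 / h**2) * (np.diag(-2.0 * np.ones(N))
                     + np.diag(np.ones(N - 1), 1)
                     + np.diag(np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)
print(E[:3])                                     # lowest numerical energies
print([(j * np.pi)**2 / 2 for j in (1, 2, 3)])   # exact energies
```

The lowest numerical eigenvalues agree with the analytic energies to within the $O(h^2)$ truncation error of the three-point rule.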
Neumann boundary conditions
Neumann boundary conditions are another common choice of boundary conditions. They state that the first derivatives vanish at the boundaries:

$$\psi'(a) = \psi'(b) = 0.$$
An example of such a boundary condition is encountered in electrostatics, where the first derivative of the electric potential goes to zero at the surface of a charged conductor.
We follow the same strategy as before, figuring out the discretization points in tandem with the boundary conditions. Consider again the finite-difference equation at the first discretization point:

$$-\frac{1}{2h^2} \left[ \psi(x_0 - h) - 2\psi_0 + \psi_1 \right] + V_0\, \psi_0 = E\, \psi_0.$$
To implement the condition that the first derivative vanishes at the boundary, we invoke the mid-point rule. Suppose the boundary point $a$ falls exactly halfway between $x_0 - h$ and $x_0$. Then, according to the mid-point rule,

$$\psi'(a) \approx \frac{\psi_0 - \psi(x_0 - h)}{h} = 0 \quad \Rightarrow \quad \psi(x_0 - h) = \psi_0.$$
With this choice, therefore, we can make the replacement $\psi(x_0 - h) = \psi_0$ in the finite-difference equation, which then becomes

$$-\frac{1}{2h^2} \left( \psi_1 - \psi_0 \right) + V_0\, \psi_0 = E\, \psi_0.$$
Similarly, to apply the Neumann boundary condition at $x = b$, we let the boundary fall exactly halfway between $x_{N-1}$ and $x_{N-1} + h$, so that the finite-difference equation becomes

$$-\frac{1}{2h^2} \left( \psi_{N-2} - \psi_{N-1} \right) + V_{N-1}\, \psi_{N-1} = E\, \psi_{N-1}.$$
The resulting distribution of discretization points is shown in the following figure:
Unlike the Dirichlet case, the interval now contains $N$ multiples of $h$. Hence, we get a different formula for the positions of the discretization points:

$$h = \frac{b - a}{N}, \quad x_n = a + \left( n + \frac{1}{2} \right) h.$$
The matrix equation is:

$$\left( -\frac{1}{2h^2} \begin{bmatrix} -1 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -1 \end{bmatrix} + \begin{bmatrix} V_0 & & & \\ & V_1 & & \\ & & \ddots & \\ & & & V_{N-1} \end{bmatrix} \right) \begin{bmatrix} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{N-1} \end{bmatrix} = E \begin{bmatrix} \psi_0 \\ \psi_1 \\ \vdots \\ \psi_{N-1} \end{bmatrix}.$$
Due to the Neumann boundary conditions and the mid-point rule, the tridiagonal matrix has $-1$ instead of $-2$ in its corner entries (the top-left and bottom-right elements). Again, you can verify that the first and last rows of this matrix equation correspond to the correct finite-difference equations for the boundary points.
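A quick way to check the Neumann matrix is to note that for $V = 0$, the constant wavefunction has zero derivative everywhere, so $E = 0$ should appear as the lowest eigenvalue. A minimal sketch (the grid size is an arbitrary choice):

```python
import numpy as np

N = 200
a, b = 0.0, 1.0
h = (b - a) / N

# Tridiagonal matrix with -1 (not -2) in the corner entries
T = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
T[0, 0] = T[-1, -1] = -1.0
H = (-0.5 / h**2) * T   # Hamiltonian with V = 0
E = np.linalg.eigvalsh(H)
print(E[0])   # close to zero
```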
Higher dimensions
We can work out the finite-difference equations for higher dimensions in a similar manner. In two dimensions, for example, the wavefunction is described with two indices:

$$\psi_{mn} \equiv \psi(x_m, y_n).$$
The discretization of the derivatives is carried out in the same way, using the mid-point rule for the first partial derivative in each direction, and the three-point rule for the second partial derivative in each direction. Let us suppose that the discretization spacing is equal in both directions:

$$x_{m+1} - x_m = y_{n+1} - y_n = h.$$
Then, for the second derivative, the Laplacian operator

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$$
can be approximated by a five-point rule, which involves the value of the function at $(x_m, y_n)$ and its four nearest neighbours:

$$\left. \nabla^2 \psi \right|_{x_m, y_n} = \frac{\psi_{m+1,n} + \psi_{m-1,n} + \psi_{m,n+1} + \psi_{m,n-1} - 4\psi_{mn}}{h^2} + O(h^2).$$
For instance, the finite-difference equation for the 2D Schrödinger wave equation is

$$-\frac{1}{2h^2} \left( \psi_{m+1,n} + \psi_{m-1,n} + \psi_{m,n+1} + \psi_{m,n-1} - 4\psi_{mn} \right) + V_{mn}\, \psi_{mn} = E\, \psi_{mn}.$$
Matrix reshaping
Higher-dimensional differential equations introduce one annoying complication: in order to convert between the finite-difference equation and the matrix equation, the indices have to be re-organized. For instance, the matrix form of the 2D Schrödinger wave equation should have the form

$$\mathbf{H}\, \vec{\psi} = E\, \vec{\psi},$$
where the wavefunction values are organized into a 1D array $\vec{\psi}$ labeled by a "point index" $k$. Each point index corresponds to a pair of "grid indices", $(m, n)$, representing spatial coordinates on a 2D grid. We have to be careful not to mix up the two types of indices.
We will adopt the following conversion scheme between point indices and grid indices:

$$k = m N + n \quad \Longleftrightarrow \quad m = \lfloor k / N \rfloor, \;\; n = k \bmod N,$$

where $N$ is the number of discretization points in the $y$ direction.
One good thing about this conversion scheme is that Scipy provides a reshape function which can convert a 2D array with grid indices $(m, n)$ into a 1D array with the point index $k$:
>>> from numpy import array, reshape
>>> a = array([[0,1,2],[3,4,5],[6,7,8]])
>>> a
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
>>> b = reshape(a, (9,))   # Reshape a into a 1D array of size 9
>>> b
array([0, 1, 2, 3, 4, 5, 6, 7, 8])
The reshape function can also convert the 1D array back into a 2D array, in the right order:

>>> c = reshape(b, (3,3))   # Reshape b into a 2D array of size 3x3
>>> c
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
Under point indices, the discretized derivatives take the following forms. With the conversion scheme $k = mN + n$, stepping by one in $m$ shifts the point index by $N$, while stepping by one in $n$ shifts it by $1$, so the mid-point rule becomes

$$\left. \frac{\partial \psi}{\partial x} \right|_k \approx \frac{\psi_{k+N} - \psi_{k-N}}{2h}, \qquad \left. \frac{\partial \psi}{\partial y} \right|_k \approx \frac{\psi_{k+1} - \psi_{k-1}}{2h},$$

and the five-point rule for the Laplacian becomes

$$\left. \nabla^2 \psi \right|_k \approx \frac{\psi_{k+N} + \psi_{k-N} + \psi_{k+1} + \psi_{k-1} - 4\psi_k}{h^2}.$$
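In practice, the point-index matrix can be assembled from the 1D finite-difference matrices using Kronecker products. Below is a minimal sketch, assuming an $N \times N$ grid with Dirichlet boundaries and the row-major conversion $k = mN + n$; the grid size and spacing are arbitrary choices:

```python
import numpy as np

N, h = 4, 0.1
# 1D three-point rule with Dirichlet boundaries
D = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1))
I = np.eye(N)

# Five-point Laplacian in point-index form: an N^2 x N^2 matrix
lap = np.kron(D, I) + np.kron(I, D)

V = np.zeros((N, N))   # potential on the 2D grid (zero here for illustration)
H = (-0.5 / h**2) * lap + np.diag(np.reshape(V, N * N))
print(H.shape)   # (16, 16)
```

The `np.kron(D, I)` term couples points whose indices differ by $N$ (neighbours in $m$), while `np.kron(I, D)` couples points whose indices differ by $1$ (neighbours in $n$), reproducing the five-point rule.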
The role of boundary conditions is left as an exercise. There are now two sets of boundaries, along the $x$ direction and along the $y$ direction. By examining the finite-difference equations along each boundary, we can (i) assign the right discretization coordinates and (ii) modify the finite-difference matrix elements to fit the boundary conditions. The details are slightly tedious to work out, but the logic is essentially the same as in the previously-discussed 1D cases.