# Numerical Methods (Backwards Euler) - Part 3

We're learning when and how to use implicit numerical methods.

Keywords: C, numerical methods, math, physical simulations

This week was loaded with math! I continued reading Pixar's Physics Simulation Animation Course as usual, specifically the article on implicit numerical methods.

This involves solving ODEs (ordinary differential equations) that are considered 'stiff'. We will see why such ODEs are called 'stiff' and how to approach solving them with implicit numerical methods such as the Backwards Euler method.

⚠️ Pixar’s resource advises us to avoid a stiff ODE by reformulating our dynamics problem. We’ll be focusing on the cases in which this is not possible.

## Stiff ODEs

After scouring the Internet for a definition of the term ‘stiff ordinary differential equation’, I ended up on Wikipedia’s entry which, admittedly, I don’t find very satisfying, but I’ll give it to you anyway:

> A stiff differential equation is a differential equation whose sensitivity to parameters will make it difficult to solve by explicit numerical methods.

In short: stiff ODEs are difficult to solve with explicit numerical methods (we talked about explicit methods a few weeks ago).

Definition aside, I find that Pixar's Physics Simulation Animation Course gives us a fun example that we will work through to better visualize the Wikipedia definition… Let’s imagine that we want to simulate the dynamics of a particle whose change of position is described by the following differential equation:

$\dot X(t) = \frac{d}{dt} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} -x(t) \\ -ky(t) \end{pmatrix}$

where $k$ is a large, positive constant. If $k$ is large enough, the particle will never stray far from $y(t) = 0$, since the $-ky(t)$ term pulls $y(t)$ back toward zero. The x position of the particle is much freer to move around, since it is not multiplied by this large, positive constant $k$.

In this example, the ODE is ‘stiff’ when the constant $k$ is so large that the particle stays very close to the line $y = 0$. As we will see in the next section, explicit methods will have a hard time simulating this expected behavior.

## The problem with explicit methods

Recall that the equation of the explicit Euler method is described as follows:

$X_{new} = X_0 + h\dot X(t_0)$

and in this example, $X_0 = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}$ and $\dot X(t_0) = \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix}$. Then, after the substitution, we have:

$X_{new} = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} + h\begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix}$

Which produces the following:

$X_{new} = \begin{pmatrix} (1 - h)x_0 \\ (1-hk)y_0 \end{pmatrix}$

We learned in Part 1 of Numerical Methods that in order to find a solution, we must take small steps in time to approach the solution, as large steps could very well make the system unstable. A quick rearrangement of terms shows that these conditions must be met in order for the position of the particle in our example to converge to $(0,0)$:

$x_{step}: | 1 - h | < 1 \\ y_{step}: | 1 - kh | < 1$

🚨 If we take k equal to 1000, the maximum permitted step follows from $1 - 1000h = -1 \Rightarrow h = 0.002$. If we exceed this limit, the particle’s y position will never converge towards 0!

Solving for y by plugging in parameters h = 0.0015 and k = 1000.0 into a forward Euler solver that I wrote, we can visualize the particle’s y position as it oscillates around zero:

Now, let’s take larger steps in time, where h > 0.002 (h = 0.003 and k = 1000.0). Notice how the solution diverges from 0:

OK, so we could just use a very small step like h = 0.0015, right? Maybe… however, we would quickly realize that the x position converges to 0 very slowly, which makes our simulation slow, and expensive!

$\begin{array}{cc} x_0 & y_0 \\ \hline \\ 100.000000 & 100.000000 \\ 99.849998 & -50.000000 \\ 99.700226 & 25.000000 \\ 99.550674 & -12.500000 \\ 99.401352 & 6.250000 \\ 99.252251 & -3.125000 \\ 99.103371 & 1.562500 \\ 98.954712 & -0.781250 \\ 98.806282 & 0.390625 \\ 98.658073 & -0.195312 \\ \end{array}$

Can we do better? Could we use a more stable method to converge faster? Yes - with the Backwards Euler implicit method!

## Backwards Euler (implicit)

Backwards Euler is the easiest of the implicit methods to understand, which is why I’m going to focus on it.

Here is the equation:

$X_{new} = X_0 + h\dot X_{new}$

If you’re observant, you’ll have noticed that the difference between the implicit Euler equation and the explicit Euler equation is that the change in X is defined as $h\dot X_{new}$ in the implicit Euler equation. We must therefore solve for this change by calculating $\dot X_{new}$… to finally calculate $X_{new}$.

In fact, it’s like walking backwards.

$X_0 = X_{new} - h\dot X_{new}$

In other words, it’s as if we were at position $X_{new}$ and wanted to move backwards to get to position $X_0$… Hence the name: ‘Backwards Euler’.

It’s a little awkward to solve for $X_{new}$, since we need to find $\dot X_{new}$… which is also unknown. Let’s see if we can rewrite this into something we can solve.

We start by rewriting the Backwards Euler equation:

$X_0 + \Delta X = X_0 + hf(X_0 + \Delta X)$

where $\Delta X = X_{new} - X_0$. We can write it as follows:

$\Delta X = hf(X_0 + \Delta X)$

💡 $f(X_0 + \Delta X)$ is unknown, and so is $\Delta X$ (because $\Delta X = X_{new} - X_0$ contains $X_{new}$, which is unknown). So we have to approximate $f(X_0 + \Delta X)$ using a first-order Taylor expansion:

$f(X_0 + \Delta X) \approx f(X_0) + \Delta X f'(X_0)$

Using this approximation, we can rewrite our equation for $\Delta X$:

$\Delta X = h(f(X_0) + \Delta X f'(X_0)) \\ \frac{1}{h}\Delta X - \Delta X f'(X_0) = f(X_0)$

💡 To continue, remember that $f(X_0)$ is a vector and $f'(X_0)$ is a matrix. This follows from vector calculus: the derivative of a vector-valued function is the matrix of its partial derivatives:

$f'(X) = \frac{\partial}{\partial X}\begin{bmatrix} f_1(x,y) \\ f_2(x,y) \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x} & \dfrac{\partial f_1}{\partial y} \\ \dfrac{\partial f_2}{\partial x} & \dfrac{\partial f_2}{\partial y} \end{bmatrix}$

This matrix of partial derivatives is called the Jacobian matrix. We’ll take a closer look at it in a later part.

We can insert an identity matrix $I$ so that we can factor out and isolate our variable $\Delta X$:

$\frac{1}{h}\Delta X I - \Delta X f'(X_0) = f(X_0) \\ \Delta X(\frac{1}{h}I - f'(X_0)) = f(X_0) \\ \Delta X = (\frac{1}{h}I - f'(X_0))^{-1} f(X_0)$

After having isolated $\Delta X$, it is now possible to solve $X_{new} = X_0 + \Delta X$. We will see how to do it with C code in the next section.

## Finally, a solution and some code

Here is the code that I will refer to in this section.

As a reminder, we want to converge this stiff ODE:

$\dot X(t) = \frac{d}{dt} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} -x(t) \\ -ky(t) \end{pmatrix}$

And we know $\frac{\partial}{\partial X}f(X(t))$ is the Jacobian matrix:

$\frac{\partial}{\partial X}f(X(t)) = \begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix}$

Let’s go back to the equation $\Delta X = (\frac{1}{h}I -f'(X_0))^{-1} f(X_0)$. We calculate $\Delta X$ as follows:

$\Delta X = (\frac{1}{h}I - f'(X_0))^{-1} f(X_0) \\[0.5em] = (\frac{1}{h}I - \begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix} )^{-1} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\[0.5em] = \begin{bmatrix} \frac{1+h}{h} & 0 \\ 0 & \frac{1+kh}{h} \end{bmatrix} ^{-1} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\[0.5em] = \begin{bmatrix} \frac{h}{1+h} & 0 \\ 0 & \frac{h}{1+kh} \end{bmatrix} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\[0.5em] = -\begin{pmatrix} \frac{h}{1+h}x_0 \\[0.5em] \frac{h}{1+kh}ky_0 \end{pmatrix} \\$

What’s interesting is that you can take a step of any magnitude! If we calculate the limit of $\Delta X$ as $h$ approaches infinity, we see that:

$\lim_{h \to \infty} \Delta X = \lim_{h \to \infty} -\begin{pmatrix} \frac{h}{1+h}x_0 \\[0.5em] \frac{h}{1+kh}ky_0 \end{pmatrix} = -\begin{pmatrix} x_0 \\[0.5em] \frac{1}{k}ky_0 \end{pmatrix} = -\begin{pmatrix} x_0 \\[0.5em] y_0 \end{pmatrix}$

So $X_{new} = X_0 + (-X_0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $h$ approaches infinity!

Now we can use the program I wrote to see this example in action. After navigating to the folder where you downloaded the code, run the following:

```shell
make
./out 0.2 1000.0 100.0 100.0
```

where h = 0.2, k = 1000.0, x0 = 100.0, and y0 = 100.0. You should see two files, implicit.txt and explicit.txt, added to the folder. In implicit.txt we see that y converges to zero!

$\begin{array}{cc} x_0 & y_0 \\ \hline \\ 100.000000 & 100.000000 \\ 83.333328 & 0.497505 \\ 69.444443 & 0.002475 \\ 57.870369 & 0.000012 \\ 48.225307 & 0.000000 \\ 40.187756 & 0.000000 \\ 33.489796 & 0.000000 \\ 27.908163 & 0.000000 \\ 23.256802 & 0.000000 \\ 19.380669 & 0.000000 \\ \end{array}$

But in explicit.txt it’s a completely different story:

$\begin{array}{cc} x_0 & y_0 \\ \hline \\ 100.000000 & 100.000000 \\ 80.000000 & -19900.000000 \\ 64.000000 & 3960100.000000 \\ 51.200001 & -788059904.000000 \\ 40.959999 & 156823928832.000000 \\ 32.767998 & -31207964278784.000000 \\ 26.214397 & 6210385271062528.000000 \\ 20.971518 & -1235866737660919808.000000 \\ 16.777214 & 245937476671354437632.000000 \\ 13.421771 & -48941558402957300465664.000000 \\ \end{array}$

## Next time

We will see how to solve stiff second order differential equations! See you next time!

Written by Carmen Cincotti, computer graphics enthusiast, language learner, and improv actor currently living in San Francisco, CA.