Numerical Methods (Backwards Euler) - Part 3

We're learning when and how to use implicit numerical methods.

Keywords: C, numerical methods, math, physical simulations

By Carmen Cincotti  

This week was loaded with math! I continued reading the Pixar Physics Simulation Animation Course as usual, this time the article on implicit numerical methods.

This involves solving ODEs (ordinary differential equations) that are considered to be ‘stiff’. We will see why such ODEs are qualified as ‘stiff’ and how to approach solving them with implicit numerical methods such as the Backwards Euler method.

⚠️ Pixar’s resource advises us to avoid a stiff ODE by reformulating our dynamics problem. We’ll be focusing on the cases in which this is not possible.

Stiff ODEs

After scouring the Internet for a definition of the term ‘stiff ordinary differential equation’, I ended up at Wikipedia’s entry which, admittedly, I don’t find very satisfying, but I will give it to you anyway:

A stiff differential equation is a differential equation whose sensitivity to parameters will make it difficult to solve by explicit numerical methods.

In short: stiff ODEs are difficult to solve with explicit numerical methods (we talked about explicit methods a few weeks ago).

Definition aside, I find that Pixar Physics Simulation Animation Course gives us a fun example that we will work on to better visualize the Wikipedia definition… Let’s imagine that we want to simulate the dynamics of a particle whose change of position is described by the following differential equation:

$$\dot X(t) = \frac{d}{dt} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} -x(t) \\ -ky(t) \end{pmatrix}$$

where $k$ is a large, positive constant. If $k$ is large enough, the particle will never stray far from $y(t) = 0$, since the $-ky(t)$ term pulls $y(t)$ back toward the origin. We also see that the x position of the particle is freer to move around, since no such large positive constant multiplies $-x(t)$.

In this example, the ODE would therefore be ‘stiff’ if the constant $k$ is so large that the particle stays close to the line $y = 0$. As we will see in the next part, explicit methods have a hard time simulating this expected behavior.

The problem with explicit methods

Recall that the equation of the explicit Euler method is described as follows:

$$X_{new} = X_0 + h\dot X(t_0)$$

and in this example, $X_0 = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix}$ and $\dot X(t_0) = \frac{d}{dt} \begin{pmatrix} x(t_0) \\ y(t_0) \end{pmatrix} = \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix}$. Then, after the substitution, we have:

$$X_{new} = \begin{pmatrix} x_0 \\ y_0 \end{pmatrix} + h\begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix}$$

Which produces the following:

$$X_{new} = \begin{pmatrix} (1 - h)x_0 \\ (1-hk)y_0 \end{pmatrix}$$

We learned in Part 1 of Numerical Methods that in order to find a solution, we must take small steps in time, since large steps can make the system unstable. A quick rearrangement of terms shows that these conditions must be met for the position of the particle in our example to converge to $(0, 0)$:

$$x_{step}: | 1 - h | < 1 \\ y_{step}: | 1 - kh | < 1$$

🚨 If we take $k$ equal to 1000, the maximum permitted step follows from $1 - 1000h = -1 \Rightarrow h = 0.002$. If we surpass this limit, the particle’s y position will never be able to converge towards 0!

Solving for y by plugging in parameters h = 0.0015 and k = 1000.0 into a forward Euler solver that I wrote, we can visualize the particle’s y position as it oscillates around zero:

[Figure: explicit Euler, h = 0.0015]

Now, let’s take larger steps in time, where h > 0.002 (h = 0.003 and k = 1000.0). Notice how the solution diverges from 0:

[Figure: explicit Euler, h = 0.003]

OK, so we could just use a very small step like h = 0.0015, right? Maybe… however, we would quickly realize that the x position converges to 0 very slowly, which makes our simulation slow, and expensive!

| x | y |
| --- | --- |
| 100.000000 | 100.000000 |
| 99.849998 | -50.000000 |
| 99.700226 | 25.000000 |
| 99.550674 | -12.500000 |
| 99.401352 | 6.250000 |
| 99.252251 | -3.125000 |
| 99.103371 | 1.562500 |
| 98.954712 | -0.781250 |
| 98.806282 | 0.390625 |
| 98.658073 | -0.195312 |

Can we do better? Could we use a more stable method to converge faster? Yes - with the Backwards Euler implicit method!

Backwards Euler (implicit)

Backwards Euler is the easiest of the implicit methods to understand and that’s why I’m going to focus on it.

Here is the equation:

$$X_{new} = X_0 + h\dot X_{new}$$

If you’re observant, you’ll have noticed that the difference between the implicit Euler equation and the explicit Euler equation is that the change in $X$ is defined by $h\dot X_{new}$ in the implicit equation. We must therefore find $\dot X_{new}$… in order to finally calculate $X_{new}$.

In fact, it’s like walking backwards.

$$X_0 = X_{new} - h\dot X_{new}$$

In other words, it’s as if we were at position $X_{new}$ and wanted to step backwards to arrive at position $X_0$… Hence the name: ‘Backwards Euler’.

It’s a little awkward to solve for $X_{new}$, since we need to find $\dot X_{new}$… which is also unknown. Let’s see if we can rewrite this into something we can solve.

We start from the equation of the Backwards Euler method:

$$X_0 + \Delta X = X_0 + hf(X_0 + \Delta X)$$

where $\Delta X = X_{new} - X_0$. We can write it as follows:

$$\Delta X = hf(X_0 + \Delta X)$$

💡 $f(X_0 + \Delta X)$ is unknown, and so is $\Delta X$ (because $\Delta X = X_{new} - X_0$ contains the unknown $X_{new}$). So we have to approximate $f(X_0 + \Delta X)$ using the Taylor series as follows:

$$f(X_0 + \Delta X) = f(X_0) + f'(X_0)\Delta X$$

See this resource to learn more about the Taylor Series and this exact derivation.
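In fact, for our example system the first-order expansion is exact, since $f$ is linear in $X$:

```latex
f(X_0 + \Delta X)
= \begin{pmatrix} -(x_0 + \Delta x) \\ -k(y_0 + \Delta y) \end{pmatrix}
= \underbrace{\begin{pmatrix} -x_0 \\ -k y_0 \end{pmatrix}}_{f(X_0)}
+ \underbrace{\begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix}}_{f'(X_0)}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
```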

Using this approximation, we can rewrite our equation for $\Delta X$:

$$\Delta X = h\left(f(X_0) + f'(X_0)\Delta X\right) \\ \frac{1}{h}\Delta X - f'(X_0)\Delta X = f(X_0)$$

💡 To continue, remember that $f(X_0)$ is a vector and $f'(X_0)$ is a matrix. From vector calculus, for $f(x, y) = \begin{pmatrix} f_1(x,y) \\ f_2(x,y) \end{pmatrix}$, the derivative is:

$$f'(x, y) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x} & \dfrac{\partial f_1}{\partial y} \\ \dfrac{\partial f_2}{\partial x} & \dfrac{\partial f_2}{\partial y} \end{bmatrix}$$

This matrix of partial derivatives is called the Jacobian matrix. We’ll take a closer look at it in a later part.

Writing the $\frac{1}{h}$ term with the identity matrix $I$ lets us factor out and isolate our variable $\Delta X$:

$$\frac{1}{h}I\Delta X - f'(X_0)\Delta X = f(X_0) \\ \left(\frac{1}{h}I - f'(X_0)\right)\Delta X = f(X_0) \\ \Delta X = \left(\frac{1}{h}I - f'(X_0)\right)^{-1} f(X_0)$$

After having isolated $\Delta X$, it is now possible to solve $X_{new} = X_0 + \Delta X$. We will see how to do it with C code in the next part.

Finally, a solution and some code

Here is the code that I will refer to in this section.

As a reminder, we want to converge this stiff ODE:

$$\dot X(t) = \frac{d}{dt} \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} -x(t) \\ -ky(t) \end{pmatrix}$$

And we know that $\frac{\partial}{\partial X}f(X(t))$ is the Jacobian matrix:

$$\frac{\partial}{\partial X}f(X(t)) = \begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix}$$

Let’s go back to the equation $\Delta X = \left(\frac{1}{h}I - f'(X_0)\right)^{-1} f(X_0)$. We calculate $\Delta X$ as follows:

$$
\begin{aligned}
\Delta X &= \left(\frac{1}{h}I - f'(X_0)\right)^{-1} f(X_0) \\
&= \left(\frac{1}{h}I - \begin{bmatrix} -1 & 0 \\ 0 & -k \end{bmatrix}\right)^{-1} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\
&= \begin{bmatrix} \frac{1+h}{h} & 0 \\ 0 & \frac{1+kh}{h} \end{bmatrix}^{-1} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\
&= \begin{bmatrix} \frac{h}{1+h} & 0 \\ 0 & \frac{h}{1+kh} \end{bmatrix} \begin{pmatrix} -x_0 \\ -ky_0 \end{pmatrix} \\
&= -\begin{pmatrix} \frac{h}{1+h}x_0 \\ \frac{h}{1+kh}ky_0 \end{pmatrix}
\end{aligned}
$$

What’s interesting is that we can take a step of any magnitude! If we calculate the limit of $\Delta X$ as $h$ approaches infinity, we see that:

$$\lim_{h \to \infty} \Delta X = \lim_{h \to \infty} -\begin{pmatrix} \frac{h}{1+h}x_0 \\ \frac{h}{1+kh}ky_0 \end{pmatrix} = -\begin{pmatrix} x_0 \\ \frac{1}{k}ky_0 \end{pmatrix} = -\begin{pmatrix} x_0 \\ y_0 \end{pmatrix}$$

So $X_{new} = X_0 + (-X_0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$ as $h$ approaches infinity!

Now we can use the program I wrote to see this example. After navigating to the folder where you downloaded the code, run the following:

```
make
./out 0.2 1000.0 100.0 100.0
```

where h = 0.2, k = 1000.0, x0 = 100.0, and y0 = 100.0. You should see two files, implicit.txt and explicit.txt, added to the folder. In implicit.txt we see that y converges to zero!

| x | y |
| --- | --- |
| 100.000000 | 100.000000 |
| 83.333328 | 0.497505 |
| 69.444443 | 0.002475 |
| 57.870369 | 0.000012 |
| 48.225307 | 0.000000 |
| 40.187756 | 0.000000 |
| 33.489796 | 0.000000 |
| 27.908163 | 0.000000 |
| 23.256802 | 0.000000 |
| 19.380669 | 0.000000 |

But in explicit.txt it’s a completely different story:

| x | y |
| --- | --- |
| 100.000000 | 100.000000 |
| 80.000000 | -19900.000000 |
| 64.000000 | 3960100.000000 |
| 51.200001 | -788059904.000000 |
| 40.959999 | 156823928832.000000 |
| 32.767998 | -31207964278784.000000 |
| 26.214397 | 6210385271062528.000000 |
| 20.971518 | -1235866737660919808.000000 |
| 16.777214 | 245937476671354437632.000000 |
| 13.421771 | -48941558402957300465664.000000 |

Next time

We will see how to solve stiff second order differential equations! See you next time!

Written by Carmen Cincotti, computer graphics enthusiast, language learner, and improv actor currently living in San Francisco, CA.  Follow @CarmenCincotti
