This week, I started rereading *Physically Based Modeling* (SIGGRAPH course notes) — specifically *Differential Equation Basics* — when I realized that although I was capable of blindly using numerical methods to approximate solutions to first-order differential equations, I had trouble explaining *when* to use them. So I took the time to learn the subject more thoroughly.

## First-order Ordinary Differential Equations (ODEs)

In order to understand numerical methods, I'm starting with a very brief intro to first-order ODEs. These are equations relating an unknown function of a single real variable $t$ to its first derivative, written in the form:

$$\frac{dy}{dt} = f(t, y)$$

where $y$ is the unknown function.

We are interested here in Cauchy problems, where we pair a differential equation with an initial condition:

$$\frac{dy}{dt} = f(t, y), \qquad y(t_0) = y_0, \qquad t \in [t_0, T]$$

where $t_0$, $y_0$, and $T$ are given.

## Analytical solutions

Consider an example of a differential equation of the form:

$$\frac{dy}{dt} = y$$

We can solve it by separating the variables $t$ and $y$:

$$\frac{dy}{y} = dt$$

Next, by integrating the two sides of the equation with respect to the variable $t$, we obtain:

$$\ln|y| = t + C$$

where $C$ is the integration constant.

To calculate this $C$ constant, we need to provide an initial value (which makes this problem an **initial value problem**).

Let's add an arbitrary initial condition… assuming that:

$$y(0) = 1$$

Plugging in $t = 0$ and $y = 1$ gives $C = \ln 1 - 0 = 0$, so our final answer is:

$$y = e^t$$

There are many possible methods to solve ODEs analytically (or "by hand") — but that is not the subject of this post. In fact, in most cases we will see ODEs that are very difficult, or even impossible, to solve analytically. In those cases, it is almost always necessary to employ numerical methods on a computer to approximate the solutions.

## Numerical methods

As I just said, there are first-order ODEs that cannot be solved explicitly, so an analytical solution either cannot be found or is just not worth the trouble. We can avoid these frustrating calculations by using numerical methods to approximate a solution.

How does it work? We can avoid calculating the integral by estimating the slope of the curve at each time step (in the last example, the step is taken along the variable $t$). So we are approximating the value of the function $y$ with respect to the variable $t$ (don't worry if this looks confusing, we'll see some examples soon).

The method that I suggest we study first is **Euler's method**. Note that there are other, more effective and precise methods, but it's easier to study this one first to get a grasp of the subject.

## Euler’s method

I suggest you look at this example on Khan Academy before moving on.

OK… if we use our last example:

$$\frac{dy}{dt} = y$$

remember that we can avoid finding the integral by estimating the slope of the curve at each time step (here, along the variable $t$). Thus, we approximate the value of the function $y$ with respect to the variable $t$. The equation of Euler's method is:

$$y = y_0 + h \, f(t_0, y_0)$$

where $t_0$ and $y_0$ are the initial $t$ and $y$ values at the given time step (of magnitude $h$). If we study this equation thoroughly, we can see that the function $f(t_0, y_0)$ is the slope… or, in colloquial terms, if we consider the equation of a line, $y = mx + b$, it is the $m$ part.

Let's use our equation $\frac{dy}{dt} = y$ with an initial condition of $y(0) = 1$, where at each step we use the previous $y$ value that we just calculated as $y_0$. With a step size of $h = 0.25$:

| $t_0$ | $y_0$ | $f(t_0, y_0)$ | $y$ |
|---|---|---|---|
| 0 | 1 | 1 | 1 |
| 0.25 | 1 | 1 | 1.25 |
| 0.5 | 1.25 | 1.25 | 1.56 |
| 0.75 | 1.56 | 1.56 | 1.95 |
| 1.0 | 1.95 | 1.95 | 2.44 |

The behavior of this approximation is pretty much the same as the real solution… but not quite (we'll see why).

Anyway, why does it work? Recall that the line tangent to a curve at the point $(t_0, y_0)$ is:

$$y = y_0 + f(t_0, y_0)\,(t - t_0)$$

So it's the same with Euler's method! At each time step, we follow a line tangent to our approximate curve in order to estimate the next point (on the approximate curve)!

Nevertheless, the error between the true curve and the one we have just approximated is obvious… especially when we realize that the true value is $e^1 \approx 2.72$. Can we do better? Yes! If we take smaller step sizes, we will find approximate points closer to the real thing. For example, with a step of 0.025:

| $t_0$ | $y_0$ | $f(t_0, y_0)$ | $y$ |
|---|---|---|---|
| 0 | 1 | 1 | 1 |
| 0.025 | 1 | 1 | 1.025 |
| 0.05 | 1.025 | 1.025 | 1.05 |
| 0.075 | 1.05 | 1.05 | 1.08 |
| … | … | … | … |
| 1.0 | 2.62 | 2.62 | 2.685 |

Thus, with smaller steps, we approach the real value — but at a cost, because we now require more steps to calculate, which increases total computation time.

## Euler method in C

To conclude this section, I leave you with a very small code snippet (NOTE: I'm a C novice!):

```
#include <stdio.h>

// Right-hand side of dy/dx = y
float func(float x, float y) {
    return y;
}

int main(int argc, char *argv[])
{
    float h = 0.025f;   // step size
    float x0 = 0.f;     // x initial condition
    float y = 1.f;      // y initial condition
    int x_approx = 2;   // x that we want to approximate

    while (x0 < x_approx) {
        printf("X: %f, Y:%f\n", x0, y);
        y = y + h * func(x0, y); // Euler step: y = y0 + h * f(x0, y0)
        x0 = x0 + h;
    }
    printf("FINAL X: %f, Y:%f\n", x0, y);
    return 0;
}
```

```
X: 0.000000, Y:1.000000
X: 0.025000, Y:1.025000
X: 0.050000, Y:1.050625
X: 0.075000, Y:1.076891
X: 0.100000, Y:1.103813
X: 0.125000, Y:1.131408
X: 0.150000, Y:1.159693
```

## Next time

We're going to check out other, more accurate methods, and also examine the different types of error associated with each!