In this post I am going to explain the Euler-Maruyama method for approximating stochastic differential equations.

Euler method

First, the Euler method. Suppose you have a (deterministic) ordinary differential equation $y'(t) = f(t, y(t))$, where $y'$ is the derivative of $y$, $f$ is a computable function and the domains of definition are such that everything “works fine”. Suppose we want to approximate the solution $y$ on the closed interval $[0, T]$. We also need a boundary condition, e.g., that $y(0) = y_0$, to ensure uniqueness of the solution.

Let us now approximate the result. First choose a partitioning of the interval $[0, T]$. We take constant stepsizes, but you can also use any other partitioning. So choose a “big” $N \in \mathbb{N}$ and then define the stepsize as $h = T/N$ and the different steps as $t_n = nh$ for $n = 0, 1, \dots, N$.

We know from our boundary condition that $y(t_0) = y(0) = y_0$.
What is now a meaningful value for $y(t_1)$?
We know that $y'(t_0) = f(t_0, y_0)$, which we can compute. Thus we can approximate the “real” solution in a neighborhood of $t_0$ by a line. This works in a practical sense if $N$ was chosen “big enough”, since then $h$ is small.

So a good guess for $y(t_1)$ would be $y_1 = y_0 + h f(t_0, y_0)$. We take the correct value and gradient at $t_0$ and walk a step of length $h$ along the tangent line. More generally we can define an approximation $y_n \approx y(t_n)$ via

$$y_{n+1} = y_n + h f(t_n, y_n).$$

If this is not clear to you I suggest you stop here and draw a picture.
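The Euler recursion is only a few lines of code. Here is a minimal sketch in Python; the function name, the test equation $y' = y$ and the parameter choices are mine, just for illustration:

```python
import numpy as np

# Euler method sketch: approximate y'(t) = f(t, y(t)) on [0, T] with y(0) = y0.
def euler(f, y0, T, N):
    h = T / N                       # constant stepsize
    t = np.linspace(0.0, T, N + 1)  # grid points t_n = n*h
    y = np.empty(N + 1)
    y[0] = y0                       # boundary condition
    for n in range(N):
        # walk a step of length h along the tangent line at (t_n, y_n)
        y[n + 1] = y[n] + h * f(t[n], y[n])
    return t, y

# Example: y' = y, y(0) = 1, whose exact solution is exp(t).
t, y = euler(lambda t, y: y, 1.0, 1.0, 1000)
```

With $N = 1000$ the endpoint $y_N$ should land close to $e \approx 2.718$; the error shrinks as $N$ grows.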

Euler-Maruyama method

The Euler-Maruyama method is for SDEs what the Euler method is for ODEs. Suppose we have the SDE

$$\mathrm{d}Y_t = a(t, Y_t)\,\mathrm{d}t + b(t, Y_t)\,\mathrm{d}B_t,$$

where $B_t$ is Brownian motion.

We want to do the same, namely approximate $Y$ on the closed interval $[0, T]$. Again we need a boundary condition like $Y_0 = y_0$. And we also take a steplength $h = T/N$ and define the different points as $t_n = nh$ for $n = 0, 1, \dots, N$. Basically we do everything exactly as before. We can again approximate $Y$ in a small neighborhood by a line, but this time the gradient of the line is stochastic in nature. So we get as approximation the Markov chain

$$Y_{n+1} = Y_n + a(t_n, Y_n)\,h + b(t_n, Y_n)\,\Delta B_n,$$

where $\Delta B_n = B_{t_{n+1}} - B_{t_n}$; these are i.i.d. random variables which follow a $\mathcal{N}(0, h)$ distribution.
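The scheme can be sketched in Python much like the Euler method; writing $a$ for the drift and $b$ for the diffusion coefficient, and choosing geometric Brownian motion as an illustrative example (the function name and parameters are my own):

```python
import numpy as np

# Euler-Maruyama sketch: approximate dY = a(t,Y) dt + b(t,Y) dB on [0, T], Y_0 = y0.
def euler_maruyama(a, b, y0, T, N, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    Y = np.empty(N + 1)
    Y[0] = y0
    # Brownian increments Delta B_n are i.i.d. with distribution N(0, h).
    dB = rng.normal(0.0, np.sqrt(h), size=N)
    for n in range(N):
        Y[n + 1] = Y[n] + a(t[n], Y[n]) * h + b(t[n], Y[n]) * dB[n]
    return t, Y

# Example: geometric Brownian motion dY = mu*Y dt + sigma*Y dB.
mu, sigma = 0.1, 0.2
t, Y = euler_maruyama(lambda t, y: mu * y, lambda t, y: sigma * y, 1.0, 1.0, 1000)
```

Each call produces a different path, since the Brownian increments are drawn at random; passing a seeded `rng` makes the result reproducible.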

What does Markovian mean in this context? It means that the conditional expected values satisfy $\mathbb{E}[Y_{n+1} \mid Y_0, \dots, Y_n] = \mathbb{E}[Y_{n+1} \mid Y_n]$, so the next expected value depends only on the previous one.


14 July 2015