The Risk Analytics Factory

In Wired (Nov-09), under "World's Biggest Digital Brains", I read that machines with names like Blue Brain, Discover, and Ranger handle "many" teraflops for studying the inner workings of the human brain, climate simulations, and more: up to 16,000 processors for 500 teraflops.

In Finance?

You cannot predict extreme events, and studying the past will not help you manage all risks either. You need to ask What-If and Provided-Then questions!

Simulating deals across comprehensive scenarios can become heavy within a bank, not to mention market-segment or sectoral analyses. Such simulations are often not performed because of technological limitations.
A portfolio of 1,000 moderately complex structured instruments (average valuation time: 10 seconds) across 10,000 scenarios would take more than 3 years on a single core. To get the result in 10 seconds, a speed-up of 10 million would be required.
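The back-of-envelope arithmetic behind those figures, as a quick sanity check. This is a minimal Python sketch using only the numbers quoted above; none of it is a benchmark.

```python
# Figures taken straight from the text above (hypothetical, not measured).
instruments = 1_000            # moderately complex structured instruments
scenarios = 10_000             # comprehensive scenario set
seconds_per_valuation = 10     # average single-core valuation time

total_seconds = instruments * scenarios * seconds_per_valuation  # 100,000,000 s
years = total_seconds / (365 * 24 * 3600)
print(f"single core: {total_seconds:,} s, roughly {years:.1f} years")

target_seconds = 10
print(f"speed-up needed for a {target_seconds} s answer: "
      f"{total_seconds / target_seconds:,.0f}x")
# -> roughly 3.2 years on one core, and a 10,000,000x speed-up target
```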

Can we imagine setting up affordable risk-answering factories with such performance?

First, the kernel software needs to do its part, say, by applying principal component techniques and surrogate models (as sketched after this paragraph). Speed-up of 1,000. Still 10,000 to go.
Put part of the valuations on an NVIDIA Personal Supercomputer (a quad-core machine plus GPUs), and you can get a speed-up of approximately 100.
Still 100 to go. With the help of Mathematica's symbolic parallelization techniques, we have made UnRisk grid-enabled and optimized for close-to-linear speed-up in the number of available cores. Provided we take a networked pool of 128 cores: 32 Tesla Personal Supercomputers for the job?!
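A minimal sketch of the surrogate-model idea, in plain Python rather than the UnRisk kernel: run the expensive pricer on a few hundred scenarios, fit a cheap regression surrogate, and evaluate that surrogate across all 10,000 scenarios. The two-factor pricer and the quadratic basis below are purely illustrative assumptions, not UnRisk internals.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_pricer(x):
    """Stand-in for a ~10 s structured-instrument valuation (hypothetical)."""
    return 100.0 + 5.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1]

scenarios = rng.normal(size=(10_000, 2))   # all scenarios, two risk factors
train = scenarios[:200]                    # only these hit the full pricer
y_train = np.array([expensive_pricer(x) for x in train])

def features(x):
    """Quadratic polynomial basis used as the cheap surrogate."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit on the small training set, then near-instant evaluation
# of the surrogate on the full scenario set.
coef, *_ = np.linalg.lstsq(features(train), y_train, rcond=None)
approx_values = features(scenarios) @ coef

print("max surrogate error on training scenarios:",
      float(np.abs(features(train) @ coef - y_train).max()))
```

The 200 full valuations versus 10,000 surrogate evaluations is where the claimed order-of-magnitude algorithmic speed-up would come from; the acceptable training size and basis depend entirely on the instruments.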

Luckily, we have selected the right algorithms and technologies to multiply the speed-ups. The NVIDIA implementation comes next. And yes, it is possible.
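How the three factors would have to compose against the 10-million target, taking the post's own step-by-step accounting at face value; in practice the factors rarely multiply this cleanly.

```python
# Speed-up budget, figures from the post (assumed, not measured).
algorithmic = 1_000   # surrogate models, principal components
gpu = 100             # NVIDIA Personal Supercomputer per node (approx.)
cores = 128           # networked pool, close-to-linear scaling assumed

combined = algorithmic * gpu * cores
print(f"combined speed-up: {combined:,}x vs. target 10,000,000x")
# -> 12,800,000x, which is why the last factor of 100 is covered by the
#    128-core (32-machine) pool with some margin
```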
