The diffusion in convection

In last week's blog post "A convection toy problem or can you find the diffusion" I showed the picture of a function propagating through time. Although the analytical solution of the problem suggests that the shape of the peak remains stable as a function of time, the peak obviously broadens, indicating that (numerical) diffusion occurs.

If we go back one step and use a central difference scheme for the discretisation of the first order spatial derivative, we obtain the following result


We clearly see that during propagation the peak does not only change its shape, but that the scheme even becomes unstable. If we can analyse this behaviour, perhaps we get an answer to our first problem, namely how numerical diffusion is introduced into the stable propagation when using an upwind scheme.
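To see the instability concretely, here is a minimal sketch in Python (illustrative parameters, not the code behind the original figures) of the forward-in-time, central-in-space (FTCS) discretisation of the linear advection equation u_t + a u_x = 0:

```python
import numpy as np

# FTCS for u_t + a u_x = 0: forward Euler in time,
# central differences for the first spatial derivative.
a = 1.0
nx = 400
dx = 10.0 / nx
cfl = 0.5                 # Courant number a*dt/dx
dt = cfl * dx / a
x = np.arange(nx) * dx

u = np.exp(-((x - 2.0) ** 2) / 0.1)   # Gaussian peak at x = 2
u0_max = u.max()

for _ in range(600):
    un = u.copy()
    u[1:-1] = un[1:-1] - 0.5 * cfl * (un[2:] - un[:-2])
    u[0] = u[-1] = 0.0    # homogeneous Dirichlet boundaries

# the amplitude has grown by orders of magnitude: unstable
print(u0_max, np.abs(u).max())
```

No choice of the Courant number rescues this scheme; a von Neumann analysis gives an amplification factor of modulus greater than one for every timestep size.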

We use a Taylor expansion for the following equation


and obtain


Combining the penultimate equation with the last one we obtain
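Since the formula images did not make it into this page, here is a standard version of the analysis (a reconstruction, assuming forward Euler in time and central differences in space, not necessarily the original's notation):

```latex
% FTCS discretisation of u_t + a u_x = 0:
\frac{u_i^{n+1}-u_i^{n}}{\Delta t}
  + a\,\frac{u_{i+1}^{n}-u_{i-1}^{n}}{2\,\Delta x} = 0
% Taylor expansion of the smooth solution around (x_i, t_n) gives
u_t + a\,u_x = -\frac{\Delta t}{2}\,u_{tt}
  + \mathcal{O}(\Delta t^2, \Delta x^2)
% and replacing u_{tt} by a^2 u_{xx} (differentiate the PDE itself):
u_t + a\,u_x = -\frac{a^2\,\Delta t}{2}\,u_{xx}
  + \mathcal{O}(\Delta t^2, \Delta x^2)
```

The leading error term acts like a diffusion term with the negative coefficient -a²Δt/2.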

Numerical diffusion is introduced by the discretisation scheme used: the negative diffusion coefficient in the leading error term makes the scheme unconditionally unstable. As homework, I suggest performing this kind of analysis for the upwind scheme (one-sided difference) to answer the question where the diffusion comes from. I will post the solution next week.

Cyborg Tech is Coming?

Who is ready to be a Cyborg?

I just think of a trader, quant, risk manager, …  biomechanically connected to an internet of finance?!

I am a sci-fi fan too (I started reading Stanisław Lem in the early 70s and just reread them recently). But beware: first we build the tools, then they build us.

I rather believe in a big improvement of human-machine interactions. Never forget: it is the strength of our brain that it is able to forget …

Does Antifragility Need Fragility?

There is this book: Antifragile. Its conclusion: the antifragile is immune to prediction errors and becomes stronger under added stress.

There is an ongoing thread at the Wilmott General Forum: Can antifragility exist without fragility?

"Traden4Alpha" (an outstandingly knowledgeable senior member) points out that it depends on the assessment criteria. At the transaction level, antifragility needs fragility: hedgers need speculators, the counterparty willing to take the fragile side of a contract (but speculators themselves can buy antifragility).

The difficult thing at the global level is transparency: who has which fragile/antifragile position, and how do those positions cross-connect to fragility concentration or fragility diffusion and buffering?

This led the two of us to

Local and systemic risk management 

In heterogeneous, decentralized systems the failure of one agent is to the benefit of another. Diffusion and buffering help to gain overall antifragility.

The danger of tightly coupled complex systems

The fear of systemic risk may lead to tightly coupled complex (security) systems that have unintended consequences (complexity) but do not give enough time to react to them (tight coupling).

The metaphor: a nuclear power plant. Its tightly coupled complexity is an inevitable system characteristic, but its safety mechanisms do not always make it safer (poor instruments, wrong interpretation, unexpected connections, false alarms, …).

The search for perfect antifragility may lead to "normal accidents".

A sufficiently complex tightly coupled system will fail sooner or later

Learning from turbulences leads to antifragility 

Avoiding risk is a poor way to manage risk. You never learn, and all events stay unexpected, hidden and more or less catastrophic.

The safety system in place when the subprime crisis emerged, with all the toxic CDOs, blew the danger up instead of making the system safer. Market participants - thinking the system was safe - "forgot" the convex risk of the put option embedded in a mortgage-backed security.

If such a system fails, we have not gained anything.

So, a diversified, decentralized, limited-purpose, less tightly coupled system of financial institutions, with fewer overall constraints and rules, … may be a better way. Gain from disorder, as NN Taleb says.

Math Wednesday at the European Wolfram Technology Conference 2014

Today, a short one: I am attending the final day of the EWTC 2014. I gave a talk on the City Swap in the early morning.

Tom Wickham-Jones from Wolfram has just spoken on Big Data, Data Framework, Data Store and Association. Quite impressive. Sascha used this Association in his Functional Goats.

And I met some old friends from the Mathematica community.

More Cars, Fewer Fatal Accidents

This is the first post in a new topic: Telegrams. We read a lot, and whenever we see something we find interesting, we just post it without thinking much about relevance, importance, profundity, … It will be motley and without much analysis ...

There is this new book: Stories and Numbers About Danger and Death

An example from the UK: in 1951 there were only 4 million vehicles, yet they killed 907 children under 15. That number dropped to 533 in 1993, 124 in 2008 and 55 in 2010 - still sad enough …

The reason is most probably a positive mix of regulation (especially traffic calming and the separation of traffic participants - streets are no longer playgrounds …), technology and good sense.

In any case, a good result of good risk management.

When Radical Experimentation Becomes Indispensable


NN Taleb calls trial-and-error insight gain stochastic tinkering. FastCompany presented a few stories about how companies are using rogue R&D to tinker their way to the next big idea in this article.

But, IMO, radical experimentation and tinkering are not necessarily twins.

Let me add our story:

Taming the machine infernal

In Nov-09 we installed an NVIDIA Tesla personal supercomputer - see details here. It was at a time when GPUs became ripe for advanced numerical computation, with double precision, programming models, …

Our radical experiment objective: reduce the calibration of a Heston model, in a least-squares sense, from market data of vanilla options on the FTSE 100 index from hours to seconds. And it worked: see the results in the joint transtec-UnRisk white paper: risk analytics in time.

Exploit new hardware and accelerate schemes on the "old" one

We made it quickly but very cleanly - even bank-proof - knowing it does not make sense to provide it as a single, although stunning, solution in our comprehensive platform.

It was a radical experimentation project that provided deep insight into the specifics, benefits and limits of hybrid CPU/GPU programming - and we discovered new schemes that accelerate the calculations drastically, even on CPUs.

Make UnRisk inherently parallel

Our parallel multi-core implementations are so blazingly fast that they manage the comprehensive risk management tasks of our customers in time on contemporary PCs.

But the xVA requirements will introduce a new complexity into the valuation space - hundreds of millions of single valuations. And this is what we do. With the experience of the 2009+ radical experimentation project we go far beyond, making our engine inherently parallel and platform-agnostic - the same code will run on multi-core or on heterogeneous architectures.

Do it right when the timing is right

We could not have done it without the experiment - selecting one of the most difficult problems. And we might have done it too early without that experience. The right time is now.

The next big idea?

Solving the most complex risk management problems on personal supercomputers is one groundbreaking achievement. Bringing unexpectedly complex quant finance solutions to tablet computers is another.

We spent years in radical experimentation (this is only one example), carefully choosing the maths and mapping every practical implementation detail. And then throw it away and make something really big.

Picture from sehfelder

About High-Tech and Low-Tech Heroes


I, the oldest UnRisker, cannot resist. I refer to Diana's Mister Fantastic post - just published.

When I was young there were already legions of superheroes, and even the Fantastic Four debuted in the 60s. And I read those comics. But in my teenage mind I was probably a little too obsessed with some old-fashioned stuff, like Hal Foster's Prince Valiant.

Valiant is a prince from the Norwegian region of Thule. He came to Camelot, where he earned the respect of King Arthur and became a Knight of the Round Table. He fought the Huns, Saxons, … dark magicians and dangerous animals with his powerful Singing Sword, a magical blade, sister to King Arthur's Excalibur. Quite low-tech but effective - as you can see in the picture.

What has this to do with UnRisk?

We are passionate about future technologies, but one might call us old-fashioned in our business principles: responsiveness and transparency.

We have established the UnRisk Academy to explain in full detail how our Singing Sword, the laser sabers and other sophisticated high-tech arms work.

The picture is from Wikipedia.org.

If UnRisk were a superhero

it would most probably be Reed Richards, alias Mister Fantastic.
"Mister Fantastic numbers among the very smartest men in the Marvel Universe, and that was true even before cosmic rays granted him amazing powers. This genius scientist leads the Fantastic Four on a never-ending quest of discovery and exploration. Sure, his obsession with science sometimes comes at the detriment of his family life, but a kinder and nobler hero you'll rarely find."
IGN Comic Book of Heroes
Mister Fantastic is an engineer and a scientist and he is knowledgeable in many different fields, like mathematics, physics, chemistry and biology. He built a starship with which he ventured into space, where he and the other crew members were exposed to cosmic radiation. Due to this incident his body was changed and he gained full flexibility. He can transform/stretch into any shape or form he wants, which makes him truly adaptive to any situation.
He doesn't wear shiny armor, he doesn't care for fame or fortune but is dedicated to science and wants to excel in what he does.

UnRisk is the result of a joint effort of people from miscellaneous scientific fields (mathematics, informatics, physics, ...) and comes in various shapes and forms. It can be used either as an Extension for other platforms like for example Mathematica, or it can be used as a standalone application. It is constantly being improved and we work closely with our customers to include their wishes and needs. 

A convection toy problem or Can you find the diffusion

In last week's Physics Friday blog post we continued our discussion of convection in PDEs. We mentioned upwind schemes - a class of numerical discretisation methods for solving hyperbolic partial differential equations. Today I try to show you the application of upwinding schemes using a simple but meaningful example, the hyperbolic linear advection equation.


Discretising the spatial computational domain yields a discrete set of grid points x(i). The grid point with index i has only two neighbouring points, x(i-1) and x(i+1). If a is positive, the left side is called the upwind side and the right side the downwind side (and vice versa if a is negative). Depending on the sign of a, it is now possible to construct finite difference schemes containing more points in the upwind direction. These schemes are therefore called upwind-biased or upwinding schemes.

The figure shows sets of discrete points with indices {i-1,i,i+1}. The formulation of the first derivative by finite differences depends on the sign of a. Upwinding schemes consider the "flow" direction of the information and only take the corresponding points (marked with dots) into account. The left part of the figure shows the points used for a first order upwinding scheme for a > 0. The right part of the figure shows the same for a < 0. The discretised version of the above equation using a first order upwind scheme is given by:
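The formula image is missing here, so as a reconstruction (standard notation, not necessarily the original's): for a > 0 the first order upwind update reads

```latex
u_i^{n+1} = u_i^{n} - \frac{a\,\Delta t}{\Delta x}
  \bigl(u_i^{n} - u_{i-1}^{n}\bigr), \qquad a > 0,
```

with the mirrored one-sided difference (u_{i+1}^n - u_i^n) used for a < 0.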


If we apply this scheme to a function (some sort of Gauss peak) with homogeneous Dirichlet boundary conditions (we chose the size of the simulation box so as to justify the boundary conditions), we would expect the peak to be propagated as a function of time without changing its shape. The following figure shows the result at different times ...


Obviously the peak broadens when propagating through time - is diffusion the reason for this? And if so, where does the diffusion come from?
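The experiment is easy to reproduce. Here is a minimal Python sketch of the first order upwind scheme applied to a Gaussian peak (illustrative parameters, not the code behind the original figures); the propagation is stable, but the peak flattens:

```python
import numpy as np

# First order upwind scheme for u_t + a u_x = 0 with a > 0.
a = 1.0
nx = 400
dx = 10.0 / nx
cfl = 0.5                 # Courant number a*dt/dx <= 1: stable
dt = cfl * dx / a
x = np.arange(nx) * dx

u = np.exp(-((x - 2.0) ** 2) / 0.1)   # Gaussian peak at x = 2
u0_max = u.max()

for _ in range(400):      # the peak travels a distance a*400*dt = 5
    un = u.copy()
    u[1:] = un[1:] - cfl * (un[1:] - un[:-1])   # upwind for a > 0
    u[0] = 0.0            # homogeneous Dirichlet at the inflow boundary

# stable, the peak arrives near x = 7, but its height has dropped
print(u0_max, u.max(), x[np.argmax(u)])
```

The scheme never oscillates or blows up, yet the peak height decays noticeably: exactly the (numerical) diffusion the post asks about.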

Is There a Proven Way to Add Value?

In the wu-wei post I pointed out that it is often wise to let the complex stream work for you. Be patient. To make it right don't try, do.

Problem solving principles are different in different fields and for different purposes.

The steps of mathematical problem solving

You have a problem description, usually in natural language (a term sheet for example)
  1. Transform the problem description into a mathematical model
  2. Transform the model into a form that is well suited for calculation
  3. Calculate
  4. Interpret the results
Each of the steps has its traps and needs special knowledge and skills. In derivative and risk analytics, step 1 is about model selection and validation, and that needs the other steps. But also in general, step 1 is the most difficult one.

The proven way to add value

Do the most difficult work. It is valued and scarce because it is difficult. This is what makes quant work indispensable. Quants have the most difficult parts to master and contribute; if it's easy, it is not for them.

But whether you are a front office quant, a risk quant, a model validator, a library quant or a quant developer, what is difficult also depends on the methodology and technology you use.

We at UnRisk have decided to do the difficult work in steps 2 and 3. And we provide the UnRisk Financial Language to support step 1. Serving quants doing the very difficult work, we have unleashed the programming power behind UnRisk: UnRisk-Q.

This post has been inspired by Seth Godin's post.

Trinomial Bonsais

In his recent post on convection, Michael Aichinger recapitulated the treatment of convection in a trinomial tree when convection plays a significant role. In such cases the standard branching would deliver one negative weight, leading to instabilities. Therefore down- and up-branching have to be implemented.

Too large timesteps?
However, due to the explicitness of trinomial trees, there is a second source of instability in trinomials: when timesteps are too large compared to the discretisation in the space (i.e. the short rate) dimension, the scheme becomes unstable. Assume we want to value a fixed income instrument under a one-factor Hull-White model with a time horizon of 30 years, and we would like to have a grid resolution for the interest rates of 10 basis points. With a typical (absolute) volatility of, say, 1 percent (a reasonable guess), this leads to a time step of the order of 1 day, meaning that 10000 time steps are needed.
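As a back-of-the-envelope check of these numbers (a sketch, assuming the standard Hull-White trinomial spacing Δr = σ√(3Δt); variable names are illustrative):

```python
# Stability ties the rate step to the time step: dr = sigma*sqrt(3*dt),
# so a desired rate resolution dr pins down the admissible dt.
sigma = 0.01      # 1 percent absolute volatility
dr = 0.001        # 10 basis points grid resolution
horizon = 30.0    # years

dt = dr**2 / (3.0 * sigma**2)   # years per time step
n_steps = horizon / dt

print(f"dt = {dt * 365:.2f} days, steps = {n_steps:.0f}")
```

This gives a time step of roughly 1.2 days and about 9000 steps over 30 years, matching the order-of-magnitude numbers in the text.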

The grid then looks 50 times finer (in both directions) than the following plot.

Only 200 of the necessary 10000 timesteps plotted here.

No. There are much cleverer methods available.

For a more detailed description of the stability conditions, see section 4.5 of the Workout.

Is It Enough To Be Good In Not Being Bad?


When I made my first stumbling steps in computing, it was already agreed: "Nobody ever got fired for buying IBM". But at that time, the 70s, only a few understood that decisions are about optimizing risk.

The fear of regret

It is a wonderful example of understanding loss aversion - defensive decision making. Traditional marketers of brands codify growth promotion: value & benefit, standard & importance, recognition & programming, identity & promotion, or emotion & love.

And we know it but often suppress it: value is the weakest and love is the strongest growth code - but most people do not love risk management systems, nor find them ideologic. Yet they could be mainstream. Risk is often mistaken for danger - at all levels, from the managerial to the operational.

So, in enterprise-level technologies the "IBM example" goes on and on. Standard & importance wins the game.

Please do not misunderstand, I do not complain - I want to point out that this is our power source: do things that matter for those who care. Is this arrogant? It is not meant to be …

Better or less likely to be bad?

My experience as a seller and buyer: people quite often pay a fairly large premium for brands, not because they are objectively better, but because they do not expect them to be bad.

There is no metric for assessing a technology.

Stochastic tinkering?

As NN Taleb calls trial-and-error insight gain. Yes, people make progress using things without knowing how they work. And this is great - if somebody comes along and explains (a little later), especially if traps and side effects need to be avoided. I call this the black box - white box principle of using (in contrast to the white box - black box principle of learning). This is the risk engineering part of risk management.

The tightly coupled complex systems trap

We have pointed out many detailed model-method traps here. And uncovered principles that were common sense for a long time but do not work as expected.

Here's a really dangerous one: a complex system may have unintended consequences, and tight coupling means there is not enough time to react to them. And it is paired with the "sunk cost bias": if a system is expensive, you don't change it.

Serving customers individually with an integrated but loosely coupled system

UnRisk offers highly automated, adaptable decision-support risk management systems that are development systems in one. They work on a generic portfolio-across-scenarios valuation principle; their database concept makes them integrated, and their orthogonal organization makes them loosely coupled.

Our growth code is individualization & innovation.

This post was inspired by a Rory Sutherland contribution at Edge.

Picture from sehfelder

UnRisk FACTORY Support Made Easy – Logging Notebooks

In today’s blog I am going to describe one of the most important mechanisms for debugging UnRisk FACTORY valuations: the UnRisk FACTORY Logging Notebooks.
These Logging Notebooks help to resolve support cases like, e.g.,
  • The calibration of an interest rate model does not work?
  • The valuation delivers an unreasonable result?
  • Have I set up the instrument correctly?
If a user gives me a call and poses one of these or a similar question, I ask him / her  to send the corresponding Logging Notebook, which he / she can download from the corresponding details page (the following screenshot shows this download possibility):

Downloading the Logging Notebook







So, while we are talking on the phone, I already can
  • open this Logging Notebook in Mathematica and load the UnRisk-Q package
  • perform the same valuation as the customer on my local machine without needing to be connected to his / her UnRisk FACTORY
  • check within this Notebook the used market data, interest rate model, instrument data, scenario, ….
In most of the cases the question of the customer can be answered within a few minutes.
Just to give you a short impression of how the contents of such a Logging Notebook look (with some experience – which all of our developers have – it’s self-explanatory ;-) ), here is a small piece of the UnRisk-Q function call:


UnRisk FACTORY Logging Notebook









At the end I just want to say thank you to all of our developers – the incorporation of this logging mechanism has made customer support much simpler and faster.

Convection in a PDE

In my last two blog posts I tried to explain the mechanism of convection and presented some examples of convection in nature. The starting point has been the post about Interest Rate Models - From SDE to PDE, where we ended with a diffusion - convection - reaction equation. The numerical solution using standard discretisation methods entails severe problems, resulting in strong oscillations in the computed values. The drift term (first derivative) is chiefly responsible for these difficulties and forces us to use specifically developed methods with so-called upwind strategies in order to obtain a stable solution. Very roughly speaking, it is mandatory to "follow the direction of information flow" and to use information only from those points where the information came from. In trinomial tree methods, the up-branching and down-branching take into account the upwinding and lead to nonnegative weights, which correspond to stability.
Left: Trinomial trees cut off the extreme ends to avoid stability problems: Change of calculation boundary!
Right: Flow of Information - If convection plays an important role, upwinding has to be applied.
Originating from the field of computational fluid dynamics, upwind schemes denote a class of numerical discretisation methods for solving hyperbolic partial differential equations. In our case, they are used to cure the oscillations induced by using standard discretisation techniques in convection dominated domains of combined diffusion-convection-reaction equations. Upwind schemes use an adaptive or solution-sensitive finite difference stencil to numerically simulate the direction of propagation of information in a flow field in a proper way. Starting with next week's blog post I will show in detail how upwinding schemes work and how we can use them to cure instabilities induced by convection.

Atmospheric Architecture


In 2009 I wrote about Algorithmic Architecture - an example of the parametric lofting of buildings with sculptured geometries and how it can be programmed in a declarative way.

Today I read in the Wired UK magazine, Jun-14 issue, about the Les Turbulences FRAC Centre. In short, the shape of the centre is inspired by the long-term water conditions, the surrounding traffic and local meteorological conditions. If it's sunny the building looks golden, if it's overcast, blue (the panelling is done by a grid of LEDs).

I have no idea whether the architects of the FRAC Centre used a kind of parametric engineering approach, as sketched in algorithmic architecture, but if someone combined the two approaches algorithmically, it would be modeling with model calibration (inspired by colliding weather conditions) and recalibration (displaying patterns of the surrounding traffic and weather).

Parametric financial modeling is much more flexible and it's much more algorithmic. 

But the principles are quite general - modeling, calibration, recalibration. And having an architecture that is represented in a programming language makes it much easier and more powerful.

A financial language together with a rich computational knowledge base that brings the high-level programs to life makes the difference.

UnRisk Financial Language

Picture from sehfelder

Two-dimensional lions

If we take my Fortran code for the goats, wolves and lions problem, as shown in The F-word of programming, we know that memory usage may be a problem. If we compile it for 678 possible animals (217 goats, 255 wolves, 206 lions), the calculation performs neatly. However, my Windows task manager shows me that 1.22 gigabytes of memory have been allocated for this comparably small example (without any optimisation).

Memory efficiency
Obviously, if we know the total number of animals in a universe of possible forests, it suffices to know the number of animals of two species and obtain the number of the third one by subtraction. Hence two two-dimensional arrays (one storing the forests before lunch, and one storing them after lunch) are enough. For the above example of 217 goats, 255 wolves and 206 lions, 4 megabytes of memory are needed.

Efficiency may lead to bugs
As Sascha pointed out in Functional goats, this might lead to a higher danger of (hidden or not) bugs. In fact, in my first two-dimensional reprogramming of the problem, I obtained wrong results because I had forgotten to reassign .FALSE. to the after-lunch forests before having the meals.
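The two-array idea can be sketched as follows (a hypothetical Python transcription, explore_forests is not the original Fortran): a forest is identified by its goat and wolf counts, the lion count follows from the current total, and the after-lunch array is reset each round, which is precisely the step I had forgotten:

```python
import numpy as np

def explore_forests(goats, wolves, lions):
    """Brute-force exploration with two 2D boolean arrays.

    A forest is identified by (goats, wolves); the number of lions
    follows from the (strictly decreasing) total number of animals.
    Returns the number of reachable forests and the stable end states.
    """
    total = goats + wolves + lions
    before = np.zeros((total + 1, total + 1), dtype=bool)
    before[goats, wolves] = True
    count = 0
    finals = set()
    for animals in range(total, 0, -1):
        # reset the after-lunch forests each round (forgetting this
        # was the source of the wrong results mentioned in the post)
        after = np.zeros_like(before)
        for g, w in zip(*np.nonzero(before)):
            l = animals - g - w
            if l < 0:
                continue
            count += 1
            if (g == 0 and w == 0) or (g == 0 and l == 0) or (w == 0 and l == 0):
                finals.add((int(g), int(w), int(l)))  # stable: no meal possible
                continue
            if g > 0 and w > 0:
                after[g - 1, w - 1] = True   # wolf devours goat -> lion
            if g > 0 and l > 0:
                after[g - 1, w + 1] = True   # lion devours goat -> wolf
            if w > 0 and l > 0:
                after[g + 1, w - 1] = True   # lion devours wolf -> goat
        before = after
    return count, finals
```

Run on the original puzzle numbers, explore_forests(17, 55, 6) should reproduce the known result that only lion-only forests are stable, the largest one holding 23 lions.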

Medium size examples
For 1017 goats, 1055 wolves and 1006 lions, the corrected 2d implementation allocates 75 MByte of memory (I was not brave enough to try the 3d version). In the specific environment of my home computer (8 GByte of memory, i5 CPU, single-core implementation), it took 340 seconds to calculate more than 540 million possible forests. For 2017 goats, 2055 wolves and 2006 lions, it took 45 minutes (as I expected: the calculation time increases with the third power of the initial size).

Number of possible different forests when starting
with 1017 goats, 1055 wolves and 1006 lions=3078 animals


Relevance in finance
Obviously, when valuating a financial derivative based on some transition model (like Black-Scholes, to name an easy one), we do not have to store the values at all times but just the before- and after-transition values.
Dirty developments in regulation (xVA) make it necessary to think very carefully about efficient memory management.
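The same rolling-array idea in a minimal sketch (a standard Cox-Ross-Rubinstein binomial tree; crr_put and its parameters are illustrative, not UnRisk code): backward induction only ever keeps one time slice of values in memory:

```python
import math

def crr_put(S0, K, r, sigma, T, n):
    """European put via a CRR binomial tree, storing only one
    time slice of values (rolled backwards in place)."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up-probability
    disc = math.exp(-r * dt)

    # payoff at maturity: one array for the whole valuation
    v = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    for _ in range(n):
        v = [disc * (p * v[j + 1] + (1 - p) * v[j]) for j in range(len(v) - 1)]
    return v[0]
```

For instance, crr_put(100, 100, 0.03, 0.2, 1.0, 500) should land close to the Black-Scholes put value of about 6.46, while the full tree of roughly n²/2 nodes is never held in memory.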

Quants - How to Do, But Not Overdo.


I promise I will not exploit the "goats …" forever. But how I failed to solve it in time made me think a little more about some simple working principles.

Clearly, we have objectives, procedures, methods and tools, and we have knowledge and skills, a world view and an environment.

But there are a few simple things that we tend to suppress.

Set a date (time)

Announcing a date is a serious matter. There is no project without dates. The date can be far in the future. But never miss a date. It might be the most important date for somebody else.

Don't parallelize

Your brain is not a multi-threaded computer. Cognitive load slows you down. Software developers should choose positive distraction over added cognitive load.

Don't use a speedometer

You can approach your destination right over the mountains driving slowly, or bypass them on the highway. A speedometer is not always an indication of how far away your goal still is. So the number of code lines per week does not say much. It's the richness and freedom of the language, the programming style, the computational knowledge base, the algorithmic engines, … see the slow developer movement.

When good ENUF is great

Deliver - even if it's not a masterpiece. Especially in quant finance, complex systems can be worse if they lose additional information in the methodological jungle. And basic technologies are emerging fast. They can help you make the most important deadlines.

Quants - think it, build it, in time, good enough and fast enough.

Picture from sehfelder

How to Simulate the Short Rate Without a Computer (or: a Physicists View of Financial Models)

Since this is my first post here, I would like to introduce myself. I am Diana Hufnagl and I am fairly new to the financial world. I started working in this field one and a half years ago and there is still a lot for me to learn. My scientific origin is physics, and today I would like to share my first thoughts on financial models with you.

Consider the following stochastic differential equation (SDE):

dr = m dt + s dW

As you probably all know it can be used to model the short rate for example. The last term corresponds to its stochastic part, modeled by a Wiener process.
When I first saw this equation I instantly remembered an afternoon a couple of years ago. I spent it in front of the microscope looking at micrometer-sized particles in water. What I am referring to is an experiment we conducted in a physics course at the University, where we observed and measured Brownian motion, which is a Wiener process. 

What we did was: 
  • prepare a solution of latex particles in water,
  • put it under a microscope,
  • put some paper on the wall,
  • project the picture of the microscope onto said paper
  • and then follow one particle at a time as it floats through the water.
Every few seconds we marked the position of the particle on the paper, thereby recording its random walk. We did this for lots of different particles to collect enough independent paths. We spent hours in a room lit only by the projection of the microscope on the wall, making dots on a piece of paper. And, as weird as it might sound to non-physicists, it was amazing! Describing and discovering the laws of nature (even if it is something as simple as particles moving in water) is something I find truly fascinating. Afterwards we sat down and calculated the mean value and the variance of our recorded random walks and finally obtained the expected probability distribution. And by doing so we basically made a real life simulation of the SDE given above for m = 0. If one wants to include a drift term (m ≠ 0), one could introduce a steady flow for example, which would lead to a shift of the mean value of the distribution. 

Therefore I have just given you the means to simulate, for example, the short rate without the use of a computer. All you need are small particles in the correct fluid (this fixes s) exhibiting a specific flow (this fixes m) and you are good to go! Using a microscope you can then observe the different realizations of the short rate.
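The tabletop experiment has a direct computational analogue. A minimal Euler-Maruyama sketch of dr = m dt + s dW (illustrative parameter values), averaging over many paths just as we averaged over many recorded particles:

```python
import numpy as np

rng = np.random.default_rng(42)

m, s = 0.02, 0.1       # drift and volatility (illustrative values)
T, n_steps = 1.0, 250  # one year, roughly daily steps
n_paths = 100_000
dt = T / n_steps

# Euler-Maruyama for dr = m dt + s dW, starting at r = 0
r = np.zeros(n_paths)
for _ in range(n_steps):
    r += m * dt + s * np.sqrt(dt) * rng.standard_normal(n_paths)

# like averaging over many recorded particle paths: the sample
# mean and variance approach m*T and s**2 * T
print(r.mean(), r.var())
```

Setting m = 0 recovers the pure Brownian motion observed under the microscope; the drift merely shifts the mean of the distribution, exactly as a steady flow would.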

In Outstanding Vintages Buy "Small" Wines





I am not only a seller, but also a buyer. Understanding positions on the value/price map is my "life". Buying price-performance champions needs experience or trust in assessments.

I like to drink a glass of wine now and then (or two), and my small wine cellar is not for gathering prestige objects or reselling, but for storing affordable wine to drink with joy when ripe (I am a long buyer of joy).

Bordeaux: vintage charts give you rankings of vintages over a long period.

1961 is still on "drink or hold", as well as 1982, both outstanding vintages.
1990, one of the driest vintages, leading to almost overripe fruit (maybe comparable to 1949), is outstanding, but you should have drunk its "smaller" wines.
Usually you need to look a little deeper into left (Haut Medoc) or right bank (Saint-Emilion, Pomerol) scores.

My favorite in the 20th century: 1982, great from both banks, maybe the first in the modern era. But, horribly expensive - and I have my limit.

Outstanding 2xxx years: 2000, 2005, 2009, 2010.

My personal assessment:

2000 - a great vintage that produced opulent wines 97/100
2005 - a stunning vintage from top to the bottom in all appellations (textbook-perfect) 100/100
2009 - vintage of the decade/century? Dramatically ripe fruit (especially at the left bank) 99/100
2010 - another outstanding vintage that produced tannic, powerful and (too?) rich wines 97/100

I prefer "elegant, balanced" over "opulent" - consequently I rank 2005 slightly over 2009. Robert Parker, the renowned wine critic, might see this differently.

In all of those outstanding years you can buy "small" wines for, say, EUR 60 and below that taste like wines of the "big" Chateaus in other years. Especially 2005 and 2009.

Unfortunately it is becoming more difficult to buy 2005 price-performance leaders (lucky you, if you can get a Chateau Guillot Clauzel - 2 ha small, next to Chateau Le Pin (same terroir) - for 1/20 of the price ...).

Of 2009, you still may find a

Chateau Jean Faure - Saint Emilion
Chateau Trois Origines - Saint Emilion
Château Vieux Pourret Dixit - Saint Emilion
Chateau La Pointe - Pomerol
Chateau La Croix - Pomerol
Chateau Fieuzal rouge - Pessac Leognan
Chateau Phelan Segur - St Estephe
Château Haut Bages Liberal - Pauillac
Chateau Clos de Quartre Vents - Margaux

The same bargains can be found in other wine regions, but there is no region that is discussed and assessed by so many tasters. You can expect that a randomly purchased bottle will fit into the picture. This is maybe similar in, say, the Napa, Duero or Tuscany regions.

Personal taste preferences play a greater role in, say, Burgundy, Rhone, or Piemonte.

p.s. IMO, this has an analogy in technology prices. If you are lucky you find a stunning technology with a high value / low price - built in an outstanding "year" of technological innovation.

Picture from sehfelder

The F-Word of Programming


In Three ways to solve the goats, wolves and lions puzzle, I described my feasible brute force algorithm, which I had coded in my first programming language (and still the one in which I am reasonably fluent): Fortran - in the meantime, the F-word of programming.
Input device of the kind I used in 1978
http://blogs.laprensagrafica.com/litoibarra/?p=2808

I know that Fortran is less elegant than, say, the Wolfram language, but
(a) I like it, and
(b) I am quite good at writing small sample codes.
The third reason will be given below.

The code reads as follows (everything hard-coded):


      logical status (0:78,0:78,0:78)
      integer i,j,k,animals, count
      do i=0,78
      do j=0,78
      do k=0,78
      status (i,j,k)=.false.
      end do
      end do
      end do
c
c status (i,j,k) will be set to true if a forest with i goats, j wolves and k lions is reached.
c
      status (17,55,6)=.true.
      count=0
      do animals= 78,1,-1
       do i=0,animals
        do j=0,animals-i
         k=animals-i-j


         if(status(i,j,k))then
c
c count counts all possible forests
c
           count=count+1
c ------------------------------------------------------------
c the following lines print possible final states
c
           if(i .eq. 0 .and. j .eq. 0)then
             write (*,*)i,j,k
           end if
           if(i .eq. 0 .and. k .eq. 0)then
             write (*,*)i,j,k
           end if
           if(j .eq. 0 .and. k .eq. 0)then
             write (*,*)i,j,k
           end if
c------------------------------------------------------------
c in the sequel: Possible meals
c
           if(i*j .ne. 0)then
             status (i-1,j-1,k+1)=.true.
           end if
           if(i*k .ne. 0)then
             status (i-1,j+1,k-1)=.true.
           end if
           if(j*k .ne. 0)then
             status (i+1,j-1,k-1)=.true.
           end if
c
         end if


         end do
        end do
      end do
      write(*,*) "count= ",count
      end

It turns out that with the specific example, 4208 different states of forests can be obtained, and it takes less than one second to calculate them (compiled with the free G95 compiler and executed on one core of my i5 desktop computer).

The above code is not memory efficient at all, and memory allocation takes quite a portion of the elapsed time.

Like in finite elements for a transient diffusion equation, we do not need a three-dimensional array; two two-dimensional ones (old and new) would be sufficient.

When we increase the number of initial animals to (117, 155, 106) (yielding 223 lions at the maximal final forest), the Fortran program still performs in less than 2 seconds and calculates 985508 possible forests. Can a functional language do this as well?
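Incidentally, the same level-by-level sweep is easy to sketch in Python (a minimal translation of the Fortran above; function and variable names are mine, not from the post). Keeping only the states per animal total is exactly the "two arrays instead of one 3D array" memory optimization mentioned above:

```python
from collections import defaultdict

def count_forests(goats, wolves, lions):
    """Count all reachable forest states, sweeping level by level
    (by total number of animals), like the Fortran code above."""
    levels = defaultdict(set)          # total -> set of (g, w, l) states
    levels[goats + wolves + lions].add((goats, wolves, lions))
    count = 0
    for total in range(goats + wolves + lions, 0, -1):
        for (g, w, l) in levels[total]:
            count += 1
            # possible meals: two species shrink by one, the third grows by one
            if g and w:
                levels[total - 1].add((g - 1, w - 1, l + 1))
            if g and l:
                levels[total - 1].add((g - 1, w + 1, l - 1))
            if w and l:
                levels[total - 1].add((g + 1, w - 1, l - 1))
    return count

print(count_forests(17, 55, 6))  # the post reports 4208 reachable forests
```

Only two adjacent levels are ever written to at the same time, so the older levels could be discarded on the fly.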

Doing by Not Doing - The Wu-Wei Principle


Wu-wei is an important concept in Taoism and literally means non-action or without control. It is included in the paradox of wei wu wei: act without action.

In Laozi's sense the action is seeking to "forget" knowledge - and this can only work like Wittgenstein's ladder: to go beyond you must throw away the ladder you have climbed up.

Be patient

My simple (western) interpretation is: do as little as possible and do when the timing is right (a "natural" flow is working for you, but you need to follow it) - be patient, but awake.

The metaphor of fuzzy control

Simplifying, traditional controllers can become wiggly in certain situations.  Fuzzy control systems are "smoothing".

If you drive your car through a turn, you do not change direction in little steps approaching the next target point. You drive like a fuzzy control system, avoiding unnecessary changes, and even in unexpected events you try to avoid volatile actions. It is a kind of coolness that comes with experience.

Wu-wei in the small.

Try not. Do?

A seller always wants to be the first. And yes, sometimes you need to prototype to get insight - especially when you transfer solutions from other fields. UnRisk prototyped Adaptive Integration, FEM and streamline diffusion, ...

But there are some examples in the large, where it was wise to not try.

Where UnRisk was not the first:

Libor Market Models - it made sense to have a deep look into the maths first - negative eigenvalues in practice

Normally Distributed Short Rate Models - we did not kill them. Why? See Black vs Bachelier revisited

Complex Credit Derivatives - here we even rejected bending the reality. And it turned out that the most complex instruments were also the toxic ones.

VaR - it was widely misunderstood and maybe even used to hide risk. We knew it requires a multi-method approach across many risk factors. And we did it when it was absolutely clear.

xVA - with counterparty exposure modeling, collateral management, and central clearing requirements impacting the whole bank, it introduces an unprecedented complexity into the valuation space. So complex that the computational engines need to be not only inherently parallel but also optimized for data and valuation streams.

And xVA may be the reference case for being lucky with wu-wei.

We will present a first release at the end of 2014 (for distinguished partners who work very closely with us, challenging it through different lenses: auditor, financial advisor, SaaS risk management provider, bank).


This post has been inspired by E. Slingerland's post in Edge

Classes of Magic Forests


My Pavlovian reflex was to attack the magic forest problem using backtracking search in Haskell - I actually did that, but soon realised that the problem does not require backtracking. That got me thinking a bit more about the magic forest problem itself, and I'd like to share these insights in today's blog entry (it is called "UnRisk Insight" after all). More about the Haskell code in a later blog entry.

EDIT: I deliberately didn't read the other solutions before working this out, because I didn't want to spoil the experience of puzzle solving. Andreas Binder's post contains essentially the same idea.

The Magic Forest


Let me briefly restate the problem: In a magic forest, goats, wolves and lions live happily together. Being a theoretical physicist - and not a great storyteller - this is what the forest looks like in my imagination:

(g, w, l).

So the state of the forest is defined by a vector containing the number of goats, wolves and lions living in it. Well, one could say the beauty lies in the abstraction :) It turns out our forest is a pretty lively place: a "stronger" animal can devour a weaker one at any time and then magically turns into the third species. That is, our forest evolves one step at a time by adding one of three vectors to it:

d1=(-1, -1, +1)
d2=(-1, +1, -1)
d3=(+1, -1, -1)

Note that each step reduces the number of two species by one and increases the third by one, so in total, each step reduces the number of animals by one. Of course, there is the additional constraint that the number of animals in each species can never drop below zero.

Classes of Forests


The whole thing's too complicated for a physicist, so let's make a coordinate transformation (I'll indicate coordinates in the transformed space by angle brackets):

<x, y, z> = <g+w, g+l, w+l>

In this space, the above transitions are just

d1' = <-2,0,0>,
d2' = <0,-2,0>,
d3' = <0,0,-2>,

but the constraint is more complicated. What this shows us is that each step reduces one coordinate by two in this space. If we let the wildlife in our little forest go along with its business, it might eventually arrive at a stable state where no devouring is possible anymore. This state must be characterised by two species being extinct, otherwise further devouring would be possible. In <x,y,z>-space this means that one coordinate must be zero, which further means that we can only make two species extinct if their total number of animals is even (because d1', d2' and d3' change each coordinate by zero or minus two, preserving its parity). There must be at least one such combination: we have two possibilities, even and odd, and three places to distribute even/odd to (goats, wolves, lions). Let's enumerate the possibilities:

forest class    realization 1    realization 2
<eee>           (ooo)            (eee)
<eoo>           (ooe)            (eeo)
<oeo>           (oeo)            (eoe)
<ooe>           (eoo)            (oee)

The two columns on the right contain all 8 possibilities of distributing (e)ven and (o)dd numbers over the three species. Two realisations in ()-space always correspond to one possibility in <>-space, which is given in the leftmost column of the table. In <>-space, either all three coordinates are even or exactly one is: the sum x+y+z = 2(g+w+l) is always even, so the number of odd coordinates must be even - zero or two.

The transformations d1, d2 and d3 all flip even to odd numbers and vice versa: that means under each of these transformations, we hop back and forth between columns two and three in the above table, but we never go to a different row. That is, there are four distinct classes of magic forests, in the sense that during its entire lifetime, a forest always stays in its class.
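This class invariance is easy to check numerically. A small Python sketch (function and variable names are mine, not from the post) computes the parity signature in <>-space and verifies that all three transitions leave it untouched:

```python
from itertools import product

def forest_class(g, w, l):
    # parity signature in <x, y, z> = <g+w, g+l, w+l> space
    return ((g + w) % 2, (g + l) % 2, (w + l) % 2)

TRANSITIONS = [(-1, -1, +1), (-1, +1, -1), (+1, -1, -1)]  # d1, d2, d3

# every feasible meal leaves the forest in its class
for g, w, l in product(range(6), repeat=3):
    for dg, dw, dl in TRANSITIONS:
        if g + dg >= 0 and w + dw >= 0 and l + dl >= 0:
            assert forest_class(g, w, l) == forest_class(g + dg, w + dw, l + dl)
```

Each transition changes two coordinates in <>-space by zero and one by two, so the parity signature never moves.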

Solution Strategy


The "optimal" solution strategy we are trying to find is the one with the maximum number of animals surviving. As each devouring step reduces the number of animals by one, this corresponds to the shortest path to a stable forest.

Given an initial forest, we first determine the class it belongs to. For the three classes <eoo>, <oeo> and <ooe>, there is only one choice: we have to eliminate the two species whose combined number is even (that's goats and wolves for <eoo>, goats and lions for <oeo>, and wolves and lions for <ooe>). For the first class, <eee>, there is an initial choice of which two species to eliminate.

Strategy for <eoo>


Let's look at <eoo> first; <oeo> and <ooe> work out the same way. For an <eoo> forest, we know the solution will be (0, 0, u), where u is the number of lions remaining in the end. To get there, we have to get rid of the goats as well as the wolves. Let m be max(g,w), i.e., the count of the larger of the two species we want to get rid of, and assume these are the wolves in this example. That means we have to get rid of the wolves, and there are two transformations that bring down the number of wolves: d1 and d3. The shortest strategy will thus contain only those two transformations, because anything that increases the number of wolves will take us longer.

Is it possible to reach a stable state with just these two transformations? Of course, because the only possibility where neither of those two transformations is feasible is the one where both goats and lions are zero: for one, this would be a solution of the problem, but more importantly, we know that we can never get there, because this solution lies in a different forest class.

How many steps do we need to arrive at the stable state? As we are only using d1 and d3, and both of those reduce m by one, that's clearly m steps.

Bottom line for <eoo>: The optimal strategy consists of m steps and is, in general, not unique - any feasible combination of m steps of d1 and d3 that eliminates both goats and wolves is an optimal solution. [The proof (or rejection) that all feasible combinations of m steps of d1 and d3 end up at the optimal solution is left as an exercise for the reader ;]

Strategy for <eee>


The forest class <eee> is special, because we can choose which of the three species is to survive. Once we have chosen that lucky species, everything goes along the same lines as for the <eoo> case. Of course, we want to choose that species such that we need a minimum number of steps: that is, we choose the surviving species such that the maximum of the counts of the two remaining species is smallest.

Number of Surviving Animals


We already know which animal survives, and the survivor's count is easily determined: the initial number of animals is N=g+w+l, the optimal solution takes m steps, and each step reduces the total number of animals by one. Thus, the number of surviving animals is N-m.

Solution for the Example Forest


The forest given in the example was (17,55,6), which is of class <eoo>. From that it follows that we have to eliminate goats and wolves, and the lions will survive. The maximum of the two species we want to get rid of is 55, and it corresponds to the wolves. So the optimal solution will take 55 devouring steps, and as the total number of animals is N=17+55+6=78, there will be 78-55=23 lions left in the end.
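Putting the pieces together, the whole recipe fits in a few lines of Python (a sketch of the strategy derived above; function names are mine):

```python
def optimal_strategy(g, w, l):
    """Return (index of surviving species, number of steps, survivors).
    Species are indexed 0=goats, 1=wolves, 2=lions."""
    animals = (g, w, l)
    # a species can be the sole survivor only if the other two have an
    # even combined count (i.e., their <>-coordinate is even)
    candidates = [s for s in range(3)
                  if (animals[(s + 1) % 3] + animals[(s + 2) % 3]) % 2 == 0]
    # cost of a choice = max of the two species counts to be eliminated
    cost = lambda s: max(animals[(s + 1) % 3], animals[(s + 2) % 3])
    best = min(candidates, key=cost)   # only class <eee> offers a real choice
    return best, cost(best), sum(animals) - cost(best)

print(optimal_strategy(17, 55, 6))  # -> (2, 55, 23): the lions survive
```

For the three classes with odd coordinates, candidates contains exactly one species; for <eee> it contains all three, and min picks the cheapest.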

Conclusion


I'll go home now and put on my sack cloth and ashes - that took me a lot longer than the 2.5 mins it is supposed to take 16 year old students.

Examples of Convection

In last week's physics Friday we discussed convection. Before I give some examples where convection occurs in nature, I will briefly repeat the basics (quote from physics.info): "Convection is the transfer of internal energy into or out of an object by the physical movement of a surrounding fluid that transfers the internal energy along with its mass. Although the heat is initially transferred between the object and the fluid by conduction, the bulk transfer of energy comes from the motion of the fluid. Convection can arise spontaneously (or naturally or freely) through the creation of convection cells or can be forced by propelling the fluid across the object or by the object through the fluid. "

For the examples we focus on spontaneous convection. Spontaneous convection is mainly driven by buoyancy, but surface tension also plays a role to a lesser extent.

Some of the physical processes most influencing people's lives are driven by convection:

Atmospheric circulation on a local level through anabatic (updraft) and katabatic (downdraft) winds.

http://static2.2alpesnet.com
But also on a global scale, atmospheric circulation is driven by convection - tropical and polar cells are common terms in weather reports.
Ocean currents are driven by a combination of temperature and salinity gradients (thermohaline circulation) in the deep ocean, by winds near the surface, and by topography everywhere water touches land. Examples include the Gulf Stream, historically the first reported ocean current, and the deep ocean return current, both affecting the local climate (the Gulf Stream, for example, keeps Europe warmer than North America at the same latitude) and the global climate.
Also, geologic effects like plate tectonics are driven by mantle convection, and outer core convection (along with charge separation) generates Earth's magnetic field.

http://bprc.osu.edu/


The above examples show that convection is an important physical effect influencing people's everyday lives. The role of convection in finance will be discussed in our next physics Friday blog.

Once Upon A Time ….


Today, May 1, is International Workers' Day - a celebration of labour and the working classes. It is a public holiday in more than 80 countries (but only in some of those countries is this day officially known as Labor Day).

Once upon a time, the workers organized themselves and fought against the negative social effects of the industrial revolution. Strikes were new measures, and it may be that the Haymarket affair in Chicago (May 4, 1886) was the reason to choose May 1 as the date for International Workers' Day.

Time has changed working regimes dramatically. We know, across ideologies, that Taylorism is the wrong working principle and that instead of preaching the dignity of "slavery work" we should concentrate on motivation factors.

But traditions are long runners. Although we know that traditions may kill innovation, this will probably not change much.

Once upon a time begins a scientific paper written by school children, published in Biology Letters, on bumblebees.

Children - or, to put it more generally, amateurs - are scientists, you know?

There were always barriers that inhibited amateurs from doing research - like the difficulty of reading scientific papers. The task is to make infrastructure, experimental and analytical methods, and tools available to amateurs, to motivate them to conduct science outside traditional institutional settings. And I am not thinking of Big Data replacing the scientific method.

I am thinking of an infrastructure for interactive documents, documents that are programs. Documents that can be easily extended programmatically - on and on.

Once upon a time begins Sascha's program in Functional Goats, Wolves and Lions.

In quant finance, young quants find groundbreaking results and share them in forums, the blogosphere, … they are scientists, you know .. and bring much needed diversity into science.

It is May 1, and officially, I have a day off. And I tend to forget why?

Picture from sehfelder