The graphics card company Nvidia has recently released its next generation of graphics cards, capable of performing orders of magnitude more floating-point computations per second than conventional CPUs. At the same time, it has made available the "NVIDIA CUDA (TM)" toolkit, enabling 'normal' programmers to exploit this hardware for general parallel computation using a C-like language. This is known as 'GPGPU', or general-purpose programming with graphics processing units.
Thanks to a Foundational Questions Institute mini-grant, I have been able to purchase two such cards in order to develop numerical simulations of eternal inflation.
On these pages I detail my experience of trying this out, in case anyone else is thinking of attempting something similar...
Related projects on the graphics card: