The company where this study was formulated constructs VR applications for the medical environment. The hardware used is ordinary desktop computers with consumer-level graphics cards and haptic devices.
In medicine some operations require microscopes or cameras. In order to simulate these in a virtual reality environment for educational purposes, the effect of depth of field, or focus, has to be considered.
A working algorithm that generates this optical effect in real-time, stereo-rendered computer graphics is presented in this thesis. The algorithm is implemented in OpenGL and C++, to later be combined with a VR application simulating eye surgery, which is built with OpenGL Optimizer.
Several different approaches are described in this report. The requirement of real-time stereo rendering (~60 fps) means taking advantage of the graphics hardware to a great extent. In OpenGL this means using the extensions of a specific graphics chip for better performance; in this case the algorithm is implemented for a GeForce3 card.
To increase the speed of the algorithm, much of the workload is moved from the CPU to the GPU (Graphics Processing Unit). By redefining parts of the ordinary OpenGL pipeline via vertex programs, a distance-from-focus map can be stored in the alpha channel of the final image with little time loss.
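A minimal sketch of the per-vertex computation such a vertex program could perform, expressed here in plain C++ for clarity. The function name, the linear falloff, and the `focusRange` parameter are assumptions for illustration, not the thesis's exact formulation:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical per-vertex computation: map the eye-space distance of a
// vertex to a normalized "distance from focus" value in [0, 1], which
// can then be written to the alpha channel of the rendered image.
float focusAlpha(float eyeDistance, float focusDistance, float focusRange)
{
    // Linear falloff away from the focal plane, clamped to [0, 1].
    float d = std::fabs(eyeDistance - focusDistance) / focusRange;
    return std::min(d, 1.0f);
}
```

A vertex exactly at the focal distance yields alpha 0 (fully in focus), while vertices at or beyond `focusRange` from the focal plane yield alpha 1 (fully out of focus).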
This can effectively be used to blend a previously blurred version of the scene with a normal render. Different techniques to quickly blur a rendered image are discussed; to maintain speed, solutions that require moving data off the graphics card are not an option.
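The blend step described above amounts to a per-pixel linear interpolation keyed on the stored alpha. A sketch of the idea, under the assumption that alpha 0 means fully in focus and alpha 1 fully out of focus (on the GPU this would be done with blending hardware rather than per-channel C++ code):

```cpp
// Hypothetical per-channel blend: alpha holds the normalized distance
// from focus, and is used to mix a sharp render of the scene with a
// pre-blurred copy of the same scene.
float blendChannel(float sharp, float blurred, float alpha)
{
    // Standard linear interpolation; in-focus pixels (alpha = 0)
    // keep the sharp value, out-of-focus pixels (alpha = 1) take
    // the blurred value.
    return sharp * (1.0f - alpha) + blurred * alpha;
}
```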
Source: Linköping University
Author: Henriksson, Ola