Real-Time Image-Based Edge Detection Shader

This project involved the implementation of an image-based edge detection algorithm. I researched a number of geometric methods and "tricks" for generating silhouette edges, but most of those algorithms detect only silhouettes. With this image-based technique, all sharp edges within an object can be highlighted, not just its outline. The implementation, eventually used in the Geneticist project, was written in a combination of C++/OpenGL and GLSL (the OpenGL Shading Language).

The general concept behind the algorithm is to search for discontinuities in the surface normal and depth values of each rendered frame. The implementation takes advantage of modern graphics hardware by rendering both the per-pixel normals, encoded as RGB values, and the depth from the camera to separate offscreen buffers using OpenGL Framebuffer Objects (FBOs). The algorithm then searches for discontinuities in both the normal image and the depth image. Wherever a significant discontinuity is found, the final scene rendering is colored to mark an edge or crease at that location. Because each final image requires multiple rendering passes, much of the effort went into optimizing for speed to ensure real-time performance.
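To make the discontinuity test concrete, here is a minimal CPU sketch of the per-pixel logic. In the actual implementation this runs in a GLSL fragment shader sampling the normal and depth FBO textures; the function name, buffer layout, and threshold values below are illustrative assumptions, not the project's real code.

```cpp
#include <cmath>
#include <vector>

// Hypothetical CPU version of the per-pixel edge test. Buffers are row-major
// w*h arrays, mirroring the offscreen normal and depth textures.
struct Vec3 { float x, y, z; };

static float dot3(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A pixel is flagged as an edge or crease when, against any 4-connected
// neighbor, either the depth jumps (silhouette edge) or the surface normal
// turns sharply (crease), i.e. the dot product of the two unit normals
// falls below a threshold.
bool isEdge(const std::vector<float>& depth,
            const std::vector<Vec3>& normal,
            int w, int h, int x, int y,
            float depthThreshold, float normalThreshold) {
    const float d0 = depth[y * w + x];
    const Vec3& n0 = normal[y * w + x];
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int i = 0; i < 4; ++i) {
        const int nx = x + dx[i], ny = y + dy[i];
        if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
        if (std::fabs(depth[ny * w + nx] - d0) > depthThreshold) return true;
        if (dot3(normal[ny * w + nx], n0) < normalThreshold) return true;
    }
    return false;
}
```

Note that normals stored in an RGB texture are typically remapped from [-1, 1] to [0, 1] (n * 0.5 + 0.5) at write time and decoded before the dot-product test; the sketch above assumes already-decoded unit normals.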

Source available on GitHub.

Image-based edge detection demo program. Top-left: depth texture. Top-right: normal texture. Bottom-left: unmodified flat rendering. Bottom-right: final image with edges highlighted.