Molehill 2D

While Flash 11 was in beta there was a lot of hype about the 3D rendering performance of the new Molehill API. At the time I did a lot of research into the feasibility of leveraging the Molehill APIs to get the most out of 2D rendering performance. Most 2D Molehill projects used textured quads to leverage the graphics card, but my target application required a significant amount of vector graphics, both lines and filled polygons.

The Molehill API has no equivalent of GL_LINES, so each vector line had to be extruded into two triangles.
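The extrusion itself is simple geometry: offset both endpoints of the segment along its perpendicular by half the desired thickness, producing a thin quad. Below is a minimal sketch of that math in Java (the original project was ActionScript; the class and method names here are hypothetical, for illustration only):

```java
// Hypothetical sketch: since Molehill/Stage3D has no GL_LINES equivalent,
// each 2D line segment is expanded into a thin quad (two triangles) by
// offsetting both endpoints along the segment's unit normal.
class LineExtruder {
    // Returns 4 vertices as interleaved (x, y) pairs:
    // p0+n, p0-n, p1+n, p1-n. The two triangles are then
    // indexed as (0, 1, 2) and (2, 1, 3).
    static float[] extrude(float x0, float y0, float x1, float y1, float thickness) {
        float dx = x1 - x0, dy = y1 - y0;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        // perpendicular of (dx, dy) is (-dy, dx); scale to half thickness
        float nx = -dy / len * thickness / 2f;
        float ny =  dx / len * thickness / 2f;
        return new float[] {
            x0 + nx, y0 + ny,   // vertex 0
            x0 - nx, y0 - ny,   // vertex 1
            x1 + nx, y1 + ny,   // vertex 2
            x1 - nx, y1 - ny }; // vertex 3
    }

    public static void main(String[] args) {
        // a horizontal segment of thickness 2 becomes a 10x2 quad
        System.out.println(java.util.Arrays.toString(extrude(0, 0, 10, 0, 2)));
    }
}
```

Note this sketch ignores joins between consecutive segments; miter or bevel joins add further vertices, which is part of why the polygon count grows so quickly.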

A lot of work went into packing as many vectors as possible into the vertex buffers and reducing the polygon count.

While the results were impressive, the biggest setback was the lack of line support in Molehill. When dealing with massive numbers of lines the polygon count jumps significantly, hindering performance.

This project has been sitting in a private GitHub repository for some time, so I'm making it public. Keep in mind it was used entirely for performance testing and is intended for reference only.

A more in-depth explanation of the project coming soon...

source.

Using Hibernate Interceptors for ChangeEvents


After doing some research for a new change event system for our enterprise application at work, I came across the Hibernate Interceptor interface. This interface lets you register callbacks when persistent objects are saved, updated, deleted, or loaded within a Session.

The main idea I had was to use this Interceptor interface to monitor any changes that have occurred within a particular Session. This information can then be used to delegate data change events to the interested parties.

Let me back up a little and give an overview of the application I am working with. Within our system we have a core 'Anemic Domain Model' (why we ended up with an anti-pattern is a discussion for another day). The domain model we are currently working with has a very complex mapping of relationships representing multiple states of a particular scenario. This model is mapped directly to Hibernate using XML configuration files which are generated alongside the model. The application must react to changes in the model made by both the current user and any users collaborating within the particular scenario.

In order to operate as desired, there are two primary components which must react to changes within the core domain model. The first is the 'Agents', whose responsibility is to analyze the current scenario and produce notifications, warnings, and errors. The second, a Flash-based UI, can be broken down into three sub-categories.

The first component is the 'Agent Framework': briefly, a collection of 'Agents' which monitor all scenarios active within the system. The agents observe each scenario and watch for undesirable configurations within their respective domains. This framework uses a decoupled, non-persistent model tailored to optimize the CPU-intensive analysis.

The second component is the UI, which can be broken down into three main categories: Reports, Viewer, and general UI. The Reports are a large collection of grid displays used to present detailed information about the current scenario. The Viewer is a CAD-style graphical display which lets users plan and interact with the scenario. The general UI includes any custom dialogs which allow users to view or manipulate the configuration.

In summary, we have a primary model and four components, or 'listeners', interested in changes to it: agents, reports, viewer, and general UI. Each of these components is interested in a different subset of the data. Up until now these listeners were explicitly invoked within each operation. That was a reasonable solution to get through the first few releases; however, as the project grew, ensuring each listener was notified properly for each transaction became a debugging nightmare. And as the agents' models grew in complexity and the volume of data increased, the time to load and convert the model began to affect the users' interaction with the system.

Now back to the original idea. We can create a Hibernate Interceptor which monitors any changes to the current session. These events can be tracked by a 'ChangeMonitor' which manages every object that has changed, including the affected attributes and both the current and previous state of those attributes. If a transaction completes successfully and is flushed to the database, the interceptor can dispatch JMS messages describing the data that changed. The listeners which must react to changes in the model now receive detailed information about what occurred within a particular transaction and can react accordingly. As with any design, however, the benefits come with trade-offs.
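To make the idea concrete, here is a minimal plain-Java sketch of the ChangeMonitor bookkeeping. Hibernate's Interceptor dirty-flush callback supplies the entity id, the current and previous state arrays, and the property names; this sketch (class and method names are my own, not Hibernate's API) shows how those arrays can be diffed into a per-entity change record:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch: records per-entity attribute changes using the same
// state arrays that Hibernate's Interceptor hands to its dirty-flush callback.
// No Hibernate dependency here; this only illustrates the diffing/bookkeeping.
class ChangeMonitor {
    // entity id -> (property name -> [previous value, current value])
    private final Map<Object, Map<String, Object[]>> changes = new LinkedHashMap<>();

    void recordDirty(Object id, Object[] current, Object[] previous, String[] names) {
        for (int i = 0; i < names.length; i++) {
            Object prev = previous == null ? null : previous[i];
            if (!Objects.equals(current[i], prev)) {
                changes.computeIfAbsent(id, k -> new LinkedHashMap<>())
                       .put(names[i], new Object[] { prev, current[i] });
            }
        }
    }

    Map<String, Object[]> changesFor(Object id) {
        return changes.getOrDefault(id, Collections.emptyMap());
    }

    public static void main(String[] args) {
        ChangeMonitor m = new ChangeMonitor();
        m.recordDirty("order-1",
                new Object[] { "OPEN", 5 },   // current state
                new Object[] { "NEW", 5 },    // previous state
                new String[] { "status", "qty" });
        // only 'status' actually changed
        System.out.println(m.changesFor("order-1").keySet());
    }
}
```

On a successful commit, the collected map is exactly what would be serialized into the JMS messages; on rollback it is simply discarded.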

The ChangeMonitor reduces the complexity of the service and business-logic layers by leveraging the dirty checking Hibernate already performs. The developer no longer has to determine which parties are interested in the changes occurring within each transaction. The data collected from the interceptor also lets us retain a before/after state, so we can fine-tune reactions and filter out uninteresting changes. The ChangeMonitor could also be used for optimization and auditing, since it explicitly tracks all data modified within a transaction. That said, there are a few downsides which must be considered.

With the introduction of the ChangeMonitor we run into a common pattern: too much information can become a hassle to manage. In our previous framework we only knew when particular objects were added, updated, or removed. Now each component can be explicitly aware of all objects and all attributes, which requires additional filtering in each of the four components to ensure they only react to data changes within their relevant subset. The other potential downside, which will require more research, is how to determine changes to collections and to the different association types: one-to-one, many-to-one, many-to-many, and so on.

For now I think I have written enough. Time to give it a try and see what we can accomplish!

Demo Reel

Here is a small demo reel showing some of the OpenGL samples I have made over time. Most of the demos are small graphics programs I wrote when learning to work with shader programs in GLSL.


Real-Time Image-Based Edge Detection Shader


This project involved the implementation of an image-based edge detection algorithm. I researched many geometric methods and “tricks” for generating outline edges, but the majority of those algorithms only detect silhouette edges; with this image-based technique, all sharp edges within an object can be highlighted. Eventually used in the Geneticist project, the implementation was written using a combination of C++/OpenGL and GLSL (the OpenGL Shading Language).

The general concept behind the algorithm is to search for discontinuities in the surface normal and depth values within each frame. The implementation takes advantage of modern graphics hardware by rendering both the per-pixel normals, encoded as RGB values, and the depth from the camera to separate offscreen buffers using OpenGL Frame Buffer Objects (FBOs). The algorithm then searches for discontinuities in both the normal image and the depth image, and the final scene rendering is colored to mark an edge or crease wherever a significant discontinuity is found. Because the implementation uses multiple passes per frame, the majority of the focus was on optimizing for speed to ensure real-time performance.
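The discontinuity test itself is just a neighborhood comparison. The real implementation runs in a GLSL fragment shader sampling the normal and depth textures; the sketch below shows the same idea on the CPU over a depth buffer only, with hypothetical names, to make the logic explicit:

```java
// Hypothetical CPU sketch of the screen-space discontinuity test.
// (The actual implementation is a GLSL fragment shader that also
// compares the encoded normals, not just depth.)
class DepthEdgeDetector {
    // A pixel is marked as an edge if the depth difference to any
    // 4-connected neighbor exceeds the threshold.
    static boolean[][] detect(float[][] depth, float threshold) {
        int h = depth.length, w = depth[0].length;
        boolean[][] edge = new boolean[h][w];
        int[][] offsets = { { 0, 1 }, { 0, -1 }, { 1, 0 }, { -1, 0 } };
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int[] o : offsets) {
                    int ny = y + o[0], nx = x + o[1];
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w
                            && Math.abs(depth[y][x] - depth[ny][nx]) > threshold)
                        edge[y][x] = true;
                }
        return edge;
    }

    public static void main(String[] args) {
        // a depth step between columns 1 and 2 produces an edge on both sides
        float[][] depth = { { 0f, 0f, 1f }, { 0f, 0f, 1f } };
        boolean[][] e = DepthEdgeDetector.detect(depth, 0.5f);
        System.out.println(e[0][0] + " " + e[0][1] + " " + e[0][2]);
    }
}
```

The normal-buffer test works the same way, except the comparison is a dot product between neighboring normals falling below a threshold rather than an absolute depth difference.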

Source available on GitHub.

Image-based edge detection demo program. Top-left: depth texture. Top-right: normal texture. Bottom-left: unmodified flat rendering. Bottom-right: final image with edges highlighted.


Shadow Mapping


As a side project for the Geneticist game I developed a demo application implementing shadow mapping using OpenGL and GLSL. This implementation was the heart of the technology used for the shadows in the Geneticist project. As with the edge-detection algorithm, this implementation takes advantage of frame buffer objects in order to perform multiple renders per frame.

Simple Shadow Mapping demonstration

The main concept behind shadow mapping is to render the depth of the scene from the point of view of the light source. These depth values are stored in a “depth buffer” which is later passed to a shader as a texture. When the engine renders from the camera, it transforms each visible point into the light's view space. If the point's distance to the light source is greater than the distance stored in the depth texture at that location, then the point must be shadowed, since the light cannot see it. Because the depth from the light source is rendered every frame, dynamic objects, such as the character, can cast shadows in real time. The main downfall of this method is the precision loss when mapping from light space to view space. If you look at the shadows in this screenshot from Genetics you can see the edges can become very aliased. To correct this, many implementations multi-sample or use algorithms that increase the resolution at the shadow edges.
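The per-fragment test boils down to one comparison, and the multi-sampling fix mentioned above is commonly done as percentage-closer filtering (PCF): average the comparison over a small neighborhood of the depth map instead of a single texel. Here is a hedged CPU sketch of that idea (the real work happens in a GLSL fragment shader; names here are hypothetical):

```java
// Hypothetical sketch of the shadow-map depth comparison with 3x3
// percentage-closer filtering. depthMap holds the scene depth as
// rendered from the light source; fragDepth is the fragment's depth
// in light space. Returns 1.0 for fully lit, 0.0 for fully shadowed.
class ShadowTest {
    static float pcf(float[][] depthMap, int x, int y, float fragDepth, float bias) {
        float lit = 0f;
        int samples = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                int sx = x + dx, sy = y + dy;
                if (sy < 0 || sy >= depthMap.length
                        || sx < 0 || sx >= depthMap[0].length) continue;
                samples++;
                // the light reaches this point if nothing closer occludes it;
                // the bias avoids self-shadowing ("shadow acne")
                if (fragDepth <= depthMap[sy][sx] + bias) lit += 1f;
            }
        return lit / samples;
    }

    public static void main(String[] args) {
        float[][] map = { { 0.5f, 0.5f, 0.5f }, { 0.5f, 0.5f, 0.5f }, { 0.5f, 0.5f, 0.5f } };
        System.out.println(ShadowTest.pcf(map, 1, 1, 0.4f, 0.01f)); // in front of occluders: lit
        System.out.println(ShadowTest.pcf(map, 1, 1, 0.9f, 0.01f)); // behind occluders: shadowed
    }
}
```

Averaging the nine binary results yields fractional values along shadow boundaries, which is what softens the aliased edges visible in the screenshot.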


Genetics Project showing the shadow mapping in use. Both the static terrain and dynamic objects generate shadows.
Source for the demo application is available on GitHub.