
GLSL Soft Particles

Soft Particles are a solution to a common intersection artifact that occurs when smoke or fog billboards intersect solid objects. At the intersection, some of the billboard fragments fail the depth buffer test and are cut off along a hard edge. As a result, the illusion of smoke or fog with volume is broken and the billboards are perceived as just a set of flat layers facing the camera.

Figure 1: Intersection artifacts occur due to billboards intersecting with objects (left). Fading out billboards near objects avoids breaking the illusion of volume (right).

One solution to this problem is to fade out the billboards where they come close to other objects. To do this, a fragment shader that checks the z-depth of the underlying objects in the z-buffer while processing each billboard can be applied. The fragment shader below does this by looking up the underlying scene depth at each pixel and increasing the billboard's transparency where the billboard gets too close to it.

uniform sampler2D colorMap; // billboard texture
uniform sampler2D depthMap; // eye-space depth of the scene

varying float depth; // eye-space depth of the billboard fragment

void main() {
	
	// Look up the scene depth behind this fragment. The viewport is
	// assumed to be 640x360 here; a uniform would avoid the hardcoded size.
	vec2 coord = gl_FragCoord.xy / vec2(640.0, 360.0);
	float sceneDepth = -texture2D(depthMap, coord).z;
	vec4 color = texture2D(colorMap, gl_TexCoord[0].st);
	
	// Fade the billboard out over 1.0/scale depth units as it
	// approaches the scene geometry behind it.
	float scale = 0.2;
	float fade = clamp((sceneDepth - depth) * scale, 0.0, 1.0);
	
	gl_FragColor = vec4(color.rgb, color.a * fade);
}

Listing 1: Implementation of a basic soft particle fragment shader.

The fragment shader assumes that a depth map of the scene is stored in a uniform sampler. Below is a short capture of the fragment shader in action:
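The varying depth has to be supplied by a matching vertex shader. A minimal sketch could look like the following (assuming the fixed-function matrix stack; the negation of the eye-space z matches the negation applied to the depth map lookup above):

varying float depth;

void main() {
	
	// Transform the billboard vertex to eye space. Eye space looks
	// down the negative z axis, so negate z to get a positive depth.
	vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
	depth = -eyePos.z;
	
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_Position = gl_ProjectionMatrix * eyePos;
}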

References

GLSL Plane Deformation

Inspired by the work of Dominik Ströhlein posted on Form Over Function I started myself to port some of the oldschool amiga plane deformation effects to GLSL. The video below shows a compilation of most of the effects.
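To give an idea of the effect family, here is a sketch of one well-known member: the endless textured floor. This is not one of the exact shaders from the video; the viewport size, the time uniform and the texture are assumptions. Each pixel is mapped to a view ray which is intersected with a horizontal plane, and the intersection point is used as texture coordinate:

uniform float time;
uniform sampler2D tex;

void main() {
	
	// Map the pixel to [-1,1] and build a view ray for it.
	vec2 p = gl_FragCoord.xy / vec2(640.0, 360.0) * 2.0 - 1.0;
	vec3 dir = normalize(vec3(p.x, p.y, 1.0));
	
	// Intersect the ray with the plane y = -1; clamp dir.y to keep
	// the rays pointing upward from producing negative distances.
	float t = -1.0 / min(dir.y, -0.001);
	
	// Scroll the floor texture over time.
	vec2 uv = dir.xz * t + vec2(0.0, time);
	gl_FragColor = texture2D(tex, uv);
}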

References

GLSL Shadow Mapping

During Revision 2011 I finally found some free time to fix a small bug in my GLSL shadow mapping demo. The result can be seen in the video below.

The demo does not make use of the OpenGL texture coordinate generation facility as described by Cass Everitt in "Projective Texture Mapping". Instead it computes the needed transformations by hand, including the inverse of the camera view transformation, using a shortcut that exploits properties specific to view matrices. The resulting matrix is then sent down to the vertex shader, where it transforms every incoming vertex into clip coordinates of the spot light view. The bug was that I performed the divide by w already in the vertex shader and sent the resulting normalized device coordinates down to the fragment shader to be interpolated. The problem is that the interpolation has to happen before the divide by w; otherwise the interpolated normalized device coordinates are not perspective correct. Moving the divide by w into the fragment shader solved the problem.
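The fix can be sketched as follows (a simplified shader pair; lightMatrix and shadowMap are assumed names, and the depth comparison is reduced to a plain biased test):

// Vertex shader: pass the undivided clip coordinates of the light view.
uniform mat4 lightMatrix; // bias * lightProjection * lightView

varying vec4 shadowCoord;

void main() {
	shadowCoord = lightMatrix * gl_Vertex; // no divide by w here!
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: divide by w only after interpolation.
uniform sampler2D shadowMap;

varying vec4 shadowCoord;

void main() {
	vec3 proj = shadowCoord.xyz / shadowCoord.w;
	float lit = (proj.z - 0.005 < texture2D(shadowMap, proj.xy).x) ? 1.0 : 0.0;
	gl_FragColor = vec4(vec3(lit), 1.0);
}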

References