Want to see what you can do in 48 hours? Game portfolio feeling a little thin? Have a feeling you’ll need a break by the time you’re two days into the Spring semester? Well, the Global Game Jam is entering its 3rd year, and for the 3rd year, UMBC will be the Baltimore host site January 28th-30th.

This is a 48-hour event, where teams from around the globe each work to develop a complete game over one weekend. The first year had 54 sites in 23 countries. Last year, there were 124 sites in 34 countries. The Baltimore site is open to participants at all skill levels, and it is not necessary to be a UMBC student to register. Just go to <http://globalgamejam.org/sites/2011/umbc> and register. Participation will be limited to the first 40 registrants.

The jam will start at 5PM on Friday, January 28th in the UMBC GAIM lab, room 005a in the ITE building. At that time, the theme for this year’s games will be announced, and we’ll brainstorm game ideas and form teams. There is no need to come as a team: each individual has an equal chance to pitch their game ideas, and you can join the team whose game you like best. Teams will have until 3pm on Sunday, January 30th to develop their games. We’ll have demos of each game and selection of local awards, wrapping up by 5pm Sunday. Thanks to generous support by Next Century, there is no registration fee for this site, but you must register for the UMBC site in advance at www.globalgamejam.org.

Hope to see many of you there!

I don’t know how I missed this, but UMBC made the Princeton Review’s list of the top 50 undergraduate game design programs. Now I might split hairs and say that we really focus on game development more than design, much as there’s a distinction between being an actor and being a director, but I certainly won’t complain about making the list!

There’s a nice article in today’s Baltimore Sun business section on Zynga East and the Baltimore area’s inroads in social game development. Plus, I’m quoted about social games vs. PC and console games.

The White House announced the National STEM Video Game Challenge yesterday. This is a mobile game development contest, with a $50,000 grand prize, a $25,000 prize for the best undergraduate- or graduate-student developed game, and a $25,000 prize for the best game reaching an underserved population.

Games should emphasize some specific (but fairly broad) Science, Technology, Engineering and Math skills. Submissions are due January 5th.

Almost every graphics game developer I’ve met knows Schlick’s approximation for Fresnel reflectance. The approximation has everything you want for games: it looks visually equivalent to the true Fresnel computation for unpolarized light, while being far easier to compute. However, very few developers have actually looked at Christophe Schlick’s original 1994 Computer Graphics Forum paper, “An Inexpensive BRDF Model for Physically-based Rendering”. That is a shame, because the paper has some really good ideas for coming up with other approximations for shader computation.

The basic idea is to come up with a rational function (a ratio of two polynomials) as an approximation. There’s a long history of this in and out of graphics, but it usually has some inherent problems. Consider “Fast Phong Shading”, published by Bishop and Weimer at SIGGRAPH 1986. This method attempts to avoid the vector renormalization inherent in Phong normal interpolation by using a quadratic Taylor series. Unfortunately, a Taylor expansion is inherently centered around a point, in this case the center of the triangle. The approximation will have some error by the time you get to the edge of the triangle, and two triangles sharing that edge won’t necessarily have the same amount of error in their normalization. For big triangles, this can result in a visible shading discontinuity along triangle edges. Not good.

Schlick’s idea is to express what’s important about any function as kernel conditions, then apply those as constraints. For his Fresnel approximation,

F=F_0 + (1-F_0)(1-dot(H,V))^5

These constraints are that F should be 1 when dot(H,V)=0, F_0 when dot(H,V)=1, and the first several derivatives of F should also be 0 when dot(H,V)=1. In the paper, he also has similar approximations for the geometric attenuation and distribution terms of a Cook-Torrance shading model, but the method is a good one to know in general for reducing shader computation:

  1. Look at the function and pick the kernel conditions: value or derivatives at some critical points, desired integral over the whole domain, etc.
  2. Based on the way the function looks, choose a rational function with the right number of coefficients. This is still somewhat of a black art, since there will be many choices of numerator and denominator polynomial that have the same number of coefficients as you have kernel conditions. For example, for four conditions, you could choose a cubic, a quadratic numerator/linear denominator, a linear numerator/quadratic denominator, or 1/cubic denominator.
  3. Solve for each coefficient.
  4. Evaluate the total error and decide if it is good enough. If not, try a different rational function, or add extra kernel conditions to fix the problem. (A quick check of how these conditions pin down the Fresnel approximation above is worked just after this list.)
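
As a worked check of how the kernel conditions pin down the Fresnel case (my example, not from the paper): take the candidate

F = F_0 + (1-F_0)(1-dot(H,V))^k

At dot(H,V)=1 the second term vanishes, so F=F_0; at dot(H,V)=0 it becomes F_0 + (1-F_0) = 1; and every derivative up through order k-1 still carries a factor of (1-dot(H,V)), so it is 0 at dot(H,V)=1. Picking k=5 satisfies the two value conditions plus four zero-derivative conditions, which is exactly the formula above.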

Multivariate functions are OK, though they will potentially introduce many additional coefficients. This kind of approximation is usually best applied near the visual output end of a shader. Apply it too early, and the small errors may be magnified by the intervening shader code. Nonetheless, it can be a great way to reduce a computationally expensive shader.

Throwing elections with an electronic voting machine is too easy these days. Does your voting machine run Pac Man? This one does, without even breaking the “tamper-proof” seals!

Quaternions are an extremely handy representation for rotations. Unfortunately, they fall into a slightly dusty corner of math, so they often seem just a little scary. It’s good to understand what they mean, at least for rotations, when it makes sense to use them, and when it makes more sense to use rotation matrices. Euler angles occasionally make sense as an interface for specifying rotation, but please convert them to something else as soon as possible!

Quaternions are a combination of a 3D vector and a scalar. Rotation by an angle θ around a unit vector v is represented by the quaternion

q = vec4(sin(θ/2)*v, cos(θ/2))

The quaternion, q, will be unit length (the sum of the squares of all four components is one). You can easily tell the axis a quaternion rotates around by looking at the vector part, which won’t be unit length anymore since it is scaled by sin(θ/2), but still points in the right direction. You can tell the angle by looking at either the scalar part or the length of the scaled vector part. The inverse rotation will have the axis pointing in the opposite direction, which you can think of as either a rotation around the opposite axis, or the result of flipping the sign of θ.
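
As a minimal GLSL-style sketch of those conventions (the function names here are mine, not anything standard):

// Build a rotation quaternion from a unit axis and an angle in radians.
vec4 quatFromAxisAngle(vec3 axis, float angle)
{
	return vec4(sin(angle/2.)*axis, cos(angle/2.));
}

// The inverse rotation of a unit quaternion just flips the vector part.
vec4 quatInverse(vec4 q)
{
	return vec4(-q.xyz, q.w);
}

// Recover the angle; the axis is just normalize(q.xyz).
float quatAngle(vec4 q)
{
	return 2.*atan(length(q.xyz), q.w);
}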

Interpolating rotations

One great thing about quaternions is how well they interpolate between two rotations. If you just linearly interpolate and re-normalize, you get a nice interpolation of rotation axes along a great circle, plus a smooth rotation of twists around those axes. For t in [0,1],

normalize( q1*(1-t) + q2*t )

In comparison, directly interpolating between rotation matrices gives you weird squishy non-rotations in the middle, and interpolating Euler angles tends to take you on odd paths, especially near the singularities.

You can do even better with the spherical linear interpolation or slerp, as straight linear interpolation goes faster in the middle than at the ends. There’s nothing quaternion-specific about slerp. Given two N-dimensional unit-length vectors, it’ll interpolate between them along an N-dimensional sphere. slerp between unit-length vectors q1 and q2 is given by

φ=acos(dot(q1,q2))
slerp(q1,q2,t) = q1*sin(φ*(1-t))/sin(φ) + q2*sin(φ*t)/sin(φ)

That looks quite a bit like the linear interpolation, but with different mixing factors for q1 and q2. If the linear interpolate and renormalize isn’t good enough, but slerp’s trig functions are too slow, there are several approximations that land somewhere in between. My favorite is the one that does normal linear interpolation but replaces t with a polynomial to adjust the interpolation speed.
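
Here’s a GLSL-style sketch of both interpolators given above, assuming q1 and q2 are unit length (the function names and the small-angle fallback are mine):

// Linear interpolation plus renormalization ("nlerp").
vec4 nlerpQuat(vec4 q1, vec4 q2, float t)
{
	return normalize(q1*(1.-t) + q2*t);
}

// Spherical linear interpolation ("slerp").
vec4 slerpQuat(vec4 q1, vec4 q2, float t)
{
	float phi = acos(clamp(dot(q1, q2), -1., 1.));
	if (phi < 1e-4)		// nearly identical rotations: just nlerp
		return normalize(q1*(1.-t) + q2*t);
	return (q1*sin(phi*(1.-t)) + q2*sin(phi*t)) / sin(phi);
}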

Using quaternions

There are three main things to know for using quaternions. First, to rotate a single point p by quaternion q:

pRot = p + 2*cross(q.xyz, q.w*p + cross(q.xyz, p))

The second big operation is converting a quaternion rotation into a 3×3 rotation matrix (note that the mat3 constructor, as in GLSL, fills the matrix a column at a time). That’s

vec3 Q = 2.*q.xyz;
qMat = mat3(
 	1 - Q.y*q.y - Q.z*q.z, Q.x*q.y + Q.z*q.w, Q.x*q.z - Q.y*q.w,
 	Q.x*q.y - Q.z*q.w, 1 - Q.x*q.x - Q.z*q.z, Q.y*q.z + Q.x*q.w,
 	Q.x*q.z + Q.y*q.w, Q.y*q.z - Q.x*q.w, 1 - Q.x*q.x - Q.y*q.y);

The final big one is how to combine two quaternion rotations into a new one. That’s just

q = vec4(q1.w*q2.xyz + q2.w*q1.xyz + cross(q1.xyz,q2.xyz),
      q1.w*q2.w - dot(q1.xyz,q2.xyz));

So when do we use each? In raw operations, the direct quaternion rotation is 18 multiplies and 12 adds. GPU operations are at least somewhat dependent on the GPU, but assuming up to a four-element multiply-and-add or dot product is a single operation, it’s probably about 6 GPU instructions. That’s two for the inner cross product, one more to add in q.w*p, two more for the outer cross product, and one more to multiply by 2 and add to p.

The matrix construction, assuming the common multiplies are factored out, is 12 multiplies and 12 adds, while the matrix multiply to actually rotate with it would be 9 multiplies and 6 adds, for a total of 21 multiplies and 18 adds (clearly more expensive). In GPU terms, it’s about 7 GPU instructions to create the matrix and 3 to use it (still more expensive). On the other hand, to apply the same rotation to two points is 36 multiplies and 24 adds using the direct rotation, but only 30 multiplies and 24 adds with the matrix multiply, since you can use the same rotation matrix twice. So, to transform a single point, it’s best to use direct quaternion rotation, but for two or more (or even a point and a normal), converting to a 3×3 matrix form is a big win.
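
A hedged usage sketch of that rule of thumb (quatToMat3 is my name for a function wrapping the mat3 construction above):

// One point: rotate it directly with the quaternion.
vec3 rotateOne(vec4 q, vec3 p)
{
	return p + 2.*cross(q.xyz, q.w*p + cross(q.xyz, p));
}

// Two or more points (or a point and a normal): build the matrix once and reuse it.
void rotatePointAndNormal(vec4 q, vec3 p, vec3 n, out vec3 pRot, out vec3 nRot)
{
	mat3 R = quatToMat3(q);		// the mat3 construction shown above
	pRot = R * p;
	nRot = R * n;
}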

Finally, combining two quaternions is 16 multiplies and 12 adds, or about 6 GPU operations. In comparison, combining two rotation matrices takes 27 multiplies and 18 adds, or about 9 GPU instructions.

So… If you are doing lots of work with rotations, or need to do any interpolation between transformations at all, quaternions are the tool for you. If you are transforming more than one point by the resulting rotation, you’re better off converting the quaternion to a matrix to use it, but given that quaternions interpolate so much better and are so much cheaper to combine together, it’s still often worth it.

Perhaps in a later post I’ll go through the math behind quaternions, and how to get a quaternion given a rotation matrix, but for now, this is just enough quaternion to be dangerous.

My game, Starry Eye, is on the App Store. It’s an arcade shooter, straight out of 1979. You play as the little blue guy with the tongue. Reviews wanted badly!

I’ve posted the entire project online at www.workly.com/starryeye/se10.zip

Thanks very much to Johnathan Moriarty, Eve Addison, Charles Lohr, Greg Aring, Fernando Lynch, Mary Lewis, Jenn Dahlke, and Bryan Eastlack for putting on a great show at the UMBC BetaScape table(s!) at ArtScape this weekend. I’m sure I forgot someone; please email me to let me know…

Brian Eastlack and Jenn Dahlke (both UMBC '10) demonstrating their game to Governor Martin O'Malley.

Congratulations to all on a show well-done!

Today, a look at Fresnel-modulated reflections. Hardly a secret trick, but it makes a surprising difference for somewhat shiny objects. Fresnel reflectance is the property that glancing reflections are stronger than head-on reflections. It’s particularly noticeable in surfaces like water or glass, but is even visible on a piece of paper. Oh, and since Fresnel reflectance is named after a person, it should always be capitalized (and since he was French, you don’t pronounce the ‘s’).

The most important directions for Fresnel reflectance are the surface normal, N, the direction you see the surface from, V, and the direction the reflected light is coming from, L (all unit-length vectors). Since it’s dealing with reflected rays, N should be halfway between V and L, so N = normalize(V+L) and dot(N,V) = dot(N,L). I’m ultimately going to be applying it to an environment map, so I’ll stick with the dot(N,V) version.

There are also constants that control the strength of the effect: n1 and n2, the indices of refraction of the two materials, which are sometimes rewritten in terms of n, the ratio of the two. Indices of refraction for common materials are pretty easy to find in a physics text or online: vacuum is 1, air is pretty close to 1, water is about 1.33, glass is about 1.5.

Fresnel reflectance has different terms for incoming light polarized parallel to the surface and for light polarized perpendicular to it. I’ll add another direction, T, for the refracted light, since it makes the equations easier, though you can always rewrite the refracted direction in terms of the reflected one.

F_\parallel=\left(\frac{n_1\ dot(N,V) - n_2\ dot(N,T)}{n_1\ dot(N,V) + n_2\ dot(N,T)}\right)^2
F_\bot=\left(\frac{n_1\ dot(N,T) - n_2\ dot(N,V)}{n_1\ dot(N,T) + n_2\ dot(N,V)}\right)^2

The polarization dependence is handy if you’re using a polarizing filter to enhance or diminish the reflections in a photograph, but most graphics assumes unpolarized light, which is an equal mix of both terms. Cook and Torrance came up with the combined form that was used in graphics for many years:

\begin{align*} F&=\frac{1}{2}\frac{(g-c)^2}{(g+c)^2}\left(1+\frac{(c(g+c)-1)^2}{(c(g-c)+1)^2}\right)\\ c&=dot(N,V);\ g=\sqrt{n^2+c^2-1} \end{align*}

If you don’t have the index of refraction, it’s easier to measure the reflectance at normal incidence (looking head-on where it is smallest). From Cook and Torrance’s paper, that’s

F_0=\left(\frac{n-1}{n+1}\right)^2=\left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2
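
In shader terms, a minimal sketch of the exact unpolarized term and the normal-incidence value above might look like this (the function names are mine; c is dot(N,V) and n is the ratio of the indices of refraction):

// Exact unpolarized Fresnel reflectance (Cook-Torrance form above).
float fresnelExact(float c, float n)
{
	float g = sqrt(n*n + c*c - 1.);
	float A = (g - c)/(g + c);
	float B = (c*(g + c) - 1.)/(c*(g - c) + 1.);
	return 0.5*A*A*(1. + B*B);
}

// Reflectance at normal incidence from the ratio of indices of refraction.
float fresnelF0(float n)
{
	return (n - 1.)*(n - 1.)/((n + 1.)*(n + 1.));
}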

But… almost everyone these days uses Schlick’s approximation for Fresnel. There’s actually lots of good stuff on approximating functions in Schlick’s paper, but only the Fresnel approximation seems to have really stuck:

F=F_0 + (1-F_0)(1-dot(N,V))^5

For a little more intuitive control, you can write this in terms of F0 at normal incidence and F90 at the edge of the object:

F=F_0 + (F_{90}-F_0)\ (1-dot(N,V))^5

Image d above is what it looks like for a high dynamic range environment map. I’ve also included a regular Blinn-Phong layer with a light source positioned at the brightest point in the environment texture. F is just the blend factor between the two.
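
A minimal sketch of that blend (my function name; blinnPhong is the Blinn-Phong layer and envColor is the HDR environment-map lookup in the reflected direction):

// Schlick Fresnel as the blend factor between the shading layers.
vec3 shadeFresnelBlend(vec3 blinnPhong, vec3 envColor, vec3 N, vec3 V, float F0)
{
	float F = F0 + (1. - F0)*pow(1. - max(dot(N, V), 0.), 5.);
	return mix(blinnPhong, envColor, F);
}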

It’s important to use a high dynamic range texture for this, because ordinary 8-bit textures can’t distinguish between “the sky is bright”, maxed out at 255, and “the sun is 10,000 times brighter”, also maxed out at 255. Multiply by 0.04, and you get about 10 in both cases. But if the image was exposed so the sky really is 255, the sun should be around 2,550,000. Multiplied by 0.04, that’s still about 102,000 (really bright). If we don’t keep the full dynamic range of the environment map, we get the somewhat disappointing result in image c.

Oh, and images a and b are what you get for a couple of choices of constant blend fraction. The glancing reflections around the edges of the model really add an extra dimension of shininess.
