To help everyone get ready for Global Game Jam 2012 this weekend, here’s a shout-out video from around the world. UMBC is at about the 4:36 mark.
Want to see what you can do in 48 hours? Game portfolio feeling a little thin? Have a feeling you’ll need a break by the time you’re two days into the Spring semester? Well, the Global Game Jam is entering its 4th year, and for the 4th year, UMBC will be one of the world-wide host sites January 27th-29th.
This is a 48-hour event where teams from around the globe each work to develop a complete game over one weekend. The first year had 54 sites in 23 countries. The second year had 124 sites in 34 countries. Last year was up to 169 sites. The UMBC site is open to participants at all skill levels, and it is not necessary to be a UMBC student to register. Just go to <http://globalgamejam.org/> and register for “United States – Maryland – Baltimore – UMBC”. Participation will be limited to the first 40 registrants.
The jam will start at 5pm on Friday, January 27th in the UMBC GAIM lab, room 005 in the ITE building. At that time, the theme for this year’s games will be announced, and we’ll brainstorm game ideas and form teams. There is no need to come as a team: each individual has an equal chance to pitch their game ideas, and you can join the team whose game you like best. Teams will have until 3pm on Sunday, January 29th to develop their games. We’ll have demos of each game and selection of local awards, wrapping up by 5pm Sunday. Thanks once again to generous support from Next Century, there is no registration fee for this site, but you must register for the UMBC site in advance at www.globalgamejam.org.
Hope to see many of you there!
UMBC computer science students should have gotten an email from me listing the game development track classes this Spring. A few of those could be of interest to area game developers as well. There are options to take UMBC classes as either a graduate or undergraduate non-degree-seeking student. The ones I think could be particularly interesting to those of you already working in the industry:
For all of these, the 4xx number is the undergraduate version and the 6xx number is a corresponding graduate version. Typically, these will have shared lectures but different assignments.
New data is out from the Entertainment Software Association. UMBC GAIM is in good company as one of 343 universities offering degrees in game design, development or programming. It’s not surprising so many universities are offering game development-focused degrees. The industry is continuing to grow, with a record $25.1 billion in revenue last year.
Their “2011 Essential Facts about the Computer and Video Game Industry” report has some other really interesting data as well. The average game player’s age is 37. Game players do skew male, but not as much as stereotypes might make you think (58/42), and for frequent game purchasers, it’s almost even (52/48). Also, the top-selling console category is action (21.7%), but for PC games it’s strategy (33.6%). There’s lots more in there (mostly graphs and 40-point numbers, so pretty easy reading).
This is not really a game or graphics topic per se, though it does have some connections to implicit modeling. I’ve seen a bunch of blog posts, tweets, G+ posts, etc. about something that seems to have been dubbed the batquation. It’s an equation that is supposed to look like the Batman logo when graphed. I’ve tried to track down the original source, but have not had much luck (if you know, let me know!). Sadly, many of the posts about this use a low-resolution thumbnail of the photo or a zoom on the graph, so you can’t actually read the equation itself. Here’s one of the better versions I’ve found:
A word on how this seems to have been constructed: It’s symmetric in x, so use |x| everywhere. Each term is responsible for one part of the logo:
Find a function f, where f(x,y)=0 is the right shape for one of the curved segments. Everything except the sides of the wings is of the simple form f(x,y)=g(x)−y or y=g(x). For example, the first term is responsible for the sides of the wings. Without the square roots, that term looks like this:
That gives you the segment, but also stuff outside of it you don’t want. So find a function h(x,y) that’s positive in the area where f(x,y) should apply and negative where it shouldn’t. Then |h|/h is a nice step function, 1 where h is positive and -1 where h is negative. Take the square root of that, and now it’s 1 where h is positive and imaginary (actually i) where h is negative. Use that to limit the scope of f(x,y) to the region of interest. The wings actually have two of these trimming functions, one in x and one in y (shown in color where each is imaginary).
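Here’s a quick NumPy sketch of the trimming trick; the segment function g and the trimming function h below are made-up stand-ins, not actual batquation terms:

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 1000)

# Hypothetical trimming function: positive where the segment should
# exist (here, |x| < 4), negative where it shouldn't.
h = 4.0 - np.abs(x)

# |h|/h is the step function: +1 where h > 0, -1 where h < 0
# (and 0/0 right at the edge, one of the nasty numerical properties).
step = np.abs(h) / h

# Square root taken over the complex numbers: 1 where h > 0, i where h < 0.
trim = np.sqrt(step.astype(complex))

# Hypothetical segment y = g(x); multiplying by the trim leaves it
# unchanged inside the region and makes it imaginary outside.
g = 0.5 * np.abs(x)
y = g * trim

# Only the real samples are plottable; everything outside |x| < 4 drops out.
visible = np.abs(y.imag) < 1e-9
print(x[visible].min(), x[visible].max())   # approximately -4 and 4
```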
The trimmed version (where the entire first term = 0) looks like this:
Repeat for each segment. Multiply all of those segments together, and you get a function that’s zero along the Batman logo and non-zero everywhere else.
A bunch of the posts and stories on this are of the form “this is cool, does anyone have a graphing calculator to check it out”. Thanks to the nasty numerical properties of the trimming terms (with values 1, 0/0 and i), Mathematica has trouble at the edges of those terms. I cheated here by explicitly excluding the imaginary regions, but this is what its raw, untweaked output looks like:
I’m not sure a graphing calculator would cut it, but cool nonetheless.
Doing my best impersonation of someone who blogs with more regularity than I really do…
I glossed over (flubbed?) the error analysis a little in my last post, and should really do a better job. I’ll look at CLEAN/LEAN mapping, but the analysis methods are useful in lots of situations where you compute something from a texture.
To keep things simple, I’ll use a simplified form of the (C)LEAN variance computation:

$$V = M - B^2$$
The error in this expression is especially important in (C)LEAN mapping since it determines the maximum specular power you can use, and how shiny your objects can be. For specular power s, 1/s has to be bigger than the maximum error in V, or you’ll get some ugly artifacts.
M and B come from a texture, so have inherent error of $\epsilon_M$ and $\epsilon_B$ due to the texture precision. The error in each will be 1/2 of the texel precision. For example, with texel values from 0 to 255, a raw texel of 2 could represent a true value anywhere from 1.5 to 2.5, all of which are within .5 of the texel value.
In general, we’ll scale and bias to use as much of the texture range as we can. The final error for an 8-bit texture then is range/512. For data that ranges from 0 to 1, the range is 1 and the representation error is 1/512, while for data that ranges from -1 to 1, the range is 2, so the representation error is 2/512 = 1/256.
The error in each parameter propagates into the final result scaled by the partial derivative. $\partial V/\partial M$ is 1, so error due to M is simple:

$$\epsilon_{V_M} = \epsilon_M$$
The error due to B is a little more complicated, since $\partial V/\partial B$ is $-2B$. We’re interested in the magnitude of the error (since we don’t even know if $B$ was positive or negative to start with), and mostly interested in its largest possible value. That gives

$$\epsilon_{V_B} = 2\,|B|_{max}\,\epsilon_B$$
Generally, you’re interested in whichever of these errors is biggest. The actual error is dependent on the maximum value of B, and how big the texel precision ends up being after whatever scale is used to map M and B into the texture range. So, for a couple of options:
| B range | −1 to 1 | −2 to 2 | −1/2 to 1/2 |
|---|---|---|---|
| Max Bump Slope | 45° | 63.4° | 26.6° |
| $\epsilon_B$ | 1/256 | 1/128 | 1/512 |
| Error due to B | 2·1/256 = 1/128 | 2·2/128 = 1/32 | 2·0.5/512 = 1/512 |
| M range | 0 to 1 | 0 to 4 | 0 to 1/4 |
| $\epsilon_M$ | 1/512 | 1/128 | 1/2048 |
| Max error in V | 1/128 | 1/32 | 1/512 |
| Max specular power | 128 | 32 | 512 |
We can make this all a little simpler if we recognize that, at least with the simple range-mapping scheme used here, $\epsilon_B$ and $\epsilon_M$ are also dependent on $|B|_{max}$: for an 8-bit texture, $\epsilon_B = |B|_{max}/256$ and $\epsilon_M = |B|_{max}^2/512$, so the B term dominates with a total error of

$$\epsilon_V = 2\,|B|_{max}\,\epsilon_B = \frac{|B|_{max}^2}{128}$$
So, this says the error changes with the square of the max normal-map slope, and that the precision of B is always the limiting factor. In fact, if there were an appropriate texture format, M could be stored with two fewer bits than B. For 16-bit textures, rather than $2^{-9}$ for the texture precision, you’ve got $2^{-17}$, giving a maximum safe specular power of $2^{15} = 32768$ for bumps clamped to a slope of 1. There’s no need for the slope limit to be a power of 2, so you could fit it directly to the data, though it’s often better to be able to communicate a firm rule of thumb to your artists (spec powers less than x) rather than some complex relationship (steeper normal maps can’t be as shiny according to some fancy formula — yeah, that’ll go over well).
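To make that concrete, here’s a small Python helper (my own illustration, not from any shipped tool) that reproduces the numbers in the table above:

```python
def max_safe_spec_power(bits, max_slope):
    """Largest Blinn-Phong power s such that 1/s stays above the worst-case
    error in V = M - B^2, with B in [-max_slope, max_slope] and M in
    [0, max_slope^2] each packed into an n-bit texture."""
    texel = 1.0 / 2**bits                 # one texel step, normalized
    eps_b = (2 * max_slope) * texel / 2   # half a step over B's range
    eps_m = (max_slope ** 2) * texel / 2  # half a step over M's range
    err_b = 2 * max_slope * eps_b         # |dV/dB| * eps_B
    return 1.0 / max(err_b, eps_m)

for bits, slope in [(8, 1.0), (8, 2.0), (8, 0.5), (16, 1.0)]:
    print(bits, slope, max_safe_spec_power(bits, slope))
# prints 128, 32, 512, and 32768, matching the table and the text above
```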
Inspired by Stephen Hill’s post over on his self shadow blog, I wanted to put down some thoughts about LEAN mapping and CLEAN mapping for specular highlight filtering.
About a year and a half ago, Dan Baker and I published LEAN mapping, a method we developed for filtering normal maps to avoid aliasing for the water in Civilization V. A shiny bumpy surface should look less shiny once it is far enough away that you can’t see the individual bumps. At the Game Developers Conference this year, Dan presented a new, lighter-weight version we’re calling CLEAN mapping (Compact LEAN mapping, where LEAN mapping was Linear Efficient Antialiased Normal Mapping).
You can read the paper for the nitty-gritty details, but the gist of LEAN mapping is to model the bumps with off-center 2D Gaussian distributions of normal vectors in the surface tangent space. A 2D Gaussian has a center (mean) and elliptical shape (described by a 2×2 symmetric covariance matrix). You can stick the mean into a texture, and regular texture filtering does the right thing. The same is not true for the covariance, but you can compute the covariance from the raw second moment, and that does do the right thing when filtered. LEAN mapping needs to store at least five pieces of texture data, scaled to fit into the range of a texel. Two for the mean bump direction,

$$B = \left(\frac{n_x}{n_z},\; \frac{n_y}{n_z}\right)$$

and three for the raw second moments,

$$M = \left(B_x^2,\; B_y^2,\; B_x B_y\right)$$
At the top level of the MIP chain, these are initialized directly from the normal data. You apply your favorite MIP generation method for the rest of the MIP chain, and the difference between the way the B and M terms filter is what captures the conversion of bump directions into highlight shape. Given those five values in a couple of textures, we can reconstruct the main bump direction and shape of the distribution (= size and shape of the specular highlight). It’s simple, amazingly stable (we used specular powers over 13,000 with 16-bit textures), and has the cool bonus of turning grooved bumps into an anisotropic highlight shape, which happens in real life too.
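A rough NumPy sketch of building those top-level values (the function names and the simple box filter are my own; a real pipeline would also scale everything into texture range):

```python
import numpy as np

def lean_top_level(normals):
    """Top-MIP LEAN values from an (H, W, 3) array of unit tangent-space normals."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    bx, by = nx / nz, ny / nz                           # mean bump direction B
    b = np.stack([bx, by], axis=-1)
    m = np.stack([bx * bx, by * by, bx * by], axis=-1)  # raw second moments M
    return b, m

def next_mip(tex):
    """Simple 2x2 box filter. Averaging B and M independently is what turns
    bump direction variation into highlight shape at coarser levels."""
    return 0.25 * (tex[0::2, 0::2] + tex[1::2, 0::2] +
                   tex[0::2, 1::2] + tex[1::2, 1::2])
```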
To use it, you look up M and B from the texture and use them to reconstruct a covariance matrix for the distribution of normals:

$$\Sigma = \begin{pmatrix} M_{xx} - B_x^2 & M_{xy} - B_x B_y \\ M_{xy} - B_x B_y & M_{yy} - B_y^2 \end{pmatrix}$$

At the top MIP level this is exactly zero, but a few levels down, $M$ won’t equal $(B_x^2,\; B_y^2,\; B_x B_y)$ anymore, and it’s this difference that matters.
The determinant of this matrix, $|\Sigma|$, might come out negative due to numerical error (more on that later). If it is, I just clamp the matrix to 0. I like to add the specular power into the covariance at render time, though you could add it into $M_{xx}$ and $M_{yy}$ when creating the texture. Then the specular term is computed using a Beckmann distribution (basically a projected Gaussian distribution). Given Blinn-Phong specular power s, and normalized tangent-space light and view vectors $L_t$ and $V_t$:

$$H = L_t + V_t, \qquad h = \left(\frac{H_x}{H_z},\; \frac{H_y}{H_z}\right) - B, \qquad \text{spec} = \frac{e^{-\frac{1}{2}\, h^T \Sigma^{-1} h}}{2\pi\sqrt{|\Sigma|}}$$
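Here’s a NumPy sketch of that render-time reconstruction, following my reading of the paper rather than any shipped shader (the normalization constant in particular varies between implementations):

```python
import numpy as np

def lean_specular(b, m, lt, vt, s):
    """b: filtered mean slopes (Bx, By); m: filtered moments (Mxx, Myy, Mxy);
    lt, vt: normalized tangent-space light and view vectors; s: spec power."""
    # Reconstruct the covariance matrix from the filtered texture values.
    cov = np.array([[m[0] - b[0] * b[0], m[2] - b[0] * b[1]],
                    [m[2] - b[0] * b[1], m[1] - b[1] * b[1]]])
    if np.linalg.det(cov) < 0.0:       # numerical error: clamp the matrix to 0
        cov[:] = 0.0
    sigma = cov + np.eye(2) / s        # fold the specular power in at render time

    hv = lt + vt                       # half vector (its scale cancels below)
    h = np.array([hv[0] / hv[2], hv[1] / hv[2]]) - b   # slope space, relative to B
    e = h @ np.linalg.solve(sigma, h)  # h^T Sigma^-1 h
    return np.exp(-0.5 * e) / (2.0 * np.pi * np.sqrt(np.linalg.det(sigma)))
```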
Any method has its drawbacks, and for basic LEAN mapping there are two. The first is the number of texture elements needed. Five values need two textures, which is often too many. If we give up the anisotropic highlight shape, we get CLEAN mapping. Now we just compute three texture elements at the top MIP level:

$$B = \left(\frac{n_x}{n_z},\; \frac{n_y}{n_z}\right), \qquad M = B_x^2 + B_y^2$$
When you look these up with standard texture filtering, the difference between the way they’re filtered gives you a single variance, $V = M - (B_x^2 + B_y^2)$, rather than the 2×2 covariance matrix. You don’t get the highlight stretching from grooved bumps, but you do get the bump antialiasing that avoids bump sparkling and shimmering.
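In code, the CLEAN build and reconstruction collapse to almost nothing (again a hypothetical sketch, with names of my own choosing):

```python
import numpy as np

def clean_top_level(normals):
    """Three CLEAN values per texel from (H, W, 3) unit tangent-space normals."""
    bx = normals[..., 0] / normals[..., 2]
    by = normals[..., 1] / normals[..., 2]
    m = bx * bx + by * by              # single scalar second moment
    return bx, by, m

def clean_variance(bx, by, m):
    """After texture filtering: one variance, clamped against numerical error."""
    return np.maximum(m - (bx * bx + by * by), 0.0)
```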
The second, tougher, problem is the numerical error alluded to above. The variance SHOULD always be positive, and the covariance matrix SHOULD always end up with a positive determinant, but especially at the finest MIP levels, we’re subtracting pairs of very similar values. The specular term adds some padding to that, but if a 1-bit error in the normal is bigger than 1/s, there will be artifacts. In Civ 5, we used 16-bit textures, which gives a good amount of headroom. If you do it using 8-bit textures, you’ll have to limit the steepness of your bumps and/or maximum specular power to avoid problems. For example, if $B_x$ and $B_y$ are limited to -1 to 1, one bit in an 8-bit texture is 1/128, which limits the effective specular power to under 128. Compressed textures are out of the picture, as the errors are just too big. So really, direct LEAN mapping is most useful if you can afford 16-bit textures.
16-bit textures are feasible for a PC game like Civ V, but for consoles, methods like directly storing the variance in a texture, as suggested in Stephen Hill’s post, are necessary to avoid the numerical errors. Variance doesn’t filter linearly like the LEAN moments do, so you’ll see some texture filtering issues, but they’re better than the precision errors. Of course, you’ll need to build all of the MIP levels from a high-precision or floating-point LEAN map source, or filter each level directly down from the base texture (so don’t just let the automatic MIP generation do it). Then, at least, the raw variances stored in the texture levels will be right, and the errors will be limited to the hardware texture filtering.
Edit: There are some problems with the error analysis in this post. See this follow-up for a full (and better) analysis.
The 2011 UMBC Digital Entertainment Conference is going on right now in UMBC Lecture Hall V. As I type this, we’re listening to Greg Foertsch talk about being an Art Director at Firaxis. Still plenty more speakers to come before we wrap up at 5:00. It’s free and open for anyone who is interested to attend, so come on down!
BMoreMedia had a nice video piece on the game development programs at UMBC. Check it out!
I previously announced the 2011 UMBC Digital Entertainment Conference, but now have a schedule of speakers as well. Here is the full announcement!
The 2011 Digital Entertainment Conference is coming Saturday, April 30th to UMBC. Every year, the UMBC Game Developer’s Club invites speakers from the videogames industry to come in and share their knowledge and experience. This year, the conference is sponsored by Zynga, and will feature speakers from Zynga, Firaxis, Pure Bang, and Dream Rock Studios. The conference will be taking place in the Engineering Building, LH 5, starting at 10 am. The schedule of speakers is as follows:
10 am – Greg Foertsch, Project Art Director at Firaxis
11 am – Ed Zavada, Programmer at Dream Rock Studios
12 pm – Lunch Break
1 pm – Eric Jordan, Programmer at Firaxis
2 pm – Ben Walsh, CEO of Pure Bang Games
3 pm – Barry Caudill, Executive Producer at Firaxis
4 pm – Michelle Menard, Designer at Zynga
Admission is free and the conference is open to anyone, so come out and take advantage of this amazing opportunity!