UMBC Games, Animation and Interactive Media

Game Development at the University of Maryland, Baltimore County

Exciting additions to GGJ 2020 at UMBC

Two exciting announcements for the UMBC Global Game Jam 2020 site.

First, Oxide Games is sponsoring the UMBC site this year. This should help us to make the GGJ experience all the more awesome for our participants. Thanks Oxide!

Second, Unity software engineer (and UMBC alum) Elizabeth Baumel will be leading a training session on Unity's new Data-Oriented Technology Stack (DOTS)! This is open to any UMBC GGJ site participants, and will start at 10am on the first day of the Global Game Jam (aka January 31st). It'll be hands-on, so install Unity version 2019.3 and download version 0.5.0 of the Entities package before you come, and you'll be ready to jump right in!

To find the Entities package, open Window > Package Manager, then choose “All Packages” and under Advanced, choose “Show preview packages”. This is next-generation Unity stuff, so it’s not part of the regular release yet.

Global Game Jam 2020 registration

Registration for the 2020 Global Game Jam is open!

UMBC is once again hosting the Global Game Jam this January. It will run from 5pm Friday, January 31st to 5pm Sunday, February 2nd, just after classes start. Space is limited, so sign up now!

For anyone who hasn't participated, the Global Game Jam is a 48-hour game development event, similar in spirit to a hackathon, with hundreds of host sites around the world. At 5pm local time, each site introduces the jam and announces this year's theme. Previous years' themes have ranged from a phrase ("as long as we're together there will always be problems") to a word ("extinction") to an image (ouroboros: a snake eating its tail) to a sound (the recording of a heartbeat). Participants brainstorm game ideas around the theme, form into teams, and spend the weekend building games designed both to be fun and to express the theme.

The UMBC site is not restricted to students. In previous years, we have had a mix of UMBC students, alumni, students from other schools, game development professionals, and people with a general interest in game development. More details at gaim.umbc.edu/global-game-jam. However, we are limited to 40 participants, so sign up early if you want to come. If you are not near UMBC, check the Global Game Jam website for a site near you.

Streaming Global Game Jam 2019

The Global Game Jam site at UMBC is up and running for 2019. UMBC has been a site for all 11 years of the Global Game Jam. After many years with only one or two nearby sites, even when we were close to overflowing, it's good to see seven sites in Maryland and three in northern Virginia. That is definitely a record!

We can't share the theme until 6pm Hawaii time, but you can watch our live feed at youtube.com/umbcgaim.

Modifying UE4 — Part 1: The Setup

Introduction

This new series of posts will detail the process of modifying Unreal Engine 4 in order to improve its static lighting capabilities. The series assumes a decent understanding of GitHub, UE4, and Visual Studio, as well as C++. In this first part, we describe how to download the engine's source code and set it up. In later installments, we will describe how to manage a code base as large as UE4's, our methodology for modifying specific source files, and our specific modifications and improvements to the static lighting system.

Step 1: Gaining Access to the Source Code

In order to gain access to the source code, you first have to create an account on the Unreal Engine website. Once that is set up, you can follow the steps on their GitHub setup page to gain access to the project. You can then fork the project or simply download the source code directly; however, if you later want to submit a pull request, you will have to fork the project.

Once you have the project downloaded on your computer, you will have to run a script that generates project files for your specific environment. Most of this information can also be found in the README.md file in the project's root directory. In our case, we set up the project on Windows for Visual Studio 2017, and will detail that process here. The readme has more information on setting up the project for a Mac environment.

Step 2: Building UE4

First, run Setup.bat, found in the project root directory; this downloads the project's binary dependencies. Once this is complete, run GenerateProjectFiles.bat in the same directory to generate the solution files for Visual Studio. After this is complete, UE4.sln will appear in the project root directory. Opening this file in Visual Studio will load the entire UE4 project. You can then set your solution configuration (their recommendation is Development Editor), set your platform to Win64, and build the UE4 solution. This will build all of the projects in the solution and generate a fresh copy of the Unreal Engine executable at <ProjectRoot>\Engine\Binaries\Win64\UE4Editor.exe. Running this gives you a fully functional version of the UE4 editor that you can also debug using breakpoints.

Step 3: Setting Up a Debug Project

Depending on what you're going to modify within the engine, you will need a specific solution configuration. Different configurations optimize various parts of the engine while still leaving certain parts debuggable. You can find a full list of configurations here. Because of our modifications, we have decided to use the Debug Editor configuration. This leaves most of the code unoptimized, but gives us the most access to debug symbols.

In addition, we are also using the UnrealVS plugin. The link gives a very in-depth tutorial on installation and use. It allows for quick building of individual projects, as well as the use of command line arguments. It is not strictly necessary, but it is a useful tool, and it makes it much easier to run the editor in debug mode, since passing the -debug flag on the command line allows for on-the-fly changes to the source code.

In the next part of Modifying UE4, we will discuss our modifications in more depth, as well as how we determined where in the source code to make them.

Global Game Jam registration

Registration for the 2016 Global Game Jam is open!

UMBC is once again hosting the Global Game Jam this January. It will run from 5pm Friday, January 29th to 5pm Sunday, January 31st, just after classes start. Space is limited, so sign up now!

For anyone who hasn't participated, the Global Game Jam is a 48-hour game development event with hundreds of host sites around the world. At 5pm local time, each site introduces the jam and announces this year's theme. Previous years' themes have ranged from a phrase ("as long as we're together there will always be problems") to a word ("extinction") to an image (ouroboros: a snake eating its tail) to a sound (the recording of a heartbeat). Participants brainstorm game ideas around the theme, form into teams, and spend the weekend building games designed both to be fun and to express the theme.

The UMBC site is not restricted to students. In previous years, we have had a mix of UMBC students, alumni, students from other schools, game development professionals, and people with a general interest in game development. More details at gaim.umbc.edu/global-game-jam. However, we are limited to 40 participants, so sign up early if you want to come. If UMBC fills up, other local(ish) sites include the Universities at Shady Grove and American University. If you are not near UMBC, check the Global Game Jam website for a site near you.

Lessons from Image Quality Assessment

This posting was inspired by a conversation I had with one of my students a few weeks ago about the Wikipedia page for SSIM, the Structural Similarity Image Metric, and discussion on the topic today in my lab.

First some background

Image Quality Assessment (IQA) is about trying to estimate how good an image will seem to a human. This is especially important for video, image, and texture compression. In IQA terms, if you are doing a full-reference comparison, you have the reference (perfect) image and a distorted image (e.g. after lossy compression), and want to know how good or bad the distorted image is.

Many texture compression papers have used Mean Square Error (MSE), or one of the other measures derived from it, Root Mean Square Error (RMS) or Peak Signal to Noise Ratio (PSNR). All of these are easy to compute, but are sensitive to a variety of things that people don't notice: global shifts in intensity, slight pixel shifts, etc. IQA algorithms try to come up with a measurement that's more connected with the differences humans notice.
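
To make "easy to compute" concrete, here is a minimal sketch of MSE and PSNR for two grayscale images stored as flat arrays of values in [0, 255]; RMS error is just the square root of MSE. The function names, the flat-array interface, and the 255 peak value are illustrative assumptions, not taken from any particular paper or library.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Mean Square Error between a reference image and a distorted image,
// both given as flat arrays of grayscale values (assumed to be the same size).
double mse(const std::vector<double>& ref, const std::vector<double>& dist)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < ref.size(); ++i) {
        const double d = ref[i] - dist[i];
        sum += d * d;
    }
    return sum / ref.size();
}

// Peak Signal to Noise Ratio, in dB, assuming 8-bit pixels (peak value 255).
double psnr(const std::vector<double>& ref, const std::vector<double>& dist)
{
    const double peak = 255.0;
    return 10.0 * std::log10(peak * peak / mse(ref, dist));
}
```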

How does an IQA researcher know if their algorithm is doing well? Well, there are several databases of original and distorted images together with the results of human experiments comparing or rating them (e.g. [1]). You want the results of your algorithm to correlate well with the human data. Most of the popular algorithms have been shown to better match the human experiments than MSE alone, and this difference is statistically significant (p=0.05) [2].

What is wrong with IQA?

There are a couple of problems I see that crop up from this method of evaluating image quality assessment algorithms.

First, the data sets are primarily photographs with certain types of distortions (different amounts of JPEG compression, blurring, added noise, etc.). If your images are not photographs, or if your distortions are not well matched by the ones used in the human studies, there's no guarantee that any of the IQA results will actually match what a human would say [3]. In fact, Cadik et al. didn't find any statistically significant differences between metrics when applied to their database of rendered image artifacts, even though there were statistically significant differences between these same algorithms on the photographic data.

Second, even just considering those photographic datasets, there is a statistically significant difference between the user study data and every existing IQA algorithm [2]. Ideally, there would be no significant difference between image comparisons from an IQA algorithm and how a human would judge those same images, but we're not there yet. We can confidently say that one algorithm is better than another, but none are as good as people.

Is SSIM any good?

SSIM [4] is one of the most popular image quality assessment algorithms. Some IQA algorithms try to mimic what we know about the human visual system, but SSIM just combines computational measures of luminance, contrast, and structure to come up with a rating. It is easy to compute and correlates pretty well with the human experiments.
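
For reference, the SSIM score for two aligned image windows x and y combines those luminance, contrast, and structure terms into a single expression (this is the standard form from [4], where the mu terms are the window means, the sigma terms are variances and covariance, and C1, C2 are small constants that stabilize the division):

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x\mu_y + C_1)\,(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)\,(\sigma_x^2 + \sigma_y^2 + C_2)}$$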

The Wikipedia article (at least as of this writing) mentions a paper by Dosselmann and Yang [5] that questions SSIM's effectiveness. In fact, the Wikipedia one-sentence summary of the Dosselmann and Yang paper is egregiously wrong ("show […] that SSIM provides quality scores which are no more correlated to human judgment than MSE (Mean Square Error) values."). That's not at all what that paper claims. The SSIM correlation to the human experiments is 0.9393, while MSE's is 0.8709 (where correlations range from -1 for complete inverse correlation, through 0 for completely uncorrelated, to 1 for completely correlated). Further, the difference is statistically significant (p=0.05). The paper does "question whether the structural similarity index is ready for widespread adoption", but definitely doesn't claim that it is equivalent to MSE. They do point out that SSIM and MSE are algebraically related (that the structure measure is just a form of MSE on image patches), but that's not the same as equivalent. That MSE on image patches does better than MSE over the whole image is the whole point!
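
To make that "MSE on image patches" relationship concrete, here is a minimal sketch of the SSIM score for a single window of grayscale pixels, using the K1 = 0.01, K2 = 0.03 constants suggested in [4]. A real implementation slides a (usually Gaussian-weighted) window across the image and averages the per-window scores; the function name and plain-vector interface here are just for illustration.

```cpp
#include <cstddef>
#include <vector>

// SSIM for one window of grayscale pixels in [0, 255].
// x and y are the same window taken from the reference and distorted images.
double ssim_window(const std::vector<double>& x, const std::vector<double>& y)
{
    const double L  = 255.0;                  // dynamic range of the pixel values
    const double C1 = (0.01 * L) * (0.01 * L);
    const double C2 = (0.03 * L) * (0.03 * L);

    // Window means.
    const std::size_t n = x.size();
    double mx = 0.0, my = 0.0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;

    // Window variances and covariance.
    double vx = 0.0, vy = 0.0, cov = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
        cov += (x[i] - mx) * (y[i] - my);
    }
    vx /= n - 1; vy /= n - 1; cov /= n - 1;

    // Combined luminance / contrast / structure comparison.
    return ((2.0 * mx * my + C1) * (2.0 * cov + C2)) /
           ((mx * mx + my * my + C1) * (vx + vy + C2));
}
```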

Overall, when it comes to evaluating image quality, I’m probably going to stick with SSIM, at least for now. There are some better-performing metrics, but SSIM is far easier to compute than anything else comparable that I’ve found (yet). It definitely does better than the simpler MSE or PSNR on at least some types of images, and is statistically similar on others. In other words, if the question is “will people notice this error”, SSIM isn’t perfect, but it’s also not bad.

Extending to other areas?

We had some interesting discussion about whether this kind of approach could apply in other places. For example, you could set up a user study to compare a bunch of cloth simulations, maybe changing grid resolution, step size, etc. From that data alone, you just directly have a ranking of those simulations. However, if you use that dataset to evaluate some model that measures the simulation and estimates its quality, you might then be able to use that assessment model to say whether a new simulation was good enough or not. Like the image datasets, the results would likely be limited to the types of cloth in the initial study. If you only tested cotton but not silk, any quality assessment measure you built wouldn't be able to tell you much useful about how well your simulation matches silk. I'm not likely to try doing these tests, but it'd be pretty interesting if someone did!

References

[1] Ponomarenko, Lukin, Egiazarian, Astola, Carli, and Battisti, "Color Image Database for Evaluation of Image Quality Metrics", MMSP 2008.

[2] Sheikh, Sabir, and Bovik, "A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms", IEEE Transactions on Image Processing, v15n11, 2006.

[3] Cadik, Herzog, Mantiuk, Myszkowski, and Seidel, "New Measurements Reveal Weaknesses of Image Quality Metrics in Evaluating Graphics Artifacts", ACM SIGGRAPH Asia 2012.

[4] Wang, Bovik, Sheikh, and Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity", IEEE Transactions on Image Processing, v13n4, 2004.

[5] Dosselmann and Yang, "A Comprehensive Assessment of the Structural Similarity Image Metric", Signal, Image and Video Processing, v5n1, 2011.

Early stories of shading on graphics hardware

I know you can’t take everything in a PR posting at face value, but the phrase “our invention of programmable shading” in NVIDIA’s announcement of their patent suits against Samsung and Qualcomm definitely rubbed me the wrong way. Maybe it’s something about personally having more of a claim to having invented programmable shading, at least on graphics hardware, than NVIDIA. Since many of the accounts (I’m looking at you Wikipedia) of the background of programmable shading seem to have been written by people who don’t even remember a time before GPUs, this seems like a good excuse for some historical recollections.

In the beginning…

The seeds of programmable shading were planted in 1981 by Turner Whitted and David Weimer. They didn’t have a shading language, but did create the first deferred shading renderer by splitting a scan line renderer into two parts. The first part rasterized all the parameters you’d need for shading (we’d call it a G-buffer now), and the second part could compute the shading from that G-buffer. The revolutionary idea was that you could make changes and re-shade the image without needing to redo the (then expensive) rasterization. Admittedly, no shading language, so you’d better be comfortable writing your shading code in C.

The real invention of the shading language

In 1984 (yes, 30 years ago), Rob Cook published a system called "Shade Trees" that let you write shading expressions that it parsed. I've seen some misinterpretation (maybe because of the name?) that this was a graphical node/network interface for creating shaders. It wasn't. That was Abram and Whitted's Building Block Shaders in 1990. Shade Trees was more like writing a single expression in a C-like language, without loops or branches. It also introduced the shader types of surface, light, atmosphere, etc. still present in RenderMan today.

In 1985, Ken Perlin’s Image Synthesizer expanded this into a full language, with functions, loops and branching. This is the same paper that introduced his noise function — talk about packing a lot into one paper!

Over the next few years, Pixar built a shading language based on these ideas into RenderMan. This was published in The RenderMan Companion by Steve Upstill in 1989, with more technical detail in Hanrahan and Lawson’s 1990 SIGGRAPH paper.

Shading comes to graphics hardware

In 1990, I started as a new grad student at the University of North Carolina. I was working on the Pixel-Planes 5 project, which, among other things, featured a 512×512 processor-per-pixel SIMD array. It only had 208 bits per pixel, but had a 1-bit ALU, so you could make your data any size you wanted, not just multiples of bytes (13 bit normals? No problem!). This was, of course, important to give you any chance of having everything (data and computation) fit into just 26 bytes. I was writing shading code for it inside the driver/graphics library in something that basically looked like assembly language.

By 1992, a group of others at UNC created an assembly language interface that could be directly programmed without having to change the guts of the graphics library. This is really the first example of end-user programmable shading in graphics hardware. Unfortunately, the limitations of the underlying system made it really, really hard to do anything complex, so it didn’t end up being used outside the Pixel-Planes team.

Meanwhile, we were making plans for the next machine. This ended up being PixelFlow, largely an implementation of ideas Steve Molnar had just finished in his dissertation, with shading accommodations for what I was planning for my dissertation. I had this crazy idea that you ought to be able to compile high-level shading code for graphics hardware, and that if you abstracted away enough of the hardware details and relied on the compiler to manage the mapping between shading code (what you want to do) and implementation (how to make it happen), you'd get something an actual non-hardware person would be able to use.

It took a while, and a bunch of people, to make it work, but the result of actual high-level programmable shading on actual graphics hardware was published in 1998. I followed the RenderMan model of surface/light shaders, rather than the vertex/pixel division that became popular in later hardware. I still think the "what I want to do" choice is better than the "how I think you should do it" one, though the latter does have advantages when you are working near the limits of what the hardware can do.

The SIGGRAPH paper described just the language and the surface and light shaders, along with a few of the implementation/translation problems I had to solve to make it work. There's more in my dissertation itself, including shading stages for transformation and for primitives. The latter was a combination of what you can do in geometry shaders with a shading interface for rasterization (much like pixel shader/pixel discard approaches for billboards or some non-linear distortion correction rendering).

Note that both Pixel-Planes 5 and PixelFlow were SIMD engines, processing a bunch of pixels at once, so they actually had quite a bit in common with that aspect of current GPUs. Much more so than some of the intermediate steps that the actual GPUs went through before they got to the same place.

OK, but how about on commercial hardware?

PixelFlow was developed in partnership with HP, and they did demo it as a product at SIGGRAPH, but it was cancelled before any shipped. After I graduated in 1998, I went to SGI to work on adding shading to a commercial product. At first, we were working on a true RenderMan implementation on a new hardware product (that never shipped). That hardware would have done just one or two operations per pass, and relied on streaming and prefetching of data per pixel to hide the framebuffer accesses.

After it was cancelled, we switched to something that would use a similar approach on existing hardware. The language ended up being very assembly-like, with one operation per statement, but we did actually ship that and had at least a few external customers who used it. Both shading systems were described in a 2000 SIGGRAPH paper.

In 2000, I co-taught a class at Stanford with Bill Mark. That helped spur their work on hardware shading compilers. Someone who was there would have to say whether they were doing anything before that class, though I would not be surprised if they were already looking at it, given Pat Hanrahan's involvement in the original RenderMan. In any case, in 2001 they published a paper about their RTSL language and compiler, which could compile a high-level shading language to the assembly-language vertex shaders NVIDIA had introduced, to the NVIDIA register combiner insanity, and, if necessary, to multiple rendering passes in the way the SGI stuff worked.

RTSL was also the origin of the Vertex/Pixel division that still exists today.

And on GPUs

And now we leave the personal reminiscing part of this post. I did organize a series of SIGGRAPH courses on programmable shading in graphics hardware from 2000 through 2006, but since I wasn’t actually at NVIDIA, ATI, 3DLabs or Microsoft, I don’t know the details of when some of these efforts started or what else might have been going on behind the scenes.

Around 1999, NVIDIA's register combiners were probably the first step from fixed function to true programmability. Each of the two (later eight) combiner stages could do two vector multiplies and add or dot product the results. You just had to make separate API calls to set each of the four inputs, each of the three outputs, and the function. In the SGI stuff, we were using blend, color transform, and multi-texture operations as ALU ops for the compiler to target. The Stanford RTSL compiler could do the same type of compilation with the register combiners as well. Without something like RTSL, it was pretty ugly, and it definitely wins the award for most lines of code per ALU operation.

Better were the assembly-language vertex programs in the GeForce3, around 2000. They didn't allow branching or looping, but executed in a deep instruction-per-stage pipeline. Among other things, that meant that as long as your program fit, doing one instruction was exactly the same cost as doing the maximum your hardware supported.

Assembly-level fragment programs came in around 2001-2002, and originally shared that same characteristic: anywhere from 1 to 1024 instructions with no branching, all at the same performance.

Around 2002, there was an explosion of high-level shading languages targeting GPUs, including NVIDIA’s Cg, the closely related DirectX HLSL, and the OpenGL Shading Language.

This list of dates is pretty NVIDIA-centric, and they were definitely pushing the feature envelope on many of these elements. On the other hand, most of these were also connected to DirectX versions requiring similar features, so soon everyone had some kind of programmability. NVIDIA's Cg tutorial puts the first generation of programmable GPUs as appearing around 2001. ATI and 3DLabs also started to introduce programmable shading in a similar time period (some of which was represented in my 2002 SIGGRAPH course).

As a particular example of multiple companies all working toward similar goals, NVIDIA's work, especially on Cg, had a huge influence on DirectX. Meanwhile, 3DLabs was introducing their own programmable hardware that I believe was a bit more flexible, and they had a big influence on the OpenGL Shading Language. As a result, though they were very similar in many ways, especially in the early versions there was a significant difference in philosophy between exposing hardware limitations in Direct3D vs. generality (even when slow on a particular GPU) in OpenGL. In hindsight, though generality makes sense now, on that original generation of GPUs it led too often to unexpected performance cliffs, which certainly hurt OpenGL's reputation among game developers.

References

Gregory D. Abram and Turner Whitted. 1990. Building block shaders. In Proceedings of the 17th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’90). ACM, New York, NY, USA, 283-288. DOI=10.1145/97879.97910

Robert L. Cook. 1984. Shade trees. In Proceedings of the 11th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’84), ACM, New York, NY, USA, 223-231. DOI=10.1145/800031.808602

Pat Hanrahan and Jim Lawson. 1990. A language for shading and lighting calculations. In Proceedings of the 17th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’90). ACM, New York, NY, USA, 289-298. DOI=10.1145/97879.97911 

Steven Molnar, John Eyles, and John Poulton. 1992. PixelFlow: high-speed rendering using image composition. In Proceedings of the 19th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’92), James J. Thomas (Ed.). ACM, New York, NY, USA, 231-240. DOI=10.1145/133994.134067

Marc Olano and Anselmo Lastra. 1998. A shading language on graphics hardware: the pixelflow shading system. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’98). ACM, New York, NY, USA, 159-168. DOI=10.1145/280814.280857

Mark S. Peercy, Marc Olano, John Airey, and P. Jeffrey Ungar. 2000. Interactive multi-pass programmable shading. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’00). ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 425-432. DOI=10.1145/344779.344976

Ken Perlin. 1985. An image synthesizer. In Proceedings of the 12th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’85). ACM, New York, NY, USA, 287-296. DOI=10.1145/325334.325247

Kekoa Proudfoot, William R. Mark, Svetoslav Tzvetkov, and Pat Hanrahan. 2001. A real-time procedural shading system for programmable graphics hardware. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 159-170. DOI=10.1145/383259.383275

John Rhoades, Greg Turk, Andrew Bell, Andrei State, Ulrich Neumann, and Amitabh Varshney. 1992. Real-time procedural textures. In Proceedings of the 1992 symposium on Interactive 3D graphics (I3D ’92). ACM, New York, NY, USA, 95-100. DOI=10.1145/147156.147171 

Steve Upstill. 1989. The RenderMan Companion: A Programmer's Guide to Realistic Computer Graphics. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA.

Turner Whitted and David M. Weimer. 1981. A software test-bed for the development of 3-D raster graphics systems. In Proceedings of the 8th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’81). ACM, New York, NY, USA, 271-277. DOI=10.1145/800224.806815
