2B0ST0N6 day three

Today I tried to get an early start on the day by avoiding the infamous Boston Big Dig-inspired traffic and walking to the convention center. It was a great route that took me past the Boston Common, through Chinatown, and past a very Bostonian train station (South Station). I don't know if the walk was faster than waiting for the shuttle, but I did make it to the first paper talk of the morning.

The first paper was "Real-Time GPU Rendering of Piecewise Algebraic Surfaces". This is a paper by Charles Loop and Jim Blinn that builds on similar theory to their 2005 paper "Resolution independent curve rendering using programmable graphics hardware" to render surfaces on the GPU. Personally, I think that this is one of the most impressive developments in GPU use that I have seen. This paper shows how the triangles given to the GPU can be used as control points for algebraic surfaces (such as a cone or sphere), letting the GPU render them through something like ray casting and root finding. The most impressive part is that the surfaces are evaluated at the resolution they are rendered at, removing subdivision from the rendering pipeline entirely. The only big problem that I see (which was mentioned in the talk) is the lack of tools that can output these surfaces, but I hope that some tool vendor picks up on that at some point.
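The paper's actual machinery (Bézier tetrahedra and resolution-independent evaluation) is more involved than I can reproduce here, but as a rough sketch of the per-pixel "ray casting and root finding" idea, here is a minimal CPU example that intersects a ray with an implicit quadric (a sphere) by solving for the roots of the surface function along the ray. Function and variable names are mine, not from the paper.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Implicit sphere f(p) = |p - c|^2 - r^2 = 0. Substituting the ray
// p(t) = o + t*d turns f(p(t)) = 0 into a quadratic in t, so the root
// finding step is just the quadratic formula. Higher-degree algebraic
// surfaces would need a numeric root finder per pixel instead.
std::optional<double> intersectSphere(const Vec3& o, const Vec3& d,
                                      const Vec3& c, double r) {
    Vec3 oc = sub(o, c);
    double A = dot(d, d);
    double B = 2.0 * dot(oc, d);
    double C = dot(oc, oc) - r * r;
    double disc = B * B - 4.0 * A * C;
    if (disc < 0.0) return std::nullopt;                 // ray misses the surface
    double t = (-B - std::sqrt(disc)) / (2.0 * A);       // try the nearest root first
    if (t < 0.0) t = (-B + std::sqrt(disc)) / (2.0 * A); // nearest root is behind the origin
    if (t < 0.0) return std::nullopt;
    return t;                                            // parameter of the closest hit
}
```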

The paper following that was "Point-Sampled Cell Complexes". As with the Loop/Blinn paper, this shows a method of representing surfaces. The novel thing about the representation is that it allows point samples to represent surface samples, curve samples or vertex definitions. The algorithms presented join the points together along the given curves, and join surfaces together between those curves. I think that this was very interesting because it allowed for modelling some things that traditional techniques find challenging, such as cutting an arbitrarily shaped hole into a surface. One thing that I want to read more about with this paper is whether the representation was designed with real-time evaluation in mind, because some attributes of the algorithm look like they might make that difficult. But then again I have been surprised before.
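I have no idea what the actual data structures in the paper look like, but just to make the description above concrete, here is a purely hypothetical sketch of a point-sampled representation with tagged sample types. Every name and field here is my own guess, not something from the paper.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical layout inferred only from the talk's description: every sample
// is just a point, and a tag records whether it samples a surface patch, lies
// on a bounding curve, or defines a corner vertex.
enum class SampleKind : std::uint8_t { Vertex, Curve, Surface };

struct Sample {
    SampleKind kind;
    float position[3];
    float normal[3];     // meaningful for surface samples
    std::int32_t cell;   // index of the cell (patch or curve) the sample belongs to
};

struct PointSampledComplex {
    std::vector<Sample> samples;
    // Which cells share a boundary, so patches can be stitched together along
    // their common curves when the surface is evaluated.
    std::vector<std::pair<std::int32_t, std::int32_t>> cellAdjacency;
};
```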

After these papers I decided to join the course "Advanced Real-Time Rendering in 3D Graphics and Games". This course looked very practical, and I noticed that Natalya Tatarchuk was chairing it and giving some lectures based on the Toy Store demo that ATI have done. I have read a bit about the Toy Store demo before, so I thought that it would be good to hear the material covered by the people that worked on it.

The first lecture that I saw was from Alex Evans. He presented some techniques that he blended together to create fast approximations of global illumination on dynamic scenes. This was a great talk - Alex covered some of the experimental ideas that he played with and explained what he did with them, even if they were just ideas that did not go anywhere. I liked the philosophy he extolled of combining ideas from many different places to tackle problems that have specific constraints. I can tell that Alex loves volume textures, and several of his techniques involved using them in one way or another to store some kind of volumetric lighting information. After seeing this I want to play more in this area, because it looks like there are some powerful things that can be done here.
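None of this is Alex's actual technique, but the common building block behind caching lighting in a volume texture is simple enough to sketch: map a world-space position into a coarse 3D grid of stored irradiance and filter it. On the GPU the trilinear filtering comes for free; spelled out on the CPU it looks roughly like this (all names and the layout are my own assumptions).

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

// A coarse world-aligned grid of RGB irradiance values, standing in for a
// volume texture of cached lighting.
struct LightVolume {
    int nx, ny, nz;
    float worldMin[3];
    float cellSize;
    std::vector<std::array<float, 3>> texels;   // nx * ny * nz entries

    std::array<float, 3> fetch(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return texels[(z * ny + y) * nx + x];
    }

    // Trilinearly interpolated irradiance at a world-space position p.
    std::array<float, 3> sample(const float p[3]) const {
        int i[3];
        float f[3];
        for (int a = 0; a < 3; ++a) {
            float g = (p[a] - worldMin[a]) / cellSize - 0.5f;
            i[a] = static_cast<int>(std::floor(g));
            f[a] = g - i[a];
        }
        std::array<float, 3> out = {0.0f, 0.0f, 0.0f};
        for (int c = 0; c < 8; ++c) {            // blend the 8 surrounding texels
            int dx = c & 1, dy = (c >> 1) & 1, dz = (c >> 2) & 1;
            float w = (dx ? f[0] : 1.0f - f[0]) *
                      (dy ? f[1] : 1.0f - f[1]) *
                      (dz ? f[2] : 1.0f - f[2]);
            std::array<float, 3> t = fetch(i[0] + dx, i[1] + dy, i[2] + dz);
            for (int a = 0; a < 3; ++a) out[a] += w * t[a];
        }
        return out;
    }
};
```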

After Alex's talk Natalya Tatarchuk talked about the parallax occlusion mapping done in the Toy Store demo. I have read about this before, but you get so much more depth when you have the person to go with the slides. The basic idea is to store a height field with a normal map (in the alpha channel, for example), which then lets you do a rudimentary ray-casting-style calculation to figure out what you are hitting. While the idea sounds basic, Natalya had a lot of details about the tuning that was done to remove artifacts and increase the quality of the rendering. This was good to see, because it makes the difference between an interesting experiment and something that you can use for real.
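The real shaders have all of the tuning Natalya described (adaptive step counts, artifact fixes and so on), but the core height-field ray march can be sketched in a few lines. This is a generic CPU illustration of that march, not ATI's code; the function names, the procedural height field and the sign conventions are my own assumptions.

```cpp
#include <cmath>

// Stand-in for sampling the height stored alongside the normal map
// (a simple procedural bump pattern so the example is self-contained).
float sampleHeight(float u, float v) {
    return 0.5f + 0.5f * std::sin(20.0f * u) * std::sin(20.0f * v);
}

// March a view ray through the height field in texture space and return the
// texture coordinates where the ray first dips below the surface.
// viewTS: normalized eye-to-surface direction in tangent space (z pointing out
// of the surface, so viewTS[2] < 0). heightScale controls the apparent depth.
void parallaxOcclusionOffset(const float viewTS[3], float u, float v,
                             float heightScale, int steps,
                             float* uOut, float* vOut) {
    float stepHeight = 1.0f / steps;
    // Texture-space advance per step, from similar triangles along the ray.
    float du = viewTS[0] / -viewTS[2] * heightScale * stepHeight;
    float dv = viewTS[1] / -viewTS[2] * heightScale * stepHeight;

    float rayHeight = 1.0f;                     // start at the top of the relief
    float cu = u, cv = v;
    for (int i = 0; i < steps; ++i) {
        if (sampleHeight(cu, cv) >= rayHeight)  // ray has gone under the surface
            break;
        rayHeight -= stepHeight;
        cu += du;
        cv += dv;
    }
    *uOut = cu;
    *vOut = cv;
}
```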

After lunch Jason Mitchell from Valve talked about the Source engine and how shading works in it. Some interesting aspects that were discussed included varying light map density across a scene, storing light from a part of the environment at a particular point for use later, how they do HDR with an RGBA back buffer, and how they use histograms of light intensity to simulate how your eye adjusts to different lighting conditions. Of particular interest was how pretty much every feature had some artist-controlled component in their tools. I think that this is very important, because it allows the people with the pretty sticks to beat the content into submission more easily.
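Valve's actual implementation will differ, but the histogram-driven eye adjustment is easy to sketch in isolation: build a luminance histogram for the frame, pick a representative luminance from it, and nudge the exposure toward a target over several frames. Everything here (bin count, percentile, the mid-grey target) is an assumed placeholder, not Valve's numbers.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Build a histogram of pixel luminance, find the luminance below which most of
// the image falls, and move the exposure toward putting that value at a
// comfortable level - a crude stand-in for the eye-adaptation idea.
float adaptExposure(const std::vector<float>& luminance,  // one value per pixel
                    float currentExposure,
                    float adaptRate) {                     // 0..1, fraction to move this frame
    const int kBins = 64;
    const float kMaxLum = 4.0f;                            // assumed histogram range
    std::vector<int> hist(kBins, 0);
    for (float l : luminance) {
        int bin = static_cast<int>(std::min(l, kMaxLum) / kMaxLum * (kBins - 1));
        ++hist[bin];
    }

    // Luminance at (roughly) the 80th percentile of the histogram.
    const std::size_t cutoff = static_cast<std::size_t>(0.8 * luminance.size());
    std::size_t seen = 0;
    float keyLum = kMaxLum;
    for (int b = 0; b < kBins; ++b) {
        seen += hist[b];
        if (seen >= cutoff) { keyLum = (b + 0.5f) * kMaxLum / kBins; break; }
    }

    // Exposure that maps the key luminance to a mid-grey target, blended in
    // gradually so the adaptation happens over several frames.
    const float kTarget = 0.5f;
    float desired = kTarget / std::max(keyLum, 1e-4f);
    return currentExposure + (desired - currentExposure) * adaptRate;
}
```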

The session continued with Natalya again, this time discussing how the water effects in Toy Store were done. There were a lot of different things covered here (she mentioned that they had about 300 shader routines for all of the water effects). She showed how they did rain, how they used various particle simulation systems on the GPU to calculate water flow, how droplets of water were modelled with sprites, and how fins (similar to how fur is done) were used to render the haze on the taxi. The detail of the demo is incredible - they even have water dripping down the glass store front, and these water trails leave shadows and pretend caustics on the toys behind the window. It was interesting that they used an old movie trick (mixing milk with the water for rain scenes) to solve the same problem in a virtual world as in the real one.

In the afternoon the opening talk was from Chris Oat, who presented a method for computing lighting for static meshes with dynamic area light sources. This was demonstrated with a potential application, landscape lighting. The idea of the approach was to precompute visibility data for the mesh (hence the static mesh restriction) and then use this information at render time, together with a spherical cap intersection calculation, to estimate how much of the area light illuminates the point.
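The talk has the proper spherical-cap math; purely as an illustration of the idea, here is a sketch that estimates the visible fraction of an area light by intersecting its spherical cap with a precomputed visibility cap. The containment and disjoint cases are exact; the blend between them is my own crude interpolation, not the formula from the talk, and all names are mine.

```cpp
#include <algorithm>
#include <cmath>

// Solid angle of a spherical cap with angular radius theta (radians).
static double capArea(double theta) {
    const double kPi = 3.14159265358979323846;
    return 2.0 * kPi * (1.0 - std::cos(theta));
}

// Approximate fraction of an area light's cap that is unoccluded.
// visR:   angular radius of the precomputed visibility cap
// lightR: angular radius of the area light's cap
// gap:    angle between the two cap centers
double visibleLightFraction(double visR, double lightR, double gap) {
    double lightArea = capArea(lightR);
    double smallest = std::min(capArea(std::min(visR, lightR)) / lightArea, 1.0);
    if (gap >= visR + lightR)
        return 0.0;                      // caps do not overlap at all
    if (gap <= std::fabs(visR - lightR))
        return smallest;                 // one cap is completely inside the other
    // Partial overlap: linearly blend between the two exact limiting cases.
    double t = (visR + lightR - gap) /
               (visR + lightR - std::fabs(visR - lightR));
    return t * smallest;
}
```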

After this Carsten Wenzel from Crytek gave a talk on real time atmospheric effects in games. There were many different techniques presented, and most of them built on having a scene depth buffer laid down before the scene itself is rendered (in a similar fashion to deferred shading) to get fast access to depth results. Some of the techniques shown included sky light rendering (done on both the CPU and GPU, with a novel scheme for calculating new sky data across multiple frames), volumetric fog, locally refined fog and soft particles. This was a great demonstration of the work that Crytek has done, and I really appreciated how Carsten came to share this at the conference.
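Soft particles are probably the simplest of the depth-driven techniques on that list and give a feel for why having scene depth around helps: fade each particle fragment by how close it is to the geometry already in the depth buffer, so particles stop cutting hard edges through the scenery. A minimal sketch, with the fade range as an assumed tuning parameter.

```cpp
#include <algorithm>

// Soft-particle fade: given the scene depth already in the depth buffer at
// this pixel and the depth of the particle fragment (both as linear view-space
// distances), scale the particle's alpha so it fades out near geometry instead
// of producing a hard intersection edge.
float softParticleAlpha(float particleAlpha, float sceneDepth,
                        float particleDepth, float fadeRange) {
    float gap = sceneDepth - particleDepth;           // distance to the geometry behind
    float fade = std::clamp(gap / fadeRange, 0.0f, 1.0f);
    return particleAlpha * fade;
}
```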

In the afternoon I checked out some of the exhibition. The exhibition was pretty big, with a wide variety of exhibitors ranging from animation schools and studios to 3D editing and rendering tools. It seems like there are a lot of people doing high quality renderings of cars. Several companies have photorealistic rendering of cars down pat, to my eye.
