GoodGraphics23.png, More Lighting, More Transparency, And Performance

So, a lot of backend updates have been happening in the last month, but the visual part of this update actually happened the day after the last GoodGraphics post.  But school’s been busy (what’s new?).  And then I got Premake and FXC working and wanted to post about that.  And then school’s been busy…  it isn’t like it’s a pattern or anything.

The visual part: after talking with the designers and artists, the simplest solution to dealing with transparency in the new lighting system was to not illuminate transparent objects at all.  While it would be nice to have light partially “hit” and partially pass through transparent objects, it just isn’t feasible for me to do that in a way that both looks good and keeps performance reasonable at this point in the year.  So instead, I just exclude transparent objects from the depth buffer used by the light accumulator.  The loss of not illuminating transparent objects is pretty minor compared to the gain of seeing light sources through them.
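For the curious, the idea boils down to something like the sketch below.  The types and names are placeholders, not our actual engine API; it just shows that the depth pre-pass feeding the light accumulator simply skips anything transparent.

```cpp
#include <vector>

// Hypothetical engine types, for illustration only.
struct Model
{
    bool transparent = false;
    void DrawDepthOnly(); // writes depth only, no color
};

// Only opaque geometry goes into the depth buffer the light
// accumulator samples, so transparent surfaces are never lit by the
// accumulation pass and light sources stay visible through them.
void RenderDepthForLighting(const std::vector<Model*>& models)
{
    for (Model* model : models)
    {
        if (model->transparent)
            continue; // skipped: neither lit nor occluding light
        model->DrawDepthOnly();
    }
}
```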

One of the major backend changes I’ve made is to start using instanced drawing.  I had originally, and incorrectly, assumed that I could write a very optimized material system to generate the fewest state changes and data transfers, couple that with the reduced draw call overhead since DirectX 9, and be alright.  It turns out that while draw call overhead has been reduced, it is still significant, and making a separate draw call for every object takes a toll.  As a result, I’ve switched my lighting over to using DrawIndexedInstanced, and it has made a huge difference.  A lot of our levels can now handle the entire light accumulation pass in a single draw call, and frame times have been cut in half.  So, I’m pretty stoked about that.
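The setup looks roughly like this (a simplified D3D11-style sketch, not our exact code): per-light data lives in a dynamic vertex buffer bound as a second, per-instance stream, and a light proxy mesh is drawn once per instance.

```cpp
#include <cstring>
#include <vector>
#include <d3d11.h>
#include <DirectXMath.h>

// Per-instance data for one point light; the layout here is
// illustrative, not our engine's actual format.
struct LightInstance
{
    DirectX::XMFLOAT3 position;
    float             radius;
    DirectX::XMFLOAT4 color;
};

// Assumes the proxy mesh's vertex/index buffers, the instance buffer
// (bound as a second vertex stream with D3D11_INPUT_PER_INSTANCE_DATA
// elements in the input layout), and the light shaders are already set.
void AccumulateLights(ID3D11DeviceContext* context,
                      ID3D11Buffer* instanceBuffer,
                      const std::vector<LightInstance>& lights,
                      UINT proxyIndexCount)
{
    // Stream this frame's light data into the dynamic instance buffer.
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    context->Map(instanceBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, lights.data(),
                lights.size() * sizeof(LightInstance));
    context->Unmap(instanceBuffer, 0);

    // The entire accumulation pass in a single draw call.
    context->DrawIndexedInstanced(proxyIndexCount,
                                  static_cast<UINT>(lights.size()),
                                  0, 0, 0);
}
```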

Beyond that, I’ve been working to move as much calculation as possible from pixel shaders to vertex shaders to cut down overall instruction counts without sacrificing visual fidelity, and it’s been pretty successful.  The only major calculation left in a pixel shader that I think I can still move is the inverse view-projection matrix multiplication in the light accumulation shader, which uses depth to reconstruct each pixel’s position in world space.  And I think the information in this thread has everything I need to solve that.  Here’s hoping!
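For reference, here’s the math in question as a CPU-side C++ sketch (the pixel shader does the equivalent per pixel).  That full 4x4 multiply plus the homogeneous divide is what I’d like to get out of the pixel shader, e.g. by interpolating a view ray from the vertex shader instead.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Reconstruct world-space position from a pixel's UV and its sampled
// depth: go to NDC, multiply by the inverse view-projection matrix,
// then divide by w.
XMVECTOR WorldPosFromDepth(float u, float v, float depth,
                           FXMMATRIX invViewProj)
{
    // D3D convention: uv (0,0) is the top-left corner, NDC y points up.
    XMVECTOR ndc = XMVectorSet(u * 2.0f - 1.0f,
                               1.0f - v * 2.0f,
                               depth, 1.0f);
    XMVECTOR world = XMVector4Transform(ndc, invViewProj);
    // Perspective divide back out of homogeneous space.
    return XMVectorScale(world, 1.0f / XMVectorGetW(world));
}
```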

So, that’s it.  I’ve nearly got our particle system working (finally), so hopefully there’ll be a new post up soon about that.  And if all goes well, it should be a two-for-one with texture animation as well (which I wrote last year but still haven’t gotten around to integrating into this engine).  I’d really like to get it implemented and post about it before heading off to GDC next week, but you know, school.  We’ll see what happens.

GoodGraphics22.png, We Have Lighting

I finally solved the last issues with my light pre-pass renderer yesterday, and the designers got to work with point lights today.  After all the time I spent implementing it, it’s a huge relief for it to finally work, and look good to boot!

As happy as I am with the current progress, it also led me to some brand new problems.  For example, I “discovered” that setting all render target textures to R32G32B32A32_Float is a horrible, horrible idea.  Who knew?  Ryan Rohrer did, so thanks!  It has also created additional work on handling transparency in models, but that was to be expected.  Given the scope and time constraints of our project, I’m planning to do a basic forward lighting pass for models that have transparency, and designers can set the primary light to use for that in the editor rather than doing any closest-light calculations at runtime.  It might not be a super elegant way to solve it, but it’ll work and I can get it done, so!
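In practice that plan is just a little per-object data plus a lookup, something like the sketch below (types and names are hypothetical, not our editor’s actual format):

```cpp
#include <vector>
#include <DirectXMath.h>

// Hypothetical scene light, as authored in the editor.
struct Light
{
    DirectX::XMFLOAT3 position;
    float             radius;
    DirectX::XMFLOAT4 color;
};

// A transparent model just stores which light the designer picked.
struct TransparentModel
{
    int primaryLightIndex = -1; // set in the editor, saved with the level
};

// No closest-light search at runtime: the forward pass reads the
// designer-assigned light straight out of the scene's light list.
const Light& GetForwardLight(const TransparentModel& model,
                             const std::vector<Light>& sceneLights)
{
    return sceneLights[model.primaryLightIndex];
}
```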

Now that the basic implementation is finished, I’m hoping that progress will speed up considerably on a large number of smaller tasks.  We’ll see how that pans out.

This Is Not A Pointlight

With the game’s locale shifting from outdoors to indoors on a space station, my lighting needs have shifted too.  Previously, I was using a single dominant directional light as the main light source and shadow mapping off of it.  That no longer makes sense, so I need a lot of pointlights instead.  When faced with the decision of how to approach that, I chose to implement a light pre-pass system in my forward renderer rather than completely rewrite it as a fully deferred system.  It had generally been progressing well until I tried to actually store a pointlight in the lighting buffer and then sample that in place of the standard Phong lighting equation.  Silly me!
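For context, here’s the rough shape of a light pre-pass frame (the pass functions are placeholders standing in for the real draw calls):

```cpp
#include <vector>

struct Model {};
struct Light {};

// Placeholder pass functions; each would issue the actual draw calls.
void RenderDepthNormals(const std::vector<Model>& opaque);
void AccumulateLight(const Light& light);
void RenderMaterials(const std::vector<Model>& opaque);

void RenderFrame(const std::vector<Model>& opaque,
                 const std::vector<Light>& lights)
{
    // 1. Pre-pass: opaque geometry writes depth and normals.
    RenderDepthNormals(opaque);

    // 2. Light accumulation: each light adds its contribution to the
    //    lighting buffer using the stored depth and normals.
    for (const Light& light : lights)
        AccumulateLight(light);

    // 3. Main pass: geometry is drawn again with full materials,
    //    sampling the lighting buffer instead of evaluating Phong
    //    per light source.
    RenderMaterials(opaque);
}
```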

This is not a pointlight.  It’s supposed to be, but it isn’t.  Tomorrow, I’m hopeful I’ll get it working and be able to post some great new screen captures.  Right now, however, life kind of sucks.

GoodGraphics21.png, Doing Transparency Right

This post should more or less bring things current.  I think.  I completely reserve the right to change my mind on that later.  We’ll see what happens.  Or what I remember.

Anyway, lots of new assets make this look very pretty, but the big thing here is properly handled transparency.  To give transparent objects a proper sense of volumetric depth, I just had to add a back-face pass before the front-face pass and everything was great.  It was about six lines of code to add the pass and the appropriate state change calls.  And that model depth makes things feel “right” in a way they didn’t before.  So, I’m pretty happy about this new functionality and how simple it was to implement.
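In D3D11 terms, those six-ish lines amount to drawing the model twice with opposite cull modes, roughly like this (rasterizer states assumed to be created at init with CullMode set to D3D11_CULL_FRONT and D3D11_CULL_BACK respectively):

```cpp
#include <d3d11.h>

void DrawTransparentModel(ID3D11DeviceContext* context,
                          ID3D11RasterizerState* cullFront,
                          ID3D11RasterizerState* cullBack,
                          UINT indexCount)
{
    // Pass 1: back faces only (front-facing triangles culled), so the
    // far side of the model blends in first.
    context->RSSetState(cullFront);
    context->DrawIndexed(indexCount, 0, 0);

    // Pass 2: front faces blended on top, which is what gives the
    // interior its sense of volume.
    context->RSSetState(cullBack);
    context->DrawIndexed(indexCount, 0, 0);
}
```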

GoodGraphics20.png, Glow Mapping Actually Works As Intended

As I mentioned in my previous post, while implementing emissive mapping I had a huge breakthrough that allowed me to fix my runtime glow mapping system to do what I had always intended.  While it provided a unique effect, the previous implementation had weird black borders that I couldn’t seem to get rid of, and they prevented it from looking like the nice, blurred glow that I wanted.  The solution, it turned out, was very simple.

The fix was doing purely additive blending in my post-processing.  Previously, I had figured that doing this would create an undesired effect if something were a different color underneath.  Looking back, I have no idea why I thought that; it’s obviously wrong.  The original test I was performing just rejected black pixels as a mask and layered anything else over the final scene.  So, it should make perfect sense why there was a black band around the glow outlines: the blur created many pixels that were nearly black, but not quite, so they all passed the test and got layered in.  By switching to a pure additive system, the closer to black a pixel is, the less impact it has on the final composite scene.  And I finally had the effect I was after the whole time.  Like 5 months later.  I am smart.
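For anyone curious what “purely additive” means at the API level, it’s just a blend state where both the source and destination weights are one, so dst = src + dst.  A D3D11-style sketch (not necessarily how my post-processing pipeline sets it up):

```cpp
#include <d3d11.h>

// Additive compositing for the blurred glow: near-black pixels add
// almost nothing, so no mask test (and no black fringe) is needed.
ID3D11BlendState* CreateAdditiveBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;
}
```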

This image was taken on 2/3/13.