So, in my last post, I said that my big goal was to thread my renderer and to do a better job of writing it against a public interface so that it could be a stand-alone library. Good goals, but now that they’re basically finished I have a lot of new goals. However, since that post was made a week or more after the work it described, and it’s now been a month since that post, I think my primary goal is to get better at writing these more regularly. It might mean that the technology changes in each update are less dramatic, but I think it’ll let me spend more time properly illustrating what I’m working on at the time, and lead to overall better posts. We’ll see how that turns out.
So, previously I had reworked my rendering pipeline to be multi-threaded, and managed to draw a triangle. Which is great and terrible at the same time. But, since the rewrite was still 85% built on my framework from last year, once the core worked in the new thread-safe system it was much less work to re-implement everything else I had last year. I was quickly able to bring in refactored versions of my previous material and lighting processes and get to a point where I could render a lit scene that was as complex as I cared to hard code (since the editor wasn’t quite ready to go yet). As you can tell from the image, I didn’t care to hard code very much.
So, that’s what you can see; let’s talk about what you can’t see. Last year, I built my render target class to own a back buffer and a depth buffer, and created a system to copy a depth buffer from one render target into another. I was told it was strange not to share a single depth buffer across my render targets, but there were cases where a process needed the main geometry depth buffer and I wanted to modify its copy for that process without changing the original geometry depth. There were also cases where I downscaled and upscaled render targets, which necessitated depth buffers of different sizes; it’s how I achieved my geometry-occluded outlines last year. It worked for me, but it didn’t come without its difficulties. And as I was re-implementing render targets this year, I realized that I overlooked a lot of potential performance gains last year.
I was re-using depth information for geometry-occluded outlines, and I was using it to generate lighting information, but I wasn’t actually using it to cut down on overdraw. And overdraw was a huge problem in Evac (especially the final level, which was huge). So, I refactored my render target system to instead keep a pool of back and depth buffers, and a render target now holds associations to the buffers in the pool that it wants. This allows easy sharing of buffers between targets (and without DirectX complaining about binding a buffer for read and write simultaneously!) in a way that let me add a Z pre-pass: a process intended to build the shared depth buffer while remaining lightweight by only ever engaging the vertex shader. Once that depth buffer is built, every process that utilizes it can set its depth-stencil state to D3D11_COMPARISON_EQUAL to ensure that each screen pixel is shaded by the pixel shader only once. And since the pixel shader is often where you can find pipeline bottlenecks, optimizing its usage can be key to keeping a high-performing renderer. So, ensuring that each process engages the pixel shader at most once per pixel is a huge boon, and I was very pleased with the results.
The other thing you can tell from the image is that I’ve started work on my post-processing system. It’s not going to be terribly different from last year’s base system, but I’m looking to implement a new set of effects to give the game a filmic look. It starts with the depth of field effect you can see here, but there’s a lot more in the works to eventually make the game look good. I hope. Also, with the re-organization and re-optimization of the whole system, I’m hoping to have the frame time available to do a LOT more post-processing this year.
So, that’s all for this time. There were actually a lot of bugs behind the scenes around this time whose output looked correct enough that I didn’t notice them right away, but which created huge problems when I tried to tweak data. I’ll probably talk about them next time, as I think they’re pretty interesting and might be useful to anyone implementing a similar system. Also, I’ll be talking about my initial work on terrain, so look forward to that. Hopefully it won’t take a month.