A Look Inside Farpoint’s Rendering Techniques for VR

Impulse Gear Developer Blog  |  Posted by Victor Ceitelis (Senior Programmer), Greg Koreman (Founder/CTO)  |  Mar 15, 2018


 

Impulse Gear & Farpoint

It has been almost a year since we shipped Farpoint, which has been received enthusiastically by fans, won awards, and marks a major milestone in Impulse Gear's fledgling history.  We are a game studio that focuses on virtual reality games, and as such we frequently get asked about the challenges of working with this new medium.  In this post we want to explain how we approached development and highlight some of the graphics techniques we employed on Farpoint.

To better understand how we approached developing the rendering techniques used on Farpoint, I think it is first useful to understand a bit about the company and how we work.  We were a very small team when we started: three people at the beginning and fifteen on development by the end.  In just over two years we grew the company and developed a successful product with a new team, on a new platform, with custom hardware.  We did this by focusing on quality.  That sentence sounds a lot like a PR leitmotif without much substance, but let's unpack it a little and see what it means to us.  Because we are building something new, we must be prepared to try something new, to fail, and to try again.  We must accept that we don't know enough about this new medium to decide in advance whether or not to develop a feature, so the best thing we can do is build it fast and try it out.  If it has promise, iterate.  If we don't think it will be good enough or doesn't provide enough benefit, move on to the next feature that will increase the quality of the game.

With such a small team we knew that building our own engine would take our limited resources away from our primary focus of creating a great game, so we decided to use Unreal Engine 4.  At the time UE4 supported PSVR in a limited way, but we knew we would need to modify the engine to reach our quality and performance goals, as well as to support major features like the Aim controller, which we were developing with Sony.  We also knew that we needed to be ruthless when it came to performance.  The game had to output a frame every 16.6 ms, every time, or we would fail a technical requirement from Sony and not be able to ship the game.  With this in mind we knew it was very important to put our efforts toward performance improvements, but we had another problem we needed to solve first.

 

To MSAA, or Not To MSAA:

Early in the development of Farpoint we were very unhappy with the image quality.  Our world looked low resolution and very aliased.  To address the aliasing we decided to try MSAA, but we had a major problem: Epic's deferred renderer doesn't support MSAA.  This was before Epic had their clustered forward renderer, and the only forward path they supported was a limited renderer for mobile.  The mobile renderer wasn't going to cut it, and we didn't have enough time to write a full clustered forward renderer on our own, so we were in a bit of trouble.

Luckily, around that time Oculus wrote a clustered forward renderer for UE4 (Demoreuille, 2016) and released the source code for everyone to use.  We integrated the Oculus renderer and made it work on PS4.  With the Oculus renderer in place and running on the PS4 we were finally ready to test MSAA.  Unfortunately our first tests were very disappointing.  MSAA was not only very slow, it was also not very effective at removing the aliasing.  After these tests we decided not to use MSAA on Farpoint, but we did find out something very interesting.

 

Fast But Incomplete:

This new renderer was fast.  Much faster than the Epic deferred renderer for our workloads.  This was really promising, but the Oculus renderer didn't support many of the features of the Epic deferred renderer.  We went through the list of missing features and discussed it with our artists.  Some features we could do without, but some of them we were relying on.  For example, we needed more than a few dynamic shadows, and we needed per-cluster cube map reflections.  We looked at the missing feature set and decided that we could re-implement those features in the forward renderer and still have it be faster than the Epic deferred renderer.  All in all we saved about 2 ms on our frame.

 

Supersampling:

Since we weren't using MSAA for anti-aliasing, we still needed a solution for the terrible aliasing and low image quality we were getting.  We had been using FXAA since we started the project (and ultimately shipped with FXAA enabled), but were still unhappy with the clarity and quality of the image.  We wanted a solution that made the entire frame look better, so we tried brute force: supersampling (Supersampling, n.d.).  This made the game look much better, but it was extremely expensive, and when there was a lot of action happening on-screen the extra resolution was even more expensive due to overdraw from VFX.  We wanted to find a middle ground that gave us the best of both worlds.

 

Dynamic Resolution:

Speed and resolution are both very important in a VR game, and we struggled to kill bottlenecks on the GPU to have both.  We found a solution that made the game faster, more beautiful, and higher resolution, and that matched well with the limitations of our VR game.  In Farpoint there are high moments and low moments.  In one moment you may find yourself walking along a beautifully serene mountain ridge, gazing out at breathtaking vistas, and in the next you may find yourself fighting off the bites of hairy alien spiders.  In the vistas we needed to push the resolution as high as we could, up to 200% screen percentage (four times as many pixels), but in the intense action sequences, when an alien spider is jumping at your face, you need silky smooth 60 fps gameplay to blast those critters out of the sky and no longer need to see the details in the vista.  To allow for both high resolution and a solid framerate we implemented dynamic resolution scaling.  This way vistas and cinematics can run at 200%, and we drop the resolution back down to relieve GPU pressure and ensure a solid 16.6 ms frame when things get hectic.  All of this happens dynamically every frame, and it gave us a lot of flexibility in our scenes while ensuring a great stutter-free experience.  Because we implemented dynamic resolution we gained a little bit of leverage on our dear GPU, and then it was the CPU that became our bottleneck, of course... :)

140% screen percentage @ 16.6 ms  |  112% screen percentage @ 16.6 ms  |  95% screen percentage @ 16.6 ms
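To give a feel for how such a controller can work, here is a minimal sketch of a dynamic resolution heuristic: watch the GPU time of the previous frame and nudge the screen percentage up or down so the frame keeps fitting in the 16.6 ms budget.  The class name, thresholds, and step sizes below are illustrative assumptions, not Farpoint's actual code.

```cpp
// Minimal sketch of a dynamic resolution controller (illustrative only; the
// names and thresholds are hypothetical, not Farpoint's actual code).

#include <algorithm>

struct DynamicResolution {
    float screenPercentage = 140.0f;            // current render resolution scale
    static constexpr float kMin = 60.0f;        // never drop below this
    static constexpr float kMax = 200.0f;       // vistas/cinematics cap
    static constexpr float kBudgetMs = 16.6f;   // 60 fps GPU budget
    static constexpr float kHeadroomMs = 1.0f;  // safety margin before scaling up

    // Called once per frame with the measured GPU time of the previous frame.
    void Update(float gpuTimeMs) {
        if (gpuTimeMs > kBudgetMs) {
            // Over budget: drop resolution aggressively to avoid missing vsync.
            screenPercentage -= 10.0f;
        } else if (gpuTimeMs < kBudgetMs - kHeadroomMs) {
            // Comfortably under budget: creep resolution back up slowly.
            screenPercentage += 1.0f;
        }
        screenPercentage = std::clamp(screenPercentage, kMin, kMax);
    }
};
```

Scaling down in big steps and back up in small steps is a common way to avoid oscillating around the budget; the exact policy Farpoint used is not described in the post.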

     

One-Eyed Occlusion:

In VR you render everything twice, once for each eye.  That's a lot of objects.  In order to avoid rendering invisible objects, Unreal uses something called occlusion queries (Sekulic, 2004).  An occlusion query works by submitting a very cheap draw call and asking the GPU whether it would have produced any visible pixels.  This technique works very well when you only need to render one viewport, but you have to submit one query per object per eye, and this turned out to be expensive on the CPU/render thread.
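As a rough illustration of the general mechanism (not UE4's implementation), an occlusion query in plain OpenGL looks something like the sketch below.  DrawBoundingBox() is a hypothetical helper that draws a cheap proxy for the real object, and a GL 3.3+ context with a function loader is assumed.

```cpp
// Illustrative occlusion query in plain OpenGL (not UE4's implementation).
// DrawBoundingBox() is a hypothetical helper that issues a cheap proxy draw.

#include <glad/glad.h>  // assumes a GL 3.3+ context and a loader such as glad

void DrawBoundingBox();  // hypothetical: draws the object's bounds

bool IsPotentiallyVisible(GLuint query)  // query comes from glGenQueries
{
    // We only care about the depth test result, so don't write color or depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);

    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    DrawBoundingBox();                    // cheap stand-in for the real object
    glEndQuery(GL_ANY_SAMPLES_PASSED);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // In production the result is read a frame or two later to avoid stalling
    // the GPU; reading it immediately keeps this sketch simple.
    GLuint anySamplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anySamplesPassed);
    return anySamplesPassed != 0;
}
```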

We decided to try something new: do the occlusion tests for only one eye, using a larger frustum.  For that purpose we merged the right and left depth buffers and ran our tests against this merged depth buffer.  Both eyes rendered the exact same list of objects, and even though a few invisible objects were sent to the GPU for nothing, it meant far less CPU time for us (4.5-6 ms faster).

The two depth buffers are "mixed" into one that is used for the occlusion query tests.
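The post does not spell out exactly how the two buffers are mixed.  One plausible, conservative combine rule is to keep the farther of the two depth values at each texel, so an object is only reported as occluded if it is hidden in both eyes.  The sketch below shows just that per-texel rule on the CPU; in practice this would run on the GPU and would also have to account for reprojecting both eye buffers into the single wider culling frustum.

```cpp
// Hedged sketch of one way to "mix" the two eye depth buffers.  Keeping the
// FARTHER depth at each texel is conservative: an object is only culled if it
// would be hidden behind that farther occluder, so nothing one eye can still
// see gets dropped.  This assumes a standard depth convention where larger
// values are farther; with reversed-Z you would take std::min instead.

#include <algorithm>
#include <cstddef>
#include <vector>

std::vector<float> MergeEyeDepths(const std::vector<float>& leftDepth,
                                  const std::vector<float>& rightDepth) {
    std::vector<float> merged(leftDepth.size());
    for (std::size_t i = 0; i < merged.size(); ++i) {
        merged[i] = std::max(leftDepth[i], rightDepth[i]);  // farther occluder wins
    }
    return merged;
}
```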

     

Faking Foveated:

This next trick took fewer lines of code than the paragraph explaining it, and it saved us 0.5 ms on our frame.  Foveated rendering is a technique that speeds up rendering by reducing image quality around the edges of the screen where you aren't looking (Vlachos, 2016).  It is usually done by rendering only some of the pixels around the outer edge of the screen and then doing a fixup pass to fill in the gaps.  We took another approach.  In the main lighting shader we changed the mip bias, increasing the mip level closer to the edge of the screen, like a cheap foveated view.  We saved a lot of bandwidth with this trick and gained on average 0.5 ms.  I have to say it seems nobody noticed the artifact, until, well... now.

Mip bias applied according to distance from the center; the borders are blurry.
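The bias itself is just a function of how far a pixel is from the center of the eye buffer.  Below is a hedged sketch of that calculation, written as C++ for readability even though in the game it lives in the lighting shader; the radii and maximum bias are made-up values, not the ones Farpoint shipped with.

```cpp
// Hedged sketch of the "fake foveated" mip bias: sharp in the middle of the
// view, increasingly blurry (higher mip level) toward the edges.  The inner
// radius, outer radius, and maximum bias are illustrative values only.

#include <algorithm>
#include <cmath>

float FoveatedMipBias(float u, float v) {   // uv in [0, 1] across the eye buffer
    const float innerRadius = 0.35f;        // fully sharp inside this radius
    const float outerRadius = 0.70f;        // fully biased beyond this radius
    const float maxBias     = 2.0f;         // extra mip levels at the border

    // Distance of this pixel from the center of the eye buffer.
    const float dx = u - 0.5f;
    const float dy = v - 0.5f;
    const float dist = std::sqrt(dx * dx + dy * dy);

    // 0 at the center, ramping up to 1 at the border.
    const float t = std::clamp((dist - innerRadius) / (outerRadius - innerRadius),
                               0.0f, 1.0f);
    return t * maxBias;  // passed as the bias argument when sampling textures
}
```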

     

Global Illumination:

We used lightmaps extensively on Farpoint to provide high quality lighting with a very low runtime cost for static scenes.  Lightmaps have a lot of problems though.  One problem we encountered was the iteration time between moving a light and seeing the result of its indirect lighting.  For us, re-generating the lightmaps for a level could take all night, and there was no good way to preview your changes.  This was a terrible workflow for our lighting artists.  We liked the quality and runtime cost of lightmaps but hated the iteration time.  While thinking this over we considered a few other approaches.  For example, we considered using more dynamic lights, but dynamic shadowed lights are very expensive and don't give you any bounce lighting.  Non-shadowed dynamic lights are cheaper, but they still don't give you any bounce lighting, and without shadows they are not very useful for lighting a scene.  Real-time global illumination was a very interesting option as it would give us the bounce lighting and shadowing we wanted.  It is very fast, but not quite fast enough for us to use in VR.  We liked the speed and quality of the GI and realized that we could use the GI solution to generate our lightmaps and drastically decrease our iteration time, while still using only baked data at runtime.

Our GI solution was inspired by the well-known voxel cone tracing technique of Crassin et al. (2011), which works by taking the world position and normal of each pixel in screen space and tracing a set of cones into a voxel representation of the world.  We changed the way the lighting data is managed: instead of generating and regenerating the lighting for each pixel on screen every frame, we stored the results in a world-aligned grid over multiple frames.  In effect we had a volume of probes, in our case storing six colors per point, one per axis direction.  Each frame we sampled the most important probes (those near the camera) and cached the lighting for the next frame.  It is essentially world-space cone tracing, and it makes multi-bounce calculations possible.  We did not use this tech in Farpoint because of time constraints, but we are excited by the results and would like to explore it further.

GI Solution: The grid in world space; each probe contains the indirect lighting data  |  Results in Sponza
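Storing six colors per probe, one per axis direction, is essentially an ambient-cube probe.  The sketch below shows one generic way such a probe grid might be stored and evaluated at shading time; the struct layout and blending are a standard ambient-cube formulation assumed on our part, not Farpoint's actual code.

```cpp
// Hedged sketch: a world-aligned grid of ambient-cube probes.  Each probe
// stores six colors (+X, -X, +Y, -Y, +Z, -Z).  Cone-traced indirect lighting
// accumulated over multiple frames would be written into these colors; at
// shading time a surface blends the three faces its normal points toward.

#include <array>
#include <vector>

struct Color { float r, g, b; };

struct AmbientProbe {
    // Order: +X, -X, +Y, -Y, +Z, -Z
    std::array<Color, 6> axisColor;
};

// Blend the probe's six colors using the squared components of the surface
// normal as weights (for a unit normal the three weights sum to 1).
Color EvaluateProbe(const AmbientProbe& probe, float nx, float ny, float nz) {
    const float w[3]   = { nx * nx, ny * ny, nz * nz };
    const int   idx[3] = { nx >= 0.0f ? 0 : 1,
                           ny >= 0.0f ? 2 : 3,
                           nz >= 0.0f ? 4 : 5 };
    Color out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 3; ++i) {
        out.r += probe.axisColor[idx[i]].r * w[i];
        out.g += probe.axisColor[idx[i]].g * w[i];
        out.b += probe.axisColor[idx[i]].b * w[i];
    }
    return out;
}

// The probes themselves live in a world-aligned 3D grid, for example:
struct ProbeGrid {
    int sizeX, sizeY, sizeZ;           // grid dimensions
    float cellSize;                    // world-space spacing between probes
    std::vector<AmbientProbe> probes;  // sizeX * sizeY * sizeZ entries
};
```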

     

Conclusion:

By utilizing tools like UE4 we were able to get a huge head start on Farpoint.  We are always looking at the latest graphics techniques being developed by pioneers in the game industry, and we use them as inspiration in our own work.  We also benefit from industry standards like physically based rendering and from development software like Substance and ZBrush.  All of this makes the art and science of game development much easier.

But game development is still hard, and developing virtual reality games even more so.  We need to go further and extend industry-standard tools like UE4.  We need to rethink the latest graphics techniques to make them suitable for VR.  We need to push the hardware to its limits while strictly adhering to our rigid performance constraints.  All of this makes game development fun again.  We welcome this challenge, find an ever-renewed excitement in our work, and look forward to the challenges of the future.

     

Do you want to hear more about our process or about our tech?  Let us know on social media.  Interested in joining the team?  Check out our jobs page.

     

Sources

• Crassin, C., Neyret, F., Sainz, M., Green, S., and Eisemann, E. (2011). Interactive indirect illumination using voxel cone tracing. Computer Graphics Forum 30, 7.
• Vlachos, A. (2016). Advanced VR Rendering Performance. GDC 2016.
• Sekulic, D. (2004). Efficient Occlusion Culling. GPU Gems. http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch29.html
• Demoreuille, P. (2016). Optimizing the Unreal Engine 4 Renderer for VR. https://developer.oculus.com/blog/introducing-the-oculus-unreal-renderer/
• Supersampling. (n.d.). Wikipedia. https://en.wikipedia.org/wiki/Supersampling