Dead Secret Diary: Lightmapping in Unity 5

Chris

DEAD SECRET makes careful use of light mapping to control the mood and tone of each room.  Careful manipulation of light and darkness was one of our key tasks in building the game, and Art Director Mike spent almost as long lighting our scenes as he did building them.  We structured our lights and light maps very carefully to produce subtle lighting and also to maximize rendering efficiency.  In the end we were pretty happy with the result.  Then we upgraded the project to Unity 5.

You may have read about other developers who spent a lot of time and money on upgrading to Unity 5, mostly because of changes to the lighting system.  Unity 5 completely replaces the light mapping system used in previous versions (Autodesk’s Beast) with a new lighting system (Geomerics Enlighten) that specializes in realtime global illumination.  We had heard horror stories from other developers who attempted to transition large projects to Unity 5, and so we waited, hoping that the issues would be worked out in time.  By all accounts the Unity team spent 2015 burning the midnight oil to fix bugs and improve workflows in particles, physics, performance, and lighting.  But, over a year after the release of Unity 5, transitioning a large project to Enlighten is still a pretty brutal experience.  Here’s how we did it.

Dead Secret is all one scene and has a lot of baked lights in it.

DEAD SECRET Lighting Under Unity 4

Dead Secret’s scene is organized around the following constraints:

  • Scene loading is unacceptably slow, especially on mobile platforms.  Therefore the entire game must be implemented within a single scene, which we’ll load up once at startup.
    • Because scene loading is slow, we need to be able to instantaneously swap light maps to implement the transition from daytime to nighttime.
  • Performance is highly dependent on batching, and to maintain maximum batching efficiency we want a small number of very large light map textures.
  • Almost all lights are static, but we have a few realtime, moving lights as well (e.g. a flashlight).  Almost all geometry is static.
  • Some lights should cast both into light maps and create dynamic shadows for non-lightmapped objects (a “Mixed” light).

Given those requirements, our Unity 4 implementation in Dead Secret looked like this:

  • A single scene, full of static geometry with tons of lights, all set to Baked and sorted into buckets of Daytime Lights, Nighttime Lights, or Both.
  • One or two important lights in each room set to Mixed for real-time shadow casting.
  • A custom culling system (described here) that turned lights on and off depending on where the player was standing.
  • A complicated Beast settings XML file (authored via the excellent Lightmapping Extended tool) for daytime light settings, and another for nighttime settings.
  • An editor script that could set the Daytime ambient light color, move the correct Beast.xml file to the proper place in the file system, turn on the right set of lights, kick off a light map bake and then, when it was finished, move all the generated light map textures into a different folder and kick off another bake for Night.
  • A runtime script that could, in a single frame, change active lights between day and night sets, swap out the textures being used for light mapping (via LightmapSettings), set the proper LightProbes, and change the ambient light (a sketch of this swap follows the list).
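
As an illustration, here’s a minimal sketch of what that runtime swap can look like, assuming the lights are pre-sorted into day and night sets.  The field names are illustrative rather than our actual code, and LightmapData.lightmapFar is the Unity 4-era field name (later renamed lightmapColor):

```csharp
using UnityEngine;

// Minimal sketch of the single-frame day/night swap described above.
// Field names are illustrative; LightmapData.lightmapFar is the Unity 4-era
// field (later renamed lightmapColor).
public class DayNightSwapper : MonoBehaviour {
    public Texture2D[] dayLightmaps;
    public Texture2D[] nightLightmaps;
    public LightProbes dayProbes;
    public LightProbes nightProbes;
    public Color dayAmbient;
    public Color nightAmbient;
    public Light[] dayLights;
    public Light[] nightLights;

    public void SetNight(bool night) {
        // Swap the baked lightmap textures in place.
        Texture2D[] source = night ? nightLightmaps : dayLightmaps;
        LightmapData[] maps = new LightmapData[source.Length];
        for (int i = 0; i < source.Length; i++) {
            maps[i] = new LightmapData();
            maps[i].lightmapFar = source[i];
        }
        LightmapSettings.lightmaps = maps;

        // Swap baked light probes and the ambient color to match.
        LightmapSettings.lightProbes = night ? nightProbes : dayProbes;
        RenderSettings.ambientLight = night ? nightAmbient : dayAmbient;

        // Enable the matching light set (the culling system then manages
        // which of these are actually on at any given moment).
        foreach (Light l in dayLights) l.enabled = !night;
        foreach (Light l in nightLights) l.enabled = night;
    }
}
```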

This gave us pretty good results.  We got our huge scene down to seven 4096×4096 light maps, which accommodated our batching requirements.  We could dynamically swap between day and night and see the lights, ambient, and light maps instantly change.  Because almost everything was static and baked the runtime cost was low enough for us to hit 60 fps in VR on mobile platforms.  It looked good and we were pretty happy with it.

Though the final results were good, rendering light maps in this way had two major problems in Unity 4.

First, rendering light maps was slow.  Like, really slow.  20 to 30 hours on my work machine to render both day and night maps.  This wouldn’t have been so bad except that when light mapping completes the scene file is modified.  Since the entire game is implemented in one scene, nobody on the team could do any work while light mapping was running.

Second, having fourteen 4096×4096 textures in your game (along with everything else) was too much for Unity 4’s 4 GB of addressable memory.  As a 32-bit application, the editor crashed all the time under the weight of the large light maps.  Now, a 4096 texture, uncompressed with mip maps, is about 85 MB, and with fourteen of these you’re talking about over a gig of memory.  Still, it was annoying.  To continue working we had to drop the resolution of the light maps to 1024 and then write a command-line build script that resized the textures back up, made a build, and then sized them back down again, all without ever initializing the graphics system, to avoid extra memory overhead.
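
A hypothetical sketch of that batch step is below.  The asset paths and scene name are placeholders, and the editor would be launched with -batchmode (and -nographics, where supported) so the graphics system never initializes:

```csharp
using UnityEditor;

// Hypothetical sketch of the command-line build step: raise the lightmap
// import size, build, then drop it back down so the editor stays usable.
// Asset paths and the scene name are placeholders, not our actual project.
public static class LightmapBuildStep {
    static void SetMaxSize(string[] paths, int size) {
        foreach (string path in paths) {
            TextureImporter importer =
                (TextureImporter)AssetImporter.GetAtPath(path);
            importer.maxTextureSize = size;
            AssetDatabase.ImportAsset(path);  // re-import at the new size
        }
    }

    public static void Build() {
        string[] lightmaps = { /* lightmap texture asset paths */ };
        SetMaxSize(lightmaps, 4096);  // full resolution for the shipping build
        BuildPipeline.BuildPlayer(new string[] { "Assets/Main.unity" },
            "Builds/DeadSecret.apk", BuildTarget.Android, BuildOptions.None);
        SetMaxSize(lightmaps, 1024);  // back down for day-to-day editor work
    }
}
```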

With those caveats aside, lighting in Unity 4 worked well.  We shipped on Gear VR in October 2015 based on Unity 4.  But, for PC and upcoming PS4 releases, we knew we needed to finally ditch Unity 4 and move on to 5.

Unity 5 Lighting Woes

The good news was that, other than lighting, almost everything about our project worked without modification under Unity 5.  We had a couple of scripts that needed updating, and the transition exposed a few race conditions in the game that hadn’t manifested earlier.  But DEAD SECRET was playable under Unity 5 after just a day or two of work.

Same scene, same lights, same mesh in Unity 4 vs 5. We get those jaggy shadows on all sorts of edges throughout the game. Note that actual Unity 4-based versions have 4x more map res than shown here.

Lighting, on the other hand, was pretty busted.  Over the course of several months we worked to recreate the lighting quality we had in Unity 4 using Enlighten, and the road was not easy.  Along the way I filed more bugs against Unity than I had in five prior years of Unity game development.  Not only is Unity 5 lighting different from its predecessor, it’s still a work in progress.  The main challenges we face under Unity 5 are:

  • Light map rendering is, for our scene, about 5x slower than it was in Unity 4.  That’s almost a week of render time on my work machine.  We bought a new computer just to bake light maps.
  • Mixed lights do not work properly (case #750836).  To cast dynamic shadows against light mapped surfaces in DEAD SECRET we end up lighting all of the geometry twice, at a rather enormous frame-time cost.  We can only get away with it because the game is so efficient in other areas.
  • Though light map information is no longer stored in the scene (good!), it’s now stored in an opaque structure called LightingData, which overrides scene parameters and limits the control we have over our scene (bad!).
    • LightingData only stores information relevant to the last bake.  In particular, it stores which lights were applied to the light map and which were not.  This has a number of bad side-effects:
      • Changing a light from baked to realtime has no immediate effect (case #758744, closed as “by design”).  This means you can’t see what lights will look like, even in real time, without kicking off another bake (which, as above, is hours or days of your life gone).
      • You can no longer create multiple sets of light maps from different sets of lights in the same scene. Rendering multiple light map passes causes information about which lights were used in the bake, now stored only in LightingData, to be lost.  This means that even if you swap light map textures at runtime, some of your lights will behave as unbaked realtime lights because they don’t know that they were accounted for in a previous bake.  To work around this we actually have to bake lights in three passes now: once for Day, once for Night, and once with all lights on just to generate a LightingData struct that works. This also means we can’t see what our lighting looks like in the scene view any longer.
      • Light.alreadyLightmapped, which ostensibly serves to control which lights were baked into the current set of light maps, is overridden by LightingData, making it useless.
  • “Ambient” light isn’t actually ambient in Unity 5 (case #753023).  In Unity 4, ambient light is just a color modification applied to all pixels, which allows you to control the minimum darkness of a scene.  In Unity 5, ambient behaves as if there is a glowing sphere around the outside of the world, with light emitting from it equally across its surface.  The result is that ambient light is occluded by geometry: if you make a box and put the camera inside it, the interior will be absolute black regardless of the ambient light color or intensity.  This significantly changed the look of DEAD SECRET, and we struggled for months to undo it.  In the end the solution was to hack old-school ambient back into the standard shader.  It’s dumb: the standard shader supports all kinds of different lighting modes, controlled by #ifdefs, and adding support for “legacy” ambient is only a one- or two-line change.  Unity could easily support old-style ambient the way it supports other lighting modes.  When I asked them about it I was told, “old ambient was a hack, that’s not how lighting really works,” which I thought was a pretty ignorant answer.  I don’t care how lighting “really works,” I care about realizing the art style my art director has selected.  I care about compatibility with years of development spent in Unity 4.  For new projects there are some advantages to the new ambient lighting scheme, but failing to support the old system cost us several months of dev time.
  • Lighting is just sort of generally busted in Unity 5.  I filed bugs about light mapping overwriting finalgbuffer shader output when rendering in deferred (case #757945), and about LightmapEditorSettings.resolution changing meaning from “baked resolution” to “indirect resolution” (case #753022), which caused our baking tools to set insanely high indirect resolution values and hang the light mapper.  There used to be a way to toggle light maps on and off in the scene view, but that’s gone now.  The errors that the light mapping tool generates only make sense if you happen to be an expert in what the heck Enlighten does.  Do you know what it means when there’s a “light transport” error?  Or what is happening when it sits on the “clustering” phase for 36 hours straight?  I’m sure there are experts out there who get this, but it’s certainly not documented and the errors themselves don’t give a whole lot of hints.

Our scene view is a mess under Unity 5. Due to LightingData hacks we can’t see real lighting until we hit play.

On the upside, Unity 5 is a 64-bit app and no longer crashes because of large light map textures.  But our lighting takes longer to bake, took us months to set up properly, and looks significantly worse than the Unity 4 build of the same scene.  The realtime GI features of Enlighten look nice, but as a mobile and VR developer I have no use for them today and am unlikely to have any use for them at any time in the next few years.  Therefore my conclusion is that the move from Beast to Enlighten has been, for developers like ourselves, a disaster.

I do think that Unity understands that the situation isn’t good.  I’ve been told that mixed lights are expected to work again in Unity 5.4.  I’ll find out if that’s true when 5.4 comes out of beta and becomes the stable branch.  A new light mapper was announced at GDC this year, and the demo they showed was impressive.  But since there was no hint of a release date I expect it won’t be usable for at least a year.  Going forward, we won’t need to do the Unity 4 -> Unity 5 transition ever again (and, per this experience, the cost/benefit of upgrading our other old games is deeply negative, so those games are effectively deprecated).  New games written against Unity 5 (including our next super-secret mobile VR project) should be easier to manage.  Maybe one day Unity scene loading on mobile will get fast enough that I can actually use multiple scenes without significant loads between them, which would ease the burden put on the light mapping system.

Speaking with other developers, I’ve found that a bunch of folks have similar issues with Unity 5’s new approach to lighting.  Some are doing their light baking outside of Unity, and a few have gone so far as to implement their own light mappers.  Folks using the realtime GI stuff also have complaints, although theirs are different.  I suspect the lighting team at Unity is under significant pressure from a bunch of different sources, and I don’t envy that position.  Here’s hoping the situation improves soon.

Posted in dead secret, unity

Dead Secret Summer Sale!

Chris

Dead Secret is now on sale everywhere!  Through July 4 you can get Dead Secret for Steam, Rift, or Gear VR for less than $10!  Don’t wait, snag it today!

Steam Store Page
Oculus Rift Store
Oculus Gear VR Store


Posted in dead secret, virtual reality

Stealth Education and Video Game Chautauqua

Chris

If you signed up for the Dead Secret mailing list or follow us on Twitter, you might have heard of the DEAD SECRET Puzzle Challenge.  Each puzzle is unlocked by a YouTube streamer and anybody can submit an answer during the few hours that each puzzle is open.  The first ten people to submit correct answers get a copy of Dead Secret for free.

At the time of this writing three of the five puzzles have been unlocked, and so far the response has been phenomenal.  The puzzle questions are designed to be just hard enough that a quick Google search will not yield the answer.  Some of them also have another purpose: to force respondents to learn a little tidbit about something that they might otherwise never have encountered.  So far topics covered have included pioneering psychologists, binary numbers, and the historical underpinnings of a classic Japanese folktale.


A chautauqua. Relating it to this article is a task I leave to you.

Most folks who submit an answer will just find the information they need, type it into the field, and hit “send.”  But a few might keep their research tabs open to be read in greater detail later.  A yet smaller audience might become interested in what they’ve found and spend some time learning more about it.  This is my secret goal.  My hope is that, as people trace the story paths we’ve laid, a few will notice a back alley, explore it, and discover a fascinating new world.

In solving Puzzle #3 perhaps somebody will read Hoichi the Earless, a Japanese folktale about a blind minstrel who is bewitched into playing for ghosts.  To complete the puzzle maybe they’ll discover that the ghosts he’s playing for are the deceased Taira clan, who are the subjects of the story Hoichi sings about.  Maybe they’ll realize that Hoichi’s temporary home, Amidaji Temple, is located on the straits of Shimonoseki, which is where the decisive naval battle that ended the Taira clan took place in the 12th century. The temple still stands there today, although it was converted to a Shinto shrine and its name was changed during the Meiji era.  Maybe one of the respondents to our quiz will go there someday.

Or maybe not.  There’s no way to know if we can really spur learning with the offer of a free Steam code to a horror game.  As long as folks are having a good time it’s not important that we cram some stealth education down their throats.  But if we can tickle the interest of even a few and lead them down a path to opportunities for learning and deeper thought, they’ll remember us later.  Maybe we will have enriched their lives, even just a tiny bit.  Seems worthwhile to try.

Posted in dead secret, game design

Dead Secret Launching on 3/28!

Chris

DEAD SECRET, Robot Invader’s seventh video game, will launch for Desktop and VR on March 28, 2016!  DEAD SECRET is a mystery / horror game that takes place in rural Kansas in 1965.  Here are all the details on the launch:

Desktop and VR Versions

We’re shipping two different versions of DEAD SECRET: a non-VR, Desktop version via Steam, and a VR version via the Oculus Store.  Wherever you choose to buy it you’ll get both versions, either via a hybrid build of the game or a free unlock code.  If you pre-ordered DEAD SECRET on our web site we’ll send you codes for both the Steam and the Oculus versions.

Here’s DEAD SECRET on Steam: http://store.steampowered.com/app/402260

Soundtrack

We’re also pleased to announce that Ben Prunty, the intrepid composer of the DEAD SECRET score (and many others, including FTL and Gravity Ghost) is releasing the soundtrack on Bandcamp and Steam.  You can listen to the title track here!

Reviews, Let’s Plays, and More

Since launching the Gear VR version of DEAD SECRET late last year the response has been overwhelmingly positive.  Scott Hayden at Road to VR called Dead Secret “by far one of the longest, and most engaging VR experiences I’ve ever had—mobile or otherwise,” while VRGiant named it a “must play,” and Gamezebo called it an “unforgettable experience.”  Time Magazine labeled Dead Secret “captivating” and “deeply creepy.” Finally, DEAD SECRET was nominated for “Best VR Game” at the IMG Awards.  Winners are announced next week, and we’ve got our fingers and toes double-crossed.

User feedback has been stellar as well.  Our analytics show that players are spending hours playing DEAD SECRET, and the title has managed to remain one of the top-ten best-selling applications on Gear VR for almost its entire tenure on that store.  We’re pretty happy about our 4.5 / 5 star rating as well.

We’ve started to see Let’s Play videos of DEAD SECRET appear.  These are great for giving you a taste of the game play.  Here’s a video played in VR and here’s another playing the Desktop version.  Many thanks to the folks recording and uploading these videos!


What about PlayStation?

We’re still working hard on a version of DEAD SECRET for PlayStation platforms.  We don’t have a release date to announce yet but will be in touch about it soon!

We need your help!

If you like weird games, VR or otherwise, please help us make the launch of DEAD SECRET a success.  Tell your friends, write a tweet, or post the trailer somewhere–anything you can do to help us get the word out is incredibly valuable.  We are a small team and we are funded entirely with the sales of our games, so we very much appreciate your support.  More information and screenshots are at http://deadsecret.com.


DEAD SECRET ships for PC and VR in 17 days!  See you soon!

The Robot Invader Team

Posted in dead secret

Dead Secret Nominated for Best VR Game

Chris

Dead Secret is a finalist in the IMG Awards for Best VR Game!

We’re super excited to be in the running!  Please vote for Dead Secret!

Posted in dead secret, mobile games, Robot Invader

Comfortable VR Movement in Dead Secret

Chris

One of the big unsolved problems in virtual reality game design is movement.  Standing still feels great, but things go south when you start to move.  Many developers have experimented with standard first-person shooter movement systems in virtual reality games, and the result is always nauseating. Even worse, some seem unwilling to admit that their standard FPS controls feel terrible in VR. “It feels fine to me,” is the refrain of a person who doesn’t understand simulation sickness, hasn’t done any testing, and isn’t taking virtual reality seriously.  When such dismissals are code for “it’s too much work to change my game design to accommodate comfort,” it might be an indication that the game isn’t a fit for VR at all.

Dead Secret is a first-person game with first-person movement, and we worked really hard to ensure that movement is comfortable.  We’ve tested our solution on a wide audience–a large number of people, as diverse as we could manage–and have exceptionally positive results.  Dead Secret‘s movement system isn’t perfect, and it’s not a general solution for all first-person movement in games, but it works very well for our purposes.


Before we get into the details of Dead Secret‘s locomotion system, it is worth reviewing the physiology behind motion and simulation sickness.  There are a ton of triggers for motion sickness, but the common one for VR is called vection, and it occurs when your brain encounters a disparity between the information reported by your vestibular system (that’s the part of your inner ear that keeps you balanced) and the information coming from your eyes.  When your inner ears and your eyes disagree it can feel like the world is moving while you are not.  Vection probably evolved as an anti-poison response; apparently there are a lot of toxins that will disrupt your vestibular system, and so your brain’s first move is to make you vomit.  This is why you can get sick by reading in a car: your ears report the motion of the vehicle but your eyes, which are focused on the page, do not corroborate it.

Of course, there’s more to it than that.  Your body is incredibly complicated and individual responses vary quite a bit.  There’s a ton more to learn about how virtual reality can confuse your brain precisely because it is so convincing.  For a lot more detail, I recommend this fantastic talk by Oculus’ Richard Yao.

That said, understanding the basics of vection can help us define some base principles for VR movement.  Vection occurs when your ears and your eyes disagree.  In VR, any movement that you do not make yourself is a potential source of vection.  But there’s some hope: as Yao points out, your vestibular system can only detect acceleration, not linear velocity.  When you move at a fixed speed your inner ears do not detect any change.  Therefore we should be able to avoid vection if we simply remove all acceleration from movement.

If that seems like a tall order, I have some bad news for you.  Just about every interesting camera movement you might perform in a traditional first-person game causes acceleration.  One of the main reasons that naively implemented FPS control schemes feel so bad in VR is that they usually rely on mechanics like right-stick body rotation.  Rotation in place requires angular acceleration, which your ears can absolutely feel, and when you do it to the player in VR it feels awful.  FPS run bouncing, originally invented to simulate the shifting of weight from foot to foot as your avatar runs, feels particularly bad because it’s a parabolic motion–that’s 100% acceleration, people.  Don’t even get me started on canned camera animation; the fastest way to suck somebody out of a VR experience is to take away their head tracking.

Now, if I were to suggest that a traditional game remove all acceleration from its camera, I’d be laughed out of the room.  Camera animation is a big part of the experience in a traditional first-person game.  But we’re not making a traditional game, we’re making a VR game, and the rules are different.  Rather than blindly applying grammar from a different medium we have to come up with theories and test them, which is what we spent the better part of a year doing.  The results have very little to do with what works in traditional games, but a lot to do with what works in our VR game.

Here are the rules for Dead Secret’s camera system:

  • No acceleration, ever.  Linear movement only.
  • No rotating the camera (other than rotation coming from the HMD).
  • You can only move in straight lines. Prefer not to change direction while moving.
  • Motion should be short.  Rule of thumb is to keep all motion to bursts of 5 seconds or less.
  • Never ever take away head tracking or lock something to the view.
  • Maintain frame rate at all times.

Zero acceleration or artificial rotation.

That last one is pretty important.  In our tests we were able to remove almost all vection from testers by removing acceleration and rotation, but folks still felt bad if the frame rate started to drop.  Latency on the HMD is another vector for sickness, and it’s one that can bite you regardless of how careful you are with your movement system.  We worked hard to keep the frame rate high throughout Dead Secret.
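
In code, the no-acceleration rule reduces to something very simple.  Here’s a minimal sketch of constant-velocity movement between two fixed points (the names are illustrative, not our actual implementation):

```csharp
using System.Collections;
using UnityEngine;

// Minimal sketch of acceleration-free movement: constant speed, straight
// line, no ease-in or ease-out. Names are illustrative, not Dead Secret's code.
public class LinearMover : MonoBehaviour {
    public float metersPerSecond = 1.5f;

    public IEnumerator MoveTo(Vector3 destination) {
        while (transform.position != destination) {
            // MoveTowards never overshoots and applies no easing curve,
            // so velocity stays constant for the whole (short) trip.
            transform.position = Vector3.MoveTowards(
                transform.position, destination,
                metersPerSecond * Time.deltaTime);
            yield return null;
        }
    }
}
```
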
To make our game actually playable within those rules the movement scheme had to change quite a bit.  You investigate the scene of a perfect murder in Dead Secret by moving between fixed positions in the room.  That had been part of the design since Day 1.  But to accommodate the requirements of VR the layout and design of our rooms changed dramatically.  I wrote a bit before about the use of space in Dead Secret’s level design, if you’re interested.

There are a number of odd side-effects to this design.  For example, the player can turn 180 degrees and walk to their destination backwards.  The system relies upon the player rotating their whole body to look around the room, but we can’t expect every player to be sitting in a swivel chair.  We added controller-based rotation to accommodate this, and implemented rotation as a 40-degree click with a “blink” transition.  This doesn’t trigger vection because your brain never sees any angular movement (via “change blindness,” which Yao covers in his talk).  And we found that while no testers reported feeling nauseous or sick, about 1% felt disoriented by having to actually turn their body to see things behind them.  For these folks we added a “comfort mode” which removes all motion completely.
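
A sketch of what such a click-turn can look like; the fade material here is an assumed full-screen black overlay, standing in for whatever fade mechanism a game already has:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch of the 40-degree "blink" turn: fade to black, snap
// the rig's rotation while the view is dark, fade back in. No angular
// motion is ever rendered, so there is nothing for vection to latch onto.
public class SnapTurner : MonoBehaviour {
    public float clickDegrees = 40f;
    public float fadeSeconds = 0.1f;
    public Material fadeMaterial;  // assumed full-screen black overlay

    public IEnumerator Turn(int direction) {  // -1 for left, +1 for right
        yield return StartCoroutine(Fade(0f, 1f));           // view goes black
        transform.Rotate(0f, clickDegrees * direction, 0f);  // instant snap
        yield return StartCoroutine(Fade(1f, 0f));           // fade back in
    }

    IEnumerator Fade(float from, float to) {
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime) {
            SetAlpha(Mathf.Lerp(from, to, t / fadeSeconds));
            yield return null;
        }
        SetAlpha(to);  // land exactly on the target alpha
    }

    void SetAlpha(float a) {
        Color c = Color.black;
        c.a = a;
        fadeMaterial.color = c;  // overlay alpha drives the blink
    }
}
```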

Zero reports of nausea. Not even Sharapova.

The last thing we did to ensure our camera system was comfortable was to test the heck out of it.  I’ve read that about 10% of the population is susceptible to motion sickness.  In order to properly test a system you need a large enough testing group to identify folks who might be within that ten percent.  We put Gear VR devices on as many people as we could to help verify our design.  As we’ve iterated the design we’ve been able to push the number of people reporting discomfort to nearly zero.

There’s a lot more experimentation to do in this area.  One idea, which we haven’t tried, is to black out the view at the start and end of a movement.  This is based on the theory that even if the camera is moving at a constant speed, the brain can infer acceleration just from visual input.  Another approach, which I’ve seen work well in other games, is to black out the peripheral view of the horizon (e.g. by placing the view in a cockpit).  The brain apparently uses motion in your peripheral vision to compute velocity, so denying it that information can lead to a more comfortable experience.  There are also folks experimenting with transitions between first-person and third-person camera angles for the purposes of movement.

It’s almost impossible to guess how a system will feel in VR without implementing it.  Dead Secret‘s movement system was designed by iteration–we tested and discarded many variants before we hit upon a generally comfortable model.  And that’s one of the amazing things about working in VR today–there’s so much design space to explore. Tried-and-true tricks from traditional games might not work in VR, but there are a ton of new tricks out there, just waiting to be found.

We have a lot more to say about Dead Secret in the very near future, so if you’re interested check us out on Twitter, Facebook, or sign up for the mailing list.

Posted in dead secret, virtual reality

Dead Secret Diary: Locomotion and Space

Chris

I gave a talk at GDC 2015 about designing our new title, Dead Secret, for mobile VR platforms like the Gear VR.  That seemed to go over well, so I thought I’d write a little bit about the design of the game itself.

Dead Secret is a murder mystery that takes place entirely within the home of the victim.  Your goal is to search the house for clues, piece together the events leading up to the death, and finally name the killer.  In designing this game one of the main challenges has been to define how the physical space, puzzles, and pacing interact.  This can be thought of as the problem of density: what is the effect of packing lots of information into a small space compared to spreading it out over a larger space?


To some extent, this question is answered for us by other design decisions we’ve made.  The house in Dead Secret is based on real architectural plans for a home of the proper era and location.  It’s not a mansion, it’s a two-story home with one bathroom, two bedrooms and, ahem, a basement.  We’ve made some modifications here and there, and some of the game takes place outside the home itself.  But the space is relatively small.

More importantly, individual rooms are sized the way they should be, which means that once we fill them with bookshelves, tables, cupboards, and esoteric 19th-century mechanical instruments, there’s not a whole lot of space to get into a firefight, parkour up a wall, or even sneak through some air ducts. This house is old enough that it doesn’t even have air ducts.

By electing a dense, cramped environment, we implicitly closed the door on things like shooting and platforming.  It’s a good thing, too, because those sorts of interactions typically rely on locomotion systems that probably make people sick in VR.  Instead, Dead Secret is about exploration, about finding clues, and about solving puzzles.  For this, the tight, contained space of the house works really well.  We can pack a ton of detail into each room and simplify our locomotion system to encourage methodical investigation.  One of the most surprising aspects of VR for us is the sense of spaciousness of virtual spaces.  When the scale is right, an environment that appears noisy and cluttered on a screen feels open and airy in VR.

The tight coupling of rooms also lets us engage in a level design pattern that I call recursive unlocking.  Recursive unlocking describes a map design with tightly packed rooms connected by doors that are initially locked.  The space available to the player starts out small, but as they unlock one room after another it begins to unwind like a shell.  Rooms interconnect and shortcuts are created, and traversing the space efficiently becomes a puzzle in and of itself.  Resident Evil is the archetypical example of this pattern, and if you’re interested you can read my analysis of recursive unlocking in that game.


Since our crime scene has many fewer rooms than Raccoon City’s Spencer Mansion, the implementation of recursive unlocking in Dead Secret is focused on aligning new areas to beats in the narrative, and eventually reconnecting them back to a common space.  The player will visit a new space and find themselves unable to return to the area they were previously in.  After resolving the new space they find a path back to an area that they know, and eventually into another new space.  Thanks to the density of content in each space, this approach lets us cram the whole game into just one house.

A highly dense space does have disadvantages, though.  Locomotion needs to be precise, and therefore ends up being a bit slower than in other forms of games. In a detailed environment, finding items to use for puzzles can be tricky because there is so much visual information to process.  Puzzles are used to gate progression, so we need to organize our puzzle dependency charts to prevent frustrating shelf moments at all costs.  Puzzle interfaces need to be fairly expressive, so we end up writing a lot of one-off code for specific puzzle interactions.  Recursive unlocking helps us keep items local to a common area of relevance, but wandering has a higher cost in Dead Secret than in other games in this genre (due to being in VR and also because we’ve traded control flexibility for environment detail), so we sometimes need to be more heavy-handed about progression than I would prefer.

Still, this type of experience seems perfect for VR.  The trade-offs required to make the home of our murder victim interesting and compelling are generally things that are good for VR anyway. We want you to be in this house, and while VR technology can open the front door, it’s still up to us to make the floorboards creak as you cross the threshold.

Look for Dead Secret later this year on Gear VR, and on other platforms thereafter.

Posted in dead secret, game design

GDC Talk: Designing for Mobile VR in Dead Secret

Chris

It’s been a few months since the 2015 Game Developers Conference was held in San Francisco, but we’ve been so busy with Dead Secret that we barely noticed.  I gave a talk about the game, and how we changed it dramatically to meet the requirements of VR, which the kind folks who run the conference have posted for free.  It’s only 25 minutes, but if you’re short on time then UploadVR has a quick summary.


Posted in dead secret, game design, virtual reality

Dead Secret at GDC

Chris

Hey! We’re going to the Game Developers Conference in March and we’ll be talking about Dead Secret.  The topic is designing for mobile VR, and the work we went through to convert Dead Secret from a tablet game to a virtual reality experience.  Here’s the link:

http://schedule.gdconf.com/session/designing-for-mobile-vr-in-dead-secret

And here’s a sneak preview:

Dead Secret GDC Preview

 

See you there!

Posted in dead secret, game engineering, game industry, virtual reality

Custom Occlusion Culling in Unity

Chris

Here at the Robot Invader compound we are hard at work on our new game, a VR murder mystery title called Dead Secret.  There’s a very early trailer to see over at deadsecret.com.

Dead Secret is designed for VR devices, particularly mobile VR devices like the Gear VR.  But developing for VR on mobile hardware can be a performance challenge.  All the tricks in my last post apply, but the threshold for error is much lower.  Not only must you render the frame twice (once for each eye), but any dip below 60 fps can be felt by the player (and it doesn’t feel good).  Maintaining a solid frame rate is an absolute must for mobile VR.


For Dead Secret, one of the major time costs is draw calls.  The game takes place in the rural home of a recently-deceased recluse, and the map is a tight organization of rooms.  If we were to simply place the camera in a room and render normally, the number of objects that would fall within the frustum would be massive.  Though most would be invisible (z-tested away behind walls and doors), these objects would still account for a huge number of extraneous (and quite expensive) draw calls.  In fact, even though we have not finished populating all of the rooms with items, furniture, and puzzles, a normal render of the house with just culling requires about 1400 draw calls per frame (well, actually, that’s per eye, so more like 2800 per frame).

The thing is, you can only ever see a tiny fraction of those objects at once.  When you are in a room and the doors are closed, you can only see the contents of that room, which usually accounts for about 60 draw calls.  What we need is a way to turn everything you can’t see off, and leave the things around you that you might see turned on.  That is, we want to cull away all of the occluded objects before they are submitted to render.  This is often called occlusion culling.

There are many approaches to solving this problem, but most of them fall within the definition of a Potential Visibility Set system.  A PVS system knows the “potentially visible” set of meshes for every possible camera position in the game.  With a PVS system, we should know the exact set of geometry that you might see, and thus must be considered for render, at any given time.  Everything else can just be turned off.


A rudimentary form of PVS is a Portal System, where you define areas that are connected by passages (“portals”).  When the camera is in one area, you can assume that only that area and the immediately connected areas are potentially visible.  Portals can further be opened and closed, giving you more information about which meshes in your game world are possible to see from your current vantage point.

More complex PVS systems typically cut the world up into segments or regions and then compute the visible set of geometry from each region.  As the camera passes from region to region, some meshes are activated while others are turned off.  As long as you know where your camera is going to be, you can compute a (sometimes very large) data structure defining the potentially visible set of geometry from any point in that space.

The good news is, Unity comes with a pretty high-end PVS system built right in.  It’s based on a third-party tool called Umbra, which by all accounts is a state-of-the-art PVS system (actually, it’s a collection of PVS systems for different use cases).  If you need occlusion culling in your game, this is where you should start.

The bad news is, the interface that Unity exposes to the Umbra tool is fairly cryptic and the results are difficult to control.  It works really well for the simple scenes referenced by the documentation, but it’s pretty hard to customize specifically for the use-case needed by your game.  At least, that’s been my experience.

Dead Secret has a very simple visibility problem to solve.  The house is divided into rooms with doors that close, so at a high level we can just consider it a portal system.  In fact, if all we needed was portals there are some pretty solid-looking tools available on the Asset Store.  Within each room, however, we know exactly where the camera can be, and we’d like to do proper occlusion culling from each vantage point to maximize our draw call savings.  If we’re going to go from 1400 draw calls a frame down to 50 or 60, we’re going to have to draw only the things that you can actually see.

My first attempt at a visibility system for Dead Secret was just a component with a list of meshes.  I hand-authored the list for every room and used an algorithm with simple rules:

  1. When standing in a room, enable only the mesh objects in that room’s visibility set.
  2. When you move to a new room, disable the old room’s visibility set and enable the new room’s visibility set.
  3. While in transit from one room to another, enable both the visibility set of the old room and the new room (a minimal sketch of this scheme follows the list).
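
In sketch form, with illustrative names, the whole system is little more than this:

```csharp
using UnityEngine;

// Sketch of the hand-authored first-pass visibility system: one component
// per room holding the list of renderers visible from inside that room.
public class RoomVisibilitySet : MonoBehaviour {
    public Renderer[] meshes;  // hand-authored list for this room

    public void SetVisible(bool visible) {
        foreach (Renderer r in meshes) {
            r.enabled = visible;
        }
    }
}

// Rule 3: while in transit, both rooms' sets are enabled.
// Rule 2: on arrival, the old room's set is switched off.
public static class RoomTransition {
    public static void Begin(RoomVisibilitySet from, RoomVisibilitySet to) {
        to.SetVisible(true);
    }
    public static void End(RoomVisibilitySet from, RoomVisibilitySet to) {
        from.SetVisible(false);
    }
}
```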

This works fine, and immediately dropped my draw call count by 98%.  But it’s also exceptionally limited: there’s no occlusion culling from different vantage points within the rooms themselves, and the lists have to be manually maintained.  It’s basically just a rather limited portal system.

As we started to add more objects to our rooms this system quickly became untenable.  The second pass, then, was to compute the list of visible geometry automatically from several vantage points within each room, and apply the same algorithm not just between rooms, but between vantage points within rooms as well.  Just as I was thinking about this, Matt Rix posted code to access an internal editor-only ray-mesh intersection test function (why isn’t this public API!?), and I jumped on it.  By casting rays out in a sphere from each vantage point, I figured I could probably collect a pretty reasonable set of visible geometry.
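
The gist of that pass looks something like the following sketch.  It uses the public Physics.Raycast for clarity; the internal function Matt Rix exposed intersects meshes directly, without requiring colliders:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the brute-force pass: march a ray direction over the sphere in
// five-degree steps (with a small per-row offset to stagger scan lines) and
// record every renderer hit.
public static class SphereScan {
    public static HashSet<Renderer> VisibleFrom(Vector3 vantage, float range) {
        HashSet<Renderer> visible = new HashSet<Renderer>();
        for (float pitch = -90f; pitch <= 90f; pitch += 5f) {
            float offset = Random.Range(0f, 5f);  // stagger scan lines
            for (float yaw = 0f; yaw < 360f; yaw += 5f) {
                Vector3 dir =
                    Quaternion.Euler(pitch, yaw + offset, 0f) * Vector3.forward;
                RaycastHit hit;
                if (Physics.Raycast(vantage, dir, out hit, range)) {
                    Renderer r = hit.collider.GetComponent<Renderer>();
                    if (r != null) visible.Add(r);
                }
            }
        }
        return visible;
    }
}
```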

Shoot a bunch of rays, find a bunch of mesh, what could go wrong?

Turns out that while this method works, it has some problems.  First, as you might have predicted, it misses small, thin objects that are somewhat far from the camera point.  Even with 26,000 rays (five degree increments, plus a little bit of error to offset between sphere scan lines), the rays diverge enough at their extent that small objects can easily be missed. In addition, this method takes a long time to run through the combinatorial explosion of vantage points and mesh objects–about seven hours in our case.  It could surely be optimized, but what’s the point if it doesn’t work very well?

For my third attempt, I decided to try a method a co-worker of mine came up with ages ago.  Way back in 2006 Alan Kimball, who I worked with at Vicarious Visions, presented a visibility algorithm at GDC based on rendering a scene by coloring each mesh a unique color.  If I remember correctly, Alan’s goal was to implement a pixel-perfect mouse picking algorithm.  He rendered the scene out to a texture using a special shader that colored each mesh a unique solid color, then just sampled the color under the mouse pointer to determine which mesh had been clicked on.  Pretty slick, and quite similar to my current problem.

To turn this approach into a visibility system I implemented a simple panoramic renderer.  To render a panorama, I just instantiate a bunch of cameras, rotate them to form a circle, and adjust their viewport rectangles to form a series of slices.  Then I render all that into a texture.  For the purposes of a visibility system it doesn’t actually matter if the panorama looks good or not, but actually they look pretty nice.
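
Here’s a rough sketch of that slice setup.  The slice count and field of view are illustrative; the key constraint is that each camera’s horizontal coverage must equal its share of the 360-degree circle:

```csharp
using UnityEngine;

// Sketch of the panoramic render: n cameras rotated to form a circle, each
// drawing into its own vertical viewport slice of one RenderTexture.
public class PanoramaRenderer : MonoBehaviour {
    public int slices = 8;

    public void RenderPanorama(Vector3 vantage, RenderTexture target) {
        float sliceDegrees = 360f / slices;
        for (int i = 0; i < slices; i++) {
            GameObject go = new GameObject("PanoSlice" + i);
            Camera cam = go.AddComponent<Camera>();
            cam.transform.position = vantage;
            cam.transform.rotation = Quaternion.Euler(0f, i * sliceDegrees, 0f);
            // The vertical FOV must be chosen so that the camera's
            // *horizontal* FOV (a function of the slice's aspect ratio)
            // equals sliceDegrees. 90 here is an illustrative value.
            cam.fieldOfView = 90f;
            cam.rect = new Rect((float)i / slices, 0f, 1f / slices, 1f);
            cam.targetTexture = target;
            cam.Render();
            Object.DestroyImmediate(go);
        }
    }
}
```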

The second bit is to change all of the materials on all of the meshes to something that can render a solid color, and then assign colors to each based on some unique value.  The only trickiness here is that the color value must be unique per mesh, and I ended up setting a shader keyword on every material in the game, which meant that I couldn’t really leverage Unity’s replacement shader system.  This also means that I must manually clean the materials up when I’m done, and be careful to assign each back to sharedMaterial so that I don’t break dynamic batching.  Unity assumes I don’t know what I am doing and throws a load of warnings about leaking materials (of which, of course, there are none).  But it works!
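
The color assignment itself just packs a per-mesh index into the 24 bits of an RGB color.  A sketch of one way to do it (the actual system set a shader keyword on the existing materials; here, for brevity, new solid-color materials are swapped in, with the unlit shader passed in as an assumption):

```csharp
using UnityEngine;

// Sketch of the ID-to-color pass: give each renderer a unique solid color
// derived from its index, remembering the original sharedMaterials so they
// can be restored (preserving batching) after the visibility bake.
public static class ColorizeMeshes {
    public static Material[][] Apply(Renderer[] renderers, Shader solidColor) {
        Material[][] saved = new Material[renderers.Length][];
        for (int id = 0; id < renderers.Length; id++) {
            saved[id] = renderers[id].sharedMaterials;
            // Pack the 24-bit index into RGB, one byte per channel.
            Color c = new Color(
                ((id >> 16) & 0xFF) / 255f,
                ((id >> 8) & 0xFF) / 255f,
                (id & 0xFF) / 255f);
            Material m = new Material(solidColor);  // assumed unlit shader
            m.color = c;
            renderers[id].sharedMaterial = m;
        }
        return saved;  // caller restores these when the pass is done
    }
}
```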

I would actually play a game that looked like this.

Once the colorized panorama is rendered to a texture (carefully created with antialiasing and all other blending turned off), it’s a simple matter to walk the pixels and look each new color up in a table of colors-to-mesh.  The system is so precise that it will catch meshes peeking through polygon cracks, so I ended up adding a small pixel threshold (say, ten pixels of the same color) before a mesh can be considered visible.
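
The readback is then a pixel walk with a count threshold, roughly like this sketch (it assumes the panorama has already been copied out of the RenderTexture into a readable Texture2D):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the readback: count pixels per color, and only report a mesh as
// visible once it crosses a small threshold, so single pixels peeking
// through polygon cracks don't count.
public static class PanoramaReader {
    public static List<Renderer> VisibleMeshes(Texture2D panorama,
            Dictionary<Color32, Renderer> colorToMesh, int threshold) {
        Dictionary<Color32, int> counts = new Dictionary<Color32, int>();
        foreach (Color32 pixel in panorama.GetPixels32()) {
            int n;
            counts.TryGetValue(pixel, out n);
            counts[pixel] = n + 1;
        }
        List<Renderer> visible = new List<Renderer>();
        foreach (KeyValuePair<Color32, int> entry in counts) {
            Renderer r;
            if (entry.Value >= threshold &&
                colorToMesh.TryGetValue(entry.Key, out r)) {
                visible.Add(r);
            }
        }
        return visible;
    }
}
```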

The output of this function is a highly accurate list of visible geometry that I can plug into the mesh list algorithm described above.  In addition, it runs about 60x faster than the ray cast method (yep, seven minutes instead of seven hours for a complete world compute) before any optimizations.

What I’ve ended up with is an exceptionally simple (at runtime), exceptionally accurate visibility system.  Its main weakness is that it only computes from specific vantage points, but the design of Dead Secret makes that a non-issue.  It doesn’t handle transparent surfaces well (it sees them as opaque occluders), but that’s not an issue for me either.

The result is that Dead Secret is running at a solid 60 fps on the Gear VR hardware.  We have enough headroom to experiment with expensive shaders that we should probably avoid, like mirrors (the better to lurk behind you, my dear).  This performance profile gives us space to stock the house with details, clues, a dead body or two, and maybe even a psycho killer.  Ah, but, I mustn’t spoil it for you.  I’ve already said too much.  Just, uh, keep your eyes peeled for Dead Secret in 2015.

 

Posted in game engineering, unity