Dev Diary: Branching Dialog

Chris

One of the new systems we’ve built for DEAD SECRET CIRCLE is a branching dialog system.  As you explore the world you periodically interview suspects, and depending on your choices the conversation can go many different ways.

Branching dialog systems are pretty common, and ours is not particularly exotic. But even the implementation of a straightforward branching conversation system can be fairly complicated. For our system, I was interested in finding or building tools that would help me explore the flow of the dialog, and its various branches, in real-time as I was editing.  For me, the hardest part of this system is actually writing the dialog itself, so I needed a toolchain that would let me quickly edit and revise.

Building a Dialog System

There are a number of tools available for creating branching dialog trees.  I looked at Yarn, a Twine-like editor built for Night in the Woods.  Chat Mapper is a very serious–and very complicated-looking–tool that has an order of magnitude more features than I need.  I even realized that I wrote a markup syntax for branching dialog a decade ago that I’ve never used for anything.  Though there are a lot of tools out there, it was hard to find something that matched my needs and was powerful enough to justify not writing something myself.
In the end I went with Inklewriter, a web-based tool that allows you to quickly lay out (and play through) branching dialog trees. It was written as a Twine competitor, I think, but the feature set was just the right fit for DEAD SECRET CIRCLE.  It supports named variables, conditional branches, divert nodes (where dialog flow is diverted to another node in the middle of playback), and can output json.  The interface is simple but powerful, and I can share fully playable dialog sequences with others before we push it into the game.  Overall, it’s a smart tool made by developers who’ve done a lot of interactive storytelling themselves.

The next step was to write a Unity importer for Inklewriter’s output json.  Inkle Studios, the authors of Inklewriter, actually supply their own Unity plugin for Ink, their (much more powerful, and complex) interactive novel language, but I needed to roll my own to use the simpler Inklewriter output.  I did this by creating a custom AssetPostprocessor for text files that looks for json files and parses them.  The parser itself is straightforward–mostly just a translation of the json node hierarchy to a similar ScriptableObject graph which is written to disk as an asset file.  It also pulls all the strings out and puts them into a separate dictionary system, which provides key/value pairs for all strings in the game and is our main infrastructure for localization. My workflow is to simply save Inklewriter’s output json into my Unity Assets folder and then point the runtime dialog system at the auto-generated asset, the root of which is the first node in the conversation.  At runtime, a dialog manager “runs” each node by displaying the node’s text, flipping node variables, and presenting the player with response choices, which select transitions to other nodes.  Text is pulled from the dictionary, and voice acting samples can also be pulled from another database using the same key.  I even made a nifty visualization tool out of Unity’s undocumented graph API.
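For the curious, here is roughly the shape of that importer hook.  This is a minimal sketch and not our actual code: the DialogNode fields are stand-ins rather than Inklewriter’s real schema, and the json parsing itself is elided.

```csharp
using UnityEngine;
using UnityEditor;

// Illustrative node asset. These field names are assumptions for the sketch,
// not Inklewriter's actual json schema.
public class DialogNode : ScriptableObject
{
    public string textKey;             // key into the localization dictionary
    public string[] choiceKeys;        // dictionary keys for each player response
    public DialogNode[] choiceTargets; // node each response transitions to
}

// Fires after any asset import; we pick out json files and bake them into
// ScriptableObject graphs saved next to the source file.
public class DialogPostprocessor : AssetPostprocessor
{
    static void OnPostprocessAllAssets(
        string[] imported, string[] deleted, string[] moved, string[] movedFrom)
    {
        foreach (string path in imported)
        {
            if (!path.EndsWith(".json"))
                continue;
            TextAsset source = AssetDatabase.LoadAssetAtPath<TextAsset>(path);
            if (source == null)
                continue;
            DialogNode root = ParseStory(source.text);
            AssetDatabase.CreateAsset(root, path.Replace(".json", "_dialog.asset"));
            // Child nodes would be attached with AssetDatabase.AddObjectToAsset
            // so that the whole graph lives in a single asset file.
        }
    }

    static DialogNode ParseStory(string json)
    {
        DialogNode root = ScriptableObject.CreateInstance<DialogNode>();
        // ... walk the json node hierarchy, create a DialogNode per entry,
        // push strings into the localization dictionary, wire up transitions ...
        return root;
    }
}
```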

Writing Branching Dialog

Once the tools and runtime were in order I was faced with the real task, which I had been avoiding: writing the actual dialog.  There are so many approaches to branching conversation design that it’s hard to know where to start.  I settled early on a fairly standard “call and response” model, in which you ask the NPC a question from a list of options and they give you an answer, because it mapped well to the story frame of interviewing suspects.  But where to go from there?  Should I encode a subtle moral choice into each question a la Mass Effect?  Should I provide two plausible options and two joke options a la Disaster Report (I call this the “DMV Written Test Design”)?  Should I allow the player chances to ask questions more than once, or should choosing an option automatically close other options off (a la many games, but the best example is Firewatch)? Should the player’s question choices inform the personality of their player character?  The design of the conversation itself was much more daunting than the code to run it.


The main reason to have a branching dialog system to begin with is to deliver information to the player in a way that gives her some agency and (hopefully) engenders some empathy for the NPC.  Some information is critical, and I can’t allow the player to miss it by making bad dialog choices.  Other information is optional, available to the players who choose to delve in further, who ask the game for more detail.  The DEAD SECRET series is generally built upon a philosophy of narrative levels-of-detail: some folks will simply skim across the surface while others will choose to dive deep, and both should have a good time.  I wanted the dialog system to be the same.

In the end I settled on a model in which the player needs to make decisions, but there are no bad choices.  Once the player has chosen a question to ask, the conversation shifts in that direction, and (usually) does not return.  The player must choose which topics to broach, which bits they want to home in on, but none of the choices are wrong or bad.  They just cause different tidbits of information to be revealed, and no matter what path is taken I can ensure that the critical pieces of information are displayed.

Inklewriter’s toolset gave me enough power to author conversations with a lot of structural variance.  Some conversations loop (allowing several chances to ask the same question), others are nearly linear.  Some conversations result in significantly different revelations, others end up at the same place via different paths through the tree.  The structure is fairly free-form, which I like.  My goal is to make it feel as little like a mechanism to be reverse-engineered as possible.

Discoveries

I have never written branching dialog before, and I learned a lot in the process of writing for DEAD SECRET CIRCLE.  This stuff is probably old hat for folks that have built these types of systems before, but it was new to me.

The biggest realization I had was that I could communicate the protagonist’s personality to the player through her questions.  Communicating the protagonist’s feelings is a constant struggle for me.  She has very few opportunities to talk about herself or what she is thinking.  Her main mode of communication is commentary on things that the player examines in the game world, but these messages must be succinct and to-the-point. There’s not a lot of time for introspection.  Figuring out that I could hint at her thought process by writing questions in a certain way was a revelation for me.

I also learned how important it is to record dialog early, long before there are voice actors working on the project.  Jonny and his wife Shannon recorded all of the dialog in the game themselves, which let us test all kinds of critical systems like spatialized VO and lip synching.  But most importantly, it made it very obvious when a conversation made no sense.  Reading it out loud, with all the pauses and inflections and imperfect pronunciations that are normal to human speech, clearly separated the text that sounded natural from the text that did not.  By the time we did get voice acting done, we already knew what we wanted from nearly every line because we’d had placeholder audio in the game for months.

Speaking of voice acting, I also learned how to make life really hard for the men and women who lent their voices to my characters. Forcing them to say foreign words in languages they don’t speak was one mistake. Relying on hard-to-say-out-loud technical words (like “ideomotor”) cost us some takes.  I briefly panicked when I realized I’d written a character with an accent that I couldn’t verify the accuracy of myself.  Fortunately we were lucky enough to work with seasoned pros who got through the minefield of my dialog text without losing limbs.  But next time I’ll try to remember that actual humans have to perform the words I write out loud.

And… Scene

The branching dialog system in DEAD SECRET CIRCLE was one of the most enjoyable parts of the project for me.  I liked building the system to run it and writing the dialog itself, and I learned a bunch in the process.  The original DEAD SECRET was a bit lonesome–there’s nobody in that house but you and the killer–and allowing direct interaction with a wider cast of characters was one of our core design goals for CIRCLE.  The dialog system (and related character animation and lip sync systems, which I’ll write about another time) ended up doing nearly all the heavy lifting here, and I’m really happy with the result.

DEAD SECRET CIRCLE comes out pretty soon for both VR and traditional platforms.  There’s a Steam page up if you are interested, and a mailing list you can join if you’d like to get updates about the game.


Dev Diary: Custom Occlusion Culling in Unity Improved

Chris

Back in 2014 I wrote about the custom occlusion system we built for DEAD SECRET.  It’s a pretty simple system that works by knowing all of the places the player can stand ahead of time.  It allowed us to cut our draw call count way down and ship high-fidelity mobile VR nearly two years ago.  But DEAD SECRET CIRCLE, the sequel to DEAD SECRET that we announced last month (check the teaser!), has a lot of new requirements.  One major change is the ability to move around the environment freely, which the DEAD SECRET system didn’t support.  We needed a new way to manage occlusion for this title.

First we tried to leverage Unity’s built-in occlusion system, which is based on Umbra, an industry-standard tool that’s been around for over a decade.  But Unity’s interface to this tool is exceptionally restricted, with very few controls available.  The values that are exposed are hard to understand in terms of world units (the internet theorizes that the scale value is off by a factor of 10), and in some cases the documentation Unity provides is misleading and/or false.  While Unity’s built-in culling does work (very well!) in some cases, it is hard to understand why it fails in others.  The debug visualization adds to the confusion: when a “view line” passes straight through a wall, are we supposed to believe that the debug view is inaccurate, or that the occlusion has broken, or that this is the way it’s “supposed” to work?  After about six months of trying to get by with Unity’s occlusion system, I gave up and decided to revisit the custom tech we wrote for DEAD SECRET.

Robot Invader’s internal technology stack is called KARLOFF, and it has had a hand in every game we’ve shipped since our first in 2011.  The great thing about a mature technology stack is that we can build new tools very quickly.  Using existing KARLOFF tech, we built a new version of our occlusion system in less than a week.

Our new system is still based on rendering panoramas and color-coding geometry to find potentially visible geometry sets from a given point in space.  But for a world in which the camera can move freely, we need a lot more points.  We also need a way to map the current camera position to a set of visible geometry.  All of a sudden this system goes from being a simple occlusion calculation to computing a full-blown potentially visible set.

My goal is always to trade build time and runtime memory (both of which we have plenty of) for runtime performance (which we are always constrained by).  Therefore this system uses a 3D grid (as opposed to a BSP or kd-tree, which are common for this purpose) that can index into a set of visible geometry in O(1) at runtime.  The world is cut up into grid cells and a panorama is rendered for each.  The resulting geometry information is stored back to the grid and then looked up as the camera moves through the scene at runtime.  Simple, right?
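As a rough sketch (the names here are invented, not KARLOFF’s), the runtime lookup can be as simple as quantizing the camera position into the grid:

```csharp
using UnityEngine;

// One baked set of visible renderers per grid cell.
[System.Serializable]
public class VisibleSet
{
    public MeshRenderer[] renderers;
}

// Uniform 3D grid over the level bounds, baked offline, queried at runtime.
public class VisibilityGrid : MonoBehaviour
{
    public Vector3 origin;          // min corner of the level bounds
    public float cellSize = 1.0f;   // meters per cell side
    public int sizeX, sizeY, sizeZ; // cell counts per axis
    public VisibleSet[] cells;      // flattened 3D array, one entry per cell

    // O(1): quantize the position and index the flat array.
    public VisibleSet LookUp(Vector3 pos)
    {
        int x = Mathf.Clamp((int)((pos.x - origin.x) / cellSize), 0, sizeX - 1);
        int y = Mathf.Clamp((int)((pos.y - origin.y) / cellSize), 0, sizeY - 1);
        int z = Mathf.Clamp((int)((pos.z - origin.z) / cellSize), 0, sizeZ - 1);
        return cells[(z * sizeY + y) * sizeX + x];
    }
}
```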

Well, the devil is in the details.  There’s a trade-off between cell size, accuracy, and bake time.  Very small cells (say, 0.3 meters per side) result in highly accurate occlusion but take a long time to bake.  For DEAD SECRET CIRCLE our primary goal is to cut entire rooms and building floors that are on the other side of a wall away, not to occlude small objects within the frustum.  We can get away with a larger cell if we render occlusion from several points within the cell and then union the results into a single set.  We actually need to do two passes, one with transparent geometry hidden and the other with it opaque (in order to catch both the objects behind a transparent surface and the surface itself).  Here’s an example of these two passes rendered from four different points within a cell.

Last Visibility Render
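On the bake side, turning those panoramas into sets amounts to scanning pixels and mapping unique colors back to renderers.  Here is a simplified sketch of that inner loop, under the assumption that each renderer was drawn in a unique flat color during the bake (the lookup table is a hypothetical structure built at the same time; black is treated as background):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class PanoramaScanner
{
    // Collect the IDs of every renderer whose bake color appears in the panorama.
    public static HashSet<int> Scan(Texture2D panorama, Dictionary<Color32, int> colorToId)
    {
        var visible = new HashSet<int>();
        Color32[] pixels = panorama.GetPixels32(); // one managed array, no per-texel calls
        Color32 last = new Color32(0, 0, 0, 0);
        for (int i = 0; i < pixels.Length; i++)
        {
            Color32 c = pixels[i];
            // Neighboring pixels usually belong to the same mesh, so skipping
            // repeats of the previous color avoids most dictionary lookups.
            if (c.r == last.r && c.g == last.g && c.b == last.b)
                continue;
            last = c;
            int id;
            if (colorToId.TryGetValue(c, out id))
                visible.Add(id);
        }
        return visible;
    }
}
```

The sets from the four points, and from the opaque and transparent passes, can then be merged into the cell’s final set with HashSet.UnionWith.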

The output of this boils down to a bunch of lists of MeshRenderer pointers that get enabled and disabled as sets are selected.  It’s also necessary to do another set of renders for every occlusion portal (e.g. a door that can be opened or closed) so that we can adjust the visibility of the objects on the other side when the portal is opened at runtime.  At this point we have a fully functioning, highly accurate occlusion system that is nearly free at runtime.
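The runtime side is correspondingly tiny.  A hedged sketch, reusing the types from the grid sketch above:

```csharp
using UnityEngine;

// Enables the visible set for the camera's current cell and disables the
// previous one. With everything precomputed this is nearly free per frame.
public class OcclusionCuller : MonoBehaviour
{
    public VisibilityGrid grid;
    public Transform eye;      // camera (or HMD head) transform
    VisibleSet active;

    void LateUpdate()
    {
        VisibleSet next = grid.LookUp(eye.position);
        if (next == active)
            return; // still inside the same cell

        if (active != null)
            foreach (MeshRenderer r in active.renderers) r.enabled = false;
        foreach (MeshRenderer r in next.renderers) r.enabled = true;
        active = next;

        // Portals would hook in here: a cell near an open door selects an
        // alternate baked list that includes the geometry on the other side.
    }
}
```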

But there’s a catch: this method relies on walking a texture like the one above and picking colors to match to mesh.  At 1024×512 per panorama (which seems to be the minimum resolution we can get away with based on the size of our objects in the world), a full transparent / nontransparent pass from four points results in a 4096×1024 image.  With a 1x1x1 cell size we end up with about 450 cells for this small apartment level, which is 1,426,063,360 pixel compares.  Add in more passes for portals and this time starts to balloon.  Plus, 1x1x1 might not be small enough for perfect accuracy–we get better results with a 0.5-unit cube, which on this level takes nearly 20 minutes to compute.  I know I just wrote that I was willing to trade build time for runtime performance, but 20 minutes to compute occlusion is unreasonable.

There are probably some smart ways to tighten up the algorithm itself.  I managed to achieve a 5x speedup by optimizing just the inner pixel compare loop.  But part of the problem here is the design: the cells are a fixed size and sometimes intersect with walls.  Cells have to be fairly small to prevent objects from the other side of a wall from being pulled into the set.  Plus, levels that aren’t rectangular in shape end up with a lot of cells in dead areas the camera will never visit, rendered for no reason.

The next iteration of this system is to stop blindly mapping the entire level and instead restrict cells to hand-authored volumes.  Unity even provides a useful interface for this with the fairly mysterious Occlusion Area.  Exactly how it works with Umbra is the topic of some debate (compare Intel’s documentation about the use of Occlusion Areas to Unity’s own), but for our purposes we’re just using it as a way to size axis-aligned volumes in the world.  Each Occlusion Area produces an “island” of cells, and when within an island the camera can still find its cell in O(1).  Occlusion Islands don’t need to be the same resolution.  In fact, the cells don’t even need to be cubes any longer.  We can expose controls to adjust the granularity of the world per axis, resulting in rectangular volumes.  Why compute extra cells near the ceiling if the camera isn’t ever going to go up there?
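Structurally, an island is just the same grid with authored bounds and per-axis cell counts.  A sketch, again with invented names:

```csharp
using UnityEngine;

// A hand-authored, axis-aligned volume of culling cells. Cell counts are
// chosen per axis, so cells can be rectangular rather than cubes.
[System.Serializable]
public class OcclusionIsland
{
    public Bounds bounds;              // sized via Unity's Occlusion Area
    public int cellsX, cellsY, cellsZ; // granularity per axis
    public VisibleSet[] cells;

    public bool Contains(Vector3 p) { return bounds.Contains(p); }

    // Still O(1) within the island.
    public VisibleSet LookUp(Vector3 p)
    {
        Vector3 local = p - bounds.min;
        int x = Mathf.Clamp((int)(local.x / bounds.size.x * cellsX), 0, cellsX - 1);
        int y = Mathf.Clamp((int)(local.y / bounds.size.y * cellsY), 0, cellsY - 1);
        int z = Mathf.Clamp((int)(local.z / bounds.size.z * cellsZ), 0, cellsZ - 1);
        return cells[(z * cellsY + y) * cellsX + x];
    }
}
```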


So now we have something that looks like a real occlusion system.  It handles transparent objects, occlusion portals, and can be easily controlled on a per-area basis by breaking the world up into islands of culling information.  By tuning the granularity per island the apartment area above went from 20 minutes to bake down to a very reasonable 1.5 minutes.  This system can still be improved, and it doesn’t solve for dynamic objects at all, but already we’re seeing better results than we ever achieved after months of twiddling with Unity’s built-in occlusion system.

DEAD SECRET CIRCLE is scheduled to come out this year.  Follow us on twitter for more information, or sign up to the mailing list to get updates as they come out.

 


DEAD SECRET Released for HTC Vive

Chris

The crew at Robot Invader has been hard at work on several projects, and today we can finally announce one of them: DEAD SECRET is now available for the HTC Vive as a free update on Steam.

With this update DEAD SECRET is now compatible with the Samsung Gear VR, Oculus Rift, and HTC Vive, as well as regular monitors for folks without fancy-pants VR headsets.

To celebrate, we’re putting DEAD SECRET on sale for two weeks starting today.  At 34% off that’s less than $10!

We have more stuff to announce in the very near future, but in the mean time grab your Vive Wand, slap on a headset, and go solve a murder!



Dead Secret Diary: Lightmapping in Unity 5

Chris

DEAD SECRET makes careful use of light mapping to control the mood and tone of each room.  Careful manipulation of light and darkness was one of our key tasks in building the game, and Art Director Mike spent almost as long on lighting our scenes as he did building them.  We structured our lights and light maps very carefully to produce subtle lighting and also maximize rendering efficiency.  In the end we were pretty happy with the result.  Then we upgraded the project to Unity 5.

You may have read about other developers who spent a lot of time and money on upgrading to Unity 5, mostly because of changes to the lighting system.  Unity 5 completely replaces the light mapping system used in previous versions (Autodesk’s Beast) with a new lighting system (Geomerics Enlighten) that specializes in realtime global illumination.  We had heard horror stories from other developers who attempted the transition of large projects to Unity 5, and so we waited, hoping that the issues would be worked out in time.  By all accounts the Unity team spent 2015 burning the midnight oil to fix bugs and improve workflows in particles, physics, performance, and lighting.  But, over a year since the release of Unity 5, transitioning a large project to Enlighten is still a pretty brutal experience.  Here’s how we did it.

Dead Secret is all one scene and has a lot of baked lights in it.

DEAD SECRET Lighting Under Unity 4

Dead Secret’s scene is organized around the following constraints:

  • Scene loading is unacceptably slow, especially on mobile platforms.  Therefore the entire game must be implemented within a single scene, which we’ll load up once at startup.
    • Because scene loading is slow we need to be able to instantaneously swap light maps in order to implement a transition from daytime to nighttime.
  • Performance is highly dependent on batching, and to maintain maximum batching efficiency we want a small number of very large light map textures.
  • Almost all lights are static, but we also have a few realtime, moving lights as well (e.g. a flashlight).  Almost all geometry is static.
  • Some lights should cast both into light maps and create dynamic shadows for non-lightmapped objects (a “Mixed” light).

Given those requirements, our Unity 4 implementation in Dead Secret looked like this:

  • A single scene, full of static geometry with tons of lights, all set to Baked and sorted into buckets of Daytime Lights, Nighttime Lights, or Both.
  • One or two important lights in each scene set to Mixed for real-time shadow casting.
  • A custom culling system (described here) that turned lights on and off depending on where the player was standing.
  • A complicated Beast settings XML file (authored via the excellent Lightmapping Extended tool) for daytime light settings, another for nighttime settings.
  • An editor script that could set the Daytime ambient light color, move the correct Beast.xml file to the proper place in the file system, turn on the right set of lights, kick off a light map bake and then, when it was finished, move all the generated light map textures into a different folder and kick off another bake for Night.
  • A runtime script that could, in a single frame, change active lights between day and night sets, swap out the textures being used for light mapping (via LightmapSettings), set the proper LightProbes, and change the ambient light.
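A minimal sketch of what that single-frame swap can look like.  The class and field names are mine, not our actual script, and note that LightmapData’s texture field is lightmapFar in Unity 4 and was renamed in later versions:

```csharp
using UnityEngine;

// Swaps lightmap textures, active light sets, and ambient color in one frame.
public class DayNightLighting : MonoBehaviour
{
    public Texture2D[] dayMaps, nightMaps; // baked textures from each bake pass
    public Light[] dayLights, nightLights;
    public Color dayAmbient, nightAmbient;

    public void SetNight(bool night)
    {
        Texture2D[] maps = night ? nightMaps : dayMaps;
        var data = new LightmapData[maps.Length];
        for (int i = 0; i < maps.Length; i++)
        {
            data[i] = new LightmapData();
            data[i].lightmapFar = maps[i]; // Unity 4 name; renamed in Unity 5+
        }
        LightmapSettings.lightmaps = data; // all lightmap textures swap at once

        foreach (var l in dayLights) l.enabled = !night;
        foreach (var l in nightLights) l.enabled = night;
        RenderSettings.ambientLight = night ? nightAmbient : dayAmbient;
        // LightmapSettings.lightProbes would be swapped here as well.
    }
}
```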

This gave us pretty good results.  We got our huge scene down to seven 4096×4096 light maps, which accommodated our batching requirements.  We could dynamically swap between day and night and see the lights, ambient, and light maps instantly change.  Because almost everything was static and baked the runtime cost was low enough for us to hit 60 fps in VR on mobile platforms.  It looked good and we were pretty happy with it.

Though the final results were good, rendering light maps in this way had two major problems in Unity 4.

First, rendering light maps was slow.  Like, really slow.  20 to 30 hours on my work machine to render both day and night maps.  This wouldn’t have been so bad except that when light mapping completes the scene file is modified.  Since the entire game is implemented in one scene, nobody on the team could do any work while light mapping was running.

Second, having 14 4096×4096 textures in your game (along with everything else) was too much for Unity 4’s 4 GB of addressable memory.  Because the editor was a 32-bit application, the large light maps caused it to crash all the time.  Now, a 4096 texture uncompressed with mip maps is about 85 MB, and with 14 of these you’re talking about over a gig of memory.  Still, it was annoying.  To continue working we had to drop the resolution of the light maps to 1024 and then write a command line build script that resized the textures back up, made a build, and then sized them down again, all without ever initializing the graphics system to avoid extra memory overhead.
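The resize-build-resize dance, sketched below.  Paths and names are illustrative; the real script would be invoked with Unity’s -batchmode and -nographics command line flags:

```csharp
using UnityEditor;

public static class LightmapBuild
{
    // Reimport each lightmap texture at a new maximum size.
    static void SetLightmapResolution(string[] paths, int maxSize)
    {
        foreach (string path in paths)
        {
            var importer = AssetImporter.GetAtPath(path) as TextureImporter;
            if (importer == null) continue;
            importer.maxTextureSize = maxSize;
            AssetDatabase.ImportAsset(path); // reimport at the new size
        }
    }

    public static void BuildWithFullResMaps()
    {
        string[] lightmaps = { /* paths to the scene's lightmap textures */ };
        SetLightmapResolution(lightmaps, 4096); // full res for the shipping build
        BuildPipeline.BuildPlayer(
            new[] { "Assets/Main.unity" }, "Builds/DeadSecret",
            BuildTarget.StandaloneWindows64, BuildOptions.None);
        SetLightmapResolution(lightmaps, 1024); // back down so the editor stays usable
    }
}
```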

With those caveats aside, lighting in Unity 4 worked well.  We shipped on Gear VR in October 2015 based on Unity 4.  But, for PC and upcoming PS4 releases, we knew we needed to finally ditch Unity 4 and move on to 5.

Unity 5 Lighting Woes

The good news was that, other than lighting, almost everything about our project worked without modification under Unity 5. We had a couple of scripts that needed modification, and the transition exposed a few race conditions in the game that hadn’t manifested earlier.  But DEAD SECRET was playable under Unity 5 after just a day or two of work.

Same scene, same lights, same mesh in Unity 4 vs 5. We get those jaggy shadows on all sorts of edges throughout the game. Note that actual Unity 4-based versions have 4x more map res than shown here.

Lighting, on the other hand, was pretty busted. Over the course of several months we worked to recreate the lighting quality we had in Unity 4 using Enlighten, and the road was not easy.  Along the way I filed more bugs against Unity than I had in five previous years of Unity game development.  Not only is Unity 5’s lighting different from its predecessor’s, it’s still a work in progress.  The main challenges we face under Unity 5 are:

  • Light map rendering is, for our scene, about 5x slower than Unity 4 was for the same scene.  That’s almost a week of render time on my work machine.  We bought a new computer just to bake light maps.
  • Mixed lights do not work properly (case #750836). To cast dynamic shadows against light mapped surfaces in DEAD SECRET we end up lighting all of the geometry twice at rather enormous frame time cost.  We can only get away with it because the game is so efficient in other areas.
  • Though light map information is no longer stored in the scene (good!), it’s now stored in an opaque structure called LightingData, which overrides scene parameters and limits the control we have over our scene (bad!).
    • LightingData only stores information relevant to the last bake. In particular, it stores which lights were applied to the lightmap and which were not.  This has a number of bad side-effects:
      • Changing a light from baked to realtime has no immediate effect (case #758744, closed as “by design”).  This means you can’t see what lights will look like, even in real time, without kicking off another bake (which, as above, is hours or days of your life gone).
      • You can no longer create multiple sets of light maps from different sets of lights in the same scene. Rendering multiple light map passes causes information about which lights were used in the bake, now stored only in LightingData, to be lost.  This means that even if you swap light map textures at runtime, some of your lights will behave as unbaked realtime lights because they don’t know that they were accounted for in a previous bake.  To work around this we actually have to bake lights in three passes now: once for Day, once for Night, and once with all lights on just to generate a LightingData struct that works. This also means we can’t see what our lighting looks like in the scene view any longer.
      • Light.alreadyLightmapped, which ostensibly serves to control which lights were baked into the current set of light maps, is overridden by LightingData, making it useless.
  • “Ambient” light isn’t actually ambient in Unity 5 (case #753023).  In Unity 4, ambient light is just a color modification applied to all pixels, which allows you to control the minimum darkness of a scene.  In Unity 5, ambient behaves as if there is a glowing sphere around the outside of the world, with light emitting from it equally across its surface.  The result is that ambient light is occluded by geometry: if you make a box and put the camera inside it, it will be absolute black regardless of the ambient light color or intensity.  This significantly changed the look of DEAD SECRET, and we struggled for months to undo it.  In the end the solution was to hack old-school ambient back into the standard shader.  It’s dumb: the standard shader supports all kinds of different lighting modes, controlled by #ifdefs, and adding support for “legacy” ambient is only a one- or two-line change.  Unity could easily support old-style ambient the way it supports other lighting modes.  When I asked them about it I was told, “old ambient was a hack, that’s not how lighting really works,” which I thought was a pretty ignorant answer. I don’t care how lighting “really works,” I care about realizing the art style my art director has selected.  I care about compatibility with years of development spent in Unity 4.  For new projects there are some advantages to the new ambient lighting scheme, but failing to support the old system cost us several months of dev time.
  • Lighting is just sort of generally busted in Unity 5.  I filed bugs about light mapping overwriting finalgbuffer shader output when rendering in deferred (case #757945), and about LightmapEditorSettings.resolution changing meaning from “baked resolution” to “indirect resolution” (case #753022), which caused our baking tools to set insanely high indirect resolution values and hang the light mapper.  There used to be a way to toggle light maps on and off in the scene view, but that’s gone now. The errors that the light mapping tool generates only make sense if you happen to be an expert in what the heck Enlighten does.  Do you know what it means when there’s a “light transport” error?  Or what is happening when it sits on the “clustering” phase for 36 hours straight?  I’m sure there are experts out there who get this, but it’s certainly not documented and the errors themselves don’t give a whole lot of hints.

Our scene view is a mess under Unity 5. Due to LightingData hacks we can’t see real lighting until we hit play.

On the upside, Unity 5 is a 64-bit app and doesn’t crash because of large light map textures any more.  But our lighting takes longer to bake, took us months to set up properly, and looks significantly worse than the Unity 4 build of the same scene.  The realtime GI features of Enlighten look nice, but as a mobile and VR developer I have no use for them today and am unlikely to have any use for them at any time in the next few years.  Therefore my conclusion is that the move from Beast to Enlighten has been, for developers like ourselves, a disaster.

I do think that Unity understands that the situation isn’t good.  I’ve been told that mixed lights are expected to work again in Unity 5.4.  I’ll find out if that’s true when 5.4 comes out of beta and becomes the stable branch.  A new light mapper was announced at GDC this year, and the demo they showed was impressive.  But since there was no hint of a release date I expect it won’t be usable for at least a year.  Going forward, we won’t need to do the Unity 4 -> Unity 5 transition ever again (and, per this experience, the cost/benefit of upgrading our other old games is deeply negative, so those games are effectively deprecated).  New games written against Unity 5 (including our next super-secret mobile VR project) should be easier to manage.  Maybe one day Unity scene loading on mobile will get fast enough that I can actually use multiple scenes without significant loads between them, which would ease the burden put on the light mapping system.

Speaking with other developers, a bunch of folks have similar issues with Unity 5’s new approach to lighting.  Some are doing their light baking outside of Unity, and a few have gone so far as to implement their own light mappers.  Folks using the realtime GI stuff also have complaints, although theirs are different.  I suspect the lighting team at Unity is under significant pressure from a bunch of different sources, and I don’t envy that position.  Here’s hoping the situation improves soon.


Dead Secret Summer Sale!

Chris

Dead Secret is now on sale everywhere!  Through July 4 you can get Dead Secret for Steam, Rift, or Gear VR for less than $10!  Don’t wait, snag it today!

Steam Store Page
Oculus Rift Store
Oculus Gear VR Store



Stealth Education and Video Game Chautauqua

Chris

If you signed up for the Dead Secret mailing list or follow us on Twitter, you might have heard of the DEAD SECRET Puzzle Challenge.  Each puzzle is unlocked by a YouTube streamer and anybody can submit an answer during the few hours that each puzzle is open.  The first ten people to submit correct answers get a copy of Dead Secret for free.

At the time of this writing three of the five puzzles have been unlocked, and so far the response has been phenomenal.  The puzzle questions are designed to be just hard enough that a quick Google search will not yield the answer. Some of them also have another purpose: to force respondents to learn a little tidbit about something that they might otherwise never have encountered.  So far topics covered have included pioneering psychologists, binary numbers, and the historic underpinnings of a classic Japanese folktale.


A chautauqua. Relating it to this article is a task I leave to you.

Most folks who submit an answer will just find the information they need, type it into the field, and hit “send.”  But a few might keep their research tabs open to be read in greater detail later.  A yet smaller audience might become interested in what they’ve found and spend some time learning more about it.  This is my secret goal.  My hope is that, as people trace the story paths we’ve laid, a few will notice a back alley, explore it, and discover a fascinating new world.

In solving Puzzle #3 perhaps somebody will read Hoichi the Earless, a Japanese folktale about a blind minstrel who is bewitched into playing for ghosts.  To complete the puzzle maybe they’ll discover that the ghosts he’s playing for are the deceased Taira clan, who are the subjects of the story Hoichi sings about.  Maybe they’ll realize that Hoichi’s temporary home, Amidaji Temple, is located on the straits of Shimonoseki, which is where the decisive naval battle that ended the Taira clan took place in the 12th century. The temple still stands there today, although it was converted to a Shinto shrine and its name was changed during the Meiji era.  Maybe one of the respondents to our quiz will go there someday.

Or maybe not.  There’s no way to know if we can really spur learning with the offer of a free Steam code to a horror game.  As long as folks are having a good time it’s not important that we cram some stealth education down their throats.  But if we can tickle the interest of even a few and lead them down a path to opportunities for learning and deeper thought, they’ll remember us later.  Maybe we will have enriched their lives, even just a tiny bit.  Seems worthwhile to try.


Dead Secret Launching on 3/28!

Chris

DEAD SECRET, Robot Invader’s seventh video game, will launch for Desktop and VR on March 28, 2016!  DEAD SECRET is a mystery / horror game that takes place in rural Kansas in 1965.  Here are all the details on the launch:

Desktop and VR Versions

We’re shipping two different versions of DEAD SECRET: a non-VR, Desktop version via Steam, and a VR version via the Oculus Store.  Wherever you choose to buy it you’ll get both versions, either via a hybrid build of the game or a free unlock code.  If you pre-ordered DEAD SECRET on our web site we’ll send you codes for both the Steam and the Oculus versions.

Here’s DEAD SECRET on Steam: http://store.steampowered.com/app/402260

Soundtrack

We’re also pleased to announce that Ben Prunty, the intrepid composer of the DEAD SECRET score (and many others, including FTL and Gravity Ghost) is releasing the soundtrack on Bandcamp and Steam.  You can listen to the title track here!

Reviews, Let’s Plays, and More

Since launching the Gear VR version of DEAD SECRET late last year the response has been overwhelmingly positive.  Scott Hayden at Road to VR called Dead Secret “by far one of the longest, and most engaging VR experiences I’ve ever had—mobile or otherwise,” while VRGiant named it a “must play,” and Gamezebo called it an “unforgettable experience.”  Time Magazine labeled Dead Secret “captivating” and “deeply creepy.” Finally, DEAD SECRET was nominated for “Best VR Game” at the IMG Awards.  Winners are announced next week, and we’ve got our fingers and toes double-crossed.

User feedback has been stellar as well.  Our analytics show that players are spending hours playing DEAD SECRET, and the title has managed to remain one of the top-ten best-selling applications on Gear VR for almost its entire tenure on that store.  We’re pretty happy about our 4.5 / 5 star rating as well.

We’ve started to see Let’s Play videos of DEAD SECRET appear.  These are great for giving you a taste of the game play.  Here’s a video played in VR and here’s another playing the Desktop version.  Many thanks to the folks recording and uploading these videos!


What about Playstation?

We’re still working hard on a version of DEAD SECRET for Playstation platforms.  We don’t have a date for these to announce yet but will be in touch about them soon!

We need your help!

If you like weird games, VR or otherwise, please help us make the launch of DEAD SECRET a success.  Tell your friends, write a tweet, or post the trailer somewhere–anything you can do to help us get the word out is incredibly valuable.  We are a small team and we are funded entirely with the sales of our games, so we very much appreciate your support.  More information and screenshots are at http://deadsecret.com.


DEAD SECRET ships for PC and VR in 17 days!  See you soon!

The Robot Invader Team


Dead Secret Nominated for Best VR Game

Chris

Dead Secret is a finalist in the IMG Awards for Best VR Game!

We’re super excited to be in the running!  Please vote for Dead Secret!


Comfortable VR Movement in Dead Secret

Chris

One of the big unsolved problems in virtual reality game design is movement.  Standing still feels great, but things go south when you start to move.  Many developers have experimented with standard first-person shooter movement systems in virtual reality games, and the result is always nauseating. Even worse, some seem unwilling to admit that their standard FPS controls feel terrible in VR. “It feels fine to me,” is the refrain of a person who doesn’t understand simulation sickness, hasn’t done any testing, and isn’t taking virtual reality seriously.  When such dismissals are code for “it’s too much work to change my game design to accommodate comfort,” it might be an indication that the game isn’t a fit for VR at all.

Dead Secret is a first-person game with first-person movement, and we worked really hard to ensure that movement is comfortable.  We’ve tested our solution on a wide audience–a large number of people, as diverse as we could manage–and have exceptionally positive results.  Dead Secret‘s movement system isn’t perfect, and it’s not a general solution for all first-person movement in games, but it works very well for our purposes.


Before we get into the details of Dead Secret‘s locomotion system, it is worth reviewing the physiology behind motion and simulation sickness.  There are a ton of triggers for motion sickness, but the common one for VR is called vection, and it occurs when your brain encounters a disparity between the information reported by your vestibular system (that’s the part of your inner ear that keeps you balanced) and the information coming from your eyes.  When your inner ears and your eyes disagree it can feel like the world is moving while you are not.  Vection probably evolved as an anti-poison response; apparently there are a lot of toxins that will disrupt your vestibular system, and so your brain’s first move is to make you vomit.  This is why you can get sick by reading in a car: your ears report the motion of the vehicle but your eyes, which are focused on the page, do not corroborate it.

Of course, there’s more to it than that.  Your body is incredibly complicated and individual responses vary quite a bit.  There’s a ton more to learn about how virtual reality can confuse your brain precisely because it is so convincing.  For a lot more detail, I recommend this fantastic talk by Oculus’ Richard Yao.

That said, understanding the basics of vection can help us define some base principles for VR movement.  Vection occurs when your ears and your eyes disagree.  In VR, any movement that you do not make yourself is a potential source of vection.  But there’s some hope: as Yao points out, your vestibular system can only detect acceleration, not linear velocity.  When you move at a fixed speed your inner ears do not detect any change.  Therefore we should be able to avoid vection if we simply remove all acceleration from movement.

If that seems like a tall order, I have some bad news for you.  Just about every interesting camera movement you might perform in a traditional first-person game causes acceleration.  One of the main reasons that naively-implemented FPS control schemes feel so bad in VR is that they usually continue to rely on mechanics like right stick body rotation.  Rotation in place requires angular acceleration, which your ears can totally feel, and when you do it to the player in VR it feels totally bad.  FPS run bouncing, originally invented to simulate shifting of weight from foot to foot as your avatar runs, feels particularly bad because it’s a parabolic motion–that’s 100% acceleration, people.  Don’t even get me started on canned camera animation; the fastest way to suck somebody out of a VR experience is to take away their head tracking.

Now, if I were to suggest that a traditional game remove all acceleration from its camera, I’d be laughed out of the room.  Camera animation is a big part of the experience in a traditional first-person game.  But we’re not making a traditional game, we’re making a VR game, and the rules are different.  Rather than blindly applying grammar from a different medium we have to come up with theories and test them, which is what we spent the better part of a year doing.  The results have very little to do with what works in traditional games, but a lot to do with what works in our VR game.

Here are the rules for Dead Secret’s camera system:

  • No acceleration, ever.  Linear movement only.
  • No rotating the camera (other than rotation coming from the HMD).
  • You can only move in straight lines. Prefer not to change direction while moving.
  • Motion should be short.  Rule of thumb is to keep all motion to bursts of 5 seconds or less.
  • Never ever take away head tracking or lock something to the view.
  • Maintain frame rate at all times.

Zero acceleration or artificial rotation.

That last one is pretty important.  In our tests we were able to remove almost all vection from testers by removing acceleration and rotation, but folks still felt bad if the frame rate started to drop.  Latency on the HMD is another vector for sickness, and it’s one that can bite you regardless of how careful you are with your movement system.  We worked hard to keep the frame rate high throughout Dead Secret.

To make our game actually playable within those rules the movement scheme had to change quite a bit.  You investigate the scene of a perfect murder in Dead Secret by moving between fixed positions in the room.  That had been part of the design since Day 1.  But to accommodate the requirements of VR the layout and design of our rooms changed dramatically.  I wrote a bit before about the use of space in Dead Secret’s level design, if you’re interested.
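In code terms the hop between positions is trivial once you commit to those rules.  Here is a minimal sketch (not our actual controller) of a constant-speed, straight-line move:

```csharp
using System.Collections;
using UnityEngine;

// Zero-acceleration movement between fixed positions: constant speed,
// straight line, no easing, completed in a short burst.
public class NodeMover : MonoBehaviour
{
    public float speed = 2.0f; // meters per second, constant for the whole move

    public IEnumerator MoveTo(Vector3 target)
    {
        while (transform.position != target)
        {
            // MoveTowards advances linearly and never overshoots -- no
            // acceleration ramp at the start, no deceleration at the end.
            transform.position = Vector3.MoveTowards(
                transform.position, target, speed * Time.deltaTime);
            yield return null;
        }
    }
}
```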

There are a number of odd side-effects to this design.  For example, the player can turn 180 degrees and walk to their destination backwards.  The system relies upon the player rotating his whole body to look around the room, but we can’t expect every player to be sitting in a swivel chair.  We added controller-based rotation to accommodate this, and implemented rotation as a 40-degree click with a “blink” transition.  This doesn’t trigger vection because your brain never sees any angular movement (via “change blindness,” which Yao covers in his talk).  And we found that while no testers reported feeling nauseous or sick, about 1% felt disoriented by having to actually turn their body to see things behind them.  For these folks we added a “comfort mode” which omits all motion completely.
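The click-turn can be sketched the same way.  The fade would come from whatever screen fader a project has handy; here a plain wait stands in for it, and the 40-degree figure comes from the text above:

```csharp
using System.Collections;
using UnityEngine;

// Snap rotation hidden behind a "blink": the turn happens while the view is
// dark, so the brain never sees any angular motion.
public class SnapTurn : MonoBehaviour
{
    public float clickDegrees = 40.0f;
    public float blinkSeconds = 0.1f; // assumed duration; tune to taste

    public IEnumerator Turn(float direction) // -1 = left, +1 = right
    {
        // Fade the view to black here (fader omitted for brevity).
        yield return new WaitForSeconds(blinkSeconds);
        transform.Rotate(0.0f, clickDegrees * direction, 0.0f); // instant snap
        // Fade back in. The player perceives a blink, not a rotation.
        yield return new WaitForSeconds(blinkSeconds);
    }
}
```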

Zero reports of nausea. Not even Sharapova.

The last thing we did to ensure our camera system was comfortable was to test the heck out of it.  I’ve read that about 10% of the population is susceptible to motion sickness.  In order to properly test a system you need a large enough testing group to identify folks who might be within that ten percent.  We put Gear VR devices on as many people as we could to help verify our design.  As we’ve iterated the design we’ve been able to push the number of people reporting discomfort to nearly zero.

There’s a lot more experimentation to do in this area.  One idea, which we haven’t tried, is to black out the view at the start and end of a movement.  This is based on the theory that even if the camera is moving at linear speed, the brain can infer acceleration just from visual input.  Another approach, which I’ve seen work well in other games, is to black out the peripheral view of the horizon (e.g. by placing the view in a cockpit).  The brain apparently uses motion in your peripheral vision to compute velocity, so denying it that information can lead to a more comfortable experience.  There are also folks experimenting with transitions between first-person and third-person camera angles for the purposes of movement.

It’s almost impossible to guess how a system will feel in VR without implementing it.  Dead Secret‘s movement system was designed by iteration–we tested and discarded many variants before we hit upon a generally comfortable model.  And that’s one of the amazing things about working in VR today–there’s so much design space to explore. Tried-and-true tricks from traditional games might not work in VR, but there are a ton of new tricks out there, just waiting to be found.

We have a lot more to say about Dead Secret in the very near future, so if you’re interested check us out on Twitter, Facebook, or sign up for the mailing list.


Dead Secret Diary: Locomotion and Space

Chris

I gave a talk at GDC 2015 about designing our new title, Dead Secret, for mobile VR platforms like the Gear VR.  That seemed to go over well, so I thought I’d write a little bit about the design of the game itself.

Dead Secret is a murder mystery that takes place entirely within the home of the victim.  Your goal is to search the house for clues, piece together the events leading up to the death, and finally name the killer.  In designing this game one of the main challenges has been to define how the physical space, puzzles, and pacing interact.  This can be thought of as the problem of density: what is the effect of packing lots of information into a small space compared to spreading it out over a larger space?


To some extent, this question is answered for us by other design decisions we’ve made.  The house in Dead Secret is based on real architectural plans for a home of the proper era and location.  It’s not a mansion, it’s a two-story home with one bathroom, two bedrooms and, ahem, a basement.  We’ve made some modifications here and there, and some of the game takes place outside the home itself.  But the space is relatively small.

More importantly, individual rooms are sized the way they should be, which means that once we fill them with bookshelves, tables, cupboards, and esoteric 19th-century mechanical instruments, there’s not a whole lot of space to get into a firefight, parkour up a wall, or even sneak through some air ducts. This house is old enough that it doesn’t even have air ducts.

By opting for a dense, cramped environment, we implicitly closed the door on things like shooting and platforming.  It’s a good thing, too, because those sorts of interactions typically rely on locomotion systems that probably make people sick in VR.  Instead, Dead Secret is about exploration, about finding clues, and about solving puzzles.  For this, the tight, contained space of the house works really well.  We can pack a ton of detail into each room and simplify our locomotion system to encourage methodical investigation.  One of the most surprising aspects of VR for us is the sense of spaciousness of virtual spaces.  When the scale is right, an environment that appears noisy and cluttered on a screen feels open and airy in VR.

The tight coupling of rooms also lets us engage in a level design pattern that I call recursive unlocking.  Recursive unlocking describes a map design with tightly packed rooms connected by doors that are initially locked.  The space available to the player starts out small, but as they unlock one room after another it begins to unwind like a shell.  Rooms interconnect and shortcuts are created, and traversing the space efficiently becomes a puzzle in and of itself.  Resident Evil is the archetypical example of this pattern, and if you’re interested you can read my analysis of recursive unlocking in that game.


Since our crime scene has many fewer rooms than Raccoon City’s Spencer Mansion, the implementation of recursive unlocking in Dead Secret is focused on aligning new areas to beats in the narrative, and eventually reconnecting them back to a common space.  The player will visit a new space and find themselves unable to return to the area they were previously in.  After resolving the new space they find a path back to an area that they know, and eventually into another new space.  Thanks to the density of content in each space, this approach lets us cram the whole game into just one house.

A highly dense space does have disadvantages, though.  Locomotion needs to be precise, and therefore ends up being a bit slower than in other forms of games. In a detailed environment, finding items to use for puzzles can be tricky because there is so much visual information to process.  Puzzles are used to gate progression, so we need to organize our puzzle dependency charts to prevent frustrating shelf moments at all costs.  Puzzle interfaces need to be fairly expressive, so we end up writing a lot of one-off code for specific puzzle interactions.  Recursive unlocking helps us keep items local to a common area of relevance, but wandering has a higher cost in Dead Secret than in other games in this genre (due to being in VR and also because we’ve traded control flexibility for environment detail), so we sometimes need to be more heavy handed about progression than I would prefer.

Still, this type of experience seems perfect for VR.  The trade-offs required to make the home of our murder victim interesting and compelling are generally things that are good for VR anyway. We want you to be in this house, and while VR technology can open the front door, it’s still up to us to make the floorboards creak as you cross the threshold.

Look for Dead Secret later this year on Gear VR, and on other platforms thereafter.
