Stealth Education and Video Game Chautauqua

Chris

If you signed up for the Dead Secret mailing list or follow us on Twitter, you might have heard of the DEAD SECRET Puzzle Challenge.  Each puzzle is unlocked by a YouTube streamer and anybody can submit an answer during the few hours that each puzzle is open.  The first ten people to submit correct answers get a copy of Dead Secret for free.

At the time of this writing three of the five puzzles have been unlocked, and so far the response has been phenomenal.  The puzzle questions are designed to be just hard enough that a quick Google search will not yield the answer. Some of them also have another purpose: to force respondents to learn a little tidbit about something that they might otherwise never have encountered.  So far topics covered have included pioneering psychologists, binary numbers, and the historic underpinnings of a classic Japanese folktale.

A chautauqua. Relating it to this article is a task I leave to you.

Most folks who submit an answer will just find the information they need, type it into the field, and hit “send.”  But a few might keep their research tabs open to be read in greater detail later.  A yet smaller audience might become interested in what they’ve found and spend some time learning more about it.  This is my secret goal.  My hope is that, as people trace the story paths we’ve laid, a few will notice a back alley, explore it, and discover a fascinating new world.

In solving Puzzle #3 perhaps somebody will read Hoichi the Earless, a Japanese folktale about a blind minstrel who is bewitched into playing for ghosts.  To complete the puzzle maybe they’ll discover that the ghosts he’s playing for are the deceased Taira clan, who are the subjects of the story Hoichi sings about.  Maybe they’ll realize that Hoichi’s temporary home, Amidaji Temple, is located on the straits of Shimonoseki, which is where the decisive naval battle that ended the Taira clan took place in the 12th century. The temple still stands there today, although it was converted to a Shinto shrine and its name was changed during the Meiji era.  Maybe one of the respondents to our quiz will go there someday.

Or maybe not.  There’s no way to know if we can really spur learning with the offer of a free Steam code to a horror game.  As long as folks are having a good time it’s not important that we cram some stealth education down their throats.  But if we can tickle the interest of even a few and lead them down a path to opportunities for learning and deeper thought, they’ll remember us later.  Maybe we will have enriched their lives, even just a tiny bit.  Seems worthwhile to try.

Posted in dead secret, game design | Leave a comment

Dead Secret Launching on 3/28!

Chris

DEAD SECRET, Robot Invader’s seventh video game, will launch for Desktop and VR on March 28, 2016!  DEAD SECRET is a mystery / horror game that takes place in rural Kansas in 1965.  Here are all the details on the launch:

Desktop and VR Versions

We’re shipping two different versions of DEAD SECRET: a non-VR, Desktop version via Steam, and a VR version via the Oculus Store.  Wherever you choose to buy it you’ll get both versions, either via a hybrid build of the game or a free unlock code.  If you pre-ordered DEAD SECRET on our web site we’ll send you codes for both the Steam and the Oculus versions.

Here’s DEAD SECRET on Steam: http://store.steampowered.com/app/402260

Soundtrack

We’re also pleased to announce that Ben Prunty, the intrepid composer of the DEAD SECRET score (and many others, including FTL and Gravity Ghost) is releasing the soundtrack on Bandcamp and Steam.  You can listen to the title track here!

Reviews, Let’s Plays, and More

Since we launched the Gear VR version of DEAD SECRET late last year, the response has been overwhelmingly positive.  Scott Hayden at Road to VR called Dead Secret “by far one of the longest, and most engaging VR experiences I’ve ever had—mobile or otherwise,” while VRGiant named it a “must play,” and Gamezebo called it an “unforgettable experience.”  Time Magazine labeled Dead Secret “captivating” and “deeply creepy.” Finally, DEAD SECRET was nominated for “Best VR Game” at the IMG Awards.  Winners are announced next week, and we’ve got our fingers and toes double-crossed.

User feedback has been stellar as well.  Our analytics show that players are spending hours playing DEAD SECRET, and the title has managed to remain one of the top-ten best-selling applications on Gear VR for almost its entire tenure on that store.  We’re pretty happy about our 4.5 / 5 star rating as well.

We’ve started to see Let’s Play videos of DEAD SECRET appear.  These are great for giving you a taste of the game play.  Here’s a video played in VR and here’s another playing the Desktop version.  Many thanks to the folks recording and uploading these videos!


What about Playstation?

We’re still working hard on a version of DEAD SECRET for Playstation platforms.  We don’t have a date to announce yet, but we’ll be in touch soon!

We need your help!

If you like weird games, VR or otherwise, please help us make the launch of DEAD SECRET a success.  Tell your friends, write a tweet, or post the trailer somewhere–anything you can do to help us get the word out is incredibly valuable.  We are a small team and we are funded entirely with the sales of our games, so we very much appreciate your support.  More information and screenshots are at http://deadsecret.com.


DEAD SECRET ships for PC and VR in 17 days!  See you soon!

The Robot Invader Team

Posted in dead secret | 4 Comments

Dead Secret Nominated for Best VR Game

Chris

Dead Secret is a finalist in the IMG Awards for Best VR Game!

We’re super excited to be in the running!  Please vote for Dead Secret!

Posted in dead secret, mobile games, Robot Invader | Comments Off on Dead Secret Nominated for Best VR Game

Comfortable VR Movement in Dead Secret

Chris

One of the big unsolved problems in virtual reality game design is movement.  Standing still feels great, but things go south when you start to move.  Many developers have experimented with standard first-person shooter movement systems in virtual reality games, and the result is always nauseating. Even worse, some seem unwilling to admit that their standard FPS controls feel terrible in VR. “It feels fine to me,” is the refrain of a person who doesn’t understand simulation sickness, hasn’t done any testing, and isn’t taking virtual reality seriously.  When such dismissals are code for “it’s too much work to change my game design to accommodate comfort,” it might be an indication that the game isn’t a fit for VR at all.

Dead Secret is a first-person game with first-person movement, and we worked really hard to ensure that movement is comfortable.  We’ve tested our solution on a wide audience–a large number of people, as diverse as we could manage–and have exceptionally positive results.  Dead Secret‘s movement system isn’t perfect, and it’s not a general solution for all first-person movement in games, but it works very well for our purposes.


Before we get into the details of Dead Secret‘s locomotion system, it is worth reviewing the physiology behind motion and simulation sickness.  There are a ton of triggers for motion sickness, but the common one for VR is called vection, and it occurs when your brain encounters a disparity between the information reported by your vestibular system (that’s the part of your inner ear that keeps you balanced) and the information coming from your eyes.  When your inner ears and your eyes disagree it can feel like the world is moving while you are not.  The sickness response probably evolved as an anti-poison mechanism; apparently there are a lot of toxins that will disrupt your vestibular system, and so your brain’s first move is to make you vomit.  This is why you can get sick by reading in a car: your ears report the motion of the vehicle but your eyes, which are focused on the page, do not corroborate it.

Of course, there’s more to it than that.  Your body is incredibly complicated and individual responses vary quite a bit.  There’s a ton more to learn about how virtual reality can confuse your brain precisely because it is so convincing.  For a lot more detail, I recommend this fantastic talk by Oculus’ Richard Yao.

That said, understanding the basics of vection can help us define some base principles for VR movement.  Vection occurs when your ears and your eyes disagree.  In VR, any  movement that you do not make yourself is a potential source of vection.  But there’s some hope: as Yao points out, your vestibular system can only detect acceleration, not linear velocity.  When you move at a fixed speed your inner ears do not detect any change.  Therefore we should be able to avoid vection if we simply remove all acceleration from movement.

If that seems like a tall order, I have some bad news for you.  Just about every interesting camera movement you might perform in a traditional first-person game causes acceleration.  One of the main reasons that naively-implemented FPS control schemes feel so bad in VR is that they usually continue to rely on mechanics like right stick body rotation.  Rotation in place requires angular acceleration, which your ears can totally feel, and when you do it to the player in VR it feels totally bad.  FPS run bouncing, originally invented to simulate shifting of weight from foot to foot as your avatar runs, feels particularly bad because it’s a parabolic motion–that’s 100% acceleration, people.  Don’t even get me started on canned camera animation; the fastest way to suck somebody out of a VR experience is to take away their head tracking.

Now, if I were to suggest that a traditional game remove all acceleration from its camera, I’d be laughed out of the room.  Camera animation is a big part of the experience in a traditional first-person game.  But we’re not making a traditional game, we’re making a VR game, and the rules are different.  Rather than blindly applying grammar from a different medium we have to come up with theories and test them, which is what we spent the better part of a year doing.  The results have very little to do with what works in traditional games, but a lot to do with what works in our VR game.

Here are the rules for Dead Secret’s camera system:

  • No acceleration, ever.  Linear movement only.
  • No rotating the camera (other than rotation coming from the HMD).
  • You can only move in straight lines. Prefer not to change direction while moving.
  • Motion should be short.  Rule of thumb is to keep all motion to bursts of 5 seconds or less.
  • Never ever take away head tracking or lock something to the view.
  • Maintain frame rate at all times.

Zero acceleration or artificial rotation.

That last one is pretty important.  In our tests we were able to remove almost all vection from testers by removing acceleration and rotation, but folks still felt bad if the frame rate started to drop.  Latency on the HMD is another vector for sickness, and it’s one that can bite you regardless of how careful you are with your movement system.  We worked hard to keep the frame rate high throughout Dead Secret.

To make our game actually playable within those rules the movement scheme had to change quite a bit.  You investigate the scene of a perfect murder in Dead Secret by moving between fixed positions in the room.  That had been part of the design since Day 1.  But to accommodate the requirements of VR the layout and design of our rooms changed dramatically.  I wrote a bit before about the use of space in Dead Secret’s level design, if you’re interested.
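
Here is a minimal sketch of what “no acceleration, linear movement only” looks like for a move between two fixed vantage points.  It’s an illustration rather than our actual code, and names like playerRig and the speed value are made up:

using System.Collections;
using UnityEngine;

public class FixedPointMover : MonoBehaviour
{
    public Transform playerRig;             // move the whole rig, never just the camera
    public float metersPerSecond = 1.5f;    // constant for the entire move

    public IEnumerator MoveTo(Vector3 destination)
    {
        Vector3 start = playerRig.position;
        float duration = Vector3.Distance(start, destination) / metersPerSecond;
        for (float t = 0f; t < duration; t += Time.deltaTime)
        {
            // Linear interpolation with a linear parameter: constant velocity, zero acceleration.
            playerRig.position = Vector3.Lerp(start, destination, t / duration);
            yield return null;
        }
        playerRig.position = destination;
    }
}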

There are a number of odd side-effects to this design.  For example, the player can turn 180 degrees and walk to their destination backwards.  The system relies upon the player rotating his whole body to look around the room, but we can’t expect every player to be sitting in a swivel chair.  We added controller-based rotation to accommodate this, and implemented rotation as a 40-degree click with a “blink” transition.  This doesn’t trigger vection because your brain never sees any angular movement (via “change blindness,” which Yao covers in his talk).  And we found that while no testers reported feeling nauseous or sick, about 1% felt disoriented by having to actually turn their body to see things behind them.  For these folks we added a “comfort mode” which omits all motion completely.
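
Here’s a rough sketch of that 40-degree “click”: snap the whole rig while the view is briefly blacked out, so the player never sees any angular motion.  Only the step size comes from our design; the fade quad (a CanvasGroup here) and the timings are illustrative:

using System.Collections;
using UnityEngine;

public class SnapTurn : MonoBehaviour
{
    public Transform playerRig;         // rotate the rig, never the camera alone
    public CanvasGroup fadeQuad;        // hypothetical full-screen black overlay; alpha 1 = black
    public float stepDegrees = 40f;
    public float fadeSeconds = 0.1f;

    bool turning;

    public void Turn(float direction)   // -1 for left, +1 for right
    {
        if (!turning) StartCoroutine(DoTurn(direction));
    }

    IEnumerator DoTurn(float direction)
    {
        turning = true;
        yield return Fade(0f, 1f);                                          // blink out
        playerRig.Rotate(0f, stepDegrees * direction, 0f, Space.World);     // rotate while black
        yield return Fade(1f, 0f);                                          // blink back in
        turning = false;
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeSeconds; t += Time.deltaTime)
        {
            fadeQuad.alpha = Mathf.Lerp(from, to, t / fadeSeconds);
            yield return null;
        }
        fadeQuad.alpha = to;
    }
}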

Zero reports of nausea. Not even Sharapova.

The last thing we did to ensure our camera system was comfortable was to test the heck out of it.  I’ve read that about 10% of the population is susceptible to motion sickness.  In order to properly test a system you need a large enough testing group to identify folks who might be within that ten percent.  We put Gear VR devices on as many people as we could to help verify our design.  As we’ve iterated the design we’ve been able to push the number of people reporting discomfort to nearly zero.

There’s a lot more experimentation to do in this area.  One idea, which we haven’t tried, is to black out the view at the start and end of a movement.  This is based on the theory that even if the camera is moving at a constant speed, the brain can infer acceleration just from visual input.  Another approach, which I’ve seen work well in other games, is to black out the peripheral view of the horizon (e.g. by placing the view in a cockpit).  The brain apparently uses motion in your peripheral vision to compute velocity, so denying it that information can lead to a more comfortable experience.  There are also folks experimenting with transitions between first-person and third-person camera angles for the purposes of movement.

It’s almost impossible to guess how a system will feel in VR without implementing it.  Dead Secret‘s movement system was designed by iteration–we tested and discarded many variants before we hit upon a generally comfortable model.  And that’s one of the amazing things about working in VR today–there’s so much design space to explore. Tried-and-true tricks from traditional games might not work in VR, but there are a ton of new tricks out there, just waiting to be found.

We have a lot more to say about Dead Secret in the very near future, so if you’re interested check us out on Twitter, Facebook, or sign up for the mailing list.

Posted in dead secret, virtual reality | 6 Comments

Dead Secret Diary: Locomotion and Space

Chris

I gave a talk at GDC 2015 about designing our new title, Dead Secret, for mobile VR platforms like the Gear VR.  That seemed to go over well, so I thought I’d write a little bit about the design of the game itself.

Dead Secret is a murder mystery that takes place entirely within the home of the victim.  Your goal is to search the house for clues, piece together the events leading up to the death, and finally name the killer.  In designing this game one of the main challenges has been to define how the physical space, puzzles, and pacing interact.  This can be thought of as the problem of density: what is the effect of packing lots of information into a small space compared to spreading it out over a larger space?


To some extent, this question is answered for us by other design decisions we’ve made.  The house in Dead Secret is based on real architectural plans for a home of the proper era and location.  It’s not a mansion; it’s a two-story home with one bathroom, two bedrooms, and, ahem, a basement.  We’ve made some modifications here and there, and some of the game takes place outside the home itself.  But the space is relatively small.

More importantly, individual rooms are sized the way they should be, which means that once we fill them with bookshelves, tables, cupboards, and esoteric 19th-century mechanical instruments, there’s not a whole lot of space to get into a firefight, parkour up a wall, or  even sneak through some air ducts. This house is old enough that it doesn’t even have air ducts.

By opting for a dense, cramped environment, we implicitly closed the door on things like shooting and platforming.  It’s a good thing, too, because those sorts of interactions typically rely on locomotion systems that probably make people sick in VR.  Instead, Dead Secret is about exploration, about finding clues, and about solving puzzles.  For this, the tight, contained space of the house works really well.  We can pack a ton of detail into each room and simplify our locomotion system to encourage methodical investigation.  One of the most surprising aspects of VR for us is the sense of spaciousness of virtual spaces.  When the scale is right, an environment that appears noisy and cluttered on a screen feels open and airy in VR.

The tight coupling of rooms also lets us engage in a level design pattern that I call recursive unlocking.  Recursive unlocking describes a map design with tightly packed rooms connected by doors that are initially locked.  The space available to the player starts out small, but as they unlock one room after another it begins to unwind like a shell.  Rooms interconnect and shortcuts are created, and traversing the space efficiently becomes a puzzle in and of itself.  Resident Evil is the archetypical example of this pattern, and if you’re interested you can read my analysis of recursive unlocking in that game.


Since our crime scene has many fewer rooms than Raccoon City’s Spencer Mansion, the implementation of recursive unlocking in Dead Secret is focused on aligning new areas to beats in the narrative, and eventually reconnecting them back to a common space.  The player will visit a new space and find themselves unable to return to the area they were previously in.  After resolving the new space they find a path back to an area that they know, and eventually into another new space.  Thanks to the density of content in each space, this approach lets us cram the whole game into just one house.

A highly dense space does have disadvantages, though.  Locomotion needs to be precise, and therefore ends up being a bit slower than in other forms of games. In a detailed environment, finding items to use for puzzles can be tricky because there is so much visual information to process.  Puzzles are used to gate progression, so we need to organize our puzzle dependency charts to prevent frustrating shelf moments at all costs.  Puzzle interfaces need to be fairly expressive, so we end up writing a lot of one-off code for specific puzzle interactions.  Recursive unlocking helps us keep items local to a common area of relevance, but wandering has a higher cost in Dead Secret than in other games in this genre (due to being in VR and also because we’ve traded control flexibility for environment detail), so we sometimes need to be more heavy-handed about progression than I would prefer.

Still, this type of experience seems perfect for VR.  The trade-offs required to make the home of our murder victim interesting and compelling are generally things that are good for VR anyway. We want you to be in this house, and while VR technology can open the front door, it’s still up to us to make the floorboards creak as you cross the threshold.

Look for Dead Secret later this year on Gear VR, and on other platforms thereafter.

Posted in dead secret, game design | Comments Off on Dead Secret Diary: Locomotion and Space

GDC Talk: Designing for Mobile VR in Dead Secret

Chris

It’s been a few months since the 2015 Game Developers Conference was held in San Francisco, but we’ve been so busy with Dead Secret that we barely noticed.  I gave a talk about the game, and how we changed it dramatically to meet the requirements of VR, which the kind folks who run the conference have posted for free.  It’s only 25 minutes, but if you’re short on time then UploadVR has a quick summary.


Posted in dead secret, game design, virtual reality | Comments Off on GDC Talk: Designing for Mobile VR in Dead Secret

Dead Secret at GDC

Chris

Hey! We’re going to the Game Developers Conference in March and we’ll be talking about Dead Secret.  The topic is designing for mobile VR, and the work we went through to convert Dead Secret from a tablet game to a virtual reality experience.  Here’s the link:

http://schedule.gdconf.com/session/designing-for-mobile-vr-in-dead-secret

And here’s a sneak preview:

Dead Secret GDC Preview

 

See you there!

Posted in dead secret, game engineering, game industry, virtual reality | Comments Off on Dead Secret at GDC

Custom Occlusion Culling in Unity

Chris

Here at the Robot Invader compound we are hard at work on our new game, a VR murder mystery title called Dead Secret.  There’s a very early trailer to see over at deadsecret.com.

Dead Secret is designed for VR devices, particularly mobile VR devices like the Gear VR.  But developing for VR on mobile hardware can be a performance challenge.  All the tricks in my last post apply, but the threshold for error is much lower.  Not only must you render the frame twice (once for each eye), but any dip below 60 fps can be felt by the player (and it doesn’t feel good).  Maintaining a solid frame rate is an absolute must for mobile VR.


For Dead Secret, one of the major time costs is draw calls.  The game takes place in the rural home of a recently-deceased recluse, and the map is a tight organization of rooms.  If we were to simply place the camera in a room and render normally, the number of objects that would fall within the frustum would be massive.  Though most would be invisible (z-tested away behind walls and doors), these objects would still account for a huge number of extraneous (and quite expensive) draw calls.  In fact, even though we have not finished populating all of the rooms with items, furniture, and puzzles, a normal render of the house with just culling requires about 1400 draw calls per frame (well, actually, that’s per eye, so more like 2800 per frame).

The thing is, you can only ever see a tiny fraction of those objects at once.  When you are in a room and the doors are closed, you can only see the contents of that room, which usually accounts for about 60 draw calls.  What we need is a way to turn everything you can’t see off, and leave the things around you that you might see turned on.  That is, we want to cull away all of the occluded objects before they are submitted to render.  This is often called occlusion culling.

There are many approaches to solving this problem, but most of them fall within the definition of a Potential Visibility Set system.  A PVS system knows the “potentially visible” set of meshes for every possible camera position–that is, what you can probably see from any given point in the game.  With a PVS system, we should know the exact set of geometry that you might see, and thus must be considered for render, at any given time.  Everything else can just be turned off.


A rudimentary form of PVS is a Portal System, where you define areas that are connected by passages (“portals”).  When the camera is in one area, you can assume that only that area and the immediately connected areas are potentially visible.  Portals can further be opened and closed, giving you more information about which meshes in your game world are possible to see from your current vantage point.

More complex PVS systems typically cut the world up into segments or regions and then compute the visible set of geometry from each region.  As the camera passes from region to region, some meshes are activated while others are turned off.  As long as you know where your camera is going to be, you can compute a (sometimes very large) data structure defining the potentially visible set of geometry from any point in that space.

The good news is, Unity comes with a pretty high-end PVS system built right in.  It’s based on a third-party tool called Umbra, which by all accounts is a state-of-the-art PVS system (actually, it’s a collection of PVS systems for different use cases).  If you need occlusion culling in your game, this is where you should start.

The bad news is, the interface that Unity exposes to the Umbra tool is fairly cryptic and the results are difficult to control.  It works really well for the simple scenes referenced by the documentation, but it’s pretty hard to customize specifically for the use-case needed by your game.  At least, that’s been my experience.

Dead Secret has a very simple visibility problem to solve.  The house is divided into rooms with doors that close, so at a high level we can just consider it a portal system.  In fact, if all we needed was portals there are some pretty solid-looking tools available on the Asset Store.  Within each room, however, we know exactly where the camera can be, and we’d like to do proper occlusion culling from each vantage point to maximize our draw call savings.  If we’re going to go from 1400 draw calls a frame down to 50 or 60, we’re going to have to only draw the things that you can actually see.

My first attempt at a visibility system for Dead Secret was just a component with a list of meshes.  I hand-authored the list for every room and used an algorithm with simple rules:

  1. When standing in a room, enable only the mesh objects in that room’s visibility set.
  2. When you move to a new room, disable the old room’s visibility set and enable the new room’s visibility set.
  3. While in transit from one room to another, enable both the visibility set of the old room and the new room.

This works fine, and immediately dropped my draw call count by 98%.  But it’s also exceptionally limited: there’s no occlusion culling from different vantage points within the rooms themselves, and the lists have to be manually maintained.  It’s basically just a rather limited portal system.
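
For the curious, here’s roughly what that first pass looked like.  This is a simplified sketch, not the shipping code; the component and method names are illustrative:

using UnityEngine;

public class RoomVisibilitySet : MonoBehaviour
{
    public Renderer[] visibleMeshes;    // hand-authored list of meshes visible from this room

    public void SetVisible(bool visible)
    {
        foreach (var r in visibleMeshes)
            r.enabled = visible;
    }
}

public class RoomVisibilityManager : MonoBehaviour
{
    RoomVisibilitySet current;

    // Rule 3: while moving between rooms, both sets stay enabled.
    public void BeginTransit(RoomVisibilitySet destination)
    {
        destination.SetVisible(true);
    }

    // Rules 1 and 2: once you arrive, the old room's set is switched off.
    public void EndTransit(RoomVisibilitySet destination)
    {
        if (current != null && current != destination)
            current.SetVisible(false);
        current = destination;
    }
}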

As we started to add more objects to our rooms this system quickly became untenable.  The second pass, then, was to compute the list of visible geometry automatically from several vantage points within each room, and apply the same algorithm not just between rooms, but between vantage points within rooms as well.  Just as I was thinking about this Matt Rix posted code to access an internal editor-only ray-mesh intersection test function (why isn’t this public API!?), and I jumped on it.  By casting rays out in a sphere from each vantage point, I figured I could probably collect a pretty reasonable set of visible geometry.
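
The sampling loop itself is simple.  Here’s a sketch of the idea using the public Physics.Raycast (which requires colliders on everything) in place of the internal ray-mesh intersection; the step size and distance are illustrative:

using System.Collections.Generic;
using UnityEngine;

public static class RaySampleVisibility
{
    public static HashSet<Renderer> Collect(Vector3 vantage, float stepDegrees = 5f, float maxDistance = 100f)
    {
        var visible = new HashSet<Renderer>();
        for (float pitch = -90f; pitch <= 90f; pitch += stepDegrees)
        {
            for (float yaw = 0f; yaw < 360f; yaw += stepDegrees)
            {
                Vector3 direction = Quaternion.Euler(pitch, yaw, 0f) * Vector3.forward;
                RaycastHit hit;
                if (Physics.Raycast(vantage, direction, out hit, maxDistance))
                {
                    Renderer r = hit.collider.GetComponent<Renderer>();
                    if (r != null)
                        visible.Add(r);     // whatever the ray hits first is potentially visible
                }
            }
        }
        return visible;
    }
}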

Shoot a bunch of rays, find a bunch of mesh, what could go wrong?

Turns out that while this method works, it has some problems.  First, as you might have predicted, it misses small, thin objects that are somewhat far from the camera point.  Even with 26,000 rays (five degree increments, plus a little bit of error to offset between sphere scan lines), the rays diverge enough at their extent that small objects can easily be missed. In addition, this method takes a long time to run through the combinatorial explosion of vantage points and mesh objects–about seven hours in our case.  It could surely be optimized, but what’s the point if it doesn’t work very well?

For my third attempt, I decided to try a method a co-worker of mine came up with ages ago.  Way back in 2006 Alan Kimball, who I worked with at Vicarious Visions, presented a visibility algorithm at GDC based on rendering a scene by coloring each mesh a unique color.  If I remember correctly, Alan’s goal was to implement a pixel-perfect mouse picking algorithm.  He rendered the scene out to a texture using a special shader that colored each mesh a unique solid color, then just sampled the color under the mouse pointer to determine which mesh had been clicked on.  Pretty slick, and quite similar to my current problem.

To turn this approach into a visibility system I implemented a simple panoramic renderer.  To render a panorama, I just instantiate a bunch of cameras, rotate them to form a circle, and adjust their viewport rectangles to form a series of slices.  Then I render all that into a texture.  For the purposes of a visibility system it doesn’t actually matter if the panorama looks good or not, but actually they look pretty nice.

The second bit is to change all of the materials on all of the meshes to something that can render a solid color, and then assign colors to each based on some unique value.  The only trickiness here is that the color value must be unique per mesh, and I ended up setting a shader keyword on every material in the game, which meant that I couldn’t really leverage Unity’s replacement shader system.  This also means that I must manually clean the materials up when I’m done, and be careful to assign each back to sharedMaterial so that I don’t break dynamic batching.  Unity assumes I don’t know what I am doing and throws a load of warnings about leaking materials (of which, of course, there are none).  But it works!

I would actually play a game that looked like this.

Once the colorized panorama is rendered to a texture (carefully created with antialiasing and all other blending turned off), it’s a simple matter to walk the pixels and look each new color up in a table of colors-to-mesh.  The system is so precise that it will catch meshes peeking through polygon cracks, so I ended up adding a small pixel threshold (say, ten pixels of the same color) before a mesh can be considered visible.
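
Here’s a rough sketch of that readback step.  It assumes the panorama has already been rendered into a readable Texture2D with every mesh drawn in a unique solid color, and that the colors were recorded in a lookup table when the materials were swapped; the names and the packing scheme are illustrative:

using System.Collections.Generic;
using UnityEngine;

public static class ColorIdVisibility
{
    // Pack an opaque color into an int so it can be used as a dictionary key.
    public static int ColorKey(Color32 c)
    {
        return (c.r << 16) | (c.g << 8) | c.b;
    }

    public static List<Renderer> ExtractVisible(Texture2D panorama,
        Dictionary<int, Renderer> colorToMesh, int pixelThreshold = 10)
    {
        // Count how many pixels each unique color occupies in the panorama.
        var counts = new Dictionary<int, int>();
        foreach (Color32 c in panorama.GetPixels32())
        {
            int key = ColorKey(c);
            int n;
            counts.TryGetValue(key, out n);
            counts[key] = n + 1;
        }

        // A mesh only counts as visible if it owns at least pixelThreshold pixels,
        // which filters out meshes peeking through polygon cracks.
        var visible = new List<Renderer>();
        foreach (var entry in counts)
        {
            Renderer mesh;
            if (entry.Value >= pixelThreshold && colorToMesh.TryGetValue(entry.Key, out mesh))
                visible.Add(mesh);
        }
        return visible;
    }
}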

The output of this function is a highly accurate list of visible geometry that I can plug into the mesh list algorithm described above.  In addition, it runs about 60x faster than the ray cast method (yep, seven minutes instead of seven hours for a complete world compute) before any optimizations.

What I’ve ended up with is an exceptionally simple (at runtime), exceptionally accurate visibility system.  Its main weakness is that it only computes from specific vantage points, but the design of Dead Secret makes that a non-issue.  It doesn’t handle transparent surfaces well (it sees them as opaque occluders), but that’s not an issue for me either.

The result is that Dead Secret is running at a solid 60 fps on the Gear VR hardware.  We have enough headroom to experiment with expensive shaders that we should probably avoid, like mirrors (the better to lurk behind you, my dear).  This performance profile gives us space to stock the house with details, clues, a dead body or two, and maybe even a psycho killer.  Ah, but, I mustn’t spoil it for you.  I’ve already said too much.  Just, uh, keep your eyes peeled for Dead Secret in 2015.

 

Posted in game engineering, unity | Comments Off on Custom Occlusion Culling in Unity

Performance Optimization for Mobile Devices

Chris

This week at the Robot Invader compound we’ve been putting the finishing touches on our new Nanobots game, Dungeon Slots.  This game started out as another week-long experiment and has stretched into a month-long development cycle because we like the concept so much.  The game itself is finished, and we spent this week working on polish and performance.


Some engineers treat performance optimization as something of a black art.  Folks are especially cautious on Android, where there are a wide variety of devices and the performance characteristics of a given device are not always obvious.  We’ve found, however, that despite large differences in the philosophical design of various mobile GPUs, there are a few simple rules we can follow that keep us running well on pretty much everything.  Here’s the checklist we follow when designing our scenes for performance:

  1. Fill is your enemy.  Every time you write to a pixel you incur a cost.  Filling a screen’s worth of pixels, even with just a solid color, is an expensive operation on just about every mobile chipset available.  Even as mobile GPUs get better at this, screen resolutions seem to increase at exactly the same rate.  Our #1 source of performance slowdown is pixel overdraw–writing to the same pixels more than once per frame.
  2. Draw calls are expensive.  Every time you tell OpenGL ES to draw a buffer of verts, that call itself has a cost.  Actually, on most devices I think it is the state switch involved in selecting the verts that you wish to draw that incurs the real cost; if you were to draw the same buffer multiple times, the first draw call would be more expensive than the subsequent calls.  But generally speaking, we try to keep the number of draw calls as low as possible.  In Wind-up Knight 2 we have about 100 – 120 per frame.  Dungeon Slots is less than 40 per frame.
  3. Lights are expensive. Depending on how you’ve implemented your lighting, realtime lights can destroy your performance on a mobile device.  Lights often require multiple sets of geometry to be submitted to the GPU, or multiple passes over the pixels being lit, or more high-precision registers in a shader than your GPU has available.  The actual costs come down to the individual implementation, but there are a number of ways lights can eat into your perf.
  4. Watch out for vertex creep. Many mobile devices are actually pretty good at handling scenes with lots of verts.  But most GPUs fall down really hard after you pass a certain threshold of geometry per frame.  In order to run on lower-end hardware, we target 30k triangles per frame as a soft upper limit.  This might be a little conservative, but remember that some types of lights can increase your triangle count!

There are a few other rules of thumb, but they are less important: the rules above cover 95% of cases of poor performance.  And of those, I’d say that fill-related slowdown accounts for the vast majority of cases.

This scene is about 100 draw calls.

Our strategies for dealing with these problems also boil down to a few rules:

  1. Macrotexture everything. Macrotexturing is the process of using the smallest number of textures possible in the scene.  The levels in Wind-up Knight 2 all fit into 4 1024×1024 textures.  This is fast for a number of reasons, but one of the main benefits is that it allows us to batch all of the visible geometry using the same texture up into a single VBO and send it to the GPU all at once.  Unity does a good job of this automatically with its dynamic batching option.  Macrotexturing is hard, and it requires an artist with a lot of foresight, serious modeling skills, and a willingness to rework things to accommodate changes in the textures.  But it’s absolutely worth it.
  2. Batch everything. In addition to dynamic batching based on material, we also try to combine meshes that we know won’t move.  Unity calls this static batching, and it’s great for level geometry or other mesh elements that never move.  Rather than making our scene static in the editor, we usually mark all objects that can be made static with a particular layer, then use Unity’s StaticBatchingUtility to combine static meshes at level load time (there’s a sketch of this after the list).  This increases load time a bit but dramatically reduces the size of our game binary.
  3. Control draw order.  On a PC, you probably draw your scene from back to front, starting with a skybox and ending with the closest bits to the camera, followed by a pass for transparent objects or other items needing blending.   On mobile, however, this incurs an unacceptable amount of overdraw.  So we try to draw as much as possible front-to-back, with the skybox and other large objects that can potentially touch a large number of pixels on the screen drawn as the last step before transparent objects.  Rejecting a pixel with a depth test is much faster than filling that pixel unnecessarily several times, so front-to-back for opaque geometry is a big win.
  4. Watch out for transparency.  Transparency is, by definition, the process of filling a pixel more than one time.  Therefore, on mobile, it’s very expensive to have large objects that cover part of the screen in semi-transparent pixels.  Even worse is layers of transparency.  You can get away with small regions of the screen, but once a transparent object starts to touch a lot of pixels, the frame time cost will be high.  We try to organize our transparent objects such that there is minimal overlap and that they take up as few pixels on the screen as possible.
  5. Design to scale.  It’s hard to find a perfect balance between “looks good” and “runs fast” on mobile, mostly because there’s such a wide spectrum of power out there.  A modern device like the Nexus 5 or iPhone 5 can push scenes that are orders of magnitude more complex than their predecessors from three or four years ago.  Therefore, we design our scene such that we can tone down the graphics quality in exchange for performance on lower-end displays.  We drop the highest texture mip on displays smaller than iPhone 4 resolution.  We down-res the size of the final render target by 15% or 25% on very slow devices. We dynamically detect framerate changes and switch between pixel lights and spherical harmonics on the fly.  These are easy to do if you are thinking about them early.
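
As promised in rule 2, here’s a sketch of the load-time batching step.  It’s illustrative rather than our production code, and the layer name is made up:

using System.Collections.Generic;
using UnityEngine;

public class LoadTimeBatcher : MonoBehaviour
{
    void Start()
    {
        int staticLayer = LayerMask.NameToLayer("BatchStatic");   // hypothetical layer for non-moving meshes

        var candidates = new List<GameObject>();
        foreach (var renderer in FindObjectsOfType<MeshRenderer>())
        {
            if (renderer.gameObject.layer == staticLayer)
                candidates.Add(renderer.gameObject);
        }

        // Combine everything on that layer into static batches under a common root.
        StaticBatchingUtility.Combine(candidates.ToArray(), gameObject);
    }
}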

Dungeon Slots!

With those rules of thumb in mind, here’s how we optimized Dungeon Slots this week.

At the beginning of the week, Dungeon Slots ran great on a Nexus 5 and absolutely terribly on a 2012 Nexus 7.  Now, the Nexus 7 is a few years old, but it’s still quite a bit more powerful than what we’d generally consider to be our minimum spec.  The game was running at less than 15 fps on that device, and we needed to find out why.

The first thing I did was connect the Unity profiler to the device and look at the logs.  The profiler is a bit flakey, especially in situations where the CPU is hosed, but we could see that some of our GUI code (managed by the NGUI framework) was spiking every frame.  I looked at the scene we were rendering and noticed that it had been constructed out of a bunch of tiny sprites.  NGUI does a good job of maintaining a single texture atlas for those sprites, and it collects them all into a single draw call every frame.  But it also has to regenerate the verts for that draw call if anything in the scene (well, in NGUI terms, within the parent panel) changes.  This game has a number of rotating slot-machine-style reels, both for the slot machine itself and for various numerical displays, and those were implemented with a bunch of sprites that were clipped into a small window.  The main source of overhead, according to the profiler, was just updating the positions and clipping rectangles for all of those sprites every frame.  The clip regions are pretty expensive, too.

We replaced the numerical displays with a system based on a scrolling texture, which increased our draw call count slightly but dramatically reduced the number of sprites that NGUI needed to manage.  We also reorganized our NGUI panels such that bits of the scene that are static were separated from the bits that were animated to avoid unnecessary vertex buffer recreation.  This change caused NGUI to drop a number of large notches in the profiler, and while it’s still a little more expensive than it should be, it’s no longer the focus of our attention.

Even with that change, however, the game was still running very slowly on the Nexus 7.  The next step was to enable Unity’s internal profiler log and take a look at the output.  That output looks something like this:

cpu-player> min: 102.8 max: 132.7 avg: 117.5
cpu-ogles-drv> min: 0.9 max: 3.0 avg: 1.5
cpu-present> min: 0.0 max: 1.0 avg: 0.1
frametime> min: 103.8 max: 135.0 avg: 119.1
draw-call #> min: 44 max: 44 avg: 44 | batched: 2
tris #> min: 82126 max: 122130 avg: 122126 | batched: 64
verts #> min: 83997 max: 124005 avg: 123998 | batched: 50
player-detail> physx: 1.1 animation: 0.8 culling 0.0 skinning: 0.0
               batching: 0.1 render: 8.1 
               fixed-update-count: 5 .. 7
mono-scripts> update: 3.1 fixedUpdate: 0.0 coroutines: 0.0 
mono-memory> used heap: 1900544 allocated heap: 2052096 
             max number of collections: 0 
             collection total duration: 0.0

What this told us was that the CPU was still hosed, but not by mono scripts.  The incredibly high cpu-player time indicated that a lot of work was going on before the GPU even got any verts to draw.  The OMGWTFBBQ moment came when we noticed that the vertex and triangle counts were averaging well over 100k per frame, way over our target of 30k.

Switching back to the Unity editor, the Stats overlay window told the same story: our simple scene was pushing way more polygons than we expected. After some investigation we realized that while the meshes in the scene itself were right on target in terms of complexity, we’d started using a standard diffuse shader on them in order to achieve certain lighting effects.  Unity’s Mobile Diffuse shader only supports one directional light, but the stock Diffuse shader supports any number of pixel lights.  What was happening here is that our geometry was being submitted many times over, once for each light source that touched it, which caused our triangle count to skyrocket and our CPU to collapse. I modified the setup to use only the faster Mobile Diffuse shader.  This fixed our crazy triangle load but removed the neat lighting effects in the process.

It was probably worth it, though: the game had gone from about 10 fps when we started to about 22 fps via these changes.  That’s a savings of more than 50 ms per frame, which is pretty significant.  Still, 22 fps remains way too slow.

To delve deeper into where our frame time was going, I decided to bust out the big guns.  NVIDIA produces a neat performance tool called PerfHUD ES, which allows you to connect to an Android-based dev kit and get detailed profiling information about the scene you are rendering.  I have an ancient Tegra 2 dev kit that I got from NVIDIA years ago, and it’s fantastic for this kind of performance testing precisely because it’s pretty slow by modern standards.  Getting it to work requires a little dance of shell scripts, adb port forwarding, and prayers to various moon gods.  The process has been much improved by NVIDIA in more recent kits, but I like the old one because its performance characteristics are so easy to understand.

A shot from NVIDIA’s PerfHUD ES showing that our transparent particle fog touches a lot more pixels than we intended.

The best thing about PerfHUD is that it can show you a step-by-step rendering of how your scene is put together by the hardware, draw call by draw call.  This tool, combined with timing information about each draw call, is usually more than enough to identify performance culprits.  When I ran Dungeon Slots through PerfHUD’s frame analyzer, I learned two important things:

  1. Though the UI completely covers the bottom half of the screen, we were rendering the 3D world underneath it.  That’s overdrawing 50% of the pixels on the screen!
  2. A transparent particle effect we place on the ground in front of the camera was actually much larger than we anticipated, and most of it was hidden behind the UI.  More overdraw!

Once identified, these are easy problems to solve.  The first step was to just reduce the size of the 3D camera’s viewport to cover only half the screen.  This way the bottom half has no overdraw from frame to frame.  That also cut the size of the particle effect in half.  Even so, subsequent profiling showed that the particle effect was still touching too many pixels to be performant on an older device with a big screen like the Tegra 2 dev kit.  It needed to be turned off entirely. With these changes, the game now runs at 60 fps on the Nexus 7, and at a very respectable 30 fps on the ancient dev kit.  We lost a few graphical effects in the process (some animated lighting and a particle effect), but overall the game still looks good, and now will run well on devices we consider to be our minimum spec.
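
The viewport change is a one-liner.  Here’s a minimal sketch, assuming the UI owns the bottom half of the screen and the 3D camera only needs the top half:

using UnityEngine;

public class HalfScreenCamera : MonoBehaviour
{
    void Start()
    {
        // Normalized viewport rect: x, y, width, height, with y measured from the bottom.
        // Restricting the 3D camera to the top half means the UI-covered bottom half is never overdrawn.
        GetComponent<Camera>().rect = new Rect(0f, 0.5f, 1f, 0.5f);
    }
}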


Still, it’s tough to play the game without those extra effects now that we’ve gotten used to them being there.  A player who’s never seen them before won’t miss them, but there’s no denying that the game looks more dynamic, more interesting, and more polished with all the extras turned on.  And after all, the Nexus 5 ran Dungeon Slots at full speed even before we started with all of this optimization.  It sucks that folks with high-end devices get a degraded experience simply because there are also lots of low-end devices out there.

But maybe they don’t have to after all.  The last change I made this week was to add code that samples the framerate as the player plays their first round of monster-slashing slot madness.  If the device is performing well, over 50 fps, I go ahead and turn the particle effects back on and change the shaders back to Diffuse for full lighting.  In my tests, this produces a good middle ground: the game runs fast for everybody, and high-end devices get the extra graphical whiz-bang polish features as well.
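
Here’s a rough sketch of that sampling logic.  The sample window, threshold, and the objects being toggled are placeholders for whatever your game actually measures and re-enables:

using UnityEngine;

public class AdaptiveQuality : MonoBehaviour
{
    public GameObject[] extraEffects;    // hypothetical: particle effects disabled by default
    public float sampleSeconds = 30f;    // roughly one round of play
    public float targetFps = 50f;

    float elapsed;
    int frames;
    bool decided;

    void Update()
    {
        if (decided) return;

        elapsed += Time.deltaTime;
        frames++;

        if (elapsed >= sampleSeconds)
        {
            decided = true;
            float averageFps = frames / elapsed;
            if (averageFps > targetFps)
            {
                // The device kept up, so turn the whiz-bang features back on.
                foreach (var effect in extraEffects)
                    effect.SetActive(true);
                // ...and swap materials back to the full Diffuse shader here.
            }
        }
    }
}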

We’ve still got a little bit of work to do before Dungeon Slots is ready to go, but you should be able to play it soon.

Posted in Android, game engineering, mobile games | 3 Comments

Android TV and the Video Game Middle Ground

Chris

We shipped Wind-up Knight 2 for Android TV last week.  If you were at Google IO, maybe you saw our logo flash up there for a moment.  If you actually have an Android TV, Wind-up Knight 2 is one of a small set of games that is already set up for it.  Playing with a controller is sublime, and it’s absolutely beautiful at 1080p on a giant screen.

Wind-up Knight 2 is pretty flippin’ sweet on a TV.

Two weeks ago we also shipped for another TV-based device, Amazon’s FireTV.  Both of these devices required almost no effort on our part; if your game has controller support built in, shipping an Android game on a TV is trivial.

Of course, not every game has built-in support for controllers.  Adding controller support to user interface screens can be particularly challenging for games designed around touches and swipes.  But in this day and age, I think that every mobile developer should be thinking about how to integrate controllers into their games.  Our approach is to do almost all of our development on the Nvidia SHIELD.  It’s a fantastic device for developing controller-based games because it’s rugged, highly standards-compliant, and very fast.  Once it works on the SHIELD, it’ll work everywhere else.

A lot of developers are quick to write these devices off.  They find the idea that cheap set-top boxes based on mobile chipsets might “kill consoles” incredible.  But I don’t think we should cast Android TV and similar devices as simple competitors to dedicated gaming devices.  As I told Gamasutra last year, I think commoditization of game-playing hardware is an eventuality that cannot be avoided, and that’s going to present some tough challenges for traditional consoles.  My interest has less to do with who wins and who loses and more to do with expanding the range of markets available to game developers.  The more choice developers have about how much money to spend, what kind of game to create, which sort of user to target, how much to charge, and which distribution channel to use, the healthier the industry will be.  And a healthy game industry makes more interesting games, takes more creative risks, and reaches more people.

Nobody bought an iPhone to play video games at first; it stealthily worked its way into pockets around the world before blossoming into a huge market for games.  I think Android TV, and similar devices, can do the same for games running on TVs and played with controllers.  In doing so, there is an opportunity to create a whole new channel for games that cannot exist elsewhere; a middle ground between consoles and mobile devices.

As I get older, I’m finding it harder to play games.  I have seven different consoles connected to my TV at the moment, and yet I haven’t felt the need to go out and buy a PS4 or Xbox One.  Not because those devices are poor, just because I appear to have aged out of the target demographic.  The games available on those devices all look rad, and there are a couple that have really caught my eye (Access Games’ D4 and Frictional’s SOMA are on the top of my list), but the catalog doesn’t interest me enough to actually go buy a new $399.99 device.  At the same time, it’s hard for me to find games on mobile platforms too.  This might be a discoverability issue, or it might be that the games I like are not well suited to free-to-play or touch controls.  Almost 100% of the games I play these days are on mobile devices, but there are very few that really hook me.  I’d buy a Steambox but they aren’t really ready for sale yet, and aren’t likely to be cheap.

The promise of devices like Android TV is not to replace consoles, nor to reach the insane scale of mobile games.  Rather, they present an opportunity to become a channel for games that can’t be played with a touchscreen but aren’t big, expensive console games either.  Experimental games, retro games, narrative-heavy games.  Games that can be built and shipped cheaply thanks to standardized chipsets and digital distribution infrastructure, but can still be run on a big screen and played with a controller.  Games like Neverending Nightmares, which is shipping for OUYA first and feels perfectly at home on that device.

I don’t know if devices like Android TV will actually blossom into an alternative channel for games, but the promise is there.   And that’s more than enough reason to support them with Wind-up Knight 2 and future titles.

Posted in Android, controllers, game industry, mobile games, wind-up knight | 4 Comments